We do our highest-value work all the time.

Every person at Luna does the most valuable work they're capable of. AI supports the rest. This page explains what that means here, what's changing, and what we're protecting.

Why now

We make overnight glucose control effortless. Our work shouldn't be.

Luna ships a small device and a simple routine that helps people on long-acting insulin wake up in range. The mission is calm, careful, and clinical — and the work behind it is anything but: regulatory submissions that span thousands of pages, clinical-trial deviations that need same-day judgment, hardware iterations on a tight schedule, a customer support queue we never want to be slow on.

The principle here is the same one we hold for our patients: the system should do the careful repetitive parts so the human can focus on the parts that matter. For our team, that means giving everyone the AI support they need so the work that requires judgment, taste, or human contact is the work they actually do.

This site is the workplace where that happens.

What's changing

More leverage. Faster cycles. Less repetition.

  • The hardest, most repetitive parts of every role can lean on a dedicated agent.
  • Workflows we already run informally can become explicit, observed, and improvable.
  • The first hour of your day can shift from triage to direction-setting.
  • Onboarding can speed up — new team members ramp by asking a question instead of crawling a wiki.

What's not

Your judgment. Your craft. Patient outcomes.

  • Decisions that affect patients stay with the humans whose names are on the submissions.
  • Performance is judged by your work, not by how often you used an agent.
  • Anything that touches PHI stays inside our HIPAA boundary, with a separate access path.

The honest part

The fears people have about AI at work — and how we're tackling them.

We're standing up a small team — a skunkworks — to work through this with the rest of the company. The fears below are ones we expect to hear, ones we share, and ones we want to be honest about up front. Each is paired with the design choice we're aiming for. If any of these reads as marketing rather than as a design choice we'll actually build, tell us.

The fear

Am I being replaced?

What we built

No. Luna's headcount plan grows alongside this — agents are a force multiplier, not a substitute. The work AI removes is the work nobody wanted in the first place: meeting-note typing, triage, status rollups, format-converting the same data three times. The work it can't do is what we hire for.

Enforced by: Hiring plan owned by leadership; no agent has 'replace headcount' as a goal.

The fear

If the AI gets it wrong, who takes the hit?

What we built

The human whose name is on the work. Agents are tools your judgment uses, not co-authors. Anything that goes to a regulator, a patient, an investor, or a clinician carries your sign-off — and the audit log shows you reviewed it. If you spot a consistent failure mode, tell us; that's how we improve the agent or pull it.

Enforced by: Audit log records the reviewer for every regulated artifact; agent outputs are flagged 'unreviewed' until a named reviewer signs off.

The fear

What about my data — my words, my conversations?

What we built

Your conversations are yours. Each person at Luna has a private Durable Object instance with their own memory. Anthropic's Zero Data Retention is contractually on. Audit logs capture metadata (who asked what agent, when, how much it cost) — never the contents. You can see and delete what each agent remembers about you on My Agents.

Enforced by: ZDR is a contract clause, not a setting; per-user Durable Objects are how Cloudflare physically partitions state.
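For the curious, here is a minimal sketch of what "metadata, never the contents" means in practice. Every name and field below is illustrative — this is not Luna's actual schema or code, and a plain `Map` stands in for one Durable Object instance per person:

```typescript
// Illustrative only. The audit-record type has no field for
// conversation content, so contents can't be logged by accident.

interface AgentRequest {
  userId: string;    // who asked
  agentName: string; // which agent
  prompt: string;    // the conversation itself — stays in the user's private state
  costUsd: number;   // how much it cost
}

interface AuditRecord {
  userId: string;
  agentName: string;
  askedAt: string;   // ISO timestamp
  costUsd: number;
}

// Per-user state lives in its own partition, keyed by user —
// a stand-in here for a private Durable Object per person.
const privateMemory = new Map<string, string[]>();

function handleRequest(req: AgentRequest, auditLog: AuditRecord[]): void {
  // Conversation content goes only into that user's partition.
  const memory = privateMemory.get(req.userId) ?? [];
  memory.push(req.prompt);
  privateMemory.set(req.userId, memory);

  // The audit log gets metadata only: who, which agent, when, cost.
  auditLog.push({
    userId: req.userId,
    agentName: req.agentName,
    askedAt: new Date().toISOString(),
    costUsd: req.costUsd,
  });
}
```

Under this shape, "delete what an agent remembers about you" is just dropping your partition — the audit log is unaffected, because it never held the contents in the first place.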

The fear

Is this going to feel like another tool to keep up with?

What we built

The places you already work — Slack, your inbox, this intranet — are where the agents live. We're not adding a new place to check. Onboarding is a 20-minute self-serve flow keyed to your role. If a workflow doesn't make your day better in the first week, we want to hear it.

Enforced by: Slack-first delivery for personal agents; /onboarding tracks are role-scoped so you only see what's yours.

The fear

Will I lose the parts of my job I actually like?

What we built

The opposite is the goal. Luna people stayed because the work matters and the craft is real. We're not automating the craft — we're trying to clear the bureaucratic and repetitive layer around it so there's more time for craft, not less. If a workflow we ship feels like the wrong call here, that's a signal to revise the workflow, not the person.

Enforced by: Workflow ownership lives with the team that does the work — not a central 'automation team.'

How decisions get made

Small group, written reasons, reversible.

  1. Anyone can propose a new agent or workflow. Drop it in #ai-platform in Slack or open a request on this site (coming with /agents/me).
  2. A small group reviews scope and risk. For shared agents (Class A), that's John plus the team lead whose domain it lives in. For personal agents (Class B), it's just you.
  3. Reasons are written down. Every agent and workflow has an "owner," a "why," and a "what it can't do." If a decision is reversed later, the original reasoning stays so we can learn from it.
  4. If you disagree, say so. The right place is #ai-platform for an open conversation, or [email protected] for anything you'd rather not put in a channel. We've changed our minds before. We'll do it again.

What to do next

Take the 20-minute path for your role.

Onboarding is short, role-specific, and ends in a real task you do with an agent. The first module on every track is the same: what we're not asking you to do.