Zingage Engineering

Applied AI, data engineering, and anthropomorphic software to scale the Bedrock Economy

How Zingage does 996

One of our core values at Zingage is velocity. Startups are a dogfight and to win you need speed. But you also need direction.

We chose velocity because velocity is a vector. It is not just a measure of speed but also direction. Going fast only matters when you are going the right way. What most people miss is that velocity looks different for different people.

Building high velocity teams means understanding the role everyone plays in the collective velocity of the company. I have always understood this through the lens of rowing and how we built a crew. For those familiar, Crew is the ultra WASP sport of northeast prep schools. It was one of my biggest motivators to attend Exeter. Coming from New York, the closest I had been to a boat was the Staten Island Ferry.

2012, Exeter River. Victor rowing in 2 Seat

After my first year on the crew team I was invited to the varsity pre-season in March. We broke the ice with our oars in skintight spandex before spending hours on the Charles River. This was where I first saw how coaches selected the composition of boats, known as “crews.” Each crew had eight rowers and one coxswain. The big guys who looked like the Winklevoss twins sat in the middle. The lanky guys who looked like cyclists or cross country runners either sat in the stroke and two seat, setting rhythm for the rest of us, or they served as the bow pair, feeling every wobble of the shell and adjusting before anyone else noticed.

It is not easy to see why small changes in that composition can alter the boat’s velocity. The clearest way you learn this is when random pairs are sent to row two-man sculls. Take two inexperienced rowers from seat three and stroke, and the boat spins in circles. Scale that to eight rowers and even a slight misplacement can ruin your crew’s chances of reaching Henley.

When I think about velocity at Zingage, it’s never about how fast one engineer ships a feature. It’s about how the whole boat moves.

There’s a lot of noise about 996. We believe in it only as a gear, not a lifestyle, deployed in two specific moments:

  1. The final sprint: the last 300 meters when the call comes and everyone empties the tank.
  2. Power sprinters: the engine seats that can throw down big watts on command during the body of the race.

Those two moments only work when each seat knows its job.

Your product and engineering leader is the coxswain. She sets direction. Her voice is the only one everyone hears, and when she calls for “ten hard” or “more from port,” you fucking pull because she’s the only person who can see the line we’re taking and what it will take to win.

Then there’s the stroke, usually the most senior engineer. The stroke sets rhythm and rate so the boat can sustain speed and still have a kick left. When the cox calls, the stroke lifts the cadence cleanly so everyone can follow without blowing up.

Next comes the Engine Room, aka the power seats. This is where many new grads and junior devs start. Your job here is torque: clean, heavy pulls in rhythm, plus short, violent bursts when the boat needs it. That’s what I mean by power sprinters. You don’t pace the race; that’s the stroke’s job. But you can decide it by delivering coordinated power without throwing off the set. Done right, this is 996 in micro: intense, time‑boxed effort that serves the boat, not a performative grind.

Finally, there’s the bow pair, our stabilizers. They’re the technicians who keep the set, feel the crosswind first, and make the subtle corrections that keep the shell running straight. In a startup, these are the quiet pros who keep quality high, spot risk early, smooth releases, and protect customers when things rock.

Velocity is a vector. It belongs to the whole crew. Know your seat, pull in time, and make the boat fly.

Healthcare AI’s Uncanny Valley

“Hello, this is Linda from MediCorp—how may I assist you today?”

While Meta plows billions of dollars into one‑shotting seniors to become fodder for the leviathan, healthcare builders are wading through the ugliest stretch of making care great for everyone. This is the part where frontline founders feel the pain of close‑but‑not‑quite models. From mechanical tone to medical‑code hallucinations, we’re so close and still uncomfortably incomplete—just shy of automation that doesn’t make you wince.

Welcome to healthcare’s uncanny valley. It hurts because we know what great looks like. We know how good it will be when voice AI sounds like the receptionist you’ve spoken with every week for years. We know how deliriously happy it feels when the paperwork isn’t work at all.

I dream of a world where health just happens—where regulations don’t slow my grandmother’s care and the nurse is listening, not note‑taking. A world where care coordination means house visits to patients, not mouse clicks in Epic. Our mission is to make that world real.

Most healthcare AI vendors will tout a perfect product. If this is what “perfect” looks like to them, we’re in trouble. In senior care, we benefit from an older ear that’s happy to hear any voice instead of a dial pad—even if it sounds a little too happy sometimes. Our customers know this is still Day 1, and they’re committed to the journey because any future is better than the present.

At the same time, the bar is higher for healthcare AI builders. We’re not judged like humans doing their best to keep up. We’re measured against do‑no‑harm at a scale that prizes coverage over novelty and stability above all. The companies we’re building now will orchestrate care for billions. Before we get there, we’ll keep doing the hard work in the bowels of infrastructure and prompting—to make it work, and make it human.

At Zingage we're doing this hard work to make AI actually work in home care. If you're interested in learning how we're tackling this problem with Zingage's data platform, check out our latest piece: How We Built a Data Platform for AI Agents.

How We Built a Data Platform for AI Agents

When Zingage first launched, our data infrastructure looked like many early-stage systems: cron-driven ETLs, periodic syncs, and a warehouse where truth converged slowly. It worked — until we stopped reporting on care and started orchestrating it. Once we began making calls, scheduling visits, and updating records, the abstraction broke. Our software entered the physical world, and the physical world doesn’t wait for nightly jobs.

We saw the failures immediately: Voice AI calling caregivers about visits that were already closed. Operators overwriting each other’s edits. Automations racing humans across time zones on inconsistent state. None of these were application bugs—they were contradictions born from stale or conflicting data. We had treated consistency as a global property. In practice, it’s actor-specific. We rebuilt our data layer not just to scale operations, but to give AI agents a consistent and reliable view of the world they operate in.


From Batch to Proxy

We realized a shift was needed: stop asking “how fresh is our data?” and start asking “how coherent must it be for this actor, right now?” The answer changes by use case. The Voice interface needs near-instant coherence. The scheduling dashboard needs read-your-own-writes. The payroll export can live with delay.

The only architecture that could express those guarantees cleanly was a proxy layer. Every read and write now flows through Zephyr, our data plane. It doesn’t blindly fetch data—it reasons about which data to serve, how stale it can be, and what level of consistency each actor requires. Each request includes Consumer Directives: latency budgets, staleness tolerance, and consistency guarantees.

type ConsumerDirectives = {
  maxStalenessMs: number;
  requireReadYourOwnWrites: boolean; // RYOW guarantee
  latencyBudgetMs: number;           // per-request latency budget
};
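For illustration, the contracts for the three actors above (voice, dashboard, payroll) might be tiered like this. The actor names and numbers here are hypothetical, not Zingage's production values:

```typescript
// Hypothetical per-actor contracts. Voice trades staleness for speed,
// the dashboard needs read-your-own-writes, payroll tolerates delay.
const DIRECTIVES = {
  voice:     { maxStalenessMs: 2_000,      requireReadYourOwnWrites: true,  latencyBudgetMs: 300 },
  dashboard: { maxStalenessMs: 30_000,     requireReadYourOwnWrites: true,  latencyBudgetMs: 1_000 },
  payroll:   { maxStalenessMs: 86_400_000, requireReadYourOwnWrites: false, latencyBudgetMs: 60_000 },
} as const;
```

The point is not the exact numbers but that each actor's contract is explicit and machine-checkable, rather than one global freshness knob.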

When the proxy sees a read, it asks: “Given this actor’s contract, what’s the most coherent view I can safely return?” That framing—no single master, only policy-governed caches behind a proxy—reshaped everything.


Defining Correctness in a World Without a Master

Once we abandoned a central datastore, we faced a deeper question: when there’s no monolithic database, what does correctness even mean? Traditional databases give you consistency for free through ACID transactions. Zephyr doesn’t have that luxury. EMRs, HR systems, and operator dashboards all modify overlapping slices of reality. We had to invent our own invariants—rules that make a decentralized system behave predictably.

The first rule: explicit ownership. Every field in our merged graph is annotated with a single physical writer; everything else is derived. That map underpins merges, conflict detection, and write routing.

const OWNERSHIP: Record<string, "EMR" | "HRIS" | "ZINGAGE"> = {
  "visit.scheduledStart": "EMR",
  "visit.scheduledEnd":   "EMR",
  "visit.timeEntries":    "EMR",
  "practitioner.language":"HRIS",
  "assignment.status":    "ZINGAGE",
};

At read time, Zephyr uses this ownership map to deterministically merge records. When external IDs align, identity is clear. When they don’t, we fall back to a learned policy that defines what counts as “the same visit”—matching patient, caregiver, and service day within an adaptive epsilon (ε). The epsilon adjusts per integration, based on observed rounding and clock-in drift.

/* Fallback identity predicate for read-time reconciliation.
   ε (epsilon) is per-integration tolerance to account for drift.
   Note: Postgres has no range + interval operator, so the epsilon
   is applied by widening one range's bounds before the overlap test. */
SELECT
  external_id_a IS NULL AND external_id_b IS NULL
  AND caregiver_a = caregiver_b
  AND patient_a = patient_b
  AND date_trunc('day', start_a AT TIME ZONE biz_tz)
      = date_trunc('day', start_b AT TIME ZONE biz_tz)
  AND tstzrange(start_a, end_a)
      && tstzrange(start_b - make_interval(mins => :epsilon_minutes),
                   end_b   + make_interval(mins => :epsilon_minutes))
AS is_same_visit;
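Given the ownership map, the deterministic merge itself reduces to field-by-field selection: for each field, keep only what its single physical writer says. A minimal sketch (`mergeByOwnership` and the flattened field shape are illustrative, not Zephyr's actual API):

```typescript
type Source = "EMR" | "HRIS" | "ZINGAGE";
type SourceRecord = Record<string, unknown>;

// For every field in the ownership map, take the value reported by
// its single physical writer and ignore what any other source claims.
function mergeByOwnership(
  ownership: Record<string, Source>,
  records: Partial<Record<Source, SourceRecord>>,
): Record<string, unknown> {
  const merged: Record<string, unknown> = {};
  for (const [field, owner] of Object.entries(ownership)) {
    const value = records[owner]?.[field];
    if (value !== undefined) merged[field] = value;
  }
  return merged;
}
```

Because the ownership map is static, two reads over the same source snapshots always merge to the same result, which is what makes the merge deterministic.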

Writes enforce the same invariants in reverse. Every update carries an ETag (Entity Tag) so the proxy can detect concurrent edits and preserve read-your-own-writes. If two writers race, we return a deterministic 409 Conflict with a structured diff—never a silent overwrite.

async function updateVisit(id: VisitId, patch: Partial<Visit>, ifMatch: string) {
  const current = await loadNormalized(id);
  if (current.etag !== ifMatch) {
    return {
      status: 409,
      body: {
        code: "ETAG_MISMATCH",
        current,
        diff: computeDiff(current, patch),
        hint: "Rebase and retry with new ETag",
      },
    };
  }

  for (const field of Object.keys(patch)) {
    if (OWNERSHIP[`visit.${field}`] !== "ZINGAGE") {
      throw new Error(`Cannot edit non-owned field: ${field}`);
    }
  }

  await sagaWriteToOwners(id, patch);
  const next = await loadNormalized(id);
  return { status: 200, body: next, headers: { ETag: next.etag } };
}
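On the caller's side, that 409 contract implies a rebase-and-retry loop. A sketch under the response shape above (`writeWithRebase` is an illustrative helper, not part of Zephyr):

```typescript
type WriteResult =
  | { status: 200; body: { etag: string } }
  | { status: 409; body: { code: "ETAG_MISMATCH"; current: { etag: string } } };

type WriteFn = (patch: Record<string, unknown>, ifMatch: string) => Promise<WriteResult>;

// Retry a conditional write, rebasing onto the server's current
// version whenever we lose a race. Bounded so a hot record can't
// trap the caller in an infinite conflict loop.
async function writeWithRebase(
  write: WriteFn,
  patch: Record<string, unknown>,
  etag: string,
  maxAttempts = 3,
): Promise<WriteResult> {
  let ifMatch = etag;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await write(patch, ifMatch);
    if (res.status !== 409) return res;
    // Lost the race: rebase onto the version the server holds now.
    ifMatch = res.body.current.etag;
  }
  throw new Error("Conflict not resolved after rebasing");
}
```

The structured diff in the 409 body is what makes this safe: the caller can inspect what changed underneath it before blindly re-applying its patch.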

Together, explicit ownership, adaptive identity, and conditional writes form Zephyr’s version of a transaction. And because Zephyr doesn’t maintain a canonical datastore, the system effectively operates as multi-master. AxisCare writes clock-ins, HRIS manages caregiver metadata, Zingage orchestrates assignments. Zephyr reconciles them all at read time. In practice, it behaves like a business-logic CRDT - conflict-free not by algebra, but by policy.


The Zephyr Data Plane

With correctness defined, we built the plane to enforce it in real time. Zephyr has four layers: Source Translators (one per EMR), the Proxy Layer (decision engine), a lightweight FHIR-inspired model (shared vocabulary), and Per-Actor Caches (snapshots governed by freshness policies).

The diagram below shows how Zephyr reads from an EMR, normalizes data into a unified schema, and streams it into the cache and event topics for consumption.

Each request is a policy invocation. Actors—voice, UI, or batch—define how fresh data must be, how much latency they can tolerate, and what consistency semantics they require.

type ConsumerDirectives = {
  maxStalenessMs: number;
  requireReadYourOwnWrites: boolean; // RYOW guarantee
  latencyBudgetMs: number;
};

When a read lands:

async function getVisit(id: VisitId, policy: ConsumerDirectives) {
  const cached = await cache.get(id);
  // Fresh enough for this actor's contract: serve the snapshot.
  if (cached && !isStale(cached, policy)) return cached;
  // Stale snapshot: warm the cache for future readers in the
  // background, while this request pays for a coherent read.
  if (cached) revalidateInBackground(id);
  return await fetchFromSource(id);
}

This yields bounded staleness per actor, not a single global metric. Voice AI trades detail for speed. Dashboards require stronger coherence. Batch jobs prioritize throughput. Each interaction is negotiated.
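The `isStale` check in the read path above reduces to a small predicate. A sketch, where threading the actor's last-write timestamp through as an argument is one possible way to carry the RYOW state (Zephyr's real signature may differ):

```typescript
type CacheEntry = {
  fetchedAt: number; // epoch ms when this snapshot was taken
};

type Directives = {
  maxStalenessMs: number;
  requireReadYourOwnWrites: boolean;
};

// A snapshot is stale for an actor when it exceeds that actor's
// staleness budget, or when RYOW is required and the snapshot
// predates the actor's own last write.
function isStale(
  entry: CacheEntry,
  policy: Directives,
  actorLastWriteAt?: number,
  now: number = Date.now(),
): boolean {
  if (now - entry.fetchedAt > policy.maxStalenessMs) return true;
  if (
    policy.requireReadYourOwnWrites &&
    actorLastWriteAt !== undefined &&
    entry.fetchedAt < actorLastWriteAt
  ) {
    return true;
  }
  return false;
}
```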

Writes follow the same pattern—conditional, versioned, and deterministic.

When failures happen, we aim for predictable degradation, not brittle coupling. If an EMR is unreachable, the proxy fails fast on those fields but continues serving Zephyr-owned state flagged as “pending-sync.” Schema drift is quarantined, not guessed. We track per-actor coherence: p95 latency, stale ratio, invalidation lag. The key question isn’t “is the system up?” but “is each actor seeing a coherent world within the bounds it was promised?”
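The per-actor coherence metrics mentioned above (p95 latency, stale ratio) can be tracked with a small rolling aggregate per actor. A sketch; the in-memory store is illustrative, and production would use a real metrics library:

```typescript
type ReadSample = { latencyMs: number; wasStale: boolean };

// Rolling per-actor read samples.
const samples = new Map<string, ReadSample[]>();

function recordRead(actor: string, sample: ReadSample): void {
  const list = samples.get(actor) ?? [];
  list.push(sample);
  samples.set(actor, list);
}

// p95 read latency for one actor.
function p95Latency(actor: string): number {
  const sorted = (samples.get(actor) ?? [])
    .map((s) => s.latencyMs)
    .sort((a, b) => a - b);
  if (sorted.length === 0) return 0;
  return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
}

// Fraction of reads served beyond the actor's staleness budget.
function staleRatio(actor: string): number {
  const list = samples.get(actor) ?? [];
  if (list.length === 0) return 0;
  return list.filter((s) => s.wasStale).length / list.length;
}
```

Keying everything by actor is the point: a 2% stale ratio may be fine for payroll and an incident for voice, so one global number would hide exactly the violations that matter.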


What’s Next with AI: Self-Healing Data at the Edges

With Zephyr’s consistency layer in place — ownership, merging, conflict logic — there’s still plenty left to build. But today’s LLMs have opened a new set of possibilities. We’re beginning to explore how AI might help Zephyr repair itself: flagging drift, suggesting schema patches, or even filling gaps when systems fail. Below are a few directions that feel especially promising.

Schema Drift

EMRs often change their schemas: fields get dropped, renamed, or mutated. Those changes propagate silently unless caught. We’re building monitors that detect drift in incoming payloads and either propose patches (e.g. remapping fields or migration logic) or issue alerts. We're particularly excited about the research in LLM-Powered Proactive Data Systems (Zeighami et al., 2025), which argues systems should be proactive — rewriting user inputs, inputs’ structure, and query logic.
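A first-cut drift monitor can be as simple as diffing incoming payload keys against the fields a translator expects. A sketch; real detection would also compare types and value distributions, not just key names:

```typescript
type DriftReport = {
  missing: string[]; // expected fields absent from the payload
  unknown: string[]; // payload fields we have never mapped
};

// Compare one incoming payload against the field set a source
// translator expects from this integration.
function detectDrift(
  payload: Record<string, unknown>,
  expectedFields: ReadonlySet<string>,
): DriftReport {
  const seen = new Set(Object.keys(payload));
  return {
    missing: [...expectedFields].filter((f) => !seen.has(f)),
    unknown: [...seen].filter((f) => !expectedFields.has(f)),
  };
}
```

A non-empty report is what would feed the patch-proposal or alerting step: `missing` suggests a rename or removal upstream, `unknown` suggests a new field worth mapping.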

Data Anomaly & Correction

Data is messy. Records get duplicated, conflict with one another, or violate implicit rules. We plan to build validator models that flag suspect cases and suggest corrections. Examples: negative durations, timestamp mismatches, out-of-range values. A validator might output something like:

{
  "verdict": "anomaly",
  "field": "visit.duration",
  "reason": "duration = 0; clock-out missing or wrong",
  "suggestedFix": { "duration": "derive from timeEntries" },
  "confidence": 0.87
}

Depending on policy, we may block writes, stage reviews, or auto-correct high-confidence cases.
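That block/stage/auto-correct policy can be expressed as a confidence-threshold dispatch over the validator's verdict. A sketch; the thresholds are illustrative, not tuned production values:

```typescript
type Verdict = { verdict: "ok" | "anomaly"; confidence: number };
type Action = "allow" | "auto-correct" | "stage-review" | "block";

// Map a validator verdict to a write-path action.
// Thresholds here are illustrative.
function dispatch(v: Verdict): Action {
  if (v.verdict === "ok") return "allow";
  if (v.confidence >= 0.95) return "auto-correct"; // high confidence: fix automatically
  if (v.confidence >= 0.6) return "stage-review";  // medium: human in the loop
  return "block";                                   // low: refuse the write
}
```

Under these example thresholds, the 0.87-confidence anomaly shown above would be staged for human review rather than auto-corrected.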

Web Agents for UI Remediation

Sometimes APIs break, scrapers drift, or data hides behind UIs. In those cases, a headless browser agent controlled by an AI may log into the EMR portal, navigate UI flows, extract or patch data, and feed the result back through the same invariants Zephyr applies to every other source. We treat this as fallback repair, not the primary path.

const uiResult = await runAgent(directive);
if (!uiResult.visitTime) {
  // Use an LLM to infer the missing field from existing context.
  // Serialize the entries explicitly so the prompt doesn't get
  // "[object Object]" from default string coercion.
  const fix = await callLLM({
    prompt: `Given these time entries: ${JSON.stringify(record.timeEntries)}, infer the missing visitTime.`,
  });
  uiResult.visitTime = fix.visitTime;
}
// Feed the result back through Zephyr’s merge + ownership logic
normalized.visitTime = uiResult.visitTime;

If any of this sounds interesting, we’re hiring a Senior Engineer to join us and lead our Data Platform. Reach out to Daniel at daniel@zingage.com or visit our job board here.

The Zingage Way: Keep Grandma Out of the Hospital

Our Story

Victor and I never set out to build software for other software people. From the beginning, our goal was a generational company in the real economy - systems that move the world outside a browser tab. Victor had just exited Astorian, a marketplace that helped property managers find contractors. I had just watched my family scramble to find caregivers for my grandfather with dementia during COVID. We didn’t know yet that home care would be our industry, but we knew our ambition belonged where failure has consequences.

We reconnected in 2023 at South Park Commons and started exploring. Our first step was Zingage Perform, a layer on top of the EMR (electronic medical record) to automate communication, engagement, and rewards. At the time we assumed two things: full automation wasn’t yet feasible, and most agencies had foundations we could build on.

We were wrong. The work itself sets humans up to fail. Agencies live in a 24/7 cycle - 2 a.m. hospitalizations, daytime call queues, texts and emails piling up - while trying to coordinate care in systems built for records, billing, and compliance, not minute-to-minute staffing. These are good people doing heroic work inside constraints that make reliability overly dependent on brute force.

So, we stopped hedging and built what we always meant to build: Zingage Operator - coordination infrastructure that frees agencies from the back-office grind and makes care delivery dependable at scale. For a few months we even became schedulers ourselves to feel the weight of it and learn fast. That changed how we work. Every line of code now has a patient on the other end. A race condition isn’t an edge case - it’s a missed visit. A clumsy workflow isn’t an inconvenience - it’s a family crisis.

And now the curve has bent. Since launching post-pilot in August, four weeks of selling took us past seven figures in annual contracts, and we expect to 10x by year-end. Agencies aren’t just adopting; they’re telling us it’s changing their days. Laura Curry, who runs a veterans-focused CareBuilders agency in Kentucky, once bought cruise tickets with her husband and never used them - she was on call around the clock. When she saw Operator in action, she told us it was the first time in years she could put her phone down and know that her veterans would still get the care they needed.

That’s why we’re writing this now. We’ve shared these values internally, but with the team growing, the company at an inflection point, and the stakes higher than ever, it felt right to share them publicly too.


The Zingage Way

Right now, at this moment, there's a caregiver call out that will end in a hospitalization or worse. Families are drowning in chaos: spreadsheets, late-night panics, missed visits, caregivers churning at 80% per year. Zingage exists to end this. We're building the infrastructure so healthcare can happen in the home, so our parents can age with dignity, and so their children can live without sacrificing everything.

We are the coordination layer that makes home care automatic. That's the mission behind every line of code, every sprint, and every late night. We know that behind every bug fix is a family counting on us. Behind every feature is someone's parent waiting for care.

We move fast, bend convention, and take risks others won't. Our customers need us to be this way because their livelihoods and lives depend on it. We courageously build what others avoid, commit to deliver excellence, and uphold the integrity of our commitments.

Zingage may not be an easy mission to achieve, but it is an easy mission to champion. Ultimately, our values will determine whether we win.

Customers First

We are building so that every provider, caregiver, and family can live the life they always dreamed.

We build for the providers whose dream is to deliver excellent care to thousands of people without having to give up their own lives staring at screens and staying up late.

We ship so that caregivers who show up for their patients never have to do this alone.

We deliver so that the families who need care can trust they will always get care.

Serving our customers is not only a privilege, it is a duty that we pledge ourselves to fully.

When Zingage succeeds it means that a daughter sleeps easy knowing her bed-bound mother hundreds of miles away will get the care she needs.

It means a caregiver is supported when her patient suffers a stroke in the middle of a visit, instead of being so overwhelmed that she quits.

It means a provider can continue serving thousands of patients without unforeseen compliance hits shutting them down.

Tradeoff: we will prioritize customer impact over internal preferences or engineering elegance, even if it means cutting scope, scrapping work, or redoing something we personally liked.

Velocity

We move fast and we ship responsibly because we care. Our customers’ lives will not wait for us to reach perfect certainty nor will they tolerate careless mistakes. At Zingage we create velocity as a marathon of sprints, punctuated by bursts of intensity and recovery.

We focus on the inputs: work long, work hard, or work smart; pick what works for you, but pick something. Velocity isn't negotiable, but how you achieve it is.

We're pirates. We hired an actress off Craigslist to crash our first conference. We snuck into WellSky's event posing as customers. We hand-deliver donuts at 6am. We take the shots others won't because playing by the rules means families suffer.

We're pirates because this industry needs pirates. The treasure we're after isn't gold, it's every family sleeping soundly knowing care is handled. We burn the ships behind us because there's no retreat when lives are on the line.

In the end, all that matters is what we have done for the customers.

Tradeoff: we accept fatigue and messiness in bursts in order to move fast, but we also commit to cleanup cycles so we can keep sprinting again. If you want a steady, predictable pace, this isn’t the place.

Extreme Ownership

We take extreme ownership at Zingage. We don’t make excuses. We don’t blame anyone or anything. We take ownership of the problems and the solutions. We take ownership because it is the bedrock of trust, which is the lifeblood of success.

Ownership shows up when no one is looking. It appears when you choose to sweat the details to fix a bug no one else caught. It’s there when you go out of your way to teach a customer how to use our product. It’s present when you show up in person to listen and to build beside the customer.

Most importantly, extreme ownership is taking care of each other. There is no blame at Zingage and making excuses without providing solutions is intolerable. When something breaks, we don't point fingers, we make solutions. We not only accept but praise support, encouragement, and respect.

Extreme ownership is the manifestation of high agency. If you want to start a podcast then do it; if you want to understand a customer then call them; if you want to build a feature then ship it.

Tradeoff: we stomach the discomfort of eating shit when we make mistakes and we accept everyone's fallibility. Zingage is not a place you can hide nor is it a place to begrudge teammates who try and fail.

Our mission is worth every difficulty. With these values, we won't just succeed, we'll build a world where no call goes unanswered, no family drowns in chaos, and everyone can age with dignity at home.

It's (Still) Time to Build: The Case for Startups in 2025

A friend and I recently debated the meaning of work in the looming shadow of AGI. The premise was simple: if OpenAI - or any organization - achieves superintelligence, what's the point of doing anything at all?

In truth, I've had this conversation repeatedly with founder friends. Each new OpenAI release sparks awe and dread, steadily devouring startups conceived just months ago. The meme of startups reduced to mere ChatGPT wrappers feels painfully real. These discussions typically land us at two bleak conclusions: either join an AI lab to stay relevant or succumb to nihilism, lounging on Universal Basic Income in the supposed “post-scarcity” future. Advocates imagine humans pivoting gracefully toward art or leisure, but that vision feels patronizingly hollow.

Why does this scenario feel inevitable and limiting? Perhaps because we’ve mistakenly assumed that a single, centralized AGI - one supreme intelligence directing human affairs - is the optimal and natural outcome. Yet history challenges this assumption. Attempts at centralized planning, such as Mao’s Great Leap Forward or Lenin’s collectivization, repeatedly failed due to oversimplification of complex human systems. 

James C. Scott vividly illustrates this danger in Seeing Like a State. Colonial powers in Tanzania enforced monoculture farming, planting a single crop uniformly for maximum yield. Their "scientific" method disastrously ignored local wisdom. Indigenous farmers had traditionally practiced polyculture - planting multiple crops together. While seemingly inefficient and messy, polyculture safeguarded soil health, diversified risk, and allowed flexible responses to unpredictable conditions. The colonial approach, though theoretically optimized, proved rigid and catastrophically vulnerable.

The core misconception underlying singular AGI echoes this colonial mindset: the belief that superintelligence can - and inevitably should - become a digital god capable of making all decisions optimally. Yet real-world decision-making rarely offers neat solutions; it more closely resembles the messy moral complexity of the trolley problem. Intelligence alone, no matter how advanced, cannot dictate correct answers to inherently subjective moral dilemmas. Thus, we must clearly separate intelligence - the neutral ability to solve problems - from agency (authority to act) and values (the moral principles guiding actions). 

Intelligence, in its purest form, involves computational power, data processing, and predictive modeling capabilities. It is fundamentally about pattern recognition, scenario forecasting, and logical analysis - essentially neutral skills that can enhance decision-making but do not inherently carry ethical weight or moral guidance. Agency, on the other hand, concerns who or what has the authority and accountability to act upon the outputs of this intelligence. Agency requires legitimacy, trust, and transparency - qualities that purely intelligent systems alone cannot ensure. Values represent the most human dimension of all; they encompass the moral frameworks, cultural contexts, and ethical considerations that ultimately guide decisions.

Today, systems like ChatGPT already display overarching personalities and value frameworks, intentionally designed by organizations like OpenAI. While this approach helps in establishing baseline safety and ethical guardrails, it presents two significant issues. First, these predetermined values might not fully align with the diverse perspectives, cultural contexts, and nuanced ethical landscapes of all users. Second, embedding a singular value system risks oversimplifying complex moral decisions, potentially resulting in outcomes disconnected from local realities or community-specific priorities. Therefore, a more robust approach would empower users and communities to tailor and tune these AI personalities and values to their specific needs and ethical standards, ensuring greater relevance, acceptance, and genuine alignment.

Startups uniquely embody this critical separation of intelligence, agency, and values. They deploy intelligence as technological infrastructure - powerful yet neutral tools capable of addressing specific problems. They restore agency by enabling local communities and users to actively choose, adapt, or reject these tools based on their distinct circumstances. Most crucially, startups allow values to remain community-defined and responsive to context, rather than universally imposed. For example, a rural healthcare clinic might adopt AI specifically tuned for resource-constrained environments, emphasizing preventive care aligned with local priorities. An urban hospital might choose a different AI optimized for managing high patient volumes and specialist coordination. Each community retains genuine agency, reinforcing accountability and achieving true alignment between technological capabilities and diverse human values.

This approach mirrors how governance functions at its best: overarching federal policies exist alongside state laws, city ordinances, trade associations, and grassroots organizations. While centralized institutions like OpenAI attempt broad alignment efforts analogous to federal policy, startups act as local policymakers - crafting tailored, bottom-up solutions that reflect community-specific needs and values.

Such decentralization doesn’t just enable startups to gain initial traction - it positions them for sustained relevance. Startups rapidly build trust through close alignment with local communities, steadily compounding their advantages by integrating powerful open-source models like Llama and DeepSeek with specialized expertise, proprietary data loops, and deep relationships. These assets form an enduring edge, similar to how local clinicians remain indispensable because their practical insights and patient relationships withstand technological disruption.

Ultimately, I’m not advocating for decentralized intelligence and the startups that embody it out of nostalgia or a luddite fear of our soon-to-come AI overlords. Sure, spending retirement as mediocre painters surviving on UBI sounds grimly amusing. But the real danger is more serious: placing all our trust in a single, omnipotent AI planner whose perfectly rational decisions could lead us straight off a cliff. Startups offer something far better - a morally diverse ecosystem of intelligences, built from the ground up by real communities. If history teaches us anything, it’s that pluralism - not centralization - is our strongest safeguard for human liberty. So yes, despite the looming shadow of AGI, it’s (still) time to build.

Fixing Error Handling in TypeScript: Why a Standard Result Type Isn’t Enough

Error handling in JavaScript and TypeScript is notoriously challenging. Unlike many modern statically-typed languages (Rust, Swift, Kotlin, Go), TypeScript lacks built-in ways to track error types, often leading to fragile, opaque error-handling code. Although third-party Result type implementations have emerged to fill this gap, none have gained widespread adoption—mainly due to cumbersome syntax, complicated interactions with existing error-handling patterns, and unintuitive APIs.

In his latest blog post, Ethan Resnick explores these challenges, critically assesses current solutions, and proposes an improved Result type designed specifically for TypeScript. By aligning closely with the familiar Promise API, reducing boilerplate, and seamlessly integrating with async workflows, Ethan offers a practical, incremental path toward clearer, safer error handling. Read the full post to learn how to rethink your approach to robust error handling in TypeScript—and why simply porting a standard Result pattern isn’t enough.

Read full article here: https://medium.com/@ethanresnick/fixing-error-handling-in-typescript-340873a31ecd
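For readers who haven't used the pattern, a Result type moves the error channel into a function's signature, so the type checker forces callers to handle failure. A minimal sketch of the general idea (not Ethan's proposed design):

```typescript
// A discriminated union: every call site must check `ok` before
// touching `value`, so errors can't be silently ignored.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Returns a Result instead of throwing.
function parsePort(raw: string): Result<number, string> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}
```

Ethan's post goes much further than this, covering ergonomics, async integration, and interop with thrown exceptions.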

Zingage IDs: Engineering Secure and Scalable Multi-Tenancy

As Zingage rapidly expanded to hundreds of customers nationwide, our engineering team faced increasingly complex technical challenges: robust data isolation, seamless scaling, and high availability during intensive operations. Standard UUIDs quickly proved insufficient, exposing several critical issues:

  • Data Leakage Risk: Forgetting to filter queries by businessId could expose sensitive data across businesses.
  • Complex Partitioning: Lack of inherent business context made data partitioning challenging and inefficient.
  • Ambiguous Entity Scope: Without clear entity boundaries, managing data across multiple tenants became error-prone.

The Limitations of Traditional UUIDs

Consider this problematic scenario:

import { v4 as uuidv4 } from 'uuid';

const profileId = uuidv4();

// Risky query: no business context in the filter
const profile = await db.profiles.findOne({ id: profileId });
// If profileId belongs to another tenant, this silently returns their data

This approach, although common, risks critical data leaks in multi-tenant environments.

Introducing Zingage IDs: A Robust Multi-Tenant Solution

To address these challenges, we designed a structured UUIDv8-based identifier system, embedding clear business context and distinct entity scopes directly within the IDs:

  • Business IDs (000 prefix): Represent unique business entities.
  • Business-scoped Entity IDs (1 prefix): Clearly tied to specific businesses, embedding business identifiers.
  • Cross-business Entity IDs (001 prefix): Explicitly defined to represent resources shared across businesses.

Code Example

Here's how this looks in practice:

import { generateBusinessId, generateScopedId } from 'zingage-id';

const businessId = generateBusinessId();
const profileId = generateScopedId(businessId, 'PROFILE');

// Secure query with embedded business context
const profile = await db.profiles.findOne({ id: profileId });
// Built-in safeguards ensure correct business scope, preventing leaks
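The three scopes above form a prefix-free code, so an ID's scope can be recovered from its leading bits. The following decoder is an illustration only: it assumes the scope prefix occupies the UUID's top bits ("1" = business-scoped, "000" = business, "001" = cross-business), which is not a statement of the actual Zingage bit layout.

```typescript
// Illustrative scope decoder. Assumption: the scope prefix sits in the
// UUID's top bits, so the first hex digit is enough to classify an ID.
type IdScope = 'BUSINESS' | 'BUSINESS_SCOPED' | 'CROSS_BUSINESS';

function classifyId(uuid: string): IdScope {
  // The first hex digit carries the top four bits of the UUID
  const topNibble = parseInt(uuid.replace(/-/g, '')[0], 16);
  if (topNibble & 0b1000) return 'BUSINESS_SCOPED'; // prefix "1"
  const topThree = topNibble >> 1;
  if (topThree === 0b000) return 'BUSINESS';        // prefix "000"
  if (topThree === 0b001) return 'CROSS_BUSINESS';  // prefix "001"
  throw new Error(`Unknown scope prefix in ${uuid}`);
}
```

Because the prefixes are prefix-free ("1", "000", "001" never shadow each other), classification is unambiguous without consulting the database.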

Advanced Collision Resistance and Debugging Capabilities

Zingage IDs leverage structured components—42-bit timestamps, 10-bit entity type hints, and opaque random data—to provide strong collision resistance and powerful debugging:

  • Collision Resistance: By combining precise timestamps with robust random bits, we keep collision risk low even under high-load scenarios. For example, generating up to 100,000 IDs per day yields an annual collision probability of only ~7% under highly conservative assumptions.
  • Debugging Efficiency: Entity type hints embedded within IDs enable rapid issue identification during debugging, without imposing rigid constraints. This ensures flexibility for future entity restructuring or data migration tasks.
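That collision figure can be sanity-checked with the standard birthday bound. The 53-bit effective random budget below is our back-of-envelope assumption, chosen because it happens to reproduce the quoted ~7%; it is not a statement of the actual Zingage layout.

```typescript
// Birthday-bound approximation: probability of at least one collision when
// drawing n IDs uniformly from a space of 2^idSpaceBits values.
// p ≈ 1 - exp(-n(n-1) / (2N)), accurate while n is far below N.
function collisionProbability(n: number, idSpaceBits: number): number {
  const N = Math.pow(2, idSpaceBits);
  return 1 - Math.exp((-n * (n - 1)) / (2 * N));
}

// One year at 100,000 IDs/day against an assumed 53 effective random bits
const idsPerYear = 100_000 * 365;
console.log(collisionProbability(idsPerYear, 53)); // ≈ 0.071
```

Every extra random bit roughly halves the exponent, so even small changes to the random budget move this probability dramatically.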

Built-In Database-Level Security Enforcement

Our ID scheme integrates seamlessly with database-level Row-Level Security (RLS) policies, providing automatic, foolproof data isolation:

-- Enforce strict business context at the database level
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;

CREATE POLICY business_scope_policy ON profiles
USING (extract_business_id(id) = current_setting('app.current_business_id')::uuid);

With this policy, database queries automatically apply business scoping, significantly reducing the risk of accidental data exposure.

Middleware further enhances security by automatically setting the business context at the request level.

// Middleware example
app.use((req, res, next) => {
  const businessId = extractBusinessIdFromRequest(req);
  db.setBusinessContext(businessId);
  next();
});

// Database query implicitly scoped
const profile = await db.profiles.findOne({ id: profileId });
// Automatically executes as:
// SELECT * FROM profiles WHERE id = :profileId AND business_id = :activeBusinessId
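A single mutable context is race-prone when Node interleaves concurrent requests across awaits. One way such middleware could keep the context per-request is Node's built-in AsyncLocalStorage; the helper names here are illustrative, not Zingage's actual internals.

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Per-request business context that survives awaits without a shared global
const businessContext = new AsyncLocalStorage<string>();

function withBusinessContext<T>(businessId: string, fn: () => T): T {
  // Everything called (sync or async) inside fn sees this businessId
  return businessContext.run(businessId, fn);
}

function currentBusinessId(): string {
  const id = businessContext.getStore();
  if (!id) throw new Error('No business context set for this request');
  return id;
}

// Express-style usage (hypothetical request helper):
// app.use((req, res, next) =>
//   withBusinessContext(extractBusinessIdFromRequest(req), next));
```

Queries issued inside the callback can then read `currentBusinessId()` safely, even when many requests are in flight at once.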

Simplified and Efficient Data Partitioning

Explicitly embedding business identifiers simplifies data partitioning dramatically:

  • Business-scoped Entities: Directly embed business IDs, enabling straightforward partitioning and isolation per business.
  • Cross-business Entities: Clearly separated and replicated across partitions to ensure consistency and accessibility.

Practical partitioning example:

-- Partitioned per business using the ID's embedded business context
CREATE TABLE profiles (
  id UUID PRIMARY KEY,
  ...
) PARTITION BY HASH (extract_business_id(id));

-- Replicated across partitions due to cross-business applicability
CREATE TABLE workflow_templates (
  id UUID PRIMARY KEY,
  ...
);

This explicit delineation dramatically enhances scalability, performance, and operational efficiency.

Key Benefits of the Zingage ID Scheme

  • Robust Security: Intrinsic business isolation prevents accidental cross-tenant data breaches.
  • Scalable Architecture: Simplified, efficient partitioning supports effortless horizontal scaling.
  • Improved Developer Experience: Reduced manual context management and minimized risk of oversight.

Design at Zingage

Design at Zingage is not a support function. It is a way of seeing, reasoning, and building systems that actually work for the people delivering care.

We are not just here to ship features. We are here to define what it means to work alongside AI in one of the most human industries in the world.


The Problem: Designing for AI in a Trust-Based Industry

In home care, people don’t use software because they want to. They use it because they have to.

Schedulers and caregivers operate under constant stress: backlogs of patient visits, late cancellations, last-minute reassignments. Their current tools make this worse—clunky, slow, opaque.

Now, introduce AI. Software that doesn’t just coordinate schedules but acts on its own. Agents that message caregivers, reassign visits, and resolve gaps without human input. This is powerful. But it also creates a new kind of risk:

What happens when something goes wrong, and no one knows why?

Traditional UI patterns break down here. The job of design is no longer just to simplify a workflow—it’s to help users build trust in a system that behaves more like a colleague than a tool.


The Opportunity: Interfaces for Delegation, Not Just Execution

Our goal is not to make care schedulers faster typists. It’s to help them delegate work to intelligent agents. But delegation requires:

  • Knowing what the agent is doing
  • Understanding why it did something
  • Stepping in when something goes off track

In other words: clarity, not control. We are designing for a world where the default state of software is action, not waiting. Where the system moves first, and the human refines it.

This requires new UI patterns, new metaphors, and new forms of accountability. Most importantly, it requires a deep understanding of the users who will live in this hybrid loop.


Core Design Principles

1. Supervision Over Control

Users shouldn’t be expected to micromanage the AI. The interface must:

  • Show intent, not implementation
  • Offer intuitive paths for feedback and override
  • Make risk visible without creating fear

2. Asynchronous by Default

Work doesn’t happen in one sitting. Schedulers reach out, wait for replies, reschedule, escalate, wait again. Our interfaces must:

  • Support interrupted workflows
  • Track state across time and agents
  • Handle reversals and replans with grace

3. Mental Models, Not Just Screens

We don’t design pages. We design systems of meaning:

  • What is a plan?
  • What does it mean to delegate something?
  • When has a task truly been resolved?

These are design questions.

4. Internal Surfaces Matter

We take Sarah Tavel’s advice seriously: "It’s okay to have a human in the loop. It’s better if it’s your human, not your customer's."

Our internal ops team supervises AI decisions, resolves edge cases, and monitors quality. These tools require the same care as the external product. They are the levers that make AI feel dependable.

5. Make Work Delightful

Design is emotional. Even more so in a field like home care.

We use reward loops, milestones, social reinforcement, and small moments of joy to make invisible progress feel tangible. This is not gamification. It’s respect—for people whose work is often ignored.


Why It Matters

Just as early mobile apps mimicked real-world textures (skeuomorphism) before evolving into native mobile patterns, AI will go through its own transition. Right now, AI systems need scaffolding. They need explanation, supervision, and context. Over time, they will disappear into the background.

Our design work must accelerate that curve. We believe the design challenges in AI are not about novelty. They are about sequencing trust. They are about making delegation possible in domains where mistakes have real consequences.

Sound interesting? We're looking for the next Diego Zaks to join us as our Founding Product Designer :)
