Zingage Engineering

Applied AI, data engineering, and anthropomorphic software to scale the Bedrock Economy

The Zingage Way: Keep Grandma Out of the Hospital

Our Story

Victor and I never set out to build software for other software people. From the beginning, our goal was a generational company in the real economy - systems that move the world outside a browser tab. Victor had just exited Astorian, a marketplace that helped property managers find contractors. I had just watched my family scramble to find caregivers for my grandfather with dementia during COVID. We didn’t know yet that home care would be our industry, but we knew our ambition belonged where failure has consequences.

We reconnected in 2023 at South Park Commons and started exploring. Our first step was Zingage Perform, a layer on top of the EMR (electronic medical record) to automate communication, engagement, and rewards. At the time we assumed two things: full automation wasn’t yet feasible, and most agencies had foundations we could build on.

We were wrong. The work itself sets humans up to fail. Agencies live in a 24/7 cycle - 2 a.m. hospitalizations, daytime call queues, texts and emails piling up - while trying to coordinate care in systems built for records, billing, and compliance, not minute-to-minute staffing. These are good people doing heroic work inside constraints that make reliability overly dependent on brute force.

So, we stopped hedging and built what we always meant to build: Zingage Operator - coordination infrastructure that frees agencies from the back-office grind and makes care delivery dependable at scale. For a few months we even became schedulers ourselves to feel the weight of it and learn fast. That changed how we work. Every line of code now has a patient on the other end. A race condition isn’t an edge case - it’s a missed visit. A clumsy workflow isn’t an inconvenience - it’s a family crisis.

And now the curve has bent. Since launching post-pilot in August, four weeks of selling took us past seven figures in annual contracts, and we expect to 10x by year-end. Agencies aren’t just adopting; they’re telling us it’s changing their days. Laura Curry, who runs a veterans-focused CareBuilders agency in Kentucky, once bought cruise tickets with her husband and never used them - she was on call around the clock. When she saw Operator in action, she told us it was the first time in years she could put her phone down and know that her veterans would still get the care they needed.

That’s why we’re writing this now. We’ve shared these values internally, but with the team growing, the company at an inflection point, and the stakes higher than ever, it felt right to share them publicly too.


The Zingage Way

Right now, at this moment, there's a caregiver call-out that will end in a hospitalization or worse. Families are drowning in chaos: spreadsheets, late-night panics, missed visits, caregivers churning at 80% per year. Zingage exists to end this. We're building the infrastructure so healthcare can happen in the home, so our parents can age with dignity, and so their children can live without sacrificing everything.

We are the coordination layer that makes home care automatic. That's the mission behind every line of code, every sprint, and every late night. We know that behind every bug fix is a family counting on us. Behind every feature is someone's parent waiting for care.

We move fast, bend convention, and take risks others won't. Our customers need us to be this way because their livelihoods and lives depend on it. We courageously build what others avoid, commit to deliver excellence, and uphold the integrity of our commitments.

Zingage may not be an easy mission to achieve, but it is an easy mission to champion. Ultimately, our values will determine whether we win.

Customers First

We are building so that every provider, caregiver, and family can live the life they always dreamed of.

We build for the providers whose dream is to deliver excellent care to thousands of people without having to give up their own lives staring at screens and staying up late.

We ship so that caregivers who show up for their patients never have to do this alone.

We deliver so that the families who need care can trust they will always get care.

Serving our customers is not only a privilege, it is a duty that we pledge ourselves to fully.

When Zingage succeeds it means that a daughter sleeps easy knowing her bed-bound mother hundreds of miles away will get the care she needs.

It means a caregiver is supported when her patient suffers a stroke in the middle of a visit, instead of being so overwhelmed that she quits.

It means a provider can continue serving thousands of patients without unforeseen compliance hits shutting them down.

Tradeoff: we will prioritize customer impact over internal preferences or engineering elegance, even if it means cutting scope, scrapping work, or redoing something we personally liked.

Velocity

We move fast and we ship responsibly because we care. Our customers’ lives will not wait for us to reach perfect certainty, nor will they tolerate careless mistakes. At Zingage, velocity is a marathon of sprints, punctuated by bursts of intensity and recovery.

We focus on the inputs: work long, work hard, or work smart; pick what works for you, but pick something. Velocity isn't negotiable, but how you achieve it is.

We're pirates. We hired an actress off Craigslist to crash our first conference. We snuck into WellSky's event posing as customers. We hand-deliver donuts at 6am. We take the shots others won't because playing by the rules means families suffer.

We're pirates because this industry needs pirates. The treasure we're after isn't gold, it's every family sleeping soundly knowing care is handled. We burn the ships behind us because there's no retreat when lives are on the line.

In the end, all that matters is what we have done for the customers.

Tradeoff: we accept fatigue and messiness in bursts in order to move fast, but we also commit to cleanup cycles so we can keep sprinting again. If you want a steady, predictable pace, this isn’t the place.

Extreme Ownership

We take extreme ownership at Zingage. We don’t make excuses. We don’t blame anyone or anything. We take ownership of the problems and the solutions. We take ownership because it is the bedrock of trust, which is the lifeblood of success.

Ownership shows up when no one is looking. It appears when you choose to sweat the details to fix a bug no one else caught. It’s there when you go out of your way to teach a customer how to use our product. It’s present when you show up in person to listen and to build beside the customer.

Most importantly, extreme ownership means taking care of each other. There is no blame at Zingage, and making excuses without offering solutions is intolerable. When something breaks, we don't point fingers; we build solutions. We not only accept support, encouragement, and respect; we praise them.

Extreme ownership is the manifestation of high agency. If you want to start a podcast then do it; if you want to understand a customer then call them; if you want to build a feature then ship it.

Tradeoff: we stomach the discomfort of eating shit when we make mistakes and we accept everyone's fallibility. Zingage is not a place you can hide nor is it a place to begrudge teammates who try and fail.

Our mission is worth every difficulty. With these values, we won't just succeed, we'll build a world where no call goes unanswered, no family drowns in chaos, and everyone can age with dignity at home.

It's (Still) Time to Build: The Case for Startups in 2025

A friend and I recently debated the meaning of work in the looming shadow of AGI. The premise was simple: if OpenAI - or any organization - achieves superintelligence, what's the point of doing anything at all?

In truth, I've had this conversation repeatedly with founder friends. Each new OpenAI release sparks awe and dread, steadily devouring startups conceived just months ago. The meme of startups reduced to mere ChatGPT wrappers feels painfully real. These discussions typically land us at two bleak conclusions: either join an AI lab to stay relevant or succumb to nihilism, lounging on Universal Basic Income in the supposed “post-scarcity” future. Advocates imagine humans pivoting gracefully toward art or leisure, but that vision feels patronizingly hollow.

Why does this scenario feel inevitable and limiting? Perhaps because we’ve mistakenly assumed that a single, centralized AGI - one supreme intelligence directing human affairs - is the optimal and natural outcome. Yet history challenges this assumption. Attempts at centralized planning, such as Mao’s Great Leap Forward or Stalin’s forced collectivization, repeatedly failed because they oversimplified complex human systems.

James C. Scott vividly illustrates this danger in Seeing Like a State. Colonial powers in Tanzania enforced monoculture farming, planting a single crop uniformly for maximum yield. Their "scientific" method disastrously ignored local wisdom. Indigenous farmers had traditionally practiced polyculture - planting multiple crops together. While seemingly inefficient and messy, polyculture safeguarded soil health, diversified risk, and allowed flexible responses to unpredictable conditions. The colonial approach, though theoretically optimized, proved rigid and catastrophically vulnerable.

The core misconception underlying singular AGI echoes this colonial mindset: the belief that superintelligence can - and inevitably should - become a digital god capable of making all decisions optimally. Yet real-world decision-making rarely offers neat solutions; it more closely resembles the messy moral complexity of the trolley problem. Intelligence alone, no matter how advanced, cannot dictate correct answers to inherently subjective moral dilemmas. Thus, we must clearly separate intelligence - the neutral ability to solve problems - from agency (authority to act) and values (the moral principles guiding actions). 

Intelligence, in its purest form, involves computational power, data processing, and predictive modeling capabilities. It is fundamentally about pattern recognition, scenario forecasting, and logical analysis - essentially neutral skills that can enhance decision-making but do not inherently carry ethical weight or moral guidance. Agency, on the other hand, concerns who or what has the authority and accountability to act upon the outputs of this intelligence. Agency requires legitimacy, trust, and transparency - qualities that purely intelligent systems alone cannot ensure. Values represent the most human dimension of all; they encompass the moral frameworks, cultural contexts, and ethical considerations that ultimately guide decisions.

Today, systems like ChatGPT already display overarching personalities and value frameworks, intentionally designed by organizations like OpenAI. While this approach helps in establishing baseline safety and ethical guardrails, it presents two significant issues. First, these predetermined values might not fully align with the diverse perspectives, cultural contexts, and nuanced ethical landscapes of all users. Second, embedding a singular value system risks oversimplifying complex moral decisions, potentially resulting in outcomes disconnected from local realities or community-specific priorities. Therefore, a more robust approach would empower users and communities to tailor and tune these AI personalities and values to their specific needs and ethical standards, ensuring greater relevance, acceptance, and genuine alignment.

Startups uniquely embody this critical separation of intelligence, agency, and values. They deploy intelligence as technological infrastructure - powerful yet neutral tools capable of addressing specific problems. They restore agency by enabling local communities and users to actively choose, adapt, or reject these tools based on their distinct circumstances. Most crucially, startups allow values to remain community-defined and responsive to context, rather than universally imposed. For example, a rural healthcare clinic might adopt AI specifically tuned for resource-constrained environments, emphasizing preventive care aligned with local priorities. An urban hospital might choose a different AI optimized for managing high patient volumes and specialist coordination. Each community retains genuine agency, reinforcing accountability and achieving true alignment between technological capabilities and diverse human values.

This approach mirrors how governance functions at its best: overarching federal policies exist alongside state laws, city ordinances, trade associations, and grassroots organizations. While centralized institutions like OpenAI attempt broad alignment efforts analogous to federal policy, startups act as local policymakers - crafting tailored, bottom-up solutions that reflect community-specific needs and values.

Such decentralization doesn’t just enable startups to gain initial traction - it positions them for sustained relevance. Startups rapidly build trust through close alignment with local communities, steadily compounding their advantages by integrating powerful open-source models like Llama and DeepSeek with specialized expertise, proprietary data loops, and deep relationships. These assets form an enduring edge, similar to how local clinicians remain indispensable because their practical insights and patient relationships withstand technological disruption.

Ultimately, I’m not advocating for decentralized intelligence and the startups that embody it out of nostalgia or a Luddite fear of our soon-to-come AI overlords. Sure, spending retirement as mediocre painters surviving on UBI sounds grimly amusing. But the real danger is more serious: placing all our trust in a single, omnipotent AI planner whose perfectly rational decisions could lead us straight off a cliff. Startups offer something far better - a morally diverse ecosystem of intelligences, built from the ground up by real communities. If history teaches us anything, it’s that pluralism - not centralization - is our strongest safeguard for human liberty. So yes, despite the looming shadow of AGI, it’s (still) time to build.

Fixing Error Handling in TypeScript: Why a Standard Result Type Isn’t Enough

Error handling in JavaScript and TypeScript is notoriously challenging. Unlike many modern statically typed languages (Rust, Swift, Kotlin, Go), TypeScript lacks built-in ways to track error types, often leading to fragile, opaque error-handling code. Although third-party Result type implementations have emerged to fill this gap, none have gained widespread adoption—mainly due to cumbersome syntax, complicated interactions with existing error-handling patterns, and unintuitive APIs.

In his latest blog post, Ethan Resnick explores these challenges, critically assesses current solutions, and proposes an improved Result type designed specifically for TypeScript. By aligning closely with the familiar Promise API, reducing boilerplate, and seamlessly integrating with async workflows, Ethan offers a practical, incremental path toward clearer, safer error handling. Read the full post to learn how to rethink your approach to robust error handling in TypeScript—and why simply porting a standard Result pattern isn’t enough.

Read full article here: https://medium.com/@ethanresnick/fixing-error-handling-in-typescript-340873a31ecd

Zingage IDs: Engineering Secure and Scalable Multi-Tenancy

As Zingage rapidly expanded to hundreds of customers nationwide, our engineering team faced increasingly complex technical challenges: robust data isolation, seamless scaling, and high availability during intensive operations. Standard UUIDs quickly proved insufficient, exposing several critical issues:

  • Data Leakage Risk: Forgetting to filter queries by businessId could expose sensitive data across businesses.
  • Complex Partitioning: Lack of inherent business context made data partitioning challenging and inefficient.
  • Ambiguous Entity Scope: Without clear entity boundaries, managing data across multiple tenants became error-prone.

The Limitations of Traditional UUIDs

Consider this problematic scenario:

import { v4 as uuidv4 } from 'uuid';

const profileId = uuidv4();

// Risky query (business context omitted)
const profile = await db.profiles.findOne({ id: profileId });
// Potentially exposes data from another business inadvertently

This approach, although common, risks critical data leaks in multi-tenant environments.

Introducing Zingage IDs: A Robust Multi-Tenant Solution

To address these challenges, we designed a structured UUIDv8-based identifier system, embedding clear business context and distinct entity scopes directly within the IDs:

  • Business IDs (000 prefix): Represent unique business entities.
  • Business-scoped Entity IDs (1 prefix): Clearly tied to specific businesses, embedding business identifiers.
  • Cross-business Entity IDs (001 prefix): Explicitly defined to represent resources shared across businesses.

Code Example

Here's how this looks in practice:

import { generateBusinessId, generateScopedId } from 'zingage-id';

const businessId = generateBusinessId();
const profileId = generateScopedId(businessId, 'PROFILE');

// Secure query with embedded business context
const profile = await db.profiles.findOne({ id: profileId });
// Built-in safeguards ensure correct business scope, preventing leaks

Advanced Collision Resistance and Debugging Capabilities

Zingage IDs leverage structured components—42-bit timestamps, 10-bit entity type hints, and opaque random data—to provide strong collision resistance and powerful debugging:

  • Collision Resistance: Combining precise timestamps with robust random bits drastically lowers collision risk, even under high load. For example, generating up to 100,000 IDs per day yields an annual collision probability of roughly 7% even under highly conservative assumptions.
  • Debugging Efficiency: Entity type hints embedded within IDs enable rapid issue identification during debugging, without imposing rigid constraints. This ensures flexibility for future entity restructuring or data migration tasks.
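
Because the prefixes above form a prefix-free code (000, 001, 1), an ID's scope can be recovered from its leading bits alone. Here is a hedged sketch of that classification, assuming the prefix sits in the most significant bits of the UUID; the real Zingage bit layout may differ:

```typescript
type IdScope = 'BUSINESS' | 'BUSINESS_SCOPED' | 'CROSS_BUSINESS';

// Classify a UUID's tenancy scope from its top bits.
// Assumed (illustrative) mapping: 1 = business-scoped entity,
// 000 = business ID, 001 = cross-business entity.
function classifyScope(uuid: string): IdScope {
  // The first hex character carries the four most significant bits.
  const topNibble = parseInt(uuid.replace(/-/g, '').charAt(0), 16);
  if (Number.isNaN(topNibble)) throw new Error(`invalid UUID: ${uuid}`);
  if (topNibble >= 0b1000) return 'BUSINESS_SCOPED';        // prefix 1
  if ((topNibble >> 1) === 0b001) return 'CROSS_BUSINESS';  // prefix 001
  if ((topNibble >> 1) === 0b000) return 'BUSINESS';        // prefix 000
  throw new Error(`unrecognized scope prefix in ${uuid}`);  // unused 01x space
}
```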

Built-In Database-Level Security Enforcement

Our ID scheme integrates seamlessly with database-level Row-Level Security (RLS) policies, providing automatic, foolproof data isolation:

-- Enforce strict business context at the database level
CREATE POLICY business_scope_policy ON profiles
USING (extract_business_id(id) = current_setting('app.current_business_id')::uuid);

With this policy, database queries automatically apply business scoping, significantly reducing the risk of accidental data exposure.

Middleware further enhances security by automatically setting the business context at the request level.

// Middleware example (Express-style)
app.use((req, res, next) => {
  const businessId = extractBusinessIdFromRequest(req);
  db.setBusinessContext(businessId); // sets app.current_business_id for RLS
  next();
});

// Database query implicitly scoped
const profile = await db.profiles.findOne({ id: profileId });
// Automatically executes as:
// SELECT * FROM profiles WHERE id = :profileId AND business_id = :activeBusinessId

Simplified and Efficient Data Partitioning

Explicitly embedding business identifiers simplifies data partitioning dramatically:

  • Business-scoped Entities: Directly embed business IDs, enabling straightforward partitioning and isolation per business.
  • Cross-business Entities: Clearly separated and replicated across partitions to ensure consistency and accessibility.

Practical partitioning example:

CREATE TABLE profiles (
  id UUID PRIMARY KEY,
  ...
) PARTITION BY HASH (extract_business_id(id));

CREATE TABLE workflow_templates (
  id UUID PRIMARY KEY,
  ...
); -- Replicated across partitions due to cross-business applicability

This explicit delineation dramatically enhances scalability, performance, and operational efficiency.

Key Benefits of the Zingage ID Scheme

  • Robust Security: Intrinsic business isolation prevents accidental cross-tenant data breaches.
  • Scalable Architecture: Simplified, efficient partitioning supports effortless horizontal scaling.
  • Improved Developer Experience: Reduced manual context management and minimized risk of oversight.

Design at Zingage

Design at Zingage is not a support function. It is a way of seeing, reasoning, and building systems that actually work for the people delivering care.

We are not just here to ship features. We are here to define what it means to work alongside AI in one of the most human industries in the world.


The Problem: Designing for AI in a Trust-Based Industry

In home care, people don’t use software because they want to. They use it because they have to.

Schedulers and caregivers operate under constant stress: backlogs of patient visits, late cancellations, last-minute reassignments. Their current tools make this worse—clunky, slow, opaque.

Now, introduce AI. Software that doesn’t just coordinate schedules but acts on its own. Agents that message caregivers, reassign visits, and resolve gaps without human input. This is powerful. But it also creates a new kind of risk:

What happens when something goes wrong, and no one knows why?

Traditional UI patterns break down here. The job of design is no longer just to simplify a workflow—it’s to help users build trust in a system that behaves more like a colleague than a tool.


The Opportunity: Interfaces for Delegation, Not Just Execution

Our goal is not to make care schedulers faster typists. It’s to help them delegate work to intelligent agents. But delegation requires:

  • Knowing what the agent is doing
  • Understanding why it did something
  • Stepping in when something goes off track

In other words: clarity, not control. We are designing for a world where the default state of software is action, not waiting. Where the system moves first, and the human refines it.

This requires new UI patterns, new metaphors, and new forms of accountability. Most importantly, it requires a deep understanding of the users who will live in this hybrid loop.


Core Design Principles

1. Supervision Over Control

Users shouldn’t be expected to micromanage the AI. The interface must:

  • Show intent, not implementation
  • Offer intuitive paths for feedback and override
  • Make risk visible without creating fear

2. Asynchronous by Default

Work doesn’t happen in one sitting. Schedulers reach out, wait for replies, reschedule, escalate, wait again. Our interfaces must:

  • Support interrupted workflows
  • Track state across time and agents
  • Handle reversals and replans with grace

3. Mental Models, Not Just Screens

We don’t design pages. We design systems of meaning:

  • What is a plan?
  • What does it mean to delegate something?
  • When has a task truly been resolved?

These are design questions.

4. Internal Surfaces Matter

We take Sarah Tavel’s advice seriously: "It’s okay to have a human in the loop. It’s better if it’s your human, not your customer's."

Our internal ops team supervises AI decisions, resolves edge cases, and monitors quality. These tools require the same care as the external product. They are the levers that make AI feel dependable.

5. Make Work Delightful

Design is emotional. Even more so in a field like home care.

We use reward loops, milestones, social reinforcement, and small moments of joy to make invisible progress feel tangible. This is not gamification. It’s respect—for people whose work is often ignored.


Why It Matters

Just as early mobile apps mimicked real-world textures (skeuomorphism) before evolving into native mobile patterns, AI will go through its own transition. Right now, AI systems need scaffolding. They need explanation, supervision, and context. Over time, they will disappear into the background.

Our design work must accelerate that curve. We believe the design challenges in AI are not about novelty. They are about sequencing trust. They are about making delegation possible in domains where mistakes have real consequences.

Sound interesting? We're looking for the next Diego Zaks to join us as our Founding Product Designer :)

Request for Engineers: Rethinking Data Infrastructure for Healthcare AI

At Zingage, we’re building AI-powered systems to automate the critical back-office operations of healthcare providers. Our goal this year is ambitious: scale from $X million to $XX million ARR by delivering intelligent staffing and scheduling agents to home care agencies nationwide.

We’ve found a powerful wedge: integrating deeply with Electronic Medical Records (EMRs). Today, we’re connected with over 300 healthcare sites, representing thousands of caregivers and patients. However, our growth has exposed significant foundational gaps in our data infrastructure, and we’re looking for exceptional engineers to help us solve them.

The Problem: Healthcare Data is Messy

Our competitive advantage lies in seamless EMR integrations—but EMR data is notoriously fragmented and unreliable:

  • Multiple Integration Methods: Today, we integrate with EMRs through APIs, file dumps, and RPA. Each method brings a different set of challenges and can introduce issues like delayed responses, unpredictable changes, inconsistent schemas, and unreliable join keys.
  • Scaling Pains (300 → 1,000 customers): Our current ETL architecture (Kubernetes pods → transforms → PostgreSQL) was fine initially. But now, write-intensive ETL tasks significantly impact our primary application database’s performance. Worse, performing transformations upfront (ETL) means losing raw data context, hindering debugging, and reducing flexibility.
  • Limited Observability & Data Lineage: When data ingestion breaks (and it often does), our lack of lineage and replayability makes debugging slow and painful. Identifying root causes, replaying failed jobs, and rapidly restoring pipelines is currently difficult.
  • Data Interpretability Challenges: Healthcare semantics are tricky—simple medical conditions can appear in many forms across EMRs. For example, the diagnosis “diabetes” might appear as five separate coded entries, though clinically identical. Building reliable AI means solving these interpretability challenges systematically.
  • AI Data Readiness & Raw Data Storage: Our AI agent accesses data directly from our PostgreSQL database—but it’s restricted to already-transformed, limited-context data. Our AI needs access to richer historical and raw context data to perform optimally.

This combination of challenges—unreliable data, rigid ETL architecture, poor debugging capabilities, interpretability complexity, and insufficient AI-readiness—is limiting our scale, velocity, and product quality.

We must rethink our data architecture from first principles.

The Opportunity: A New Data Stack for Healthcare AI

We’re calling on talented backend and data engineers to help us architect and build a next-generation data pipeline capable of scaling to thousands of healthcare customers. This infrastructure will form the bedrock of our AI-powered staffing and scheduling solutions.

Here are some initial thoughts from our team on a data stack that:

1. Moves from ETL → ELT

  • Extract & load raw data first (to a data lake like S3, BigQuery, or Snowflake).
  • Delay transformations until downstream, enabling iterative experimentation, faster debugging, and improved raw data access.
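
A toy sketch of that ELT shape, with an in-memory map standing in for the raw data lake (names and record shapes here are illustrative, not our actual schema):

```typescript
// Extract & Load first: persist the payload byte-for-byte with lineage
// metadata; transform later, on read.
type RawRecord = { source: string; fetchedAt: string; payload: string };

const rawLake = new Map<string, RawRecord>(); // stand-in for S3/BigQuery

function loadRaw(source: string, payload: string): string {
  const key = `${source}/${Date.now()}-${rawLake.size}`;
  rawLake.set(key, { source, fetchedAt: new Date().toISOString(), payload });
  return key;
}

// Downstream transform: because the raw copy is untouched, a buggy
// transform can be fixed and re-run without re-extracting from the EMR.
function transformVisit(key: string): { patientId: string; start: string } {
  const raw = rawLake.get(key);
  if (!raw) throw new Error(`no raw record at ${key}`);
  const parsed = JSON.parse(raw.payload);
  return { patientId: String(parsed.patient_id), start: String(parsed.start_time) };
}
```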

2. Implements Event-Driven, Replayable Pipelines

  • Capture EMR snapshot outputs as event streams (e.g., Kafka, Google Pub/Sub).
  • Achieve full data lineage, observability, replayability, and rapid debugging—transforming our pipeline maintenance from reactive to proactive.
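
The replay property can be sketched with an append-only log (standing in for a Kafka topic) and a consumer that commits offsets; all names here are illustrative:

```typescript
// An append-only event log: nothing is ever mutated or deleted, so any
// downstream consumer can be re-run from any earlier offset.
type LogEvent = { offset: number; body: string };

class EventLog {
  private events: LogEvent[] = [];

  append(body: string): number {
    const offset = this.events.length;
    this.events.push({ offset, body });
    return offset;
  }

  read(fromOffset: number): LogEvent[] {
    return this.events.slice(fromOffset);
  }
}

// A consumer applies a transform and reports how far it got; if the
// transform fails, fix it and call again from the last committed offset.
function consume(log: EventLog, from: number, apply: (e: LogEvent) => void): number {
  let committed = from;
  for (const e of log.read(from)) {
    apply(e);
    committed = e.offset + 1;
  }
  return committed;
}
```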

3. Adopts Modern Data Warehousing & Separation of Concerns

  • Clearly separate transactional workloads (PostgreSQL) from analytical & AI workloads (BigQuery, Snowflake).
  • Ensure high availability, query optimization, and real-time analytics without compromising app performance.

4. Scales Integration Operations

  • Invest in automated integration infrastructure, robust error handling, and alerting.
  • Scale to 1,000+ healthcare customers gracefully, without proportional increases in operational overhead.

5. Solves Data Interpretability with Systematic Normalization

  • Develop modular semantic mapping frameworks or leverage healthcare data standards (FHIR, HL7, EVV).
  • Tackle healthcare-specific data nuances systematically, ensuring AI models see consistent, high-quality data.
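
A minimal sketch of such a mapping layer. The table entries are illustrative (E11.9 is the ICD-10 code for type 2 diabetes without complications); a production system would derive the mapping from standard code systems like ICD-10 or FHIR rather than hand-written entries:

```typescript
// Collapse EMR-specific diagnosis strings/codes into one canonical concept.
const diagnosisMap: Record<string, string> = {
  'E11.9': 'diabetes-type-2',
  'diabetes mellitus type II': 'diabetes-type-2',
  'DM2': 'diabetes-type-2',
  'type 2 diabetes': 'diabetes-type-2',
};

function normalizeDiagnosis(raw: string): string | null {
  // Exact lookup first.
  const hit = diagnosisMap[raw] ?? diagnosisMap[raw.trim()];
  if (hit) return hit;
  // Fall back to a case-insensitive match before giving up.
  const lowered = raw.trim().toLowerCase();
  for (const [key, concept] of Object.entries(diagnosisMap)) {
    if (key.toLowerCase() === lowered) return concept;
  }
  return null; // unmapped: route to human review rather than guessing
}
```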

Why Join Zingage Now?

You’ll join a team of exceptional engineers (early Ramp, Amazon, Block/Square), backed by seasoned operators from healthcare and SaaS, all dedicated to rebuilding how healthcare is delivered through AI and automation.

If you’re energized by the idea of solving messy, mission-critical problems in healthcare—building a new foundational data architecture, owning impactful engineering decisions, and working on high-leverage problems at early-stage scale—we’d love to talk.

We forgot about the Home Front. Now it's breaking.

In 1965, the United States made a generational promise. Standing beside Harry Truman, Lyndon Johnson signed Medicare and Medicaid into law and declared that “no longer will older Americans be denied the healing miracle of modern medicine… no longer will illness crush and destroy the savings they have carefully put away… no longer will young families see their own incomes, and their own hopes, eaten away simply because they are carrying out their deep moral obligations.” These programs weren’t just policy—they were a commitment to institutionalized dignity. And for a time, they worked.

But that promise is unraveling—not because we’ve lost the will to care, but because we’ve failed to build the infrastructure that care requires. America’s fastest-growing care need—home-based long-term support—is on the brink of collapse. Providers are underfunded. Medicaid reimbursement rates hover near break-even. Caregiver turnover exceeds 60% annually. For every caregiver available, five patients wait. In many parts of the country, families are told it will be weeks—sometimes months—before someone can come help.

When the state fails to care, the burden falls back on the family. A daughter steps back from her job to care for her mother. A son starts dipping into savings to cover private-pay aides. A couple decides to put off having a second child—because they’re already caring for an aging parent. And slowly, silently, the load adds up. Dual-income households fracture. Women exit the workforce. Siblings fight over who will take on more hours. Young people, watching all this, start to wonder whether they can afford to have kids of their own.

A recent study published in Demography found that caregiving burdens significantly reduce fertility intent among adults in their peak childbearing years. The OECD reports that countries with high eldercare demands and low institutional support—like the U.S.—consistently see lower birth rates, greater family stress, and rising mental health burdens. In Japan, where eldercare has similarly overwhelmed the household, the government has identified aging care as a key barrier to national fertility recovery. A society that can’t care for its elders makes it harder to imagine creating the next generation.

And yet, economists call this a “labor shortage.” As if the only problem were a missing headcount. But the truth is deeper—and more dangerous.

At the bottom of the demand curve are not price-sensitive consumers. They are the elderly, the disabled, the poor. Classical economics tells us the invisible hand will resolve this. When demand rises, so too should wages—until the market clears. But that logic breaks down in home care. If providers raise wages to attract workers, patients who rely on fixed Medicaid reimbursements are priced out of the system. The labor supply doesn’t grow—it just shifts toward wealthier, private-pay clients. And the public safety net quietly fails.

It fails not only due to underfunding, but because the infrastructure required to deliver care is fundamentally broken. Today’s home care providers operate with vanishingly thin margins—not because care is too expensive, but because it is built on manual coordination. Scheduling, compliance, shift replacements, documentation, and payroll are stitched together by humans working across spreadsheets, texts, and brittle workflows. The result is a model where every dollar is eaten by complexity before it reaches the caregiver’s paycheck.

This is where technology must intervene—not to replace humans, but to unburden them. A typical home care provider might serve 300 clients with a back office of 15 staff. That means one coordinator for every 20 to 50 caregivers—each one juggling shift coverage, compliance tasks, payroll, and constant crisis management. That ratio should look more like one admin per 1,000 caregivers. That’s not a fantasy—that’s the level of operational leverage already achieved by modern platforms like Uber, which coordinates over 33 million trips per day with just 30,000 employees.

If we can bring that level of software-driven efficiency to home care, the economics shift meaningfully. Today, a typical Medicaid reimbursement of $25 an hour breaks down into $15 for the caregiver, $3 to $4 for administrative overhead, and $1 to $2 in agency margin—leaving little room for stability or growth. But with automation compressing overhead by up to 80 percent, agencies could preserve their margins while redirecting $2 to $3 an hour back into caregiver wages. That’s a 15 to 20 percent raise—without increasing what the system pays.

This isn’t just an economic adjustment. It’s behavioral leverage. Turnover in home care exceeds 65 percent per year, but studies show that even a one-dollar wage increase can reduce churn by up to 20 percent. If we can consistently raise caregiver pay from $15 to $18 an hour, we change the decision calculus for millions of workers—making caregiving a competitive alternative to retail, warehouse, or gig work. In doing so, we not only stabilize the workforce, we begin to grow it.

But the potential goes beyond margin recovery. If we get the operational layer right, we can also expand the clinical scope—and the strategic role—of home care itself.

Today, home care is often treated as glorified nannying: help with meals, light housekeeping, companionship. It’s seen as soft, non-clinical, and low-skilled—which is exactly why reimbursement rates remain so low. But this perception is shaped less by the nature of the work than by the limits of the system that delivers it. When care is fragmented, undocumented, and manually coordinated, it’s easy for policymakers to underfund it and for clinicians to ignore it.

With the right software infrastructure, home care becomes a platform. It becomes the connective tissue between daily living and clinical insight: a place where remote patient monitoring feeds into real-time interventions, where in-home aides are supported by intelligent workflows, where telehealth, vitals tracking, medication adherence, and caregiver observations flow into one continuous stream of context-rich care.

This is the same transformation we’ve seen across other service industries. DoorDash didn’t just digitize food delivery—it redefined the restaurant. Amazon didn’t just optimize shopping—it redefined distribution. Zoom didn’t just replace meetings—it rewired the geography of work. In each case, technology didn’t serve the legacy system—it replaced the coordination layer, and in doing so, changed the nature of the service.

Home care is next. With an intelligent, AI-driven operating system, the home is no longer the periphery of care. It becomes the center—the first line of observation, the earliest site of intervention, and the most stable setting for long-term health.

If we treat the home as the center of care—not just a setting for support, but a site of coordination—then the economic logic changes too. Suddenly, we’re not just talking about lowering costs through efficiency. We’re talking about unlocking a reallocation of the $1.5 trillion the U.S. spends annually on hospital and institutional care.

Acute care settings—hospitals, nursing homes, rehab facilities—absorb the majority of healthcare spending. But much of that spending is reactive: managing crises that could have been prevented with earlier intervention. One in five Medicare patients is readmitted to the hospital within 30 days. Chronic conditions like diabetes, heart failure, and COPD account for the majority of admissions—and nearly all of them are daily-life-sensitive.

If we can shift just a fraction of that spend upstream—toward smarter home-based monitoring, proactive check-ins, and consistent care delivered by trained aides—we don’t just lower costs. We improve outcomes. We catch problems earlier. We avoid hospitalizations entirely. A 2022 CMS pilot found that enhanced home care with remote monitoring reduced hospitalizations by 30 percent and emergency room visits by 40 percent in high-risk populations. If even a modest share of Medicare and Medicaid budgets were redirected toward this kind of integrated home care, it would represent a doubling or tripling of current home care reimbursement rates—without increasing total spend.

This is the opportunity in front of us—not just to fix a broken system, but to finish the work that was started generations ago. When Lyndon Johnson signed Medicare and Medicaid into law, he wasn’t just solving a policy problem; he was laying the foundation for a functional society—a system that recognized care not as a luxury or a handout, but as a condition for national vitality. We don’t need new slogans or new entitlements. We need to rebuild the Great Society with the tools of this century. That means applying AI not as a novelty, but as institutional infrastructure: software that replaces manual coordination, stabilizes agency margins, and turns caregiving into a job worth doing.

Code Quality Principles at Zingage

[Originally written by Zingage Principal Engineer Ethan Resnick as an internal memo]

Carefully Balance What Your Code Promises and What it Demands; Err on the Side of Promising Less

Every piece of code promises some things to its users — e.g., “this API endpoint will return an object with fields x, y, z”; or, “this object supports methods a, b, c”.

Put simply, the more things that a piece of code promises, the harder the code is to change, because a new version of the code either has to continue to deliver all the things that the old code was promising, or all the users of the old code must be updated to no longer depend on the parts of the old promise that the new code will no longer fulfill.

Therefore, the key to long-term engineering velocity is to keep the set of things your code promises small. This usually boils down to not returning data and not supporting operations that aren’t truly necessary.

Some examples of this in practice:

  • We've gradually removed methods from our repository classes, and stopped adding a bunch of methods by default, because keeping that API surface small makes it easier to refactor how the underlying data is stored/queried. Similarly, we're phasing out the ability to load entities with their relations, and to save an entity with a bunch of related entities, because those promised abilities are very hard to maintain under many refactors (e.g., as we move off TypeORM, or as we split data across dbs/services).
  • As a large-scale, non-Zingage example, consider how the QUIC protocol, which powers HTTP/3, encrypts more than just the message’s content — e.g., it also encrypts details about what protocol features/extensions are being used. These details aren’t actually private, but they’re encrypted primarily so that boxes between the sending and receiving ends of the HTTP request (e.g., routers, firewalls, caches, proxies, etc) can’t see these details, and therefore can’t create implementations that rely on this information in any way. Exposing less information to these middle boxes was a conscious design decision aimed at making the protocol more evolvable, after TCP proved essentially impossible to change at internet scale.
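The repository example above can be sketched in TypeScript. This is an illustrative toy, not our actual repository code; the `Caregiver` type and repo names are hypothetical. The point is simply that every method on the wide interface is a promise that constrains future refactors, while the narrow interface promises only what callers need today:

```typescript
// Hypothetical sketch; names are illustrative, not Zingage's real code.

interface Caregiver {
  id: string;
  name: string;
}

// A wide contract: every method here is a promise that must survive
// any future change to how the data is stored or queried.
interface WideCaregiverRepo {
  findById(id: string): Caregiver | undefined;
  findAll(): Caregiver[];
  findByName(name: string): Caregiver[];
  saveWithRelations(caregiver: Caregiver, related: unknown[]): void;
}

// A narrow contract: only the operations callers actually need today.
// Adding a method later is easy; removing one is a breaking change.
interface CaregiverRepo {
  findById(id: string): Caregiver | undefined;
  save(caregiver: Caregiver): void;
}

// A minimal in-memory implementation of the narrow contract.
class InMemoryCaregiverRepo implements CaregiverRepo {
  private rows = new Map<string, Caregiver>();
  findById(id: string): Caregiver | undefined {
    return this.rows.get(id);
  }
  save(caregiver: Caregiver): void {
    this.rows.set(caregiver.id, caregiver);
  }
}
```

Swapping the `Map` for a real database touches only `InMemoryCaregiverRepo`; none of the narrow contract's callers need to change.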

However, there is one real tension when adhering to this principle of “keeping the contract small”: the flip side of a piece of code promising fewer things is that the users of that code can depend on less.

A great example is whether a field in a returned object can be null. If the code that produces the object promises that the field will be non-null, that promise directly limits the system’s evolvability: supporting a new use case where there is no sensible value for the field (so it should be null) becomes a complicated task of updating all the code’s users to handle null. However, until a use case arises where the field does need to be null, promising that the field will be non-null simplifies all the code that works with it, since none of that code has to prematurely handle a missing field.
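That trade-off can be made concrete with a small sketch. The `Visit` types and the "unassigned visit" use case below are hypothetical, chosen only to show how a looser nullability promise forces a branch on every consumer:

```typescript
// Hypothetical types, for illustration only.

// Promising non-null: every caller can use caregiverId directly...
type VisitStrict = { id: string; caregiverId: string };

// ...but supporting "unassigned visits" later means loosening the promise,
// after which every existing caller must handle the null case.
type VisitLoose = { id: string; caregiverId: string | null };

function describe(visit: VisitLoose): string {
  // The looser promise forces this branch on every consumer.
  return visit.caregiverId === null
    ? `visit ${visit.id} (unassigned)`
    : `visit ${visit.id} by ${visit.caregiverId}`;
}
```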

So, when deciding exactly what your code should promise, consider:

  • Are the users of the code under your control? For example, our main API is only used by our frontend, which we control. Therefore, it’s relatively straightforward to make a breaking change in the contract exposed by the API, as we can also update the frontend. Contrast this with an API endpoint in the partnership API, which is called by our customers: we don’t control their code, so changing the endpoint’s contract to no longer return certain data requires a long, complicated coordination process with them. In situations like this, where you don’t control the consumers, keeping your promises small is essential.
  • How easy is it to identify and coordinate with all the code’s users? In the case of an endpoint called by our customers, we at least have a comprehensive list of our customers and a way to email all of them; plus, we have a way to see which customers are using which endpoints (through API usage logs or similar). This makes it possible, if time consuming, to coordinate breaking changes with them. However, imagine we had an API endpoint that was open to the world with no authentication; in that case, we’d have no way to know who’s using it, and no effective way to coordinate with them to update their code to accommodate a breaking change. Unauthenticated open endpoints are one extreme; the other extreme might be a TypeScript utility function in our API repo. In a case like that, if we change the function’s return type or make it take an extra argument, TypeScript will literally guide us to all the callers of the function that are broken by this change.
    • Corollary: Invest in tracking the usage of your APIs, because the easier you can identify callers, the easier it is to change the API.
  • How much would the callers benefit from a particular addition to your code’s promises? Promising that a field won’t be null is a great example of promising something that isn’t strictly necessary to promise — and yet doing so can be worth it if it saves many users of the code a lot of boilerplate and edge-case handling that they’d otherwise need if the field were marked as nullable.
  • How hard will it be to uphold the promise over time?

Finally, it’s not just the case that code promises things to its users; it also demands some things from them — usually in the form of required arguments. Here again, there’s a similar balancing act, tilted the other way: the more your code demands, the more flexible it is. Future use cases may be much easier to support if your code can count on certain arguments being provided, and those demands cost it nothing (it can always ignore an argument, or loosen a demand later by making it optional). However, everything your code demands is something that its users must be able to supply, which can complicate every call site (e.g., plumbing the data for a required argument through to it).
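A small sketch of that balancing act, using a hypothetical `RequestCtx` and `formatShiftTime` function (neither is real Zingage code). Demanding the context up front keeps future options open, at the cost of every caller having to supply it:

```typescript
// Hypothetical sketch: demanding an argument the code may only need later.

type RequestCtx = { userId: string; timezone: string };

// Demanding ctx now means the function can later start using
// ctx.timezone for localized formatting without a breaking change.
function formatShiftTime(startIso: string, ctx: RequestCtx): string {
  void ctx; // ignored today; the demand exists for future flexibility
  return new Date(startIso).toISOString();
}

// The cost: every caller, and potentially its callers in turn, must now
// be able to produce a RequestCtx to call this function at all.
```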

Make Illegal States Unrepresentable

A huge part of what makes programming hard is that the system can be in an inordinate number of states, and it’s very hard to write bug-free code that properly accounts for all of them. By making illegal states/inputs unrepresentable, we can greatly reduce the number of bugs and make our code more reliable and easier to reason about, while needing fewer assertions.

See https://www.hillelwayne.com/post/constructive/, which refers to this as “constructive data modeling” and reviews some common approaches.

One common manifestation of this principle in our code base is the use of discriminated unions rather than having multiple cases smooshed into one object type with some nullable properties.

To give a concrete example, you should avoid something like:

type ProfileOrBusinessId = { profileId?: string; businessId?: string }

Instead, prefer:

type ProfileOrBusinessId = 
  | { type: "BUSINESS_ID", id: string }
  | { type: "PROFILE_ID", id: string }

The difference is that the former type allows zero or two ids to be provided — both of which should be illegal — and forces all the consuming code to handle both illegal cases. This can create cascading complexity (e.g., if a consuming function throws when it gets no ids, then its caller has to be prepared to catch that error too), and different bits of consuming code could easily end up doing different things in the case where both ids are provided (if they check the properties in a different order). By contrast, the second type requires exactly one id to be provided.
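A further payoff of the discriminated form is compiler-checked exhaustiveness. The sketch below (the type is repeated so the snippet is self-contained; `tableFor` is a made-up consumer) shows the standard `never` trick: if a new variant is ever added to the union, every switch like this one fails to compile until it handles the new case:

```typescript
type ProfileOrBusinessId =
  | { type: "BUSINESS_ID"; id: string }
  | { type: "PROFILE_ID"; id: string };

// Hypothetical consumer, for illustration.
function tableFor(ref: ProfileOrBusinessId): string {
  switch (ref.type) {
    case "BUSINESS_ID":
      return `businesses/${ref.id}`;
    case "PROFILE_ID":
      return `profiles/${ref.id}`;
    default: {
      // If a new variant is added to the union, ref is no longer `never`
      // here, and this assignment becomes a compile error that points
      // us at every switch that needs updating.
      const _exhaustive: never = ref;
      return _exhaustive;
    }
  }
}
```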

“Parse, Don’t Validate”

After your code verifies something about the structure of an input value it’s working with, strongly consider encoding that result into the types. See also https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

Our extensive use of tagged string types is a good example of that: after we verify something at runtime, we record it in the types so that we can write code that, at the type level, requires the right kind of argument. (See, e.g., OwnedEntityId and isOwnedEntityId).
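The general pattern looks like the sketch below. To be clear, this is an illustration of the branded/tagged-string technique, not our actual OwnedEntityId implementation; `Email`, `parseEmail`, and `sendWelcome` are invented for the example. The brand exists only at the type level, so there's no runtime cost:

```typescript
// The brand is a phantom property; no such field exists at runtime.
type Email = string & { readonly __brand: "Email" };

// The "parse" step: verify once, and record the result in the type.
// (A real validator would do more than check for "@".)
function parseEmail(raw: string): Email | null {
  return raw.includes("@") ? (raw as Email) : null;
}

// Downstream code demands an Email, so it never re-validates, and the
// compiler rejects any attempt to pass an unchecked plain string.
function sendWelcome(to: Email): string {
  return `welcome sent to ${to}`;
}
```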

Duplicate Coincidentally-Identical Code

Two pieces of code that currently do the same thing should reference common logic or types for that functionality (e.g., a shared base class or a shared utility function) — that is, the code should be “made DRY” — only if the two pieces of code should always, automatically evolve together.

Some concrete examples:

  • The input type (e.g., DTO) for updating an entity should not inherit from the type used when creating the entity. If the update type were to extend the creation type, that would mean that any new field that can be provided on creation would also automatically be allowed in the input to an update. However, not every new field should be updateable: having fields be automatically accepted in an update input can create security risks, and it extends the contract that your code has to uphold. On balance, then, it’s likely better to force the developer to decide, for each new field, whether and how that field should be allowed on update — even if that requires a bit more duplication and boilerplate for fields that are settable on create and update. (This example is just a variation of the “fragile base class” problem.)
  • The types used in the application to reflect the shape of values stored in the database should not be the same as, or in any way linked to, the types that the application’s services use to reflect their legal inputs, outputs, or intermediate values. The reason is that a change to the type the application is working with does not automatically change the shape of data in the database; that data will only change if the developer remembers to explicitly migrate it. Therefore, having the type used for data in the database be linked to (and therefore automatically change with) other types turns the database types into lies, and actively hides the need to first migrate the data in the database, which the compiler would otherwise warn about.
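The first bullet can be sketched as follows. The `Visit` DTOs are hypothetical; the point is that the update type deliberately duplicates field declarations rather than extending the create type, so each updateable field is an explicit decision:

```typescript
// Hypothetical DTOs, for illustration only.

type CreateVisitDto = {
  clientId: string;
  startTime: string;
  notes?: string;
};

// Deliberately NOT `extends CreateVisitDto`: each field here is an
// explicit decision that it's safe to change after creation. clientId
// is omitted, since a visit shouldn't be reassigned via update.
type UpdateVisitDto = {
  startTime?: string;
  notes?: string;
};

// A minimal patch-style update over the create shape.
function applyUpdate(
  visit: { clientId: string; startTime: string; notes?: string },
  patch: UpdateVisitDto
) {
  return { ...visit, ...patch };
}
```

Because `UpdateVisitDto` has no `clientId`, a patch can never reassign the visit; adding a new create-only field later requires no change here at all.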

In general, consider that when multiple pieces of code reference some shared logic, updating that logic in the one place where it’s defined will cascade the change everywhere. That’s the whole point of DRY, but it’s a double-edged sword: sometimes it prevents bugs by keeping every user of the shared logic in sync on the latest (most-correct) version; other times, the automatic propagation of changes (to places where the new code’s assumptions or behavior should not apply) is itself the cause of bugs. Hence the original guidance: DRY up code only when you’re reasonably confident it should always, automatically evolve together, or when, because of the specifics of the domain, the benefits of automatic change propagation outweigh the risks.
