Zingage Engineering

Applied AI, data engineering, and anthropomorphic software to scale the Bedrock Economy

Design at Zingage

Design at Zingage is not a support function. It is a way of seeing, reasoning, and building systems that actually work for the people delivering care.

We are not just here to ship features. We are here to define what it means to work alongside AI in one of the most human industries in the world.


The Problem: Designing for AI in a Trust-Based Industry

In home care, people don’t use software because they want to. They use it because they have to.

Schedulers and caregivers operate under constant stress: backlogs of patient visits, late cancellations, last-minute reassignments. Their current tools make this worse—clunky, slow, opaque.

Now, introduce AI. Software that doesn’t just coordinate schedules but acts on its own. Agents that message caregivers, reassign visits, and resolve gaps without human input. This is powerful. But it also creates a new kind of risk:

What happens when something goes wrong, and no one knows why?

Traditional UI patterns break down here. The job of design is no longer just to simplify a workflow—it’s to help users build trust in a system that behaves more like a colleague than a tool.


The Opportunity: Interfaces for Delegation, Not Just Execution

Our goal is not to make care schedulers faster typists. It’s to help them delegate work to intelligent agents. But delegation requires:

  • Knowing what the agent is doing
  • Understanding why it did something
  • Stepping in when something goes off track

In other words: clarity, not control. We are designing for a world where the default state of software is action, not waiting. Where the system moves first, and the human refines it.

This requires new UI patterns, new metaphors, and new forms of accountability. Most importantly, it requires a deep understanding of the users who will live in this hybrid loop.


Core Design Principles

1. Supervision Over Control

Users shouldn’t be expected to micromanage the AI. The interface must:

  • Show intent, not implementation
  • Offer intuitive paths for feedback and override
  • Make risk visible without creating fear

2. Asynchronous by Default

Work doesn’t happen in one sitting. Schedulers reach out, wait for replies, reschedule, escalate, wait again. Our interfaces must:

  • Support interrupted workflows
  • Track state across time and agents
  • Handle reversals and replans with grace

3. Mental Models, Not Just Screens

We don’t design pages. We design systems of meaning:

  • What is a plan?
  • What does it mean to delegate something?
  • When has a task truly been resolved?

These are design questions.

4. Internal Surfaces Matter

We take Sarah Tavel’s advice seriously: "It’s okay to have a human in the loop. It’s better if it’s your human, not your customer's."

Our internal ops team supervises AI decisions, resolves edge cases, and monitors quality. These tools require the same care as the external product. They are the levers that make AI feel dependable.

5. Make Work Delightful

Design is emotional. Even more so in a field like home care.

We use reward loops, milestones, social reinforcement, and small moments of joy to make invisible progress feel tangible. This is not gamification. It’s respect—for people whose work is often ignored.


Why It Matters

Just as early mobile apps mimicked real-world textures (skeuomorphism) before evolving into native mobile patterns, AI will go through its own transition. Right now, AI systems need scaffolding. They need explanation, supervision, and context. Over time, they will disappear into the background.

Our design work must accelerate that curve. We believe the design challenges in AI are not about novelty. They are about sequencing trust. They are about making delegation possible in domains where mistakes have real consequences.

Sound interesting? We're looking for the next Diego Zaks to join us as our Founding Product Designer :)

Request for Engineers: Rethinking Data Infrastructure for Healthcare AI

At Zingage, we’re building AI-powered systems to automate the critical back-office operations of healthcare providers. Our goal this year is ambitious: scale from $X million to $XX million ARR by delivering intelligent staffing and scheduling agents to home care agencies nationwide.

We’ve found a powerful wedge: integrating deeply with Electronic Medical Records (EMRs). Today, we’re connected with over 300 healthcare sites, representing thousands of caregivers and patients. However, our growth has exposed significant foundational gaps in our data infrastructure, and we’re looking for exceptional engineers to help us solve them.

The Problem: Healthcare Data is Messy

Our competitive advantage lies in seamless EMR integrations—but EMR data is notoriously fragmented and unreliable:

  • Multiple Integration Methods: Today, we integrate with EMRs through APIs, file dumps, and RPA. Each method brings a different set of challenges and can introduce issues like delayed responses, unpredictable changes, inconsistent schemas, and unreliable join keys.
  • Scaling Pains (300 → 1,000 customers): Our current ETL architecture (Kubernetes pods → transforms → PostgreSQL) was fine initially. But now, write-intensive ETL tasks significantly impact our primary application database’s performance. Worse, performing transformations upfront (ETL) means losing raw data context, hindering debugging, and reducing flexibility.
  • Limited Observability & Data Lineage: When data ingestion breaks (and it often does), our lack of lineage and replayability makes debugging slow and painful. Identifying root causes, replaying failed jobs, and rapidly restoring pipelines is currently difficult.
  • Data Interpretability Challenges: Healthcare semantics are tricky—simple medical conditions can appear in many forms across EMRs. For example, the diagnosis “diabetes” might appear as five separate coded entries, though clinically identical. Building reliable AI means solving these interpretability challenges systematically.
  • AI Data Readiness & Raw Data Storage: Our AI agent accesses data directly from our PostgreSQL database—but it’s restricted to already-transformed, limited-context data. Our AI needs access to richer historical and raw context data to perform optimally.

This combination of challenges—unreliable data, rigid ETL architecture, poor debugging capabilities, interpretability complexity, and insufficient AI-readiness—is limiting our scale, velocity, and product quality.

We must rethink our data architecture from first principles.

The Opportunity: A New Data Stack for Healthcare AI

We’re calling on talented backend and data engineers to help us architect and build a next-generation data pipeline capable of scaling to thousands of healthcare customers. This infrastructure will form the bedrock of our AI-powered staffing and scheduling solutions.

Here are some initial thoughts from our team:

1. Moves from ETL → ELT

  • Extract & load raw data first (into object storage like S3, or a warehouse like BigQuery or Snowflake).
  • Delay transformations until downstream, enabling iterative experimentation, faster debugging, and improved raw data access.
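As a toy sketch of the ELT shape (an in-memory stand-in for a real data lake; the store, function names, and payload fields are all illustrative, not our actual pipeline code):

```typescript
// ELT sketch: land the raw EMR payload untouched first, then run
// transformation as a separate, re-runnable downstream step.
// `rawStore` is an in-memory stand-in for a data lake like S3.
const rawStore = new Map<string, string>();

function extractAndLoad(sourceId: string, payload: unknown): string {
  const key = `${sourceId}/${rawStore.size}`; // deterministic key for the sketch
  rawStore.set(key, JSON.stringify(payload)); // raw context preserved verbatim
  return key;
}

// Transformations can evolve freely and be replayed against the raw data.
function transformVisit(key: string): { visitId: string } | null {
  const raw = rawStore.get(key);
  if (raw === undefined) return null;
  const parsed = JSON.parse(raw) as { visit_id?: unknown };
  return { visitId: String(parsed.visit_id ?? "unknown") };
}
```

Because the raw payload is kept verbatim, a buggy `transformVisit` can be fixed and re-run without going back to the EMR.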

2. Implements Event-Driven, Replayable Pipelines

  • Capture EMR snapshot outputs as event streams (e.g., Kafka, Google Pub/Sub).
  • Achieve full data lineage, observability, replayability, and rapid debugging—transforming our pipeline maintenance from reactive to proactive.
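The replay idea can be sketched with an append-only log (an in-memory stand-in for Kafka or Pub/Sub; names are illustrative):

```typescript
// Append-only event log: consumers can always re-read from offset 0,
// so a fixed transform can be re-run over the full history.
type SnapshotEvent = { source: string; payload: string };

const eventLog: SnapshotEvent[] = [];

function publish(event: SnapshotEvent): number {
  eventLog.push(event);
  return eventLog.length - 1; // offset, usable for lineage/debugging
}

// Replay applies a handler to every event from a given offset onward.
function replay(from: number, handler: (e: SnapshotEvent) => void): void {
  for (let i = from; i < eventLog.length; i++) handler(eventLog[i]);
}
```

The key property is that publishing never mutates or deletes past events, so "fix the consumer, replay from the failing offset" becomes a routine operation rather than an archaeology project.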

3. Adopts Modern Data Warehousing & Separation of Concerns

  • Clearly separate transactional workloads (PostgreSQL) from analytical & AI workloads (BigQuery, Snowflake).
  • Ensure high availability, query optimization, and real-time analytics without compromising app performance.

4. Scales Integration Operations

  • Invest in automated integration infrastructure, robust error handling, and alerting.
  • Scale to 1,000+ healthcare customers gracefully, without proportional increases in operational overhead.

5. Solves Data Interpretability with Systematic Normalization

  • Develop modular semantic mapping frameworks or leverage healthcare data standards (FHIR, HL7, EVV).
  • Tackle healthcare-specific data nuances systematically, ensuring AI models see consistent, high-quality data.
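A minimal sketch of such a mapping layer (the codes and canonical names below are made up for illustration, not real ICD or FHIR values):

```typescript
// Many source-specific spellings of one clinical concept collapse to a
// single canonical code before the data reaches AI models.
const canonicalDiagnosis: Record<string, string> = {
  "DM2": "diabetes_type_2",
  "E11": "diabetes_type_2",
  "Type II Diabetes": "diabetes_type_2",
  "diabetes mellitus, type 2": "diabetes_type_2",
};

function normalizeDiagnosis(raw: string): string {
  const hit = canonicalDiagnosis[raw.trim()];
  return hit ?? `unmapped:${raw.trim()}`; // surface gaps instead of guessing
}
```

Flagging unmapped values explicitly, rather than passing them through, turns interpretability gaps into a measurable backlog.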

Why Join Zingage Now?

You’ll join a team of exceptional engineers (early Ramp, Amazon, Block/Square), backed by seasoned operators from healthcare and SaaS, all dedicated to rebuilding how healthcare is delivered through AI and automation.

If you’re energized by the idea of solving messy, mission-critical problems in healthcare—building a new foundational data architecture, owning impactful engineering decisions, and working on high-leverage problems at early-stage scale—we’d love to talk.

We forgot about the Home Front. Now it's breaking.

In 1965, the United States made a generational promise. Standing beside Harry Truman, Lyndon Johnson signed Medicare and Medicaid into law and declared that “no longer will older Americans be denied the healing miracle of modern medicine… no longer will illness crush and destroy the savings they have carefully put away… no longer will young families see their own incomes, and their own hopes, eaten away simply because they are carrying out their deep moral obligations.” These programs weren’t just policy—they were a commitment to institutionalized dignity. And for a time, they worked.

But that promise is unraveling—not because we’ve lost the will to care, but because we’ve failed to build the infrastructure that care requires. America’s fastest-growing care need—home-based long-term support—is on the brink of collapse. Providers are underfunded. Medicaid reimbursement rates hover near break-even. Caregiver turnover exceeds 60% annually. For every caregiver available, five patients wait. In many parts of the country, families are told it will be weeks—sometimes months—before someone can come help.

When the state fails to care, the burden falls back on the family. A daughter steps back from her job to care for her mother. A son starts dipping into savings to cover private-pay aides. A couple decides to put off having a second child—because they’re already caring for an aging parent. And slowly, silently, the load adds up. Dual-income households fracture. Women exit the workforce. Siblings fight over who will take on more hours. Young people, watching all this, start to wonder whether they can afford to have kids of their own.

A recent study published in Demography found that caregiving burdens significantly reduce fertility intent among adults in their peak childbearing years. The OECD reports that countries with high eldercare demands and low institutional support—like the U.S.—consistently see lower birth rates, greater family stress, and rising mental health burdens. In Japan, where eldercare has similarly overwhelmed the household, the government has identified aging care as a key barrier to national fertility recovery. A society that can’t care for its elders makes it harder to imagine creating the next generation.

And yet, economists call this a “labor shortage.” As if the only problem were a missing headcount. But the truth is deeper—and more dangerous.

At the bottom of the demand curve are not price-sensitive consumers. They are the elderly, the disabled, the poor. Classical economics tells us the invisible hand will resolve this. When demand rises, so too should wages—until the market clears. But that logic breaks down in home care. If providers raise wages to attract workers, patients who rely on fixed Medicaid reimbursements are priced out of the system. The labor supply doesn’t grow—it just shifts toward wealthier, private-pay clients. And the public safety net quietly fails.

It fails not only due to underfunding, but because the infrastructure required to deliver care is fundamentally broken. Today’s home care providers operate with vanishingly thin margins—not because care is too expensive, but because it is built on manual coordination. Scheduling, compliance, shift replacements, documentation, and payroll are stitched together by humans working across spreadsheets, texts, and brittle workflows. The result is a model where every dollar is eaten by complexity before it reaches the caregiver’s paycheck.

This is where technology must intervene—not to replace humans, but to unburden them. A typical home care provider might serve 300 clients with a back office of 15 staff. That means one coordinator for every 20 to 50 caregivers—each one juggling shift coverage, compliance tasks, payroll, and constant crisis management. That ratio should look more like one admin per 1,000 caregivers. That’s not a fantasy—that’s the level of operational leverage already achieved by modern platforms like Uber, which coordinates over 33 million trips per day with just 30,000 employees.

If we can bring that level of software-driven efficiency to home care, the economics shift meaningfully. Today, a typical Medicaid reimbursement of $25 an hour breaks down into $15 for the caregiver, $3 to $4 for administrative overhead, and $1 to $2 in agency margin—leaving little room for stability or growth. But with automation compressing overhead by up to 80 percent, agencies could preserve their margins while redirecting $2 to $3 an hour back into caregiver wages. That’s a 15 to 20 percent raise—without increasing what the system pays.

This isn’t just an economic adjustment. It’s behavioral leverage. Turnover in home care exceeds 65 percent per year, but studies show that even a one-dollar wage increase can reduce churn by up to 20 percent. If we can consistently raise caregiver pay from $15 to $18 an hour, we change the decision calculus for millions of workers—making caregiving a competitive alternative to retail, warehouse, or gig work. In doing so, we not only stabilize the workforce, we begin to grow it.

But the potential goes beyond margin recovery. If we get the operational layer right, we can also expand the clinical scope—and the strategic role—of home care itself.

Today, home care is often treated as glorified nannying: help with meals, light housekeeping, companionship. It’s seen as soft, non-clinical, and low-skilled—which is exactly why reimbursement rates remain so low. But this perception is shaped less by the nature of the work than by the limits of the system that delivers it. When care is fragmented, undocumented, and manually coordinated, it’s easy for policymakers to underfund it and for clinicians to ignore it.

With the right software infrastructure, home care becomes a platform. It becomes the connective tissue between daily living and clinical insight: a place where remote patient monitoring feeds into real-time interventions, where in-home aides are supported by intelligent workflows, where telehealth, vitals tracking, medication adherence, and caregiver observations flow into one continuous stream of context-rich care.

This is the same transformation we’ve seen across other service industries. DoorDash didn’t just digitize food delivery—it redefined the restaurant. Amazon didn’t just optimize shopping—it redefined distribution. Zoom didn’t just replace meetings—it rewired the geography of work. In each case, technology didn’t serve the legacy system—it replaced the coordination layer, and in doing so, changed the nature of the service.

Home care is next. With an intelligent, AI-driven operating system, the home is no longer the periphery of care. It becomes the center—the first line of observation, the earliest site of intervention, and the most stable setting for long-term health.

If we treat the home as the center of care—not just a setting for support, but a site of coordination—then the economic logic changes too. Suddenly, we’re not just talking about lowering costs through efficiency. We’re talking about unlocking a reallocation of the $1.5 trillion the U.S. spends annually on hospital and institutional care.

Acute care settings—hospitals, nursing homes, rehab facilities—absorb the majority of healthcare spending. But much of that spending is reactive: managing crises that could have been prevented with earlier intervention. One in five Medicare patients is readmitted to the hospital within 30 days. Chronic conditions like diabetes, heart failure, and COPD account for the majority of admissions—and nearly all of them are daily-life-sensitive.

If we can shift just a fraction of that spend upstream—toward smarter home-based monitoring, proactive check-ins, and consistent care delivered by trained aides—we don’t just lower costs. We improve outcomes. We catch problems earlier. We avoid hospitalizations entirely. A 2022 CMS pilot found that enhanced home care with remote monitoring reduced hospitalizations by 30 percent and emergency room visits by 40 percent in high-risk populations. If even a modest share of Medicare and Medicaid budgets were redirected toward this kind of integrated home care, it would represent a doubling or tripling of current home care reimbursement rates—without increasing total spend.

This is the opportunity in front of us—not just to fix a broken system, but to finish the work that was started generations ago. When Lyndon Johnson signed Medicare and Medicaid into law, he wasn’t just solving a policy problem; he was laying the foundation for a functional society—a system that recognized care not as a luxury or a handout, but as a condition for national vitality. We don’t need new slogans or new entitlements. We need to rebuild the Great Society with the tools of this century. That means applying AI not as a novelty, but as institutional infrastructure: software that replaces manual coordination, stabilizes agency margins, and turns caregiving into a job worth doing.

Code Quality Principles at Zingage

[Originally written by Zingage Principal Engineer Ethan Resnick as an internal memo]

Carefully Balance What Your Code Promises and What it Demands; Err on the Side of Promising Less

Every piece of code promises some things to its users — e.g., “this API endpoint will return an object with fields x, y, z”; or, “this object supports methods a, b, c”.

Put simply, the more things that a piece of code promises, the harder the code is to change, because a new version of the code either has to continue to deliver all the things that the old code was promising, or all the users of the old code must be updated to no longer depend on the parts of the old promise that the new code will no longer fulfill.

Therefore, the key to long-term engineering velocity is to keep the set of things your code promises small. This usually boils down to not returning data and not supporting operations that aren’t truly necessary.

Some examples of this in practice:

  • We've gradually removed methods from our repository classes, and stopped adding a bunch of methods by default, because keeping that API surface small makes it easier to refactor how the underlying data is stored/queried. Similarly, we're phasing out the ability to load entities with their relations, and to save an entity with a bunch of related entities, because those promised abilities are very hard to maintain under many refactors (e.g., as we move off TypeORM, or as we split data across dbs/services).
  • As a large-scale, non-Zingage example, consider how the QUIC protocol, which powers HTTP/3, encrypts more than just the message’s content — e.g., it also encrypts details about what protocol features/extensions are being used. These details aren’t actually private, but they’re encrypted primarily so that boxes between the sending and receiving ends of the HTTP request (e.g., routers, firewalls, caches, proxies, etc) can’t see these details, and therefore can’t create implementations that rely on this information in any way. Exposing less information to these middle boxes was a conscious design decision aimed at making the protocol more evolvable, after TCP proved essentially impossible to change at internet scale.
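The repository point might look roughly like this in TypeScript (an illustrative sketch, not our actual classes; synchronous and in-memory to keep it minimal, where a real repository would be async):

```typescript
// A deliberately small repository contract: only the operations callers
// truly need are promised. No generic query builder, no relation
// loading: those extra promises would constrain future refactors of
// how the data is stored and queried.
interface Caregiver { id: string; name: string }

interface CaregiverRepo {
  findById(id: string): Caregiver | null;
  save(caregiver: Caregiver): void;
}

class InMemoryCaregiverRepo implements CaregiverRepo {
  private store = new Map<string, Caregiver>();

  findById(id: string): Caregiver | null {
    return this.store.get(id) ?? null;
  }

  save(caregiver: Caregiver): void {
    this.store.set(caregiver.id, { ...caregiver }); // defensive copy
  }
}
```

Swapping `InMemoryCaregiverRepo` for a Postgres-backed or service-backed implementation only requires honoring two methods, which is the point.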

However, there is one real tension when adhering to this principle of “keeping the contract small”, namely: the flip-side of a piece of code promising fewer things is that the user of the code can depend on less.

A great example is whether a field in a returned object can be null. If the code that produces the object promises that the field will be non-null, that promise directly limits the system’s evolvability; supporting a new use case where there is no sensible value for the field (so it should be null) becomes a complicated task of updating all the code’s users to handle null. However, until a use case arises where the field does need to be null, promising that the field will be non-null simplifies all the code that has to work with it; that code doesn’t have to prematurely handle the case of the field being missing.
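In TypeScript terms, the trade-off looks like this (an illustrative sketch; the `Contact` types and `emailDomain` helper are made up):

```typescript
// V1 promises a non-null email, so every consumer stays simple.
interface ContactV1 { email: string }

// Widening the promise later (email may now be null) invalidates that
// assumption in every consumer, which must now handle the null case.
interface ContactV2 { email: string | null }

function emailDomain(contact: ContactV2): string | null {
  if (contact.email === null) return null; // handling the wider promise
  return contact.email.split("@")[1] ?? null;
}
```

Against `ContactV1`, `emailDomain` would be a one-liner; the null handling is the cost the wider promise pushes onto every caller.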

So, when deciding exactly what your code should promise, consider:

  • Are the users of the code under your control? For example, our main API is only used by our frontend, which we control. Therefore, it’s relatively straightforward to make a breaking change in the contract exposed by the API, as we can also update the frontend. Contrast this with an API endpoint in the partnership API, which is called by our customers: we don’t control their code, so changing the endpoint’s contract to no longer return certain data requires a long, complicated coordination process with them. In situations like this, where you don’t control the consumers, keeping your promises small is essential.
  • How easy is it to identify and coordinate with all the code’s users? In the case of an endpoint called by our customers, we at least have a comprehensive list of our customers and a way to email all of them; plus, we have a way to see which customers are using which endpoints (through API usage logs or similar). This makes it possible, if time consuming, to coordinate breaking changes with them. However, imagine we had an API endpoint that was open to the world with no authentication; in that case, we’d have no way to know who’s using it, and no effective way to coordinate with them to update their code to accommodate a breaking change. Unauthenticated open endpoints are one extreme; the other extreme might be a TypeScript utility function in our API repo. In a case like that, if we change the function’s return type or change it to take an extra argument, TypeScript will literally guide us to all the callers of the function that are broken by this change.
    • Corollary: Invest in tracking the usage of your APIs, because the easier you can identify callers, the easier it is to change the API.
  • How much would the callers benefit from a particular addition to your code’s promises? Promising that a field won’t be null is a great example of promising something that isn’t strictly necessary to promise — and yet doing so can be worth it if it saves a lot of users of the code a lot of boilerplate and/or edge-case-handling work, which they would need were the field marked as nullable.
  • How hard will it be to uphold the promise over time?

Finally, it’s not just the case that code promises things to its users; it also demands some things from them — usually in the form of required arguments. Here again, there’s a similar balancing act: the more your code demands, the more flexible it is — future use cases may be much easier to support if your code can count on certain arguments being provided, and those demands have no cost from its own perspective (it can always ignore some of the arguments, or loosen the demands later by making some optional). However, everything your code demands is something that its users must be able to supply, so larger demands can complicate all users (e.g., they each have to plumb the data for every required argument into their call sites).

Make Illegal States Unrepresentable

A huge part of what makes programming hard is that a system can be in an enormous number of states, and it’s very hard to write bug-free code that properly accounts for all of them. By making illegal states/inputs unrepresentable, we can greatly reduce the number of bugs and make our code more reliable and easier to reason about, with the need for fewer assertions.

See https://www.hillelwayne.com/post/constructive/, which refers to this as “constructive data modeling” and reviews some common approaches.

One common manifestation of this principle in our code base is the use of discriminated unions rather than having multiple cases smooshed into one object type with some nullable properties.

To give a concrete example, you should avoid something like:

type ProfileOrBusinessId = { profileId?: string; businessId?: string }

Instead, prefer:

type ProfileOrBusinessId = 
  | { type: "BUSINESS_ID", id: string }
  | { type: "PROFILE_ID", id: string }

The difference is that the former type allows zero or two ids to be provided — both of which should be illegal — and forces all the consuming code to handle both illegal cases. This can create cascading complexity (e.g., if a consuming function throws when it gets no ids, then its caller has to be prepared to catch that error too), and different bits of consuming code could easily end up doing different things in the case where both ids are provided (e.g., if they check the properties in a different order). By contrast, the second type requires exactly one id to be provided.
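As a usage sketch, consuming the discriminated union gets exhaustiveness checking for free (`describeId` is a hypothetical consumer, not code from our repo):

```typescript
type ProfileOrBusinessId =
  | { type: "BUSINESS_ID"; id: string }
  | { type: "PROFILE_ID"; id: string };

// The compiler verifies that both cases are handled, and there is no
// "zero ids" or "two ids" case left to account for.
function describeId(ref: ProfileOrBusinessId): string {
  switch (ref.type) {
    case "BUSINESS_ID":
      return `business ${ref.id}`;
    case "PROFILE_ID":
      return `profile ${ref.id}`;
  }
}
```

If a third variant is ever added to the union, this switch stops type-checking until it is handled, which is exactly the cascade you want.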

“Parse, Don’t Validate”

After your code verifies something about the structure of an input value it’s working with, strongly consider encoding that result into the types. See also https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

Our extensive use of tagged string types is a good example of this: after we verify something at runtime, we record it in the types so that we can write code that, at the type level, requires the right kind of argument. (See, e.g., OwnedEntityId and isOwnedEntityId.)
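The pattern behind those tagged string types looks roughly like this (a sketch: the `own_` prefix check is a made-up stand-in for whatever `isOwnedEntityId` actually verifies, and the brand field is illustrative):

```typescript
// A "branded" string type: ordinary strings are not assignable to it,
// so only values that passed the runtime check can flow into typed code.
type OwnedEntityId = string & { readonly __brand: "OwnedEntityId" };

// Type-guard "parser": the runtime verification is recorded in the types.
function isOwnedEntityId(id: string): id is OwnedEntityId {
  return id.startsWith("own_"); // illustrative check only
}

// Downstream code demands the refined type and never re-validates.
function loadOwnedEntity(id: OwnedEntityId): string {
  return `loading ${id}`;
}
```

Calling `loadOwnedEntity("some-raw-string")` is a compile-time error; callers are forced through `isOwnedEntityId` first, so the check happens exactly once, at the boundary.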

Duplicate Coincidentally-Identical Code

Two pieces of code that currently do the same thing should only reference common logic or types for that functionality (e.g., a shared base class or a shared utility function) — that is, the code should only be “made DRY” — if the two pieces of code should always, automatically evolve together.

Some concrete examples:

  • The input type (e.g., DTO) for updating an entity should not inherit from the type used when creating the entity. If the update type were to extend the creation type, that would mean that any new field that can be provided on creation would also automatically be allowed in the input to an update. However, not every new field should be updateable: having fields be automatically accepted in an update input can create security risks, and it extends the contract that your code has to uphold. On balance, then, it’s likely better to force the developer to decide, for each new field, whether and how that field should be allowed on update — even if that requires a bit more duplication and boilerplate for fields that are settable on both create and update. (This example is just a variation of the “fragile base class” problem.)
  • The types used in the application to reflect the shape of values stored in the database should not be the same as, or in any way linked to, the types that the application’s services use to reflect their legal inputs, outputs, or intermediate values. The reason is that a change to the type the application is working with does not automatically change the shape of data in the database; that data will only change if the developer remembers to explicitly migrate it. Therefore, having the type used for data in the database be linked to (and therefore automatically change with) other types turns the database types into lies, and actively hides the need to first migrate the data in the database, which the compiler would otherwise warn about.
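The first example might look like this in practice (field names are illustrative, not our actual DTOs):

```typescript
// Creation input: everything a new caregiver record needs.
interface CreateCaregiverDto {
  name: string;
  licenseNumber: string; // settable once, at creation
}

// Update input: deliberately NOT `Partial<CreateCaregiverDto>`.
// The overlap with the create DTO is duplicated on purpose, so a field
// added to creation never silently becomes updateable.
interface UpdateCaregiverDto {
  name?: string;
}

function applyUpdate(
  current: { name: string; licenseNumber: string },
  update: UpdateCaregiverDto,
): { name: string; licenseNumber: string } {
  return { ...current, ...update };
}
```

Because `UpdateCaregiverDto` lists its fields explicitly, adding `licenseNumber` to updates later is a conscious, reviewable decision rather than an accident of inheritance.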

In general, consider that, when multiple pieces of code reference some shared logic, updating that logic in the one place where it’s defined will lead those changes to cascade everywhere; that’s the whole point of DRY, but it’s a double-edged sword: sometimes, it prevents bugs by keeping every user of the shared logic in sync, on the latest (most-correct) version; but, sometimes the automatic propagation of changes (to places where the new assumptions/behavior of the new code should not apply) is itself the cause of bugs. Hence the original guidance: DRY up code that you’re reasonably confident should always, automatically evolve together, or DRY up code where, because of the specifics of the domain, the benefits of automatic change propagation outweigh the risks.

Forget the Navy, Join the Pirate Ship

[Written by Zingage Head of Operations Samantha Tepper]

The mythology of startup creation follows a familiar script: brilliant founder has breakthrough insight, starts company in garage, changes world. But there's a more interesting pattern hiding in plain sight: many of the most transformative companies aren't born from solitary inspiration – they emerge from the DNA of other great companies.

This isn't just about talented people leaving to start new ventures. It's about how solving hard problems inside fast-growing companies creates the perfect conditions for identifying massive opportunities.

Consider Snowflake, now valued at over $50 billion. Its founders, Benoit Dageville and Thierry Cruanes, spent years as data architects at Oracle, where they intimately understood the limitations of traditional data platforms. This deep operational experience with Oracle's technology and customer needs didn't just inform Snowflake's creation – it was essential to it. They didn't just have an idea for better data warehousing; they had years of pattern recognition about what actually worked and what didn't at massive scale.

We see similar patterns elsewhere in enterprise software. Some trace Retool's innovative approach to internal tooling back to experiences at Palantir, where the challenges of working with complex data systems reportedly inspired new thinking about how to build internal tools. While the exact details of this lineage aren't widely documented, it points to a broader truth about how innovation propagates through the technology industry.

Why does this pattern keep repeating? Three factors make great companies natural incubators for even greater ones:

  1. Scale creates visibility into problems worth solving. When you're operating at scale, you encounter problems that aren't just annoying – they represent massive market opportunities if solved. The internal tools teams build to solve these problems often have immediate product-market fit because they're built for real needs.
  2. High-performance teams develop exceptional pattern recognition. Working on complex problems at scale gives builders a sixth sense for which solutions could become standalone products. This isn't just about technical insight – it's about understanding what makes a solution truly valuable.
  3. These environments force pragmatic innovation. When you're building internal tools for a growing company, you can't hide behind theory. The solutions either work at scale or they don't. This creates a unique kind of builder – one who combines vision with practical experience.

We're entering an era where the most valuable companies will emerge from the DNA of today's scale-ups. Not through acquisition or investment, but through the natural evolution of solving hard problems with great teams. The next wave of breakthrough startups are probably being built right now as an internal tool somewhere.

At Zingage, we're assembling a team of renegades who reject the AI-replacement narrative and embrace abundance. We're building an Agent Swarm that allows everyday entrepreneurs - not just those in Silicon Valley - to bootstrap their businesses from zero to thousands of customers. If this sounds interesting, we'd love to chat at hiring@kuzushi.io.

The future of America and its Bedrock Economy

Kuzushi is the team behind Zingage.com. We are a team of builders deploying AI for the Bedrock Economy.

The Bedrock Economy is the part of the economy that’s essential to our everyday lives, yet hard to automate and impossible to offshore. These are the same people taking care of our parents, moving goods from coast to coast, and building infrastructure for the coming decades.

When COVID hit in 2020, more than 200,000 businesses in healthcare, construction, food, and hospitality closed their doors permanently. Though demand was often through the roof, these businesses struggled operationally. Behind the scenes, an army of back offices battled fragmented software, employee turnover, and a changing regulatory environment. With healthcare costs up 25% in the last 5 years and the price of housing seemingly unreachable for millions of Americans, the stakes have never been higher.

"Our goal is to make running these essential businesses as easy as selling on Shopify."

By adopting AI, the Bedrock Economy can leapfrog from paper forms and fax machines to autonomous agents. Companies will soon be able to automate whole categories of back-office processes without ripping out their systems of record, going through expensive retraining, or needing hyper-customized software.

A picture of our customer workflow pre-Kuzushi. They were using 5 different systems to manage their patient onboarding process.

At Kuzushi, we are building anthropomorphic AI: a digital colleague on your side 24/7.

We see a future where everyday entrepreneurs can scale without growing their headcount, reinvest these savings into their employees, and elevate the role of back offices from paper pushers to process designers. We see a future where new entrepreneurs are ushered into best operational practices from Day 1, so that they can get back to growth.

Since launching in 2023, we've successfully scaled our first vertical. We're now working with some of America's largest healthcare providers, onboarding thousands of new patients and clinicians every month, and we are slated to reach profitability this year. We raised a seed round from the same investors behind generational companies like Airtable, Deel, and Figma.

At Kuzushi, we're humanists at heart. We believe entrepreneurship is the essence of the American Dream, and we believe that our work will enable today's employees to earn a stake in tomorrow's economy. You'll be working alongside teammates who've scaled marketplaces to 9 figures in GMV and early builders at AI and fintech unicorns like Ramp.

Kuzushi is always looking to work with talented engineers, designers, and researchers. We heavily prioritize learning speed over seniority and offer highly competitive ownership to those who are in it for the long run. When you join, expect to work on large-scale data platforms, pioneering building blocks for "work," and designing AI-native user interaction.

What is Kuzushi?

Kuzushi means "to break balance." It is a Judo term describing the off-balancing a Judoka must create or capitalize on to take down their opponent. We named the company Kuzushi because we believe winning in markets works the same way: it is a series of Kuzushi moments that we either create or seize to push forward against tiny odds of success. Kuzushi is the most high-agency way to describe "why now" moments.

We see Kuzushi, or the opportunity for Kuzushi, in so many places. Two examples: in engineering, we ship code several times faster by leveraging Cursor; in sales, we're automating high-touch B2B outreach with AI-written copy that would have cost hours of sales time to create. Neither of these opportunities will last forever, but while they do, we will capitalize on them to maximize distribution and fortify our position.

“Whatever you do, do it constantly and massively increase the scope of your ambition.”

This quote, attributed to Napoleon, defines our culture at Kuzushi. We apply unrelenting, continuous forward pressure until we've won every customer. Every week we ask ourselves: how can I be 10x faster? How can I do 10x more?

Interested in joining us?

We are New York-based, well-funded, and are hiring for our Founding Team. If you are interested in joining us, reach out to us at dt@kuzushi.io.
