A friend and I recently debated the meaning of work in the looming shadow of AGI. The premise was simple: if OpenAI - or any organization - achieves superintelligence, what's the point of doing anything at all?
In truth, I've had this conversation repeatedly with founder friends. Each new OpenAI release sparks awe and dread as it devours startups conceived just months earlier. The meme of startups reduced to mere ChatGPT wrappers feels painfully real. These discussions typically land us at two bleak conclusions: either join an AI lab to stay relevant or succumb to nihilism, lounging on Universal Basic Income in the supposed “post-scarcity” future. Advocates imagine humans pivoting gracefully toward art or leisure, but that vision feels patronizingly hollow.
Why does this scenario feel inevitable and limiting? Perhaps because we’ve mistakenly assumed that a single, centralized AGI - one supreme intelligence directing human affairs - is the optimal and natural outcome. Yet history challenges this assumption. Attempts at centralized planning, such as Mao’s Great Leap Forward or Stalin’s collectivization, repeatedly failed because they oversimplified complex human systems.
James C. Scott vividly illustrates this danger in Seeing Like a State. Colonial powers in Tanzania enforced monoculture farming, planting a single crop uniformly for maximum yield. Their "scientific" method disastrously ignored local wisdom. Indigenous farmers had traditionally practiced polyculture - planting multiple crops together. While seemingly inefficient and messy, polyculture safeguarded soil health, diversified risk, and allowed flexible responses to unpredictable conditions. The colonial approach, though theoretically optimized, proved rigid and catastrophically vulnerable.
The core misconception underlying singular AGI echoes this colonial mindset: the belief that superintelligence can - and inevitably should - become a digital god capable of making all decisions optimally. Yet real-world decision-making rarely offers neat solutions; it more closely resembles the messy moral complexity of the trolley problem. Intelligence alone, no matter how advanced, cannot dictate correct answers to inherently subjective moral dilemmas. Thus, we must clearly separate intelligence - the neutral ability to solve problems - from agency (authority to act) and values (the moral principles guiding actions).
Intelligence, in its purest form, involves computational power, data processing, and predictive modeling capabilities. It is fundamentally about pattern recognition, scenario forecasting, and logical analysis - essentially neutral skills that can enhance decision-making but do not inherently carry ethical weight or moral guidance. Agency, on the other hand, concerns who or what has the authority and accountability to act upon the outputs of this intelligence. Agency requires legitimacy, trust, and transparency - qualities that purely intelligent systems alone cannot ensure. Values represent the most human dimension of all; they encompass the moral frameworks, cultural contexts, and ethical considerations that ultimately guide decisions.
Today, systems like ChatGPT already display overarching personalities and value frameworks, intentionally designed by organizations like OpenAI. While this approach helps in establishing baseline safety and ethical guardrails, it presents two significant issues. First, these predetermined values might not fully align with the diverse perspectives, cultural contexts, and nuanced ethical landscapes of all users. Second, embedding a singular value system risks oversimplifying complex moral decisions, potentially resulting in outcomes disconnected from local realities or community-specific priorities. Therefore, a more robust approach would empower users and communities to tailor and tune these AI personalities and values to their specific needs and ethical standards, ensuring greater relevance, acceptance, and genuine alignment.
Startups uniquely embody this critical separation of intelligence, agency, and values. They deploy intelligence as technological infrastructure - powerful yet neutral tools capable of addressing specific problems. They restore agency by enabling local communities and users to actively choose, adapt, or reject these tools based on their distinct circumstances. Most crucially, startups allow values to remain community-defined and responsive to context, rather than universally imposed. For example, a rural healthcare clinic might adopt AI specifically tuned for resource-constrained environments, emphasizing preventive care aligned with local priorities. An urban hospital might choose a different AI optimized for managing high patient volumes and specialist coordination. Each community retains genuine agency, reinforcing accountability and achieving true alignment between technological capabilities and diverse human values.
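To make this concrete, here is a minimal sketch of what a community-tunable value layer could look like in code. Everything in it is hypothetical - the `ValueProfile` fields, the two example communities, and the placeholder `generate()` backend - since I'm not describing any particular stack. The point is only the separation argued for above: a shared model supplies intelligence, while a community-owned configuration supplies values.

```python
# Hypothetical sketch: community-defined values layered on top of a shared,
# neutral model. None of these names come from a real product or API.
from dataclasses import dataclass


@dataclass
class ValueProfile:
    """A community-owned value layer, kept separate from the model itself."""
    community: str
    priorities: list[str]   # what this community optimizes for
    constraints: list[str]  # local limits the model must respect

    def to_system_prompt(self) -> str:
        # The shared model supplies intelligence; this prompt supplies values.
        return (
            f"You assist the {self.community}. "
            f"Prioritize: {', '.join(self.priorities)}. "
            f"Respect these constraints: {', '.join(self.constraints)}."
        )


def generate(system_prompt: str, user_message: str) -> str:
    # Placeholder for any chat-completion backend (a hosted API or an
    # open-source model); stubbed out so the sketch runs on its own.
    return f"[model called with values: {system_prompt!r}] {user_message}"


# The same underlying intelligence, tuned by two different communities.
rural_clinic = ValueProfile(
    community="rural healthcare clinic",
    priorities=["preventive care", "low-cost interventions"],
    constraints=["no specialist on site", "intermittent connectivity"],
)
urban_hospital = ValueProfile(
    community="urban hospital network",
    priorities=["high patient throughput", "specialist coordination"],
    constraints=["strict triage protocols"],
)

for profile in (rural_clinic, urban_hospital):
    print(generate(profile.to_system_prompt(), "Plan tomorrow's intake."))
```

The separation is the design choice that matters: the same backend serves both communities, yet each retains authorship of its own priorities and constraints - agency and values stay local even when intelligence is shared.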
This approach mirrors how governance functions at its best: overarching federal policies exist alongside state laws, city ordinances, trade associations, and grassroots organizations. While centralized institutions like OpenAI attempt broad alignment efforts analogous to federal policy, startups act as local policymakers - crafting tailored, bottom-up solutions that reflect community-specific needs and values.
Such decentralization doesn’t just enable startups to gain initial traction - it positions them for sustained relevance. Startups rapidly build trust through close alignment with local communities, steadily compounding their advantages by integrating powerful open-source models like Llama and DeepSeek with specialized expertise, proprietary data loops, and deep relationships. These assets form an enduring edge, similar to how local clinicians remain indispensable because their practical insights and patient relationships withstand technological disruption.
Ultimately, I’m not advocating for decentralized intelligence and the startups that embody it out of nostalgia or a Luddite fear of our soon-to-come AI overlords. Sure, spending retirement as mediocre painters surviving on UBI sounds grimly amusing. But the real danger is more serious: placing all our trust in a single, omnipotent AI planner whose perfectly rational decisions could lead us straight off a cliff. Startups offer something far better - a morally diverse ecosystem of intelligences, built from the ground up by real communities. If history teaches us anything, it’s that pluralism - not centralization - is our strongest safeguard for human liberty. So yes, despite the looming shadow of AGI, it’s (still) time to build.