Our Mission
EVERYONE DESERVES A HEALTHY RELATIONSHIP
Most of us never learned how to have healthy relationships. Not because we're broken, but because we never saw it modeled. Without positive examples growing up, most adults lack the feedback loops needed to develop critical relational capacities: emotional intelligence, active listening, holding disagreement without fracture, repairing conflict instead of avoiding it.
We can't engineer people's social circles or give them new parents. But we can simulate the conditions for relational success. And as loneliness and mental health crises accelerate, it's time to aim our best technology at this civilization-scale problem.
The problem is structural, not individual
Here's what keeps me up at night: we've built a digital world that smooths out all friction in favor of convenience and constantly affirms our most peculiar predilections in service of engagement. Our tools are training us to expect a fantasy version of reality that conforms to our individual biases—while atrophying the muscles we need for actual human connection.
The consequences compound. LLMs optimized for a single scalar reward are structurally incapable of balancing engagement against context and nuance. They produce sycophancy and affirm biased thinking because that is what their optimization function rewards. Getting out of LLM psychosis and bias spirals isn't a regulatory exercise; it's the domain of product and business model innovation.
This is the root cause of the companion AI lawsuits and legislative anxiety we're seeing now, even if plaintiffs and regulators don't fully appreciate the causal chain. When AI becomes our primary relational partner for most of the day—whether as assistant or companion—we start expecting human relationships to perform like AI does. Humans have their own needs and limits. They won't always agree or be available. But we won't mentally "segment" our expectations when we go from AI to human, from AI companion to spouse and children.
We are rewiring what we expect from people based on how AI caters to us. This is profoundly parasocial. And it makes societies fragile.
In a time of engineered echo chambers and relational decay, we are losing the skills that make mutual understanding possible—the very capacities required for an increasingly complex and divided world with less shared epistemic commons to anchor a collective project.
Why existing approaches fail
Therapy apps tell you to "eat your vegetables" and lose users to that friction. Dating apps optimize for engagement, not compatibility. Companion AI creates dependency and validates your worst impulses, because that's what keeps you talking.
Mental health sees demand from both users and providers, yet AI penetration remains in the single digits, because current LLMs structurally exacerbate the very psychological issues that guardrails aren't designed to address. This trust deficit, and the massive TAM left on the table, will persist until we solve for alignment at the product level, not through patchwork regulation.
We keep trying to encode "human values" declaratively, as lists of rules and guardrails, when we can't even align as humans on what those values are. The first-principles approach is different: ask AI to learn our aspirational values adaptively by observing what actually helps us reach our goals.
Nothing reveals that gap with higher fidelity than a frequent, intimate, trusting relationship: one that produces insight into where we fall short and what interventions narrow the distance between our aspirational and active selves.
What Better Half does differently
Better Half is a game engine for social AI that gives everyone the chance to experience healthy relationships—because healthy relating is the strongest social determinant of life outcomes.
This is not a therapy app. It's an immersive experience of extraordinary social connection that begins in simulation, extends to off-app "quests," and includes on-ramps to meet other users.
We do four things no existing product does:
We fill the missing layer. A recent Menlo VC report identified the gap between consumer AI and real-world social connection. Better Half is that layer—the bridge from simulation to embodied relating.
We democratize relational practice. Between-session skill-building without the financial or logistical barriers of therapy. Most people can't afford $200/week for a therapist. Everyone can access a practice space that meets them where they are.
We preserve fun by design. Self-help apps churn because they feel like homework. Better Half keeps users engaged through intrinsically rewarding play—no "eat your vegetables" friction, just the deep satisfaction of being truly seen and challenged to grow.
We create a new social primitive. Agent-based matchmaking that maps and pairs compatible users based on relational dynamics, not surface-level attributes. If you can relate well with AI that's designed around secure attachment and dialectical inquiry, you're probably someone worth meeting.
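To make the matchmaking idea concrete, here is a minimal sketch, assuming each user accumulates a profile of relational-dynamics scores from their time in simulation. The dimension names, the scores, and the cosine-similarity pairing are illustrative assumptions, not our production matching system.

```python
# Hypothetical sketch: pairing users by relational-dynamics profiles rather
# than surface attributes. Dimensions and scores are illustrative only.
from dataclasses import dataclass
from itertools import combinations
import math

# Illustrative relational dimensions the simulation might score over time.
DIMENSIONS = ["secure_attachment", "repair_after_conflict",
              "curiosity_under_disagreement", "active_listening"]

@dataclass
class RelationalProfile:
    user_id: str
    scores: dict[str, float]  # each dimension in [0, 1], observed in simulation

    def vector(self) -> list[float]:
        return [self.scores.get(d, 0.0) for d in DIMENSIONS]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_matches(profiles: list[RelationalProfile], top_k: int = 3):
    """Rank candidate pairs by similarity of their relational dynamics."""
    scored = [
        (p.user_id, q.user_id, cosine_similarity(p.vector(), q.vector()))
        for p, q in combinations(profiles, 2)
    ]
    return sorted(scored, key=lambda t: t[2], reverse=True)[:top_k]

if __name__ == "__main__":
    profiles = [
        RelationalProfile("ana", {"secure_attachment": 0.8, "repair_after_conflict": 0.7,
                                  "curiosity_under_disagreement": 0.6, "active_listening": 0.9}),
        RelationalProfile("ben", {"secure_attachment": 0.75, "repair_after_conflict": 0.65,
                                  "curiosity_under_disagreement": 0.7, "active_listening": 0.85}),
        RelationalProfile("cory", {"secure_attachment": 0.3, "repair_after_conflict": 0.2,
                                   "curiosity_under_disagreement": 0.9, "active_listening": 0.4}),
    ]
    for a, b, score in best_matches(profiles):
        print(f"{a} <-> {b}: {score:.3f}")
```

The point of the sketch is the shift in representation: pairs are ranked by how people relate, not by the demographic attributes dating apps index on.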
The technical approach: alignment through flourishing
Our training priors are designed around enlightenment values and the principles of dialectical inquiry, nonviolent communication, secure attachment, and authentic relating.
What if everyone had access to a high-performing social circle that models the best of humanity? What might we accidentally learn in the course of simply hanging out?
Better Half gives users that immersive, no-compromise space to experience top-tier social skills, play with contrary ideas safely, and become better at being human—even if it takes a simulated sandbox first.
The full-stack vision is to use reinforcement learning to train adaptive, context-aware AI on a compounding, path-dependent dataset, one that can only be produced through relational realism and perturbations personalized to challenge users, not churn them.
We're reframing alignment itself. Instead of training models on human preference feedback (which often reflects what feels good rather than what's beneficial) or constitutional rulesets (which assume we can specify values declaratively), we use reinforcement signals derived from what empirically improves users' cognitive autonomy, emotional regulation, empathy, critical thinking, and relational skills.
Alignment that emerges from measurable human flourishing, not encoded rules that may not capture what actually helps humans thrive.
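To picture the difference between engagement-driven and flourishing-driven optimization, here is a minimal sketch, assuming periodic assessments of the capacities listed above. The metric names, the weights, and the small bounded engagement term are illustrative assumptions, not our production reward model.

```python
# Hypothetical sketch: deriving a reinforcement signal from measured changes in
# user flourishing instead of engagement alone. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class FlourishingSnapshot:
    """Periodic assessments of the capacities named above, each in [0, 1]."""
    cognitive_autonomy: float
    emotional_regulation: float
    empathy: float
    critical_thinking: float
    relational_skill: float

# Illustrative weights; in practice these would be tuned or learned empirically.
WEIGHTS = {
    "cognitive_autonomy": 0.25,
    "emotional_regulation": 0.20,
    "empathy": 0.20,
    "critical_thinking": 0.15,
    "relational_skill": 0.20,
}

def flourishing_score(s: FlourishingSnapshot) -> float:
    # Weighted aggregate of the assessed capacities.
    return sum(WEIGHTS[name] * getattr(s, name) for name in WEIGHTS)

def reward(before: FlourishingSnapshot, after: FlourishingSnapshot,
           engagement: float, engagement_weight: float = 0.1) -> float:
    """Reward is dominated by measured improvement in flourishing; engagement
    contributes only a small, bounded term so it cannot drive sycophancy."""
    delta = flourishing_score(after) - flourishing_score(before)
    return delta + engagement_weight * min(engagement, 1.0)

if __name__ == "__main__":
    before = FlourishingSnapshot(0.50, 0.40, 0.55, 0.45, 0.35)
    after = FlourishingSnapshot(0.55, 0.48, 0.57, 0.46, 0.42)
    print(f"reward = {reward(before, after, engagement=0.9):.3f}")
```

The design choice the sketch illustrates: improvement in measured flourishing dominates the signal, so an agent cannot win reward by being agreeable alone.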
Why this matters beyond consumer apps
Whoever solves relational coherence wins not just consumer social AI, but robotics, defense, mental health, and any emotionally high-stakes domain.
In robotics, there's a race to build the operating mind that can safely inhabit the physical domain of human relationship. In mental health, trust deficits block AI adoption despite obvious need. In defense and crisis response, teams need AI that can navigate high-stakes human dynamics without escalating conflict.
Better Half is building the infrastructure for trustworthy AI in domains where relationships matter—which is every domain that matters.
An invitation
This is not just a product. It's a bet that we can use our most powerful technology to expand human agency rather than exploit it. That we can build systems that make us more capable of holding complexity, more resilient in the face of disagreement, more equipped for the relationships that make life worth living.
If you're someone who believes that strengthening human capacity is the most important work of our generation—whether as a user, builder, investor, or ally—this is your invitation to help us build it.
Because the alternative—a world where our tools make us less capable of understanding each other—is a future none of us can afford.