How We Build: Trust, Ownership, and the Systems That Support Them
Earlier this year, as ELLA moved from early access into production, we stepped back and looked at how we were building. The first version shipped the way most startups ship: fast and focused on proving the idea worked. That pace was right for the stage we were in, but as advisors began relying on the platform for real client work, the calculus changed. The practices that got us to launch weren't the same ones that would earn trust at scale.
So we rewrote our engineering principles. What came out of those updates was a set of practices that often run counter to conventional startup engineering wisdom. Where the default advice says ship fast and iterate, we chose to slow down at specific points to get it right. Where most teams apply ownership as a management philosophy, we operationalized it into engineering-specific heuristics. And where AI tooling is typically configured as a utility, we started testing what happens when you onboard it as a team member.
What follows is what we landed on and why it matters for advisors. It's a working document, still changing, but we're sharing it because people ask us what makes ELLA different, and the answer starts with how we build.
Why the stakes are different here
With 40% of the 33 million small business owners in the U.S. approaching retirement, an estimated $10 trillion in business assets will change hands over the coming years. Fewer than a third of these owners have any formal succession plan. This is the generation of exits that people in the industry call the "Silver Tsunami."
It's not an abstract problem. It's millions of businesses that employ over 25 million Americans: restaurants, manufacturing firms, HVAC companies, and the service businesses that keep communities running. When these exits go well, businesses keep operating and jobs stay local. When they don't, we lose more than a storefront.
The advisors who guide these transitions carry extraordinary responsibility. Every piece of guidance they provide, every valuation they deliver reflects on relationships they've spent years building. When an advisor uses software to help a client understand the value of their business, that software becomes an extension of the advisor's judgment.
Most software companies think about trust as a direct relationship: company to user. Our trust problem is transitive. When our tool produces a wrong number, the advisor doesn't lose trust in ELLA. The advisor loses credibility in front of a business owner they may have been guiding for years. That's a three-party trust chain, and the failure cascades outward from the person who had nothing to do with the bug. We can't patch that with a hotfix or an apology email.
We've talked to advisors who have horror stories. Valuation tools that produced numbers they couldn't defend. Discovery platforms that lost client data. Report generators that made them look unprepared in front of their most important clients. Each failure cost them something money can't replace.
"The advisors who use ELLA are putting the reputations of themselves, their firms, their clients, and their clients' companies on the line. We respect this by putting trust paramount to release velocity."
That's from our internal engineering-principles.md document. The document, checked in to version control, is a filter we apply to every decision about what to build, when to ship, and how to test. For a startup, choosing trust over velocity is a countercultural stance. The default advice is to ship fast and iterate. We do ship fast, but we treat trust failures differently from feature gaps. A missing feature is an inconvenience. A wrong datapoint in front of a client is a career-defining moment for the advisor who relied on us.
Trust as a design constraint
When an advisor brings a client into ELLA, they're extending their reputation. Every screen and calculation reflects on them. That's a weight we carry into every line of code.
Startups can't test everything. Resources are limited and the backlog is endless. Most testing philosophies either say "test everything" (aspirational and unfollowed) or "test the critical paths" (vague enough to mean anything). We needed something more specific: a triage protocol that answers the question engineers actually face when they have an hour left in the day and time to write one more test.
Our testing philosophy starts from a clear hierarchy of what to protect:
Security first. Authentication flows, permission checks, data access controls, and row-level security validation. If a client's data leaks because of our negligence, no feature set will matter.
Financial accuracy second. Calculations and metrics displayed to advisors need to be spot-on. A wrong number in front of a client can derail years of planning and relationship-building.
Data integrity third. CRUD operations and state transitions. The data advisors collect from their clients is irreplaceable. Losing it or corrupting it is unforgivable.
UX-critical paths fourth. Onboarding and core workflows, including how the application handles errors. If advisors can't figure out how to use the tool, or if it fails in confusing ways, they'll stop trusting it.
This ordering is deliberate. Security outranks financial accuracy because a data leak affects every client simultaneously, while a calculation error affects one engagement. Financial accuracy outranks data integrity because a wrong number actively misleads, while a data loss is at least obviously wrong. Each rank reflects a judgment about blast radius and how quickly the advisor can recover trust with their client.
For intricate logic like value chain calculations and authentication flows, we write failing tests first. Tests define the expected behavior before implementation begins. This forces us to think through edge cases and assumptions before we write a single line of production code. By the time we implement, we've already scaffolded our understanding of the problem.
When a bug reaches production, we don't just fix it. We write a regression test alongside the fix, one that fails without the fix and passes with it. This documents the specific failure so it can't recur undetected.
Ownership without bureaucracy
We work only with high-agency, highly engaged contributors. It's a cultural expectation that shapes how we operate.
Our team runs on a principle borrowed from Jocko Willink: Extreme Ownership. Every team member owns their outcomes completely. When something fails, whether that's a production bug or a miscommunication that leads to wasted work, the first question is always "What could I have done differently?" rather than seeking external blame.
There are blog posts about applying Extreme Ownership to engineering. Most stay at the level of management philosophy: leaders should take responsibility, teams should avoid blame. That's a mindset we follow, but what we needed were engineering-specific heuristics: practices concrete enough to change daily decisions about code, communication, and tooling.
Extreme Ownership plays out in four ways:
Own the mission. Understand the why behind your work. If requirements are unclear, it's your responsibility to clarify them, not to ship ambiguity. An advisor using ELLA to guide a client through a $5 million business sale doesn't care about your interpretation of a vague spec. They care that the tool works correctly.
Own the communication. If a stakeholder doesn't understand something, the failure is in the explanation, not the audience. Adjust until alignment is achieved. This applies to code comments and pull request descriptions as much as it does to conversations with the team.
Own the quality. Code that reaches review is code you stand behind. Don't rely on reviewers to catch what you should have caught yourself. The reviewer's job is to provide perspective, not to serve as a safety net.
Own the outcome. When things break, resist the instinct to blame tools or unclear specs. Ask what you could have done to prevent it. It's about control. If you can identify what you could have done differently, you can actually improve. If the fault is always external, you're stuck.
If you're not used to the concept of Extreme Ownership, this can sound harsh, but it's actually liberating. Problems get identified and fixed instead of litigated.
This mindset directly enables continuous improvement. Identifying gaps in process or communication only works when individuals accept responsibility for the outcomes they influence. And as we'll describe later, this ownership principle extends beyond human contributors. It shapes how we work with AI, too.
The dichotomy of leadership
Extreme Ownership is the foundation, but effective teams also balance opposing forces. Willink calls these tensions "The Dichotomy of Leadership." We've adapted his Laws of Combat into engineering principles, each with an engineering-specific tension that keeps the principle honest.
Cover and Move. No one operates alone. Reach out when you need help. Offer context when there's room for improvement. Support your teammates' PRs with timely reviews. When someone is blocked, help unblock them. The tension here: lead when needed, follow when others lead better. Your success is the team's success, and their failure is yours too. Helping a teammate isn't altruism. It's self-interest, because their blocked PR is blocking the team's velocity.
Simple. Complexity is the enemy of execution. Keep plans, code, documentation, and communication as simple as possible. If a solution requires extensive explanation, consider whether a simpler approach exists. If an LLM outputs functional yet sloppy code, it's on you to make it more elegant and durable. The tension: be disciplined, but not rigid. Simple doesn't mean simplistic. Sometimes complexity is genuinely required, but it should be justified complexity, not accidental complexity born from not thinking hard enough about the problem.
Prioritize and Execute. We're a fast-moving team, adjusting to customer feedback and finding our place in the industry. When overwhelmed, step back and identify the highest-impact problem. Solve that first. When dependencies conflict, escalate immediately rather than context-switching into paralysis. The tension: be aggressive, but not reckless. Move fast without breaking trust.
Decentralized Command. Every team member should be empowered to make decisions within their domain without waiting for permission. This requires understanding the product vision and business priorities well enough to act autonomously. When you understand the why, you can make good decisions without constant oversight. The tension: decentralized does not mean independent. Seek clarity before committing effort in the wrong direction. Lone wolves fail teams. Autonomy is earned through alignment.
These principles are baseline expectations. A team of owners who balance these tensions, accountable yet empowering, confident yet humble, is how we build something serious and durable.
AI as team member: an experiment in ownership
This is where Extreme Ownership gets interesting in 2026, and where we're running a test.
Most engineering teams treat AI tools as utilities: configure them, use them, review their output. We're trying something different. We wrote onboarding documentation for our AI collaborators the same way we'd onboard a human engineer, and we're treating the results as data about what works and what doesn't.
Our CLAUDE.md (the file that shapes how AI coding tools behave in our codebase) doesn't just describe the tech stack and project structure. It includes the same ownership expectations we hold for every human contributor. The AI is instructed to "seek to understand the why before implementing," to "investigate root causes rather than surface fixes," and to produce "code I'd stand behind." We wanted to test a hypothesis: what happens when you don't just give an AI context about your architecture, but ask it to internalize your engineering culture?
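The excerpt below is an illustrative sketch of that structure, not our actual file; the section headings are hypothetical, and the bulleted phrases are the instructions quoted above:

```markdown
# CLAUDE.md (illustrative excerpt)

## Stack and structure
<!-- tech stack, directory layout, conventions -->

## Ownership expectations
- Seek to understand the why before implementing.
- Investigate root causes rather than surface fixes.
- Produce code I'd stand behind.

## Pre-work checklist
- Would a bug here damage advisor reputation or client confidentiality?
- If requirements are unclear, ask before shipping ambiguity.
```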
What we've observed so far: the behavioral expectations change how the AI works. It asks more clarifying questions before implementing. It flags trust implications that a generic model configuration wouldn't surface. Our pre-work checklist includes the question "Would a bug here damage advisor reputation or client confidentiality?" and the AI applies that filter because we taught it to. These are early signals, not conclusions. The boundary between "configure as a tool" and "onboard as a team member" is still shifting, and we're paying close attention to where each approach earns its weight.
We use Claude Code and Cursor for daily development, and Mesa.dev for agentic code review with six specialist agents: Architecture, FinOps, Security, Performance, Next.js, and Release Management. Each agent reviews PRs through its own lens before a human reviewer sees the code. The ownership piece: when an agent repeatedly flags a non-issue, a team member submits a PR to refine that agent's definition. When an agent misses a recurring problem, we update it. We train our review agents the way we'd coach a new team member, because their blind spots are our responsibility. This training loop is ongoing and imperfect. The agents still flag things that don't matter, and occasionally miss things that do. We're learning where the boundaries are.
When serious or recurring concerns emerge around reliability, security, or recurring design questions, we encode them into "skills": structured documents that AI tools can reference during development. Most companies build runbooks and wikis for human engineers. We build them for AI engineers too. Each skill captures an architectural decision and a set of constraints that the AI should apply consistently. This is institutional knowledge designed for AI consumption, and it requires the same maintenance discipline as any other documentation. Stale skills are as dangerous as stale docs.
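As a sketch of the shape such a document takes (the skill name, decision, and constraints here are hypothetical examples, not entries from our actual library):

```markdown
# Skill: row-level-security (hypothetical example)

## Decision
Every query against client data goes through the RLS-enforced client,
never an elevated service-role connection, unless the call site is an
audited background job.

## Constraints the AI should apply
- New tables ship with RLS policies in the same migration.
- Tests for new queries include a cross-tenant access check.
- Any use of elevated credentials is flagged for human review.
```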
Here's where this connects back to Extreme Ownership. When the AI produces bad output, the first question is the same one we ask when anything fails: "What could I have done differently?" Maybe the CLAUDE.md wasn't clear enough about the pattern. Maybe the skill document didn't capture the constraint. Maybe the review agent wasn't trained on that class of problem. The ownership chain doesn't stop at the keyboard. If we've delegated work to AI and the output is wrong, we own the instructions that led there. Cover and Move means helping your AI tools get better. Own the communication means writing instructions clear enough that the AI can follow them.
We share this as an experiment in progress, not a solved problem, because one of our experience principles demands it. We call it Do Right: "Err on the side of transparency in goals, intentions, and processes every step of the way. Create space for discretion, treat customers (and their data) with respect, and grant grace in light of missteps." That principle governs how we build the product, and it governs how we build with AI. We're transparent with our team about where AI is being used and where its output requires heightened scrutiny. We're transparent with advisors about how AI factors into the platform. And we're being transparent here, sharing an approach that's still evolving, because we believe the engineering community benefits from honest accounts of what's actually happening inside teams integrating agentic workflows into trust-critical software.
The principle holds: speed with safeguards. AI enables faster iteration, which demands stronger automated guardrails. We use AI tools to catch errors introduced by AI-assisted development.
Why we share this
Advisors are putting their clients, their firms, their clients' companies, and their own reputations on the line when they use ELLA. They deserve to know how seriously we take that responsibility.
Trust, not transactions, drives advantageous exits. The same principle guides how we build the product that supports those relationships. And Do Right means being transparent about the process, including the parts we're still figuring out.
If you're an advisor evaluating ELLA, or an engineer curious about how we work, this is what you're looking at: a team that treats your reputation as a design constraint from the start, and believes that honesty about how we build matters as much as what we build.
ELLA is an AI-native platform for trusted advisors guiding business owners through consequential decisions. Learn more at exitwithella.io.
