
DDD, TDD & Trunk-Based Development: How We Build Software

Most teams inherit their development practices by accident. We chose ours deliberately — Domain-Driven Design, Test-Driven Development, trunk-based development, and Railway Oriented Programming. Here is why these four practices work together, and what codebases look like without them.

By EclipticLink Team

Most software teams do not choose how they work. They inherit it. A set of practices accumulates over years — some introduced by a senior engineer who read a book, some copied from a Stack Overflow answer, some adopted because a vendor recommended them — and the result is a patchwork that nobody fully understands or defends. The team moves fast in the beginning and wonders why velocity degrades as the codebase grows.

We made deliberate choices. We use Domain-Driven Design to make sure the code reflects the real business. We use Test-Driven Development to design software through its interfaces before worrying about its implementation. We use trunk-based development to keep the entire team on the same page and eliminate the slow-motion disaster of long-lived feature branches. And we use the Railway Oriented Programming pattern to handle errors honestly rather than optimistically.

These are not independent choices. They reinforce each other in ways that compound over time. This post explains what each practice is, why we use it, and what we consistently observe in codebases that do not.

Domain-Driven Design: Code That Speaks the Business Language

Domain-Driven Design — DDD — is a software development approach that centres the business domain in all design decisions. The core idea is that the language the business uses to describe its problems should be the same language the code uses. Not a translation of it. Not a simplified version of it. The same language.

This sounds obvious until you look at a codebase where the business calls something an 'Enrollment' and the code calls it a 'UserCourseRelationship'. Or where a business rule like 'a subscription can only be paused twice per billing cycle' is scattered across three services, two stored procedures, and a frontend validation that is inconsistently applied. That drift between the domain and the code is not a naming problem. It is a comprehension problem — the engineers built what they understood, not what was needed.

Ubiquitous Language

The first and most foundational DDD concept is ubiquitous language: a shared vocabulary between the business and the engineering team that is enforced in conversations, documentation, and code. When a domain expert says 'claim' and a developer says 'ticket', one of them is wrong — and the friction from that misalignment compounds silently over months. In a codebase we maintain, the words in the code should be the words the business uses, and the business should be willing to use the words in the code.
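As a minimal sketch of what this looks like in code, consider the 'paused twice per billing cycle' rule mentioned earlier. The names here (`Subscription`, `pause`) are illustrative assumptions, not from a real codebase — the point is that the rule is stated once, in the business's own words, where a domain expert could read it:

```python
from dataclasses import dataclass


class PauseLimitReached(Exception):
    """Raised when the pause rule below is violated."""


@dataclass
class Subscription:
    # 'A subscription can only be paused twice per billing cycle' —
    # the business rule lives in one place, in the business's vocabulary,
    # not scattered across services, stored procedures, and frontend checks.
    MAX_PAUSES_PER_CYCLE = 2

    pauses_this_cycle: int = 0

    def pause(self) -> None:
        if self.pauses_this_cycle >= self.MAX_PAUSES_PER_CYCLE:
            raise PauseLimitReached("already paused twice this billing cycle")
        self.pauses_this_cycle += 1
```

A domain expert can confirm or correct this rule by reading it, which is the practical test of ubiquitous language.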

Bounded Contexts

The second core concept is bounded contexts: explicit boundaries around different parts of the domain where a consistent model applies. A large business domain contains sub-domains that use the same words to mean different things. 'Customer' in a billing context means the legal entity that receives invoices. 'Customer' in a support context means the human being having a problem. These are different concepts and they should live in separate models, not share a single anemic entity that tries to serve all contexts and serves none of them well.
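A sketch of that separation in code — the field names are illustrative assumptions, but the structure is the point: each context owns its own model of 'Customer' rather than sharing one entity:

```python
from dataclasses import dataclass


# Billing context: 'Customer' is the legal entity that receives invoices.
@dataclass(frozen=True)
class BillingCustomer:
    legal_name: str
    vat_number: str
    billing_address: str


# Support context: 'Customer' is the human being having a problem.
@dataclass(frozen=True)
class SupportCustomer:
    display_name: str
    contact_email: str
    open_tickets: int = 0
```

When one context needs data from the other, the translation happens explicitly at the boundary, rather than by growing a single shared entity that serves neither context well.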

What Teams Without DDD Look Like

Codebases without deliberate domain modelling develop a recognisable pattern: God objects that accumulate fields from every context, business logic hiding in controllers or database triggers, and a growing gap between what the requirements say and what the code does. Adding a feature requires understanding an ever-larger surface area of the system. Onboarding a new developer takes months because the domain knowledge is implicit rather than encoded. Bugs emerge not because the code is complex but because the model no longer reflects reality.

Test-Driven Development: Design First, Code Second

Test-Driven Development is widely misunderstood. Most people think it is about testing. It is not — or not primarily. TDD is a design practice. Writing the test first forces you to define the interface of what you are about to build before you think about how to implement it. That shift in thinking produces cleaner, more composable code than almost any other practice we have adopted.

The cycle is three steps: write a failing test that describes the behaviour you want (red), write the minimum code that makes the test pass (green), then refactor the code to be clean while keeping the tests green. This cycle repeats dozens of times per hour. The discipline is not optional — skipping the red step, or writing implementation before the test, loses most of the design benefit.

What TDD Actually Produces

Code written test-first tends to be more modular because tightly coupled code is hard to test in isolation. It tends to have clearer interfaces because you are calling the code from the test before it exists, so you design the call site first. And it accumulates a comprehensive test suite as a side effect of development — not as additional work, but as the artifact of the design process itself.

The compounding benefit appears over months. A codebase with high test coverage and tests written to describe behaviour is one where changes can be made with confidence. You refactor without fear because the test suite tells you immediately if you broke something. You add features without worrying about regressions because the existing behaviour is specified in executable form.

What Teams Without TDD Look Like

Without TDD, tests get written after the fact — if at all. Tests written after the implementation tend to mirror the implementation rather than specify the behaviour, which means they test that the code does what it does rather than that it does what it should. The test suite becomes a burden rather than a safety net. Refactoring slows down because nobody is confident what the tests actually cover. Features get shipped with less certainty, and the 'it works on my machine' response to production bugs becomes routine.

If writing a test for your code feels hard, that is the code telling you something about its design. TDD makes that feedback immediate instead of something you discover six months later.

Trunk-Based Development: One Branch, One Truth

Trunk-based development is a source control strategy where all developers commit to a single shared branch — the trunk, or main — either directly or through short-lived branches that are merged within a day or two. This is the opposite of the Gitflow or long-lived-branch model where features live on separate branches for days or weeks before being integrated.

The argument for long-lived branches feels intuitively safe: isolate the work, integrate when it is ready, reduce the risk of breaking the main branch. The problem is that this intuition is backwards. The longer code lives in isolation, the more it diverges from the main branch, and the more expensive and risky the eventual merge becomes. Teams using long-lived feature branches spend a significant portion of their sprint on merge conflict resolution — and they pay the tax again and again, on every branch.

How Trunk-Based Development Works in Practice

Trunk-based development requires that every commit to the trunk leaves the codebase in a deployable state. Incomplete features are hidden behind feature flags rather than kept on a separate branch. This means a developer can merge a partially built feature today, another developer can merge unrelated changes tomorrow, and the trunk is always clean. Feature flags let you deploy code without releasing the feature — testing it in production, gradually rolling it out, or rolling it back instantly without a code change.
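A minimal sketch of the flag mechanism, assuming a simple in-process flag store (real teams typically use a config service or a vendor SDK, and the function names here are our own):

```python
# Hypothetical in-process flag store; this code is merged to trunk
# with the flag off, so the new path is deployed but dark.
FLAGS = {"new_checkout_flow": False}


def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list[float]) -> tuple[str, float]:
    return ("legacy", sum(cart))


def new_checkout(cart: list[float]) -> tuple[str, float]:
    # Partially built feature: safe to merge because no user reaches it.
    return ("new", sum(cart))


def checkout(cart: list[float]) -> tuple[str, float]:
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)  # the path every user still takes
```

Flipping the flag releases the feature without a deployment, and flipping it back is an instant rollback — no code change, no revert commit.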

The Continuous Integration Connection

Trunk-based development and continuous integration are inseparable. When everyone is merging to the same branch multiple times per day, a CI pipeline that runs the full test suite on every push is not optional — it is the mechanism that makes the whole approach safe. This is another reason TDD is in the stack: high test coverage is what gives you the confidence to merge frequently without fear.

What Teams on Long-Lived Branches Look Like

Long-lived branch teams develop a recognisable rhythm. The first half of a sprint is productive. The second half involves resolving conflicts, fixing the breakages that merges introduced, and delayed releases because 'the branch isn't ready yet'. Deployments cluster at the end of sprints. The main branch is often not deployable. Integration testing becomes a project in itself. The team is not slow because the engineers are slow — they are slow because the process creates friction that grows with team size.

Railway Oriented Programming: Honest Error Handling

Railway Oriented Programming — sometimes called the Result pattern or two-track design — is an approach to error handling drawn from functional programming. The name comes from an analogy: think of the execution of a function as a train on a track. There is a happy path (the success track) and a failure path (the error track). Once a train switches to the failure track, it stays there, carrying the error information forward until something handles it.

In most conventional code, error handling is an afterthought expressed through a combination of exceptions, null returns, error codes, and defensive if-checks scattered through the logic. The happy path is clear; the error paths are implicit, inconsistent, and easy to miss. Railway Oriented Programming makes both tracks explicit and composable.

How It Works

Instead of a function returning a value or throwing an exception, it returns a Result type that is either a Success containing the value or a Failure containing the error. Operations can be chained: if the previous step succeeded, apply the next operation; if it failed, pass the failure forward unchanged. The calling code handles both outcomes explicitly at the point where it actually knows what to do about them.
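A minimal Python sketch of the pattern. The `Success`/`Failure`/`then` names are our illustrative choices — production codebases typically use a dedicated library, or a language with built-in Result types — but the mechanics are the same:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")


@dataclass(frozen=True)
class Success(Generic[T]):
    value: T


@dataclass(frozen=True)
class Failure:
    error: str


Result = Union[Success[T], Failure]


def then(result: Result, fn: Callable) -> Result:
    """Apply fn on the success track; pass failures through unchanged."""
    if isinstance(result, Failure):
        return result  # stay on the failure track
    return fn(result.value)


# Two hypothetical validation steps chained on the railway.
def parse_amount(raw: str) -> Result:
    try:
        return Success(float(raw))
    except ValueError:
        return Failure(f"not a number: {raw!r}")


def check_positive(amount: float) -> Result:
    return Success(amount) if amount > 0 else Failure("amount must be positive")


outcome = then(parse_amount("42.5"), check_positive)
```

A failure at any step is carried forward untouched, and the caller pattern-matches on `Success` or `Failure` at the one place that knows how to respond.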

This produces several concrete improvements. Errors become visible in function signatures rather than hidden in documentation or discovered at runtime. Chaining operations becomes clean and readable — a sequence of transformations where failure at any step is handled uniformly. Testing error paths becomes straightforward because failures are values, not control flow exceptions. And error handling logic is concentrated rather than scattered — the railway consolidates what would otherwise be defensive checks throughout the codebase.

What Code Without This Pattern Looks Like

Without an explicit result pattern, error handling tends to degrade over time. Early in a project, developers check errors carefully. Under deadline pressure, checks get skipped with the intention of returning to them. Exceptions bubble up to generic top-level handlers that log the error and return a 500. Edge cases get discovered in production. The code that is supposed to handle failure is the least-tested, least-reviewed code in the system — because it is the hardest to reach in tests and the easiest to deprioritize when the happy path is working.

Why These Four Practices Belong Together

Each of these practices is valuable individually. Together, they form a system where each one reinforces the others in ways that compound over the lifetime of a project.

  • DDD gives you the right model — the code reflects the real domain, the language is shared, and the boundaries are explicit
  • TDD ensures the model is correct and stays correct — every piece of domain logic has an executable specification that runs on every commit
  • Trunk-based development keeps the entire team aligned on the same model — there is one version of truth, not eight branches each with slightly different interpretations of the requirements
  • Railway Oriented Programming makes the domain model's error cases explicit — the things that can go wrong in the domain are represented as values, not silent failures

The result, in practice: codebases that remain comprehensible and fast to extend years after the initial build. New developers can read the code and understand the business rules it encodes. Features can be added without archaeologically excavating the existing logic to understand its implicit assumptions. Production incidents are rarer, and when they occur, the observability and test coverage make them faster to diagnose and fix.

Is This More Work?

Honestly: yes, upfront. Writing tests before code takes longer in the first pass than writing code without tests. Maintaining ubiquitous language requires ongoing conversation between business and engineering. Trunk-based development requires the discipline of feature flags and smaller commits. Railway-style error handling requires more explicit code than throwing exceptions everywhere.

But software is not a sprint. The relevant comparison is not the first two weeks of a project. It is the total cost and velocity over twelve, twenty-four, thirty-six months. On that timeline, teams without these practices consistently spend more time fighting their own codebase than building new functionality. The return on the upfront investment is not marginal — it is the difference between a product that can evolve and one that eventually has to be rebuilt.

| Practice | Upfront Cost | 12-Month Payoff |
| --- | --- | --- |
| Domain-Driven Design | Slower initial modelling, more conversations with stakeholders | New features fit naturally; onboarding is faster; fewer misunderstandings shipped as bugs |
| Test-Driven Development | First-pass development is slower | Refactoring is fearless; regressions are rare; defect rate drops significantly |
| Trunk-Based Development | Feature flag discipline; smaller, more frequent commits | Merge conflicts are rare; CI is fast; the team is always integrated |
| Railway Oriented Programming | More explicit code for error cases | Error handling is consistent and tested; production surprises decrease; code is easier to reason about |

What We Have Seen in Practice

We have maintained codebases built with these practices and codebases inherited without them. The difference is not subtle. In codebases without deliberate domain modelling, adding a feature that touches the core domain is a multi-day exercise in archaeology. In codebases without TDD, every release carries a fear that something unrelated will break. In teams on long-lived branches, the end of a sprint involves as much integration work as development work. In codebases with inconsistent error handling, production incidents are harder to diagnose because failure modes are implicit.

We are not saying every other team is wrong. We are saying these practices exist because smart engineers learned from painful experience what makes software maintainable at scale — and we have absorbed those lessons into how we work by default, not as aspirational guidelines but as actual daily practice.

If you are building software and want a team that brings this level of engineering rigour to your project, explore our custom software development services or get in touch to talk about how we work.

Tags: Domain-Driven Design, Test-Driven Development, trunk-based development, Railway Oriented Programming, software engineering best practices, DDD, TDD, agile engineering practices, clean code, software delivery practices, continuous integration, feature flags

Ready to put this into practice?

EclipticLink builds custom software, AI integrations, and automation systems for startups and enterprises. Let's talk about your project.