
The Architect's Brief — Issue #28

Q2 Retrospective: What Worked


Hey there,

Every quarter I review patterns across the teams I advise. Not to publish benchmarks, but to find what separates teams shipping confidently from teams spinning in circles.

Q2 2026 had three clear patterns worth sharing.


This Week's Decision

The Situation: You're heading into H2 planning and need to decide where to invest engineering capacity. The options always outnumber the resources. The question: what's actually moving the needle for teams similar to yours?

The Insight:

Pattern 1: Developer experience investments paid off with a 40% velocity gain.

Three teams I work with invested 15-20% of Q1-Q2 capacity in developer experience: local dev environment setup (from 4 hours to 20 minutes), CI pipeline optimization (from 18 minutes to 6 minutes), and better error messages in internal tools.

The measurable result: feature velocity increased 40% in the following quarter. Not because engineers got better, but because they stopped waiting. One team tracked it precisely: 47 minutes per engineer per day reclaimed from tooling friction. Across 14 engineers, that's 11 engineer-hours per day returned to feature work.
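That back-of-envelope math is worth reproducing for your own team. A minimal sketch; the numbers are the ones from the team above, and the function name is mine:

```python
def reclaimed_hours_per_day(minutes_per_engineer: float, engineers: int) -> float:
    """Engineer-hours per day returned to feature work once friction is removed."""
    return minutes_per_engineer * engineers / 60

# The team above: 47 minutes of tooling friction reclaimed per engineer,
# across 14 engineers.
daily = reclaimed_hours_per_day(47, 14)
print(f"{daily:.1f} engineer-hours/day")  # ~11.0
```

Plug in your own friction estimate; if the daily number is a meaningful fraction of a full-time engineer, the investment case writes itself.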

Pattern 2: The "we need microservices" conversation is declining.

In Q1 2025, 6 of my 12 advisory clients were actively discussing microservice extraction. In Q2 2026: 2 of 14. The monolith renaissance is real, and it's driven by practical experience rather than blog posts. The teams that did extract services in 2024-2025 are now dealing with the operational overhead, and candidly, most wish they hadn't.

The exception: teams that extracted a single, high-throughput service (image processing, webhook delivery, notification dispatch) report positive ROI. The pattern isn't "never extract"; it's "extract only the service with a fundamentally different scaling profile."

Pattern 3: AI-generated code moved from experiments to production, with friction.

Every team I advise now uses AI coding assistants. The teams succeeding have established clear boundaries: AI generates boilerplate, test stubs, and migration files. Humans write business logic, security-sensitive code, and architectural decisions. The teams struggling treat AI output as if it had already been reviewed. One team shipped an AI-generated SQL query that worked perfectly in testing but triggered a full table scan on their 40M-row production table.

The emerging best practice: treat AI-generated code with the same review rigor as code from a junior engineer in their first week. Read every line. Question every assumption.
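The full-table-scan bug above is easy to reproduce in miniature. A sketch using SQLite (the schema and query are hypothetical, not the team's actual code): wrapping an indexed column in a function defeats the index, and EXPLAIN QUERY PLAN makes that visible in review, before production does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

def plan(query: str) -> str:
    # EXPLAIN QUERY PLAN reports whether SQLite will scan the table
    # or search via an index; the detail text is the last column.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

# Direct comparison against the indexed column: an index SEARCH.
print(plan("SELECT * FROM users WHERE email = 'a@b.com'"))

# Wrapping the column in a function defeats the index: a full SCAN.
# Harmless on a test fixture, catastrophic on 40M rows.
print(plan("SELECT * FROM users WHERE lower(email) = 'a@b.com'"))
```

Running EXPLAIN (or your database's equivalent) on any AI-generated query is a cheap, mechanical review step that catches exactly this class of bug.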

When to Apply This:

  • Teams planning H2 capacity allocation: prioritize developer experience if tooling friction exceeds 30 minutes per engineer per day
  • Organizations re-evaluating microservice strategies: consider consolidation if operational overhead exceeds 15% of infrastructure spend
  • Engineering leaders setting AI coding assistant policies: establish clear boundaries between AI-appropriate and human-required code
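The two numeric thresholds above can be written down as a quick planning check. A minimal sketch; the cutoffs are this issue's rules of thumb, and the function is mine, not a published framework:

```python
def h2_priorities(friction_min_per_day: float, ops_overhead_pct: float) -> list[str]:
    """Apply the rules of thumb above to two H2 planning inputs."""
    priorities = []
    if friction_min_per_day > 30:   # tooling friction threshold
        priorities.append("invest in developer experience")
    if ops_overhead_pct > 15:       # microservice operational overhead threshold
        priorities.append("consider service consolidation")
    return priorities

print(h2_priorities(friction_min_per_day=47, ops_overhead_pct=10))
# → ['invest in developer experience']
```

Treat the output as a prompt for discussion, not a verdict; the thresholds are heuristics drawn from a small sample of teams.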

Worth Your Time

  1. DX: Developer Experience Benchmarks. DX publishes quarterly benchmarks on developer productivity metrics. Their Q2 2026 data confirms the developer experience ROI pattern: teams in the top quartile for CI speed ship 2.3x more deployments per week than the bottom quartile.

  2. Kelsey Hightower: Monoliths Are the Future. Hightower's argument has aged well. The core insight: distributed systems are an organizational pattern, not a technical one. If your organization isn't distributed, your architecture shouldn't be either.

  3. Simon Willison: AI-Assisted Development Patterns. Willison documents the most rigorous approach to AI-assisted development I've seen. His pattern of "AI proposes, human reviews, tests verify" prevents the class of bugs that come from trusting AI output without examination.


Tool of the Week

DevPod: open-source dev environments as code. If developer experience is your H2 priority, DevPod eliminates the "works on my machine" problem without the cost of GitHub Codespaces. Three teams I work with adopted it in Q2 and cut onboarding time from days to under an hour. Free, works with any IDE, and runs anywhere.


That's it for this week.

Hit reply with what worked (or didn't) for your team in Q2; I'm compiling patterns for a mid-year report. I read every response.

– Alex

P.S. For the leadership framework on making these capacity allocation decisions: Engineering Leadership: Founder to CTO.
