We’re entering a new phase of AI-assisted software development
I’m in the position of being both an avid developer and a Technology Manager/Director.
I have to admit that although I’ve been writing code most of my life, it’s rare that I’m able to do so in a work context these days. Most of my coding happens at home, in the evenings, on my own projects.
I’ve been using AI coding tools almost daily since it first became possible to do so, and I’ve had mixed emotions - from incredible frustration to absolute awe at the magic unfolding before my eyes.
Today I can build large projects in a couple of weeks that would previously have taken me six months to over a year - despite only coding in the evenings. I appreciate I don’t have the overheads you typically see in enterprise software delivery; even so, I’m more productive across key areas of the development lifecycle, and the output is higher quality, maintainable, and production-ready.
I know this because I’m in control, ensuring any code written by AI meets my standards and adheres to key software design principles.
AI adoption – How do we start?
In an enterprise software engineering context, every senior exec and engineering leader wants their teams to use these tools. Why? Productivity gains. Ultimately, can we do more for less, deliver the same in less time, or deliver at reduced cost?
From a developer’s perspective, though, the real benefit of AI is that it helps us build better software.
AI can assist the whole software development lifecycle (and beyond). However, achieving this is not easy. Even if our teams are able to write more code, without strong engineering foundations in place (practices and capabilities), bottlenecks in other areas of the SDLC can mean we don’t see the benefit (for example, as measured by the DORA metrics).
The first step is to increase adoption and buy-in from our teams - we need to evangelise, and we need the increase to be measurable. The goal? Developers using these tools effectively, realising their value and, crucially, still enjoying their work whilst doing so.
We can’t command teams into this. Instead, we must educate and enable. The willingness has to come from within.
Resistance
Over the years I’ve seen mature, high-performing, empowered teams push adoption of other new tools and practices themselves; with AI, however, I’ve witnessed more resistance and scepticism.
The real barrier
So the real question - if these tools are indeed beneficial, why isn’t adoption higher?
There are many factors outside of the team’s control - restrictive AI policy being one. Where the team does have the opportunity to use AI though (or at an individual experimentation level), it’s not that the tools aren’t good - it’s that they weren’t good enough without the foundation of experience to know:
- Where AI adds value versus where it slows you down
- How to adapt to the pace of change
- How to handle quirks, hallucinations, and ignored instructions
- How to prompt effectively and debug AI output
This meant a steep learning curve. If the results are inconsistent or hinder flow, the tool will be dismissed. It took time to learn the strengths and weaknesses of AI co-pilots and, crucially, where human judgment needed to intervene. Generally, only those who have been experimenting and iterating since the early days unlocked any real value.
However, for the average developer, these tools didn’t “just work.” The result? Frustration, scepticism, and a lack of engagement with the technology.
Disillusionment
We’ve been in a phase where the tools were powerful but not seamless.
Many tried them once or twice, encountered strange behaviour or got poor results, and walked away. Meanwhile, more experienced users have been able to extract immense value, widening the perception gap.
The tide is turning
If you haven’t read David Singleton’s post on advances in agentic AI coding tools, do. It really is a fantastic read that echoes my experiences:
https://blog.singleton.io/posts/2025-06-14-coding-agents-cross-a-chasm/
In the early days, quirks, hallucinations, and inconsistent output limited real-world enterprise use of AI coding tools, meaning many gave up early.
Now, agentic coding tools have crossed the chasm, as David puts it. They’ve moved from experimental to fundamental - tools that “just work”, meeting expectations and delivering powerful results.
We’re leaving the frustration phase. Tools that were once unreliable are becoming intuitive and consistent: they can build, debug, and iterate rapidly, and the results finally meet the expectations we had from the start.
Context is critical. This means providing agents with background - diagrams, product intent, requirements, plans and designs. These dramatically boost the effectiveness of the tools.
What’s next?
- For developers - If AI coding has felt clunky so far, don’t be discouraged. Experiment with the latest wave of agentic tools - learning to use these tools effectively is now easier than ever.
- For leaders - Real mass adoption may be closer than ever. Invest in education and enablement. Focus on low-friction workflows, measure adoption, and be cautiously optimistic.
My experience mirrors this shift
As previously mentioned, I’ve been using AI coding tools since the very beginning. In the early days I had to adapt to the inconsistency of the tools. This meant learning to prompt effectively, knowing which tasks AI performed well, and understanding where human input was essential.
I’ve also seen a real leap in recent months. This new generation of agentic co-pilots is more reliable, less error-prone, and consistently able to solve real problems effectively. They can now build entire features, and they do so reliably and consistently.
How I Code Today
One of the biggest contributors to this shift has been context and spec-based engineering.
My workflow mirrors some of the ideas used by Kiro, Amazon’s new IDE. I currently use Claude Code, Cursor, and GitHub Copilot in parallel, one tool per project (I can get so much done at once now, though I do have a clear favourite of the three today). Tools and models evolve so quickly that this approach gives me the best chance to continuously evaluate what works in practice.
Despite differences in tooling, my workflow keeps evolving, but the key principles remain consistent.
I involve AI early in the development lifecycle. I spend time creating detailed specs and technical plans, co-developing and refining these with the AI. This collaboration builds better context for implementation and ensures the build phase is well structured. These living documents create a shared understanding between me and the AI - clarity on architecture, edge cases, and software design trade-offs before any code is written.
Before implementation, I bring in a second AI as a reviewer of the specs. This new perspective provides feedback, highlights issues, offers alternative designs, and identifies any gaps.
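As a rough illustration of that handoff, here’s a minimal Python sketch of how a spec and plan might be bundled into a review prompt for a second model. The file names and the docs layout are hypothetical placeholders, not the actual documents or API of any particular tool - each agentic tool has its own way of taking on a reviewer role.

```python
from pathlib import Path

# Hypothetical file names - in practice these are whatever spec and plan
# documents the first AI and I have co-developed for the feature.
SPEC_FILES = ["docs/feature-spec.md", "docs/implementation-plan.md"]

REVIEW_INSTRUCTIONS = """You are reviewing a feature spec and implementation plan
before any code is written. Please:
- highlight gaps, ambiguities, or missing edge cases
- challenge the proposed design and suggest alternatives where appropriate
- call out risks to long-term maintainability
"""

def build_review_prompt(paths: list[str]) -> str:
    """Concatenate the spec documents into a single review prompt."""
    sections = []
    for path in paths:
        text = Path(path).read_text(encoding="utf-8")
        sections.append(f"## {path}\n\n{text}")
    return REVIEW_INSTRUCTIONS + "\n\n" + "\n\n".join(sections)

if __name__ == "__main__":
    prompt = build_review_prompt(SPEC_FILES)
    # The prompt would then be handed to whichever second AI acts as reviewer
    # (a different model or tool from the one that helped write the spec).
    print(prompt)
```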
Through all of this, I remain in control of the flow. I manage how the tools are used, when to step in, and how the software evolves. I may not write every line of code anymore (or any line of code in fact!), but I still design, decide, validate (crucial!) and direct every step.
For me this is a form of collaborative engineering at a scale I couldn’t have imagined possible.
I spend time shaping and structuring the information available to the AI. I give the tooling access to relevant research, code snippets, product definition, architectural diagrams, design constraints, architectural principles, coding rules, or feature specs. This dramatically improves output quality. I’ve also learnt that if you work with the AI to write clean, modular, testable code from the start, the quality of the codebase holds up over time.
My Approach
I follow a structured approach with agentic coding tools:
- Plan the next feature from a product and requirements perspective, creating supporting markdown files for context.
- Create an implementation plan and explore design decisions and complexity.
- Review the plan myself, then pass it to a second AI for feedback.
- Update plans and guidelines (in markdown) as necessary to ensure long-term quality and clean, modular code.
- Write new code and tests together (UI updates are generally built last) - sometimes TDD, sometimes not; I decide.
- Assess if any refactoring is required.
- Review commits.
- Tests and automated code quality checks are run by the AI, and I review the results. These include enforcing separation of concerns, dependency analysis, SRP pressure and size limits (ensuring large functions are broken down into smaller, focussed units), complexity limits, code style and consistency, and other architectural fitness functions (see the sketch after this list).
- Update documentation. I like to keep the documentation lean and relevant:
  - Iterate on plans.
  - Update product docs (what’s been built).
  - Update technical docs (how it’s been built).
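To make the “architectural fitness functions” step above more concrete, here is a minimal sketch of the kind of check I mean - this one flags functions that have grown beyond a size limit, one small example of the SRP and size pressure mentioned in the list. It’s an illustrative stand-in rather than the actual tooling I use; in practice this sort of rule is usually enforced with off-the-shelf linters, with the AI asked to run them and act on the findings.

```python
import ast
import sys
from pathlib import Path

# Illustrative threshold - tune to your own standards.
MAX_FUNCTION_LINES = 40

def long_functions(source_path: Path) -> list[tuple[str, int]]:
    """Return (function name, line count) for functions exceeding the limit."""
    tree = ast.parse(source_path.read_text(encoding="utf-8"))
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                offenders.append((node.name, length))
    return offenders

if __name__ == "__main__":
    failed = False
    # "src" is a hypothetical source directory - point this at your own codebase.
    for path in sorted(Path("src").rglob("*.py")):
        for name, length in long_functions(path):
            print(f"{path}: {name} is {length} lines (limit {MAX_FUNCTION_LINES})")
            failed = True
    sys.exit(1 if failed else 0)
```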
Right now I'm storing all documentation as markdown within the codebase. This serves two purposes:
- Living documentation (via Docusaurus, though a sync into Confluence wouldn’t be difficult)
- Future AI context for subsequent tasks (a rough sketch of this follows below)
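As a rough sketch of that second purpose - reusing the markdown documents as context for the next task - something like the following could gather the relevant docs into a single context block for an agent prompt. The docs/ location and the keyword-based selection are hypothetical; tools like Claude Code and Cursor have their own conventions for picking up project context.

```python
from pathlib import Path

# Hypothetical location of the markdown documentation within the codebase.
DOCS_DIR = Path("docs")

def build_context(keywords: list[str]) -> str:
    """Collect markdown docs whose names match any keyword into one context block."""
    sections = []
    for doc in sorted(DOCS_DIR.rglob("*.md")):
        if any(keyword in doc.stem.lower() for keyword in keywords):
            sections.append(f"<!-- {doc} -->\n{doc.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # e.g. pull in the product and technical docs relevant to a billing feature.
    print(build_context(["billing", "architecture", "coding-rules"]))
```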
Who's in control?
Although I’ve iterated my approach in line with the evolution of the tooling, one aspect has always remained consistent - I still operate with intention and control. I am the orchestrator.
I use these tools in an incremental flow - prompting, validating, refining. I still decide how the codebase evolves, when and how to refactor, and what architectural decisions to make. I no longer write the code manually but I’m very much still the architect and engineer.
Who Wrote This Post?
Thanks for taking the time to read this - I will continue to write about my experiences with AI as the tooling inevitably evolves.
The writing is mine; the ideas and thinking are my own, based on my experience. Let’s never forget how important human insight, innovation and experimentation are in this new age of AI.
Of course, AI lent a hand in putting this together by reviewing it and offering opinions and feedback, “making it, in a way, a live demonstration of the very principles I’ve described”. - AI