Grounded decision records from AI conversations
If you've read some of my posts before or worked with me, you know I like using Architectural Decision Records (ADRs) for lots of reasons. To me, the most important one is documenting the why of a decision.
If you've worked with AI models before, you've probably asked them for options while brainstorming solutions when you're not sure about direction. In that situation, I've found it quite easy, maybe even more logical, to document the decision in a decision record with help from the AI model you've been working with. After all, it gave you feedback along the way anyway.
This post briefly explains how I use Claude Code to write decision records (not necessarily architectural ones) when it has helped me make a decision. My goal isn't an elaborate investigation of the pros and cons of this approach, but I will touch on a few points. The process is not specific to Claude Code and can be adapted to other AI models or tools as well.
Here's the process I've been following:
1. Ask for the decision record during the conversation
After going back and forth on options and reaching a decision, I ask something like:
Given your feedback, I think Option 2 is the way I want to implement.
Principally I do not want to complicate things with 2 tools at the moment.
Before implementing, summarise this into an ADR in the 'adr/' directory as a way of confirming our mutual understanding.
So, write an ADR first, then ask me to confirm the ADR. Once I do, continue implementing.
The phrasing is a bit sloppy, since this is what I actually wrote for a recent change, but that's OK; Claude Code can work with it.
2. Review and edit the generated draft
What it comes up with initially is actually pretty good. A lot depends on the conversation you had with it and the input you gave it, of course, but the draft captures what was discussed and decided quite well. The text is structured like an ADR without me having to explain what an ADR is, and it even picked up the decision record's next number.
Some content editing is usually needed, but having a complete draft to start from makes a big difference compared to starting from a blank page. So after the draft is written, I edit it, mostly for content.
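For reference, the draft typically follows the familiar ADR layout. Below is a minimal sketch of that structure; the number, title, and wording are made up for illustration, loosely based on the example prompt above:

```markdown
# 7. Keep dependency management in a single tool

## Status

Proposed

## Context

We discussed two options. Option 1 introduces a second tool; Option 2 keeps
everything in the existing one. The main trade-off is flexibility versus the
overhead of maintaining two tools.

## Decision

We go with Option 2: we do not want to complicate the setup with two tools at the moment.

## Consequences

The setup stays simple. If the current tool becomes a limitation, we revisit this decision.
```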
3. Use as implementation foundation
Once that's done, the decision is documented and can serve as the basis for the next step. To make sure the continuation is based on the decision record only, you can clear or restart the Claude Code session at this point and start fresh.
This last step is meant to ground the decision record in reality. When you use the decision record as the basis for implementation, it becomes living documentation that creates a feedback loop: during implementation you may run into issues that weren't anticipated during the decision-making process, and these can then be documented in the decision record, creating a continuous improvement cycle.
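As an illustration, the prompt in the fresh session can be short; the file name here is hypothetical, matching the sketch above:

Read adr/0007-keep-dependency-management-in-a-single-tool.md and implement the decision described there.
If you run into anything the ADR did not anticipate, point it out so we can update the record before continuing.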
I often use my home projects as playgrounds to experiment with ideas, technologies, and methodologies, either because they're new to me or because I want to confirm they're still useful. This is why most of my home projects serve two purposes: build the thing, and apply what I consider best practices.
In this particular project, I felt the need to create decision records. They aren't necessarily architecture-related decisions, but rather decisions I'd like to document so I remember why I went a certain way. I mainly reuse the markdown format of ADRs to document them.
Generally speaking, this is mostly an experiment at the moment. I'm trying it out in the setting of personal projects, but I think it may lower the bar for writing decision records in general. Not everyone wants to go to great lengths to record decisions, even if they see the value. Personally, I think documenting a decision doesn't need to take a lot of time, yet I often end up spending much more time on it than I anticipated or hoped for when I started writing one.
Here is a decision record example for those interested. It was mostly written by an LLM. I know this may be controversial, but it is also useful. I don't consider it 'AI slop': if you review it and it all makes sense, I see little need for a complete rewrite. Having a documented decision is worth more than not documenting it just because an AI wrote the text. I haven't tried this in a team or organisational setting, but I'm curious to see how it would work out and how people would feel about it. In any case, whoever created the decision record with the help of an AI remains responsible for the decision and its recording.
Also, "working with an LLM replaces thinking" is an often heard argument. I agree and it's a reason for concern for me too, the impact on my own thinking. But an LLM also comes up with good ideas. It is easy and tempting to simply accept what's there. Maybe not all decisions require deep thinking though, and we just want to note down why we're doing the thing we're doing this way.
I am also thinking it may be useful to note in the decision record to what extent an LLM was used to generate the content. For one, it gets the question out of the way when people are suspicious of whether and how much AI was used. I think this is comparable to the time or other constraints one has when writing ADRs or decision records in general: those may be worthwhile to document too. Sometimes there isn't enough time to think deeply about the decision or to consider more options, and you have to decide with quite a few unknowns. In those cases, I recommend documenting that in the decision record as well.
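A short note in the record can cover both. This is just an illustration; the section name 'Provenance and constraints' is made up, not part of any ADR convention:

```markdown
## Provenance and constraints

Drafted by Claude Code from the design conversation; reviewed and edited by me.
Decided under time pressure; not all alternatives were explored in depth.
```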
Finally, there are quite a few interesting things that could be done next. When working with Claude Code, if you create decision records more often, it makes sense to create a command to use as a 'saved prompt'. I actually created an agent for this as well (mostly to try out agents, I admit).
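As a sketch, assuming you use Claude Code's custom slash commands (markdown files under `.claude/commands/`; check the current documentation for the exact format), such a saved prompt could look like this:

```markdown
<!-- .claude/commands/adr.md (hypothetical file) -->
Summarise the decision we just reached in this conversation as a decision record
in the 'adr/' directory, using the next free number. Cover the context, the options
we discussed, the decision, and its consequences. Ask me to confirm the record
before continuing with the implementation. Additional instructions: $ARGUMENTS
```

In a session, something like /adr should then expand to this prompt without retyping it.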
If you want to read more about ADRs, I've written about them before:
Thank you for reading, Hans