Test Naming Guidelines
TLDR: Below are test naming guidelines that help me write consistent, clear test names.
This is a quick post with the Test Naming Guidelines I have been using to keep my test names consistent across multiple projects. I also use these with LLMs when writing tests or when refactoring existing tests to this style.
I used one of my personal projects as a basis to develop, refine and structure these guidelines with help from an LLM.
Below are the guidelines.
Format
[subject]_should_[expected_behavior]_[optional_when_condition]
Components
Subject: The component, feature, or system under test
- Examples:
user, api, filter, time_report
Expected Behavior: What should happen, described as an action or outcome
- Examples:
return_success, validate_input, fail_with_error
Optional When Condition: Include only when necessary for clarity or disambiguation
- Format:
when_[condition]
- Examples:
when_input_valid, when_user_authenticated
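To make the format concrete, here is a minimal sketch in pytest style. The `health_check` and `create_user` functions are hypothetical stand-ins for the code under test, and the `test_` prefix is only pytest's discovery convention layered on top of the naming format.

```python
# Minimal sketch: names follow [subject]_should_[expected_behavior]_[optional_when_condition].
# health_check and create_user are hypothetical stand-ins for the code under test.

def health_check() -> int:
    """Hypothetical handler that reports service health."""
    return 200

def create_user(name: str) -> dict:
    """Hypothetical helper that creates a user record."""
    return {"name": name}

# subject: health_check, expected behavior: return_200 (no when_ clause; default behavior)
def test_health_check_should_return_200():
    assert health_check() == 200

# subject: user, expected behavior: be_created (no when_ clause; the data is obviously needed)
def test_user_should_be_created():
    assert create_user("alice") == {"name": "alice"}
```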
Guidelines for When Conditions
Include when_[condition] if:
Essential for understanding: The condition is crucial to know what's being tested
Multiple variants exist: Similar tests with different conditions need distinction
Specific circumstances: The behavior only occurs under particular conditions
Omit when_[condition] for:
Basic/default behavior: Standard functionality that doesn't require special conditions
Self-evident scenarios: Cases where the expected behavior already implies the context
Overly obvious conditions: When the condition adds no meaningful information
Examples
Good (concise when appropriate):
health_check_should_return_200
user_should_be_created
Avoid (unnecessarily verbose):
health_check_should_return_200_when_request_valid // "valid" is implied by 200 response
user_should_be_created_when_data_provided // data is obviously needed
Good (meaningful distinctions):
user_should_login_successfully_when_credentials_valid
user_should_be_rejected_when_credentials_invalid
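As a minimal sketch of that last pair (assuming pytest and a hypothetical `login` function), the when_ clause is the only thing that distinguishes the two variants:

```python
# Hypothetical login function standing in for the real authentication code.
def login(username: str, password: str) -> bool:
    return username == "alice" and password == "correct-password"

# Same subject, similar behavior; the when_ clause disambiguates the two variants.
def test_user_should_login_successfully_when_credentials_valid():
    assert login("alice", "correct-password") is True

def test_user_should_be_rejected_when_credentials_invalid():
    assert login("alice", "wrong-password") is False
```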
Principle
Keep test names as short as possible while maintaining clarity. The when_ clause is a tool for disambiguation, not a requirement.