Bugs are the largest cost driver of software projects
Effective defect detection and removal strategies
“The single largest cost driver for most projects is unplanned defect correction work.”
“Projects that focus on quality tend to have higher quality and shorter schedule.”
Types of upfront planned work that reduce total work on the project
Early defect detection (unit tests, pair programming, inspections, prototyping, proofs of concept) reduces unplanned defect correction. (See also “Ideas for reinforcing risky implementation” here.)
Active risk identification and mitigation. (See failure mode and effects analysis, and Adrian Cockcroft’s good post on this.)
Appropriate scoping and staffing.
Regular status tracking (e.g., daily standups)
ATAMS project case study
Delivered 1 month early on a 12-month schedule; only 2 bugs were found after delivery.
Extensively prototype UX.
Tackle riskiest parts of the system first.
Perfect a component before moving to the next one.
Root cause analysis for each defect found. Mini-postmortem after each bug:
"How did this defect slip into this part of the project?"
"How could we have detected this defect earlier?"
"How could we have avoided inserting this defect in the first place?"
Do technical peer reviews (code and design).
Managers/leads actively ensure that reviews are performed promptly.
Different defect removal activities have different effectiveness
In the list below, each range is the percentage of defects present in the design/code that the activity found and removed: the lower end is typical of projects where the activity worked worst (poorly run, or least suited to the activity), and the higher end is typical of projects where it worked best (well run, or most suited to it).
Integration test: 25–40%
Regression test: 15–30%
System test: 25–55%
Low-volume beta test: 25–40%
High-volume beta testing: 60–85%
Informal code reviews: 20–35%
Formal code inspections: 45–70%
Modelling or prototyping: 35–80% (I would include property-based testing here, too)
Unit test: 15–50%
Insights from these statistics:
No single technique by itself provides very good removal efficiency.
Conventional activities (standard code reviews, unit test) typically have quite low efficiency.
Steve McConnell recommends using at least one high-efficiency technique (such as beta testing, formal (i.e., thorough) code inspections, or modelling). These are costlier than conventional techniques, but applying even just one of them can remove more bugs than a “stack” of low-efficiency techniques.
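The last point can be checked with a little arithmetic. Assuming, as a simplification the source does not itself state, that each activity independently removes its quoted fraction of the defects still remaining, stacked efficiencies combine as 1 − ∏(1 − eᵢ):

```python
from functools import reduce

def combined_efficiency(rates: list[float]) -> float:
    """Fraction of defects removed when the listed activities are applied in
    sequence, assuming each removes its rate of the *remaining* defects."""
    remaining = reduce(lambda rem, r: rem * (1 - r), rates, 1.0)
    return 1 - remaining

# A "stack" of conventional techniques at their lower ends:
# unit test (15%), informal code review (20%), regression test (15%).
stack = combined_efficiency([0.15, 0.20, 0.15])
print(f"stack of low-efficiency techniques: {stack:.0%}")  # about 42%

# versus a single high-efficiency technique at its lower end:
print(f"high-volume beta testing alone: {0.60:.0%}")  # 60%
```

Even under this optimistic independence assumption, the low-efficiency stack removes roughly 42% of defects, below the lower end of high-volume beta testing alone.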
Latent defects (i.e., defects that have been committed to the codebase) lead to unplanned (re)work, low product quality (unhappy users), and schedule uncertainty (we don’t know when we will fix all critical issues). A codebase with many latent bugs also suffers adverse second-order effects, such as bugs masking other bugs and bugs impeding the fixing of other bugs.
The defect removal gap is the period of time between a bug’s insertion and its removal. The bigger the gap, the costlier it is to remove the bug. For example, it’s cheaper to remove a bug before even committing to a branch, while looking at your own code or running tests locally, than during code review. The cost of a bug rises sharply after the merge into master and deployment.
Code-level practices to minimize the defect removal gap
*Very timely* code reviews
Single stepping through the code in a debugger
Architecture proof of concept
Doing architecture/design as a deliberate activity at all, i.e., writing RFCs (my RFC template is here)
Timely architecture/design reviews
Writing requirements as test cases
Frequent releases, à la Scrum
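As an illustration of “writing requirements as test cases” (a minimal sketch; the business rule, names, and numbers below are hypothetical, not from the source):

```python
import unittest

def apply_discount(price: float, loyalty_years: int) -> float:
    """Hypothetical business rule: customers with 2+ loyalty years get 10% off."""
    return price * 0.9 if loyalty_years >= 2 else price

class DiscountRequirements(unittest.TestCase):
    """Each requirement from the (hypothetical) spec is written as one named test."""

    def test_loyal_customer_gets_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 2), 90.0)

    def test_new_customer_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, 1), 100.0)

# Run the requirement tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRequirements)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Naming each test after the requirement it encodes keeps the spec and the regression suite in one place, so a failing test points directly at the violated requirement, shrinking the defect removal gap.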
Steve McConnell. Understanding Software Projects. Most of the content on this page is adapted from this course.