@jeffvanderstoep Thanks for your reply! I don’t doubt the validity of your measurement. I’d argue about two things:
- The simpler thing is communication: the phrase “half-life” or “decay” implies that vulns disappear without explicit dev intervention, e.g. as a side effect of unrelated code changes (or even the mere passage of time!). While this may be true in some cases, I don’t see how the data would (or could) support such an observation.
- My understanding is that when we look at the overall results of different vuln discovery strategies (your study), or at applying the same strategy with “more force” (Böhme-Falk), what we basically see is the effect of testing coverage, and it’s no surprise we can grow coverage faster in new code. What I think would be more revealing is looking at new vulns (per LoC?) vs. code age at the point when a new discovery method (e.g. a new sanitizer or more intelligent test-case generation) is introduced; a rough sketch of the breakdown I mean follows below. FTR: I bet such data would actually confirm your results, but without data about the effect of new discovery methods, I think drawing conclusions about code “maturity” is much harder.
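To make it concrete, here is a minimal sketch of the kind of breakdown I have in mind: count vulns per code-age bucket, normalize by LoC, and split the counts into before/after a new discovery method ships. Everything in it (column names, dates, LoC figures) is made up purely for illustration, not taken from either study:

```python
import pandas as pd

# Toy illustration only: one row per discovered vuln, with the date the
# vulnerable code was introduced, the date the vuln was found, plus a
# made-up LoC-by-age table to normalize the counts.
vulns = pd.DataFrame({
    "code_introduced": pd.to_datetime(
        ["2015-03-01", "2019-06-15", "2021-01-10", "2016-11-20"]),
    "found": pd.to_datetime(
        ["2020-02-01", "2021-07-01", "2022-03-05", "2022-08-30"]),
})

# Hypothetical date the new discovery method (e.g. a new sanitizer) shipped.
rollout = pd.Timestamp("2021-06-01")

# Age of the vulnerable code at discovery time, bucketed in years.
age_years = (vulns["found"] - vulns["code_introduced"]).dt.days / 365.25
vulns["age_bucket"] = pd.cut(age_years, bins=[0, 1, 3, 5, 10])
vulns["after_rollout"] = vulns["found"] >= rollout

# Vulns found per age bucket, before vs. after the new method was available.
counts = (vulns.groupby(["age_bucket", "after_rollout"], observed=False)
               .size().unstack(fill_value=0))

# Made-up MLoC per age bucket, to turn counts into vulns/MLoC.
mloc = pd.Series([2.0, 5.0, 4.0, 8.0], index=counts.index, name="MLoC")
print(counts.div(mloc, axis=0))
```

If old code shows the same jump in vulns/MLoC as new code once the method lands, that would point at coverage rather than code age; if old code barely moves, that would support the “maturity” reading.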