93 days: median time to detect a malware breach, versus 19 days for ransomware. Based on 1,031 breach filings that reported detection timing.

How long does it take to detect a data breach? The answer depends on who you ask and how they collected the data. IBM's Cost of a Data Breach Report, based on structured interviews with 604 organisations across 16 countries, puts the figure at 194 days. Mandiant's M-Trends report, drawn from their incident response engagements, says 10 days. Verizon's DBIR uses a different methodology again and reaches different conclusions.

These are three different measurement approaches applied to three different populations. IBM interviews a broad cross-section but relies on voluntary participation and recall. Mandiant's sample skews toward organisations with the budget to hire Mandiant. Each produces a useful number, but comparing them directly is misleading without accounting for what each one measures.

Washington State publishes a different kind of dataset. It is one of the few US states that requires breached organisations to report, in their regulatory filing, how many days it took to identify the breach, how many days to contain it, and how many days the data was exposed. These are mandatory filings, not survey responses, which means the sample is not self-selected. The numbers are still self-reported by the breached organisation, and companies may underreport detection times or round to convenient numbers. But the filing obligation makes this dataset structurally different from voluntary surveys.

We analysed all 1,388 breach notifications filed with the Washington Attorney General. Of those, 1,031 included detection timing data. The overall median time to detection was 28 days. But that median conceals a more important pattern: detection speed varies dramatically depending on whether the attack announces itself.

The quiet attacks take five times longer to find

| Attack type | Filings | Median detect (days) | Median contain (days) |
| --- | --- | --- | --- |
| Ransomware | 538 | 19 | 4 |
| Phishing | 100 | 25 | 8 |
| Malware (non-ransomware) | 169 | 93 | 15 |
| Unclear/unknown | 79 | 131 | 7 |
| Skimmers | 10 | 177 | 6 |

Ransomware has a median detection time of 19 days in these filings. That is relatively fast, but likely not because organisations are good at finding it. Ransomware is designed to announce itself. Encrypted systems and ransom notes create symptoms that are difficult to miss. The 19-day figure probably reflects dwell time between initial access and payload deployment rather than active detection by the victim.

Malware that does not encrypt systems, does not display ransom notes, and does not disrupt operations has a median detection time of 93 days. Nearly five times longer. Skimmers (payment card theft devices) show a median of 177 days, though the sample size is small (n=10) and should be treated with caution.

The 79 incidents classified as "unclear/unknown" have a median detection time of 131 days. Organisations that could not identify the attack method also tended to take longer to discover the breach, though the causal relationship could run in either direction.

This is the central pattern in the data. The breaches that generate headlines tend to be the ones that are fastest to detect, because ransomware forces detection. The breaches that persist for months are the ones involving silent data exfiltration. A detection programme that measures itself primarily against ransomware response times may be measuring the least difficult part of the problem.
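The per-type medians in the table above can be reproduced with a few lines of standard-library Python. The records and their keys here are illustrative stand-ins, not the actual filing schema:

```python
from statistics import median

# Illustrative records; real filings carry many more fields.
filings = [
    {"attack_type": "Ransomware", "days_to_identify": 12},
    {"attack_type": "Ransomware", "days_to_identify": 26},
    {"attack_type": "Malware", "days_to_identify": 80},
    {"attack_type": "Malware", "days_to_identify": 110},
    {"attack_type": "Malware", "days_to_identify": 93},
]

def median_detect_by_type(records):
    """Group detection times by attack type, then take each group's median."""
    groups = {}
    for r in records:
        groups.setdefault(r["attack_type"], []).append(r["days_to_identify"])
    return {attack: median(days) for attack, days in groups.items()}

print(median_detect_by_type(filings))
```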

Once found, containment is relatively fast

| Metric | Filings | Median | Mean | 25th percentile | 75th percentile |
| --- | --- | --- | --- | --- | --- |
| Days to identify | 1,031 | 28 | 112 | 6 | 151 |
| Days to contain | 274 | 5 | 27 | 1 | 23 |
| Days of exposure | 974 | 18 | 83 | 4 | 103 |

The gap between median (28 days) and mean (112 days) for detection time indicates a heavy right tail: most breaches are found within a month, but a subset go undetected for much longer. The maximum in the dataset is 3,728 days, more than ten years.
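A median/mean gap of this size is the standard signature of a right-skewed distribution. A toy sample (numbers invented, but with one long-dwell outlier like the dataset's 3,728-day maximum) shows the effect:

```python
from statistics import mean, median

# Most breaches found within roughly a month; one outlier dwells for years.
detect_days = [5, 9, 14, 21, 28, 35, 60, 3728]

print(median(detect_days))  # 24.5  -> the typical case
print(mean(detect_days))    # 487.5 -> dragged up by the single outlier
```

The median describes the typical breach; the mean is dominated by the tail. Both are worth reporting, which is why the table lists each.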

Containment tells a different story. Once a breach is identified, the median time to contain it is 5 days. Three-quarters of organisations that reported containment data did so within 23 days. In this dataset, the detection phase accounts for substantially more elapsed time than the containment phase. The bottleneck, at least as reported in these filings, appears to be finding the breach rather than stopping it.

Non-profits take 160 days. Healthcare takes 15.

| Sector | Filings | Median detect (days) | Median exposure (days) | Median contain (days) |
| --- | --- | --- | --- | --- |
| Healthcare | 279 | 15 | 10 | 4 |
| Financial services | 221 | 20 | 7 | 4 |
| Government | 52 | 27 | 28 | 5 |
| Business | 593 | 29 | 20 | 5 |
| Education | 124 | 37 | 47 | 17 |
| Non-profit/charity | 119 | 160 | 103 | 7 |

Healthcare and financial services report the fastest detection times: 15 and 20 days respectively. Both sectors operate under regulatory frameworks (HIPAA, PCI-DSS) that mandate monitoring and incident reporting. Whether the faster detection reflects better tooling, more staff, or a different mix of attack types within these sectors is not something this data can distinguish on its own.

Non-profits sit at the other end: a 160-day median. More than five months. Non-profits typically operate with smaller IT budgets and fewer dedicated security staff, which may explain the gap, though the data does not isolate the cause. The non-profit sample also tends toward smaller organisations, which could be a confounding factor.

Education presents a different pattern. Detection is moderately slow (37 days), but containment takes 17 days, more than twice any other sector in the dataset. Schools and universities appear to take longer to contain breaches once found, which may reflect the complexity of distributed campus networks or fewer dedicated incident response resources.

Detection speed is improving, with caveats

| Year | Filings | Median detect (days) |
| --- | --- | --- |
| 2017 | 31 | 197 |
| 2018 | 56 | 129 |
| 2019 | 72 | 112 |
| 2020 | 260 | 160 |
| 2021 | 185 | 19 |
| 2022 | 132 | 17 |
| 2023 | 304 | 11 |
| 2024 | 231 | 16 |
| 2025 | 104 | 7 |

The median detection time fell from 197 days in 2017 to 7 days in 2025. On the surface, that is a 96% reduction over eight years.

The improvement was not gradual. The sharpest drop occurred between 2020 and 2021, from 160 days to 19 days. Three shifts coincided in that period: EDR and MDR adoption accelerated across the industry; ransomware became a larger share of reported attacks (and ransomware, as shown above, is inherently fast to detect); and organisations that had struggled with monitoring during the rapid shift to remote work in 2020 had time to stabilise.

2020 was the worst year in the dataset. Filing volume tripled (260 filings, up from 72 in 2019) and the median spiked to 160 days. COVID-era disruption, rapid remote work adoption, and stretched incident response teams all likely contributed.

The post-2021 improvement is real but needs an important caveat. If ransomware now makes up a larger proportion of filings, and ransomware is detected faster by its nature, then the falling median may partly reflect a shift in the mix of attack types being reported rather than a genuine improvement in the ability to detect quiet threats. This data cannot separate those two effects.

2025 is a partial year. The 7-day median may shift as more filings are submitted.

Bigger breaches are found faster

| People affected | Filings | Median detect (days) |
| --- | --- | --- |
| Under 1,000 | 292 | 29 |
| 1,000 to 10,000 | 500 | 29 |
| 10,000 to 100,000 | 183 | 22 |
| 100,000+ | 43 | 16 |

Breaches affecting more people tend to be detected faster. The pattern holds across all four scale brackets. Breaches affecting 100,000+ Washington residents have a median detection time of 16 days, roughly half that of breaches under 10,000.

Two plausible explanations, neither of which the data can confirm: larger organisations tend to invest more in security monitoring, and larger compromises may be more likely to trigger automated alerts or be noticed by employees, customers, or third parties. A confounding factor: many large-scale breaches in this dataset are ransomware, which as noted above is inherently faster to detect.
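The brackets in the table above are simple half-open ranges over the affected count. A small helper (the function name is ours, not part of the dataset) makes the binning explicit:

```python
def size_bracket(affected: int) -> str:
    """Map an affected-resident count to the brackets used in the table."""
    if affected < 1_000:
        return "Under 1,000"
    if affected < 10_000:
        return "1,000 to 10,000"
    if affected < 100_000:
        return "10,000 to 100,000"
    return "100,000+"

print(size_bracket(788_415))  # "100,000+"  (e.g. the Neopets filing)
```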

Named organisations and their detection times

| Organisation | People affected | Cause | Days to detect |
| --- | --- | --- | --- |
| Change Healthcare Inc. | 3,121,209 | Cyberattack | 4 |
| Comcast Cable Communications | 3,100,608 | Cyberattack | 9 |
| T-Mobile USA | 2,079,648 | Cyberattack | 26 |
| Fred Hutchinson Cancer Center | 1,694,184 | Cyberattack | N/A |
| Boy Scouts of America | 981,068 | Cyberattack | 160 |
| MGM Resorts International | 811,740 | Cyberattack | N/A |
| Neopets, Inc. | 788,415 | Unauthorized access | 563 |
| Caesars Entertainment, Inc. | 784,234 | Unauthorized access | N/A |
| T-Mobile USA | 772,593 | Unauthorized access | 41 |

The range is striking. Change Healthcare reported detection in 4 days. Neopets reported 563 days. T-Mobile appears twice for separate incidents. Boy Scouts of America reported 160 days for a breach affecting nearly a million people. These are the numbers these organisations filed with a state regulator.

Where this data comes from

Every US state has a breach notification law. Most require companies to notify affected individuals. Washington goes further: it requires companies to report how long the breach went undetected, how long it took to contain, and how long data was exposed.

We are not aware of another US state that publishes equivalent timing data at this level of detail. California publishes organisation names and dates. Most states publish less. Washington's requirement creates what appears to be a unique publicly available dataset of breach response timelines in the United States.

How does this compare to other industry benchmarks? IBM's Cost of a Data Breach Report surveys 604 organisations across 16 countries and includes cost modelling and qualitative analysis. Mandiant's M-Trends draws from their own incident response caseload. Washington's data covers 1,388 mandatory filings with one state regulator. The sample is broader than Mandiant's and more structured than IBM's, but shallower: three timing fields, an industry label, and an affected count. Each dataset has blind spots. They are best used as complementary views rather than competing answers.

What to do with these numbers

If you run a security programme: Compare your mean time to detect against the sector medians in this dataset. Healthcare: 15 days. Finance: 20 days. Business: 29 days. Non-profit: 160 days. If your detection time is above your sector median, that is a data point worth investigating. These are not aspirational targets. They are what organisations reported to a regulator after an actual breach.

If you sell detection or response services: The 19-day vs 93-day gap between ransomware and non-ransomware malware detection is worth understanding. Ransomware tends to surface through its own symptoms. The commercial case for managed detection is stronger for the threats that do not create obvious symptoms.

If you assess third-party risk: Ask your vendors whether they have filed a Washington AG breach notification. If they have, the filing includes response timeline data that most other states do not require.

Method note

Data source: Washington State Attorney General's Data Breach Notification database, accessed via the public JSON API at data.wa.gov (dataset ID: sb4j-ca4h).

Scope: All 1,388 breach notifications in the dataset as of April 2026. The earliest incident date is 2008; the latest submission date is January 2026. Of these, 1,031 included a non-zero value for days_to_identify_breach, 274 for days_to_contain_breach, and 974 for days_of_exposure. All timing analysis uses the subset with non-zero values for each metric.
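The non-zero filter described above is straightforward to reproduce. The field name `days_to_identify_breach` is taken from the method note; the URL below follows data.wa.gov's standard Socrata resource pattern for the stated dataset ID, which we assume here rather than guarantee:

```python
import json
from statistics import median
from urllib.request import urlopen

# Assumed SODA resource endpoint for the dataset ID cited above;
# $limit raises the default row cap so all filings come back in one call.
URL = "https://data.wa.gov/resource/sb4j-ca4h.json?$limit=5000"

def fetch_filings(url=URL):
    """Download the filings as a list of dicts (live network call)."""
    with urlopen(url) as resp:
        return json.load(resp)

def nonzero_days(records, field="days_to_identify_breach"):
    """Keep only filings reporting a positive value for the timing field."""
    out = []
    for r in records:
        try:
            days = float(r.get(field, 0))
        except (TypeError, ValueError):
            continue
        if days > 0:
            out.append(days)
    return out

# Offline example; the live equivalent would pass fetch_filings() instead.
sample = [{"days_to_identify_breach": "28"},
          {"days_to_identify_breach": "0"},
          {"days_to_identify_breach": "93"}]
print(nonzero_days(sample), median(nonzero_days(sample)))
```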

Attack type classification: Attack types are as reported by the filing organisation. Washington requires classification into categories including ransomware, malware, phishing, and skimming. "Other" (137 filings) is excluded from the attack-type table. The "malware" category represents non-ransomware malware only.

Timing fields: Self-reported by the breached organisation. We cannot independently verify these numbers. Organisations may underreport detection times, conflate identification with notification, or round to convenient numbers.

Industry classification: As reported in the filing. Six categories: Business (593), Health (279), Finance (221), Education (124), Non-Profit/Charity (119), and Government (52).

Affected count: Represents Washington State residents affected, not total individuals nationally. The cumulative figure of 40.1 million includes individuals counted in multiple filings.

Year-over-year analysis: Years based on incident date, not filing date. Pre-2017 sample sizes are too small for reliable medians. 2025 is a partial year.

Survivorship bias: This dataset contains breaches that were eventually detected and filed. Breaches that were never discovered are not represented. The true distribution of detection times would skew longer.

What we did not include

This article analyses timing data from a single source. Internally, we cross-referenced these 1,388 organisations against HIPAA filings, ransomware threat actor claims, UK regulatory enforcement actions, and SEC cyber incident disclosures. Some organisations appear in multiple datasets with different reported timelines for the same incident.

That cross-source view is where the analysis becomes more useful: comparing what an organisation reported to one regulator against what a threat actor claimed, against what another regulator published. If you work in security, risk, or sales and want to see that combined picture, request a demo.