Most SEOs assess link building risk the wrong way. They run a backlink audit, panic at every domain with a low spam score, and reach for the disavow tool — which Google has spent the last four years telling them not to use. The result is over-cleanup, accidental signal removal, and rankings that get worse, not better.
A proper 2026 risk assessment framework looks at seven dimensions of backlink profile health and produces a single composite risk score. It tells you whether you need to disavow, dilute, or do nothing. This article gives you that framework, backed by the most current Google policy data, recent 2026 spam update analysis, and survey-based industry benchmarks.
| The 30-second answer: In 2026, ~80% of sites that ‘feel’ they have a toxic-link problem don’t. Google’s SpamBrain system algorithmically neutralises most low-quality links before they can cause harm. Risk assessment in 2026 means scoring seven dimensions — anchor saturation, link velocity, source diversity, network footprint, geo/language relevance, manual-action proximity, and AI-citation interference — then acting only when the composite score crosses a documented threshold. |
Why penalty risk in 2026 is not what it was in 2020
The 2018–2022 penalty landscape was dominated by reactive disavow culture. Every SEO ran weekly toxic-link audits, every agency upsold cleanup services, and every Penguin update produced a wave of preemptive disavow files. That era is over. Three structural changes between 2023 and 2026 have rewritten the rules:
- Manual actions are now rare. Google’s own documentation confirms that the overwhelming majority of link spam is handled algorithmically. Manual reviews are reserved for egregious or scaled abuse — the days of typical SaaS sites or small businesses receiving an ‘unnatural links to your site’ notification have largely passed.
- SpamBrain is now the dominant enforcement layer. The March 2026 spam update completed in 19.5 hours — the fastest in Google’s history. That speed signals targeted, surgical enforcement, not a broad recalibration. SpamBrain knows what it’s looking for, and what it doesn’t flag, it ignores.
- AI citation has added a second risk vector. Toxic links in 2026 don’t just risk ranking suppression; they introduce ‘identity noise’ that confuses LLMs about your brand’s category and authority. This is a new dimension of risk that didn’t exist in the Penguin era.
Translating that into practitioner terms: the universe of sites that genuinely need penalty intervention is now small — but for the sites that do, the cost of getting it wrong is high. The framework below is calibrated for that reality.
The 2026 penalty risk baseline (what the data shows)
Before you assess your own profile, anchor yourself to the industry baseline. The numbers below reflect the most current 2026 data on penalty incidence, disavow behaviour, and link decay:
| Metric | 2026 baseline | What it means for risk |
| Manual action incidence (small/mid sites) | Low — most sites never receive one | Google’s own spam handling has shifted heavily to algorithmic suppression rather than manual notifications. |
| SEOs who disavow in 2026 | Majority never use it, or use it only selectively | Editorial.Link’s 2026 LinkedIn survey shows the community split on the disavow tool, with the majority either never using it or using it only as a last resort. |
| March 2026 spam update rollout time | 19.5 hours (fastest ever) | Google Search Status Dashboard data confirms SpamBrain is enforcing existing policies faster and more precisely. |
| Older backlinks that decay naturally | ~66.5% over time | Backlink loss is the rule, not the exception. Profile churn is normal. |
| Sites whose ‘toxic link’ anxiety is real | Estimated <20% on manual review | Most sites have an audit-hygiene problem, not a link-profile problem. |
These numbers tell you something important before you score anything: assume your profile is fine until proven otherwise. The framework below is built around that assumption — it forces you to find evidence before you act, rather than the other way round.
The seven-dimension link building risk framework
Every backlink profile carries risk along seven independent dimensions. The framework scores each one on a 0–3 scale, where 0 is no concern and 3 is critical. The composite score determines what action, if any, you need to take.
Treat the dimensions as independent. A site can be clean on six dimensions and critical on one (an active manual action, for example). The framework’s job is to surface that one without forcing you to over-clean the other six.
| How to use the scale: Score each of the seven dimensions from 0 to 3. 0 = no concern. 1 = minor concern, monitor. 2 = elevated risk, investigate. 3 = critical, act now. Sum the scores. The composite (0–21) maps to one of four recommended actions, detailed at the end of this section. |
Dimension 1: Anchor text saturation
Exact-match commercial anchor text is the single most reliable algorithmic spam signal. SpamBrain and its Penguin predecessor both prioritise anchor distribution analysis because the math of natural linking is well understood: real editorial links use branded, naked-URL, and generic anchors most of the time. Commercial-keyword anchors are the exception, not the rule.
Healthy 2026 distributions hold exact-match commercial anchors below ~5–8% of total backlinks. Profiles where 20%+ of links use commercial anchors are the classic Penguin signature.
| Score | Anchor saturation criteria |
| 0 — No concern | Exact-match commercial anchors <5% of total. Branded + naked URL + generic = >70%. |
| 1 — Minor | Commercial anchors 5–10%. Monitor for trend over 90 days. |
| 2 — Elevated | Commercial anchors 10–20%. Investigate which campaigns are driving the skew. |
| 3 — Critical | Commercial anchors >20%, particularly with money-keyword concentration. Classic spam pattern. |
How to measure: pull your anchor distribution from Ahrefs (Site Explorer → Backlinks → Anchors) or Semrush (Backlink Audit → Anchors). Bucket into branded, naked URL, generic (‘click here’, ‘read more’), partial-match, and exact-match commercial. The exact-match commercial percentage is your score input.
Healthy distributions hold this commercial-anchor ceiling regardless of industry, although the exact split between branded, naked-URL, and generic anchors does shift by stage of site maturity.
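If you prefer to script the bucketing, the classification above reduces to a few lines of Python. This is an illustrative sketch, not any tool's API — the brand terms, commercial keywords, and generic-anchor list are assumptions you supply from your own export.

```python
GENERIC_ANCHORS = {"click here", "read more", "this article", "here", "learn more"}

def bucket_anchor(anchor, brand_terms, commercial_keywords):
    """Classify one anchor string into the buckets used above."""
    a = anchor.strip().lower()
    if a.startswith(("http://", "https://", "www.")):
        return "naked_url"
    if any(b in a for b in brand_terms):
        return "branded"
    if a in GENERIC_ANCHORS:
        return "generic"
    if a in commercial_keywords:
        return "exact_commercial"  # the high-risk bucket
    if any(k in a for k in commercial_keywords):
        return "partial_match"
    return "other"

def exact_commercial_share(anchors, brand_terms, commercial_keywords):
    """The exact-match commercial percentage — your Dimension 1 score input."""
    buckets = [bucket_anchor(a, brand_terms, commercial_keywords) for a in anchors]
    return buckets.count("exact_commercial") / len(anchors)
```

Feed the resulting percentage into the 0–3 criteria table to get the dimension score.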
Dimension 2: Link velocity
Link velocity is the rate at which new referring domains are added to your profile. The mistake most SEOs make is assuming ‘high velocity = bad’. It isn’t. Newsworthy content, a successful digital PR campaign, or a viral product launch all produce legitimate velocity spikes. What matters is whether the velocity is supported by other natural-pattern signals.
A velocity spike from 5 referring domains/month to 200/month is a red flag when those 200 domains share footprints, use commercial anchors, and come from foreign-language sources. The same spike is healthy when the 200 domains are diverse, geographically appropriate, and using branded anchors.
| Score | Link velocity criteria |
| 0 — No concern | Velocity is steady or growing in line with content/PR output. Spikes correlate with known campaigns. |
| 1 — Minor | Unexplained mild velocity increase. Source diversity still acceptable. |
| 2 — Elevated | Velocity spike without a corresponding campaign, weighted toward low-DR domains. |
| 3 — Critical | Velocity spike of 5x+ baseline, low source diversity, foreign-language sources, commercial anchors. Possible negative SEO. |
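The velocity check is easy to automate against a monthly export of new referring domains. A minimal sketch — note that only the 5x critical threshold comes from the table above; the 1.5x and 2.5x intermediate steps are illustrative assumptions:

```python
from statistics import mean

def velocity_score(monthly_new_domains, spike_factor=5.0):
    """Compare the latest month of new referring domains against the
    trailing baseline and return (0-3 score, spike ratio)."""
    *history, latest = monthly_new_domains
    baseline = mean(history)
    ratio = latest / baseline if baseline else float("inf")
    if ratio >= spike_factor:   # the 5x+ criterion from the table
        return 3, ratio
    if ratio >= 2.5:            # illustrative intermediate thresholds
        return 2, ratio
    if ratio >= 1.5:
        return 1, ratio
    return 0, ratio
```

A score of 1 or above is the cue to check whether the spike correlates with a known campaign before escalating.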
Dimension 3: Source diversity
Source diversity measures how varied your linking domains are across IP, hosting, registrar, and editorial profile. Low diversity is the classic PBN/link-network signature: many domains that look superficially different but share underlying infrastructure or templates.
The five technical signals to check are IP address overlap (multiple linking domains on the same IP block), hosting overlap (same provider with sequential account IDs), registrar and WHOIS proximity (registered within days of each other), template similarity (identical WordPress themes and plugin sets), and outbound-link patterns (linking to the same set of unrelated commercial sites).
| Score | Source diversity criteria |
| 0 — No concern | Wide spread across IPs, hosts, registrars; clearly independent editorial properties. |
| 1 — Minor | Occasional infrastructure overlap consistent with shared hosting providers (legitimate). |
| 2 — Elevated | Cluster of 10–20 linking domains sharing 2+ technical signals. Possible link scheme. |
| 3 — Critical | Clear PBN footprint: shared hosting, templates, and outbound patterns across the cluster. |
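The cluster check can be automated as a first pass. The sketch below assumes you have already collected per-domain signals (IP block, registrar, theme) into a dict — the field names are illustrative, not any crawler's real export schema:

```python
from collections import defaultdict

def shared_signal_clusters(domains):
    """Group linking domains that share the same value for a signal field."""
    clusters = defaultdict(set)
    for domain, signals in domains.items():
        for field, value in signals.items():
            clusters[(field, value)].add(domain)
    # Keep only groups where 2+ domains share the same signal value.
    return {k: v for k, v in clusters.items() if len(v) >= 2}

def flagged_domains(domains, min_shared_signals=2):
    """Domains that cluster on 2+ distinct signal types — the
    'sharing 2+ technical signals' criterion in the table above."""
    hits = defaultdict(set)
    for (field, _), members in shared_signal_clusters(domains).items():
        for d in members:
            hits[d].add(field)
    return {d for d, fields in hits.items() if len(fields) >= min_shared_signals}
```

Anything this flags still needs manual review — shared hosting on a major provider is a legitimate overlap, per the score-1 row.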
Dimension 4: Network and PBN footprint
This dimension overlaps with diversity but goes further. A PBN footprint isn’t just ‘shared infrastructure’ — it’s the active pattern of multiple expired domains rebuilt to host outbound links to unrelated commercial properties. SpamBrain’s detection models target this pattern directly.
The classic PBN tell is a domain with strong historical authority (DR 40+) but near-zero current organic traffic and a high outbound-to-content ratio. The site exists to host outbound links, not to serve readers. Tools like Ahrefs flag these as ‘low organic traffic relative to DR’ — a useful first filter.
| Score | PBN footprint criteria |
| 0 — No concern | No suspected PBN links in profile. All linking domains have healthy organic traffic-to-DR ratios. |
| 1 — Minor | 1–2 isolated links from sites that look suspicious but are not clearly part of a network. |
| 2 — Elevated | Multiple links from network-pattern domains. Historical link-building service exposure suspected. |
| 3 — Critical | Active PBN exposure documented; or paid link-building service with known PBN delivery in past 24 months. |
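The traffic-to-authority tell described above lends itself to a rough first-pass filter. The sketch assumes a list of dicts with `dr` and `organic_traffic` fields — an illustrative schema; adapt the keys and thresholds to your own tool's export:

```python
def pbn_candidates(domains, min_dr=40, max_traffic=100):
    """First-pass filter for the classic PBN tell: decent historical
    authority (DR 40+) but near-zero current organic traffic.
    Thresholds are illustrative starting points, not fixed rules."""
    return [
        d["domain"] for d in domains
        if d["dr"] >= min_dr and d["organic_traffic"] <= max_traffic
    ]
```

As with the diversity check, this produces candidates for manual review, not a disavow list.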
Dimension 5: Geographic and language relevance
Geo and language mismatch is one of the cleanest spam signals to detect. A UK B2B SaaS receiving 400 backlinks from Cyrillic-language sites whose audience could not plausibly include their buyers is showing the classic negative-SEO or scraped-link pattern. The links carry no possible business benefit; their only purpose is anchor-text manipulation.
The nuance: legitimate cross-border links exist. A French SEO publication linking to a UK link building blog is a perfectly natural trade reference. The framework distinguishes by audience-plausibility, not by language alone.
| Score | Geo/language criteria |
| 0 — No concern | Linking languages and geographies match your target audience or trade-reference pattern. |
| 1 — Minor | Occasional out-of-market links from related verticals. No volume concern. |
| 2 — Elevated | 10–30% of links from out-of-market sources with no editorial reason. |
| 3 — Critical | >30% of links from foreign-language sources with no business overlap. Likely negative SEO or scraped links. |
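Scoring this dimension is a straightforward share calculation once each linking domain is tagged with a market. A sketch assuming `(domain, country_code)` pairs — the tagging itself comes from your crawler export:

```python
def geo_score(links, target_markets):
    """Score the geo/language dimension per the table above.
    `links` is a list of (domain, country_code) pairs."""
    out_of_market = sum(1 for _, cc in links if cc not in target_markets)
    share = out_of_market / len(links)
    if share > 0.30:            # >30% out-of-market: critical
        return 3, share
    if share >= 0.10:           # 10-30%: elevated
        return 2, share
    return (1 if share > 0 else 0), share
```

Remember the audience-plausibility caveat: a trade-reference link should be reclassified as in-market before scoring, regardless of its country code.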
Dimension 6: Manual action proximity
This is the binary dimension. Either Google’s Manual Actions report shows an active or recently resolved unnatural links notification, or it doesn’t. Manual actions are scored heavily because they are the one case where the framework’s bias-toward-inaction reverses: an active manual action requires immediate, aggressive cleanup.
| Score | Manual action criteria |
| 0 — No concern | No history of manual actions. Manual Actions report shows ‘No issues detected’. |
| 1 — Minor | Manual action resolved >24 months ago. Continued monitoring warranted. |
| 2 — Elevated | Manual action resolved within last 24 months. Profile still rehabilitating. |
| 3 — Critical | Active manual action for ‘unnatural links to your site’ or ‘unnatural links from your site’. |
Check status directly inside Google Search Console under Security and Manual Actions → Manual Actions. Google publishes the full list of possible actions and recovery procedures in the Manual actions report help documentation. If you are dealing with an active action, our dedicated guide on Google Manual Action for Unnatural Links: How to Recover covers the full reconsideration process step-by-step.
Dimension 7: AI citation interference
This is the dimension that wasn’t in any 2020-era penalty framework. Toxic links in 2026 don’t only risk ranking suppression. They risk ‘identity noise’ — confusing LLMs about your brand category, expertise, and trustworthiness. For regulated and trust-sensitive verticals (legal, finance, health), AI citation interference can be the more expensive of the two risks.
The mechanism: LLMs learn brand associations from the surrounding context of mentions. If your SaaS is mentioned alongside HubSpot and Salesforce on a B2B SaaS comparison page, the model learns to associate your brand with that category. If your SaaS is mentioned alongside online casinos and CBD vendors on a scraped link farm, the model learns a contradictory association. Over enough exposure, that ambiguity reduces citation rates.
| Score | AI citation interference criteria |
| 0 — No concern | Linking contexts are topically consistent with your brand category. |
| 1 — Minor | Occasional out-of-category mentions. Categorisation noise low. |
| 2 — Elevated | Significant exposure to off-topic or unregulated-vertical link sources. Possible LLM confusion. |
| 3 — Critical | Profile shows pattern of mentions in adult, gambling, or unregulated pharma contexts. AI citation already suppressed. |
For a deeper treatment of how backlinks feed into AI search visibility, see Link Building for AI Search Visibility: The 2026 Playbook, which goes into citation mechanics in depth.
The composite score and recommended action map
Sum your seven dimension scores. The composite (0–21) maps directly to a recommended action. The thresholds below are calibrated to the 2026 baseline — they intentionally bias against aggressive intervention, because aggressive intervention is the most common cause of self-inflicted SEO damage in modern profiles.
| Composite score | Recommended action | What to do |
| 0–3 | Do nothing | Profile is healthy. No disavow needed. Continue normal acquisition. Re-score quarterly. |
| 4–7 | Monitor and dilute | Mild risk. Do not disavow. Increase legitimate link acquisition to dilute weak signals. Re-score in 90 days. |
| 8–12 | Investigate and remediate | Real concern. Identify the dimension(s) driving the score. Outreach for manual removal of the worst offenders. Reconsider disavow only if removal fails. |
| 13–17 | Active cleanup | Significant risk. Compile disavow file at domain level for clearly toxic clusters. Document removal attempts. Re-audit monthly. |
| 18–21 | Manual action recovery mode | Critical. Active manual action almost certain. Full backlink audit, aggressive disavow, reconsideration request workflow. See our recovery guide. |
| The single most important rule of 2026 risk assessment: Disavow is a chainsaw, not pruning shears. If your composite score is below 8 and you have no manual action, do not submit a disavow file. The risk of removing legitimate signal exceeds the benefit of suppressing weak signal that Google is already ignoring. |
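The whole scoring loop reduces to a sum and a lookup. A sketch, assuming the seven dimension scores are collected as 0–3 integers in any order:

```python
# Upper bound of each composite band, per the action map above.
ACTION_MAP = [
    (3, "Do nothing"),
    (7, "Monitor and dilute"),
    (12, "Investigate and remediate"),
    (17, "Active cleanup"),
    (21, "Manual action recovery mode"),
]

def recommended_action(dimension_scores):
    """Sum seven 0-3 dimension scores and map the composite (0-21)
    to the recommended action."""
    assert len(dimension_scores) == 7, "score all seven dimensions"
    assert all(0 <= s <= 3 for s in dimension_scores)
    composite = sum(dimension_scores)
    for upper, action in ACTION_MAP:
        if composite <= upper:
            return composite, action
```

A single critical dimension (a 3 on manual-action proximity, say) still needs attention even when the composite lands in a low band — the map complements the per-dimension tables, it doesn't replace them.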
The over-disavow trap: how clean-up causes damage
Misuse of the disavow tool is responsible for more 2026 ranking losses than spammy links themselves. Google’s official position, repeated by John Mueller across multiple Webmaster hangouts and confirmed in 2026 Search Central documentation, is that the disavow tool is not a routine maintenance feature. It is a recovery tool for specific, evidenced situations.
The most common over-disavow mistakes seen in 2026 backlink audits:
- Disavowing by Domain Authority alone. DA and DR are third-party metrics, not Google ranking inputs. A DR-12 niche industry blog can be a more relevant link than a DR-70 generalist content farm. Disavowing by DR threshold removes legitimate signal.
- Disavowing foreign-language sources that share an audience. A French SEO publication linking to a UK link building blog is editorial trade reference, not noise.
- Disavowing old links from sites that have since gone downmarket. A 2018 link from a then-respectable publication that has since become a content farm remains a real editorial endorsement. Source decay isn’t your problem.
- ‘Just to be safe’ disavows after a core update. Core updates re-assess content quality. They do not target link spam. Reactive disavowing after an algorithmic ranking change is the single most common self-inflicted wound in 2026.
- Disavowing tool-flagged ‘toxic’ scores without manual review. Third-party toxicity scores are heuristic. Ahrefs, Semrush, and others use proprietary models that bucket many legitimate links as risky. Always verify manually before adding any domain to a disavow file.
We cover the disavow decision in dedicated depth in The Disavow File: When to Use It and When Not to in 2026, which includes the exact file format, submission workflow, and post-submission monitoring cadence. The dimensional risk framework here tells you whether you need that guide; the dedicated article tells you how to execute it.
The 90-day risk assessment cadence
Risk assessment isn’t a one-off audit. The framework is designed to be re-run quarterly. Most profiles will score consistently between 0–7 quarter after quarter, requiring no intervention. The cadence catches the rare cases where something has changed — a negative SEO attack, an old link-building service whose links are catching up with you, or an algorithm update that has shifted the threshold.
Quarterly checklist
- Pull a fresh backlink export from Ahrefs, Semrush, or Google Search Console (Links → Top linking sites → Export). Use Search Console as the source of truth for what Google sees; cross-check with a third-party crawler for what it may have missed.
- Re-score the seven dimensions against the criteria tables above. Each dimension takes 5–10 minutes; the full audit is under an hour for most sites.
- Note the composite score and the dominant risk dimension. If the score has moved by 4+ points since last quarter, investigate which dimension drove the change.
- Check the Manual Actions report in Search Console. This is a 30-second check that catches the most important single signal in the framework.
- Document everything. Even if no action is taken, a quarterly risk log is invaluable evidence when something does change and you need to demonstrate due diligence.
Continuous monitoring (the always-on layer)
Between quarterly audits, two always-on monitors catch fast-moving risks:
- Backlink alerts in Ahrefs, Semrush, or Mention for sudden new-domain velocity spikes. A 10x baseline spike inside 48 hours is the negative-SEO signature.
- Google Search Console email alerts for manual actions. Enable them in the user preferences. The notification is immediate and is the cleanest possible trigger for the framework’s critical action map.
If you score 13–21: the recovery workflow
For profiles in the critical zone — either active manual action or score-evidenced toxic exposure — the framework prescribes a five-step recovery workflow. This compresses what is typically a 3–6 month process into a structured sequence.
Step 1: Compile the full link inventory
Export every backlink from Search Console (Links report → Latest links and Top linking sites) and from at least one third-party crawler. Deduplicate by domain. The deduped list is your audit universe.
Step 2: Manually classify each domain
Against the seven-dimension framework, classify each linking domain as keep, monitor, or remove. Domains tagged ‘remove’ are your action list. Do not rely on automated toxicity scores alone — use them as a first filter, then verify manually for the action-list candidates.
Step 3: Attempt manual removal
Google’s reconsideration request reviewers expect to see documented removal attempts. For each domain on the action list, send an outreach email to the webmaster requesting link removal. Record dates, recipient addresses, and outcomes. Even unanswered emails count as documented attempts.
Outreach for link removal uses a different tone and structure from acquisition outreach. The principles we cover in our broader link building outreach guide still apply on personalisation and relationship, but the removal-specific email should be polite, short, and lead with a single clear ask.
Step 4: Build the disavow file
For domains where manual removal failed or was not possible, build the disavow file. Format: plain text, UTF-8, one entry per line. Use domain-level entries (domain:badnetwork.com) in ~95% of cases — URL-level entries are only appropriate for sub-page-specific issues on otherwise-clean domains. Submit through Search Console’s Disavow Tool, scoped to the verified property.
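A minimal generator for that file format. The helper name and the output path are illustrative; the format itself — plain-text UTF-8, one entry per line, `domain:` prefix for domain-level entries, `#` comment lines — follows Google's documented disavow format:

```python
def build_disavow_file(domain_entries, url_entries=(), path="disavow.txt"):
    """Write a disavow file: domain-level entries first (the ~95% case),
    URL-level entries only for sub-page issues on otherwise-clean domains."""
    lines = ["# Disavow file - generated from quarterly risk audit"]
    lines += [f"domain:{d}" for d in sorted(set(domain_entries))]
    lines += sorted(set(url_entries))
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```

Keep the generated file under version control alongside your quarterly risk log — the history is part of the due-diligence record.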
Step 5: Submit reconsideration request (manual action sites only)
If you have an active manual action, the disavow file alone does not lift it. You must submit a reconsideration request through Search Console explaining: (a) what caused the violation, (b) what specific links were removed or disavowed, (c) what process you will follow going forward to prevent recurrence. Google’s reviewers want honesty, specificity, and a credible prevention plan. Vague or boilerplate requests are routinely rejected.
Reconsideration review typically takes 1–4 weeks. If rejected, the response identifies remaining issues. Address them and resubmit. Some severe cases require multiple submissions over several months.
What a healthy 2026 backlink profile actually looks like
Practitioners benchmark against a healthy profile so rarely that ‘healthy’ becomes a moving target. The table below captures the empirical pattern across well-performing 2026 sites in B2B SaaS, professional services, and editorial verticals — the categories where data is most consistently available.
| Profile dimension | Healthy 2026 range | Notes |
| Anchor text — branded | 40–60% | Brand name, brand + keyword, brand + ‘review’/’blog’ variants. |
| Anchor text — naked URL | 10–20% | Bare URL anchors. Strong natural-pattern signal. |
| Anchor text — generic | 10–15% | ‘Click here’, ‘read more’, ‘this article’. Common in real editorial. |
| Anchor text — partial-match | 10–20% | Topical phrases that contain but don’t exact-match the target keyword. |
| Anchor text — exact-match commercial | <5–8% | The high-risk bucket. Hold below this ceiling at all costs. |
| Referring-domain growth | Steady or campaign-aligned | Spikes are fine when correlated with known content/PR events. |
| DR distribution | Right-skewed | Most links from mid-DR domains; a tail of high-DR. Not a wall of identical-DR. |
| Geo / language match | 70–90%+ | Most links from your target market; the remainder from plausible trade references. |
| Manual actions | None, ever | Even resolved manual actions remain a footprint signal for ~24 months. |
| Disavow file size | Empty or near-empty | Most healthy profiles need no disavow file at all in 2026. |
Two notes on the table. First, the ranges are descriptive, not prescriptive. A B2B SaaS in a regulated vertical will skew higher on geographic concentration; a global editorial site will skew lower. Use these as orientation, not targets. Second, the right-skewed DR distribution is the underappreciated signal. Profiles where every referring domain is in a narrow DR band (every link is DR 50–60, say) are statistically improbable and a soft tell for paid placement.
How risk assessment connects to acquisition strategy
Risk assessment is the defensive half of a link building program. The offensive half is in our hub article 15 link building strategies that work in 2026, which covers the acquisition side. The two halves are interdependent: a clean profile created by careful acquisition rarely needs aggressive risk intervention, and aggressive risk intervention without underlying acquisition discipline produces short-lived recovery.
On the tooling side, the framework is workflow-agnostic but most efficient when paired with Ahrefs or Semrush for the backlink audit, Search Console for the Google-side ground truth, and a structured documentation system for the quarterly risk log. We cover the full link building tool stack in our link building tools guide, which compares the major options against price, depth, and the specific use cases the framework requires.
For the broader industry context that informs the framework’s calibration, our 2026 link building statistics roundup is the reference companion. It covers the survey data on disavow behaviour, the algorithmic-versus-manual enforcement split, and the empirical penalty incidence numbers used to set the action thresholds.
Finally, this article is the framework. For the granular detection side — identifying individual toxic links rather than scoring profile-level risk — see our complete guide to toxic backlinks, which goes deeper on the five technical detection signals (network footprint, exact-match anchors, outbound-link farms, geographic mismatch, and template patterns) summarised in Dimension 4 above.
Frequently asked questions
How often should I run a link building risk assessment?
Quarterly is the right cadence for most sites. Run the full seven-dimension audit every 90 days, with always-on monitoring (Search Console manual action alerts, backlink-velocity alerts in your audit tool) in between. Sites in regulated verticals or those recovering from a past manual action should re-score monthly until the composite has been stable for two consecutive quarters.
Does Google actually penalise sites for link building in 2026?
Yes, but rarely and surgically. Manual actions for ‘unnatural links to your site’ are still issued, but they target egregious or scaled violations — typical small and mid-sized sites almost never receive one. The dominant enforcement mechanism is algorithmic: SpamBrain neutralises low-quality links without notification, which feels invisible but is the system working as intended.
If Google ignores most spam links, why do I need to assess risk at all?
Because the exceptions matter. The ~15–20% of sites that genuinely have a profile-level problem can lose 30–80% of organic traffic if it goes unaddressed. The framework’s value is in cheaply ruling out the 80% who don’t have a problem, so resources can focus on the minority who do. Risk assessment is fast and quarterly; recovery is expensive and reactive.
Should I disavow links that Ahrefs or Semrush flag as toxic?
Not on the tool’s recommendation alone. Third-party toxicity scores are heuristic and routinely flag legitimate low-DR niche links as risky. Use the scores as a first-pass filter, then manually verify each flagged domain against the seven-dimension framework before adding anything to a disavow file. In 2026, over-disavow causes more measurable damage than under-disavow.
How long does it take to recover from a manual action for unnatural links?
Typical recovery is 2–6 months from first reconsideration request submission, assuming the underlying violation is genuinely fixed. Severe cases involving large historical link-building service exposure can take 12+ months and multiple reconsideration cycles. Some sites never fully recover — the reputational footprint persists in the algorithmic memory even after the manual action is lifted. Prevention is always cheaper than recovery.
Is the disavow file still relevant in 2026?
Yes, but in a narrower band than it used to be. Google’s 2026 official guidance is consistent: the disavow tool is for sites with manual actions or clear, documented patterns of unnatural links — not for routine cleanup. Industry survey data shows the majority of SEOs either never use the tool or use it only selectively, and almost no senior practitioners run regular disavow cycles. Use it when the framework’s composite score crosses 13, not before.
What’s the difference between an algorithmic penalty and a manual action?
A manual action is a human-applied penalty reported explicitly in Search Console under Security and Manual Actions. It tells you exactly what the violation is and requires a reconsideration request to resolve. An algorithmic penalty is automatic ranking suppression with no notification, triggered by SpamBrain or related systems. It requires fixing the underlying signal and waiting for re-crawl. The first is rare and explicit; the second is common and silent. Both are real, but they require different remediation paths.
Can a negative SEO attack actually hurt my rankings in 2026?
Less than the SEO rumour mill suggests, but more than zero. Most negative SEO attempts — mass-built spam links from PBN networks — are now filtered algorithmically without affecting the target site. Successful attacks require either profile-level fragility (a site with thin existing authority) or scale (tens of thousands of links, sustained over months). Strong, diversified profiles are difficult to damage. Vigilance — backlink-velocity monitoring and quarterly risk scoring — is the appropriate defence, not paranoia.
Does my disavow file get processed immediately?
No. Google processes disavow files gradually as the relevant URLs are re-crawled. Typical processing window is 2–12 weeks, with most observable change in the 4–8 week range. There is no acknowledgement, no progress indicator, and no confirmation that the file has been fully processed. If you submitted a disavow file for a manual action, the reconsideration request review is the explicit confirmation; for algorithmic recovery, observed ranking changes are the only signal.
What’s the single most common mistake when assessing penalty risk?
Reactive over-cleanup after a ranking drop that wasn’t caused by links. Core updates re-assess content quality. Helpful content signals re-evaluate utility. AI Overviews compress organic CTR. None of these are link-driven, but every quarter we see profiles whose owners ran a panic disavow after a content-quality update — removing legitimate signal and worsening their rankings. The framework’s bias-toward-inaction at low composite scores exists to prevent exactly this mistake.
