What is PageRank? How Google Really Ranks Pages (2026 Guide)

PageRank is one of the most discussed and least understood concepts in modern search engine optimisation. It is routinely declared dead, repeatedly resurrected, and almost universally invoked without precision. Practitioners reference it confidently in client meetings; conference speakers cite it as a foundational principle; tool vendors build proxy metrics that claim to approximate it. And yet, when asked to define exactly what PageRank measures, how it is calculated, or what role it plays inside Google’s 2026 ranking systems, even experienced SEOs frequently struggle to give a precise answer.

This is unfortunate, because PageRank is not merely a historical curiosity. It is the conceptual foundation on which the entire discipline of link building rests. Every backlink your site earns, every internal link you place, and every authority signal that flows through your site can be traced, in some form, back to the principles Larry Page and Sergey Brin formalised in 1998. Understanding PageRank — what it actually does, how it has evolved, and what role it plays today — is therefore not optional knowledge for any serious practitioner of SEO. It is the bedrock of strategy.

This guide presents PageRank in three parts. First, the algorithm itself: where it came from, how it works mathematically, and what its founders intended it to measure. Second, the evidence: what Google has publicly confirmed about PageRank’s ongoing role, what the 2024 internal API leak revealed, and how the algorithm has evolved into multiple specialised variants. Third, the practical implications: how a clear understanding of PageRank should inform link building, internal linking, content strategy, and site architecture in 2026.

If you are new to the broader subject of link acquisition, it is worth pairing this guide with our foundational article on what link building is and how it works. For a definitional overview of the link itself as a unit of authority, see our companion piece on backlinks and how they pass value between pages. Together, those two articles establish the vocabulary this guide will assume from this point forward.

1. The Origins of PageRank

1.1 The problem PageRank was invented to solve

To understand PageRank, you must first understand the problem its inventors were trying to solve. In 1996, when Larry Page and Sergey Brin began their PhD research project at Stanford, the dominant search engines of the era — AltaVista, Lycos, Excite, Inktomi — relied almost exclusively on on-page signals to rank documents. They counted keyword occurrences, weighted by location and frequency, and returned results sorted by simple textual relevance. The approach worked tolerably well when the web was small. As the web grew, it failed catastrophically.

The reason it failed was that on-page signals are trivially easy to manipulate. Anyone can write the word “insurance” five hundred times in invisible white text at the bottom of a page. Anyone can stuff a meta keywords tag with every conceivable variation of a search query. By the mid-1990s, vast swathes of search results were dominated by spam pages that had gamed these primitive ranking systems with brute-force keyword repetition. Search results were genuinely poor. The web was becoming harder, not easier, to navigate.

Page and Brin recognised, building on insights from academic citation analysis, that the structure of the web itself contained information that no one was using. When one website links to another, that link is — in some sense — an editorial endorsement. The author of the linking page has chosen, deliberately, to direct readers towards a different document. They have implicitly vouched for it. If you could quantify these endorsements at scale, you could estimate which pages on the web were actually considered valuable by the people creating it.

This insight was not entirely original. Citation analysis had been used in academia for decades; journal impact factors, citation counts, and bibliometric ranking systems all rested on similar foundations. What Page and Brin contributed was the elegant mathematical formulation that allowed this principle to be applied to the entire web in a tractable, recursive way. They called it PageRank.

1.2 The original definition

In their seminal 1998 paper, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Page, Brin, Motwani, and Winograd defined PageRank as a recursive measure of a page’s importance based on the structure of inbound links. The core intuition is simple. A page is important if other important pages link to it.

Notice that this definition is recursive: importance is defined in terms of importance. To resolve the recursion, the algorithm computes the values iteratively. Every page on the web begins with an equal initial PageRank. In each iteration, every page distributes its current PageRank evenly across all the pages it links to, and each page then sums the PageRank flowing in from its inbound links. After many iterations — typically several dozen passes over the entire link graph — the values stabilise. Pages that receive a great deal of link equity from many other high-PageRank pages converge to high scores. Pages with few or low-quality inbound links converge to low scores.

The name itself carries a deliberate double meaning. “PageRank” refers to the ranking of pages, but it is also a play on the surname of Larry Page, the algorithm’s lead author. This dual meaning has been a frequent source of confusion ever since.

1.3 The random surfer model

The cleanest intuition for PageRank — the one Brin and Page themselves used to explain it — is what they called the random surfer model. Imagine an internet user who has no particular goal in mind and no particular preference for any one site. They begin at a random page on the web and begin clicking links. From whatever page they land on, they pick one of the outbound links uniformly at random and follow it. They continue this process indefinitely.

PageRank is, in this framing, simply the long-run probability that this random surfer is on any given page at any given moment. Pages that many other pages link to will naturally attract more of this random traffic; pages that almost nothing links to will rarely be visited. The PageRank score of a page is, quite literally, the steady-state probability of finding the random surfer there.

There is one important refinement. A purely random surfer who only follows links would eventually get stuck. They might wander into a region of the web with no outbound links — a so-called “dangling page” — or into a small group of pages that only link to each other. To prevent this, the algorithm introduces a damping factor, traditionally denoted d, which represents the probability that the surfer continues clicking rather than getting bored and jumping to a completely random page. Page and Brin set d ≈ 0.85 in their original paper, meaning that on each step the surfer has an 85% chance of following a link and a 15% chance of teleporting to a random page. This single elegant device prevents the algorithm from getting trapped, ensures that the calculation always converges, and provides a small floor of PageRank to every page on the web.
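The random surfer is straightforward to simulate. The sketch below estimates PageRank for a tiny invented four-page graph by letting a simulated surfer wander for many steps and counting where it spends its time; the page names and link structure are purely illustrative.

```python
import random

# A toy link graph: each page maps to the pages it links to.
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # D links out, but nothing links to D
}

def random_surfer(graph, steps=200_000, d=0.85, seed=42):
    """Estimate PageRank as the fraction of time a random surfer
    spends on each page. With probability d the surfer follows a
    random outbound link; otherwise (or on a page with no outbound
    links) it teleports to a page chosen uniformly at random."""
    rng = random.Random(seed)
    pages = list(graph)
    visits = {p: 0 for p in pages}
    page = rng.choice(pages)
    for _ in range(steps):
        visits[page] += 1
        links = graph[page]
        if links and rng.random() < d:
            page = rng.choice(links)   # follow a link (prob. d)
        else:
            page = rng.choice(pages)   # teleport (prob. 1 - d)
    return {p: v / steps for p, v in visits.items()}

pr = random_surfer(graph)
```

On this graph, C — the most linked-to page — attracts the largest share of visits, while D, with no inbound links at all, receives only the teleportation floor of roughly (1 − d)/N.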

1.4 Why the algorithm worked

PageRank worked, in 1998, for a reason that is easy to forget nearly three decades later: links were genuinely difficult to manipulate. The web was small enough, and the population of webmasters expert enough, that the link graph reflected something close to authentic editorial endorsement. Spam pages had no way to acquire high-quality inbound links at scale. Authoritative pages — the BBC, the Encyclopaedia Britannica, university homepages, well-established news organisations — accumulated links naturally over time, simply because people referencing those domains in writing had a habit of linking to them.

PageRank, by ranking results in rough proportion to this organic link distribution, returned dramatically better search results than any of its competitors. Within a few years, Google had displaced every previous search engine. PageRank was the technical insight that made it possible. Everything Google has built since has been an elaboration, refinement, or extension of that original idea.

2. The Mathematics of PageRank

2.1 The simplified formula

Although Google’s production implementation has long since departed from the original 1998 formulation, the simplified version remains the cleanest way to develop intuition. In its most accessible form, the PageRank of a page A is calculated as:

PR(A) = (1 − d) / N + d × Σ [ PR(Tᵢ) / C(Tᵢ) ]

Where:

  • PR(A) is the PageRank of page A, the value being calculated.
  • d is the damping factor, conventionally set to 0.85.
  • N is the total number of pages in the link graph.
  • Tᵢ are the pages that link to page A.
  • PR(Tᵢ) is the PageRank of each such linking page.
  • C(Tᵢ) is the total number of outbound links on each linking page.

In plain language: a page’s PageRank is the sum, over every page that links to it, of that linking page’s own PageRank divided by its total number of outbound links — multiplied by 0.85, with a small constant added to ensure no page has a PageRank of zero.
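The formula translates almost directly into code. The following sketch runs the calculation on an invented four-page site; the page names and link structure are illustrative only.

```python
# A toy link graph: keys link to the pages in their value lists.
links = {
    "home":    ["about", "blog", "contact"],
    "about":   ["home"],
    "blog":    ["home", "about"],
    "contact": ["home"],
}

def pagerank(links, d=0.85, iterations=50):
    """Apply PR(A) = (1 - d)/N + d * sum(PR(T)/C(T)) repeatedly
    until the scores settle."""
    N = len(links)
    pr = {page: 1 / N for page in links}   # equal initial scores
    for _ in range(iterations):
        new = {}
        for page in links:
            # Sum PR(T)/C(T) over every page T that links to `page`.
            inbound = sum(
                pr[t] / len(outs)
                for t, outs in links.items()
                if page in outs
            )
            new[page] = (1 - d) / N + d * inbound
        pr = new
    return pr

scores = pagerank(links)
```

Because every other page links to the homepage, it accumulates the highest score, and — since no page here lacks outbound links — the total PageRank across the graph remains 1.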

2.2 Link equity and the dilution effect

The division by C(Tᵢ) — the number of outbound links on the linking page — is the most strategically important component of the formula for SEOs to understand. It implies that PageRank is conserved when a page links out: the PageRank of the linking page is divided evenly among its outbound links, not duplicated for each one.

A page with a PageRank of, say, 10 that links to a single other page passes the full 8.5 (after damping) to that destination. A page with the same PageRank that links to ten different destinations passes only 0.85 to each. This is the mathematical foundation of what SEOs informally call “link equity” or “link juice.” It explains, among many other things, why a link from a page with five outbound links is more valuable than a link from a page with five hundred — even when both linking pages have identical authority. It also explains why footer link farms, blogroll widgets, and pages with hundreds of low-quality outbound links pass very little PageRank per link.

2.3 Iteration and convergence

PageRank is calculated iteratively because the algorithm is recursive. The PageRank of every page depends on the PageRank of every page linking to it, which in turn depends on the PageRank of every page linking to those, and so on through the entire link graph. There is no closed-form solution that can be computed in a single pass.

Instead, the algorithm proceeds as follows. All pages begin with an identical initial PageRank — typically 1/N, where N is the total number of pages. The algorithm then applies the formula to every page in the graph, producing a new set of PageRank values. It repeats this process many times. With each iteration, the values move closer to their stable equilibrium. After enough passes — typically forty to fifty for the original web-scale calculations — the values stop changing significantly between iterations and the algorithm is said to have converged. Those final values are the PageRanks.

Modern variants use mathematical optimisations — power iteration, sparse matrix techniques, parallel computation across distributed systems — to compute these values for hundreds of billions of pages in feasible time. The principle, however, is unchanged.
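At small scale the same idea can be expressed as power iteration over a link matrix. The NumPy sketch below uses a dense matrix for an invented four-page graph (home, about, blog, contact) and iterates until the scores stop changing; production systems use sparse, distributed equivalents.

```python
import numpy as np

def pagerank_power(M, d=0.85, tol=1e-9):
    """Power iteration on a column-stochastic link matrix M, where
    M[i, j] = 1/outdegree(j) if page j links to page i, else 0.
    Iterates until the scores change by less than `tol`."""
    N = M.shape[0]
    pr = np.full(N, 1 / N)
    teleport = np.full(N, (1 - d) / N)
    for iteration in range(1, 1000):
        new = teleport + d * M @ pr
        if np.abs(new - pr).sum() < tol:
            return new, iteration
        pr = new
    return pr, iteration

# Columns: home, about, blog, contact (each column sums to 1).
M = np.array([
    [0,   1, 1/2, 1],   # home    <- about, blog, contact
    [1/3, 0, 1/2, 0],   # about   <- home, blog
    [1/3, 0, 0,   0],   # blog    <- home
    [1/3, 0, 0,   0],   # contact <- home
])
scores, iters = pagerank_power(M)
```

With d = 0.85, each pass shrinks the remaining error by roughly a factor of 0.85, which is why a few dozen to a little over a hundred iterations suffice even at a tight tolerance.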

2.4 The logarithmic public scale

The internal PageRank values that Google computes are real numbers between 0 and 1. The familiar 0–10 scale that was once visible in the Google Toolbar was a logarithmic compression of these internal values designed for human readability. A page with a Toolbar PageRank of 4 was, in rough terms, ten times more valuable from a link equity perspective than a page with a Toolbar PageRank of 3 — and the gap between PR8 and PR9 was vastly larger than the gap between PR1 and PR2.
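The compression itself is easy to illustrate. The function below maps a tiny internal score onto a 0–10 toolbar-style scale, assuming — purely for illustration — a base-10 logarithm and an arbitrary floor value; Google never published the real base or calibration.

```python
import math

def toolbar_score(internal_pr, floor=1e-10, base=10):
    """Illustrative only: compress a small internal PageRank value
    (between 0 and 1) onto a 0-10 toolbar-style scale. The base and
    floor here are assumptions, not Google's actual parameters."""
    score = math.log(max(internal_pr, floor) / floor, base)
    return min(10, max(0, round(score)))
```

Under these assumptions, each tenfold increase in internal PageRank moves the toolbar-style score up by exactly one point, which is the property the prose above describes.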

This logarithmic property has not gone away simply because Google retired the public display in 2016. Modern proxy metrics like Ahrefs’ Domain Rating, Moz’s Domain Authority, and Semrush’s Authority Score all use comparable logarithmic scales precisely because the underlying distribution of link equity on the web is itself extraordinarily skewed. We will return to these proxy metrics in Section 6, and you can read our standalone analysis of how to interpret them in our guide to Domain Authority and what it actually measures.

3. The History of PageRank, From Public Score to Hidden Engine

3.1 The Toolbar era

In December 2000, Google released the Google Toolbar — a browser plugin that displayed, among other things, the PageRank of whatever page the user was currently viewing. The score was rendered as a small green bar with a value from 0 to 10, accompanied by a numeric tooltip. For the first time, anyone could check the PageRank of any URL on the web simply by visiting it.

The consequences were predictable and, in retrospect, calamitous for the integrity of the link graph. SEOs began obsessing over the visible score. Link buyers and sellers used Toolbar PageRank as a price benchmark. Entire industries grew up around manipulating it: link wheels, link farms, paid placements on PR8 and PR9 pages, comment spam at industrial scale. The same algorithm that had been designed to surface authentic editorial endorsements was now being systematically gamed by the very people whose links were supposed to constitute the signal.

Google’s response evolved gradually. The company introduced the rel="nofollow" attribute in 2005 to allow webmasters to deny PageRank flow to specific outbound links. It rolled out the Penguin algorithm update in 2012, which targeted manipulative link patterns at scale. It updated the Toolbar PageRank values less and less frequently — once per quarter, then once per year, then not at all. In October 2014, Matt Cutts confirmed that there would be no further public updates. In April 2016, Google formally retired the Toolbar PageRank API. The public score was gone.

3.2 What “retired” actually meant

It is critical to understand precisely what was retired in 2016 and what was not. The Toolbar PageRank — the public-facing score visible to webmasters — was deprecated. The underlying PageRank algorithm was not. Google representatives stated explicitly, both at the time and repeatedly since, that PageRank continued to be calculated internally and continued to be used as part of the core ranking system.

In a March 2016 Google Q&A hangout, Andrey Lipattsev was asked which signals Google considered the most important. His response was unambiguous: content and links pointing to your site, alongside RankBrain, were the top three. Other Google engineers — Gary Illyes, John Mueller, Danny Sullivan — have made similar statements at various points, consistently confirming that the PageRank algorithm in some form continues to operate inside Google’s core ranking systems.

What had genuinely changed by 2016 was that PageRank was no longer the dominant signal. RankBrain, neural ranking, semantic relevance, behavioural data, and dozens of other systems had grown to share the algorithmic stage. PageRank had moved from being almost the entire show in 2002 to being one important member of an ensemble cast in 2016. But it had not left the cast.

3.3 The 2024 leak

On 27 May 2024, the SEO community received the most significant external confirmation of PageRank’s continued role to date. An automated repository scraper accidentally exposed approximately 2,500 pages of internal Google API documentation on GitHub. The leak was discovered by SEO Erfan Azimi, who passed it to Rand Fishkin and Mike King, who in turn analysed and publicised the contents. Google subsequently confirmed the documents’ authenticity, while cautioning that they were partial and out of context.

The relevance of the leak to PageRank specifically is that it definitively settled a long-running debate about whether the algorithm was still in use. The leaked documentation referenced not one but multiple active PageRank variants running concurrently inside Google’s ranking infrastructure. The named variants included:

  • RawPageRank — believed to be the basic, unadjusted PageRank score for a page based on its raw inbound link graph.
  • PageRank2 — an updated, modern variant of the algorithm whose precise differences from RawPageRank have not been disclosed.
  • PageRank_NS (Nearest Seed) — a clustering variant that appears to be used for assessing topical relevance and identifying low-quality pages within content clusters.
  • FirstCoveragePageRank — the PageRank value associated with a page when Google first discovers and indexes it.
  • ToolBarPageRank — the legacy Toolbar score, which despite the 2016 public retirement appears still to be referenced internally, particularly within the NavBoost click-data system.

In total, the leaked documentation referenced seven distinct PageRank-related fields. The implication is unmistakable: PageRank was not retired in 2016. It was diversified, refined, and embedded more deeply into Google’s ranking architecture than ever before. The public score was retired; the algorithm was not.

3.4 Google’s public confirmation

In a development almost as significant as the leak itself, Google updated its own official “A Guide to Google Search Ranking Systems” documentation in 2024 to explicitly acknowledge PageRank as a continuing ranking signal. The current version of that page states, in plain English, that Google has “various systems that understand how pages link to each other” and identifies PageRank by name as “one of our core ranking systems used when Google first launched” whose mechanics “have evolved a lot since then” but which “continues to be part of our core ranking systems.”

Between the leak and the official documentation update, the question of whether PageRank still matters in 2026 is now closed. It is no longer a matter of inference, speculation, or reading between the lines of evasive Google PR statements. The algorithm runs. It influences rankings. The only remaining questions are about how much weight it carries relative to other signals, and how it interacts with the dozens of other ranking systems Google now operates.

4. Modern PageRank: Variants, Refinements, and the Reasonable Surfer

4.1 The reasonable surfer

The most important refinement of PageRank since the original 1998 paper is the so-called reasonable surfer model, introduced in a Google patent filed in 2004. The original random surfer model assumed that a person on a webpage was equally likely to click any of the outbound links on that page. This is, on reflection, an obviously poor model of actual human behaviour. People do not click footer links and prominent in-content links with equal probability. They click contextual editorial links far more often than they click navigational menu items, and they almost never click on small links buried in disclaimers, copyright notices, or sponsorship disclosures.

The reasonable surfer model adjusts PageRank flow to reflect this. Links in prominent in-content positions, surrounded by relevant text, with descriptive anchor text, and likely to be clicked by an actual reader pass more PageRank than links in footers, sidebars, blogroll widgets, or boilerplate that no human ever interacts with. The model uses a variety of heuristic features — link position, surrounding text, font size, link prominence, anchor text relevance, click-through data — to estimate the probability that a real user would click a given link, and weights the PageRank flow accordingly.
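The shift from random to reasonable surfer amounts to replacing the even split of the classic formula with a weighted one. The sketch below distributes a page's outbound PageRank in proportion to estimated click probabilities; the weights here are invented for illustration, whereas Google's real model derives them from link features and click data.

```python
def distribute_pagerank(page_pr, links, d=0.85):
    """Split a page's outbound PageRank in proportion to each link's
    estimated click probability, instead of evenly. `links` is a list
    of (target, weight) pairs; the weights are illustrative guesses."""
    total = sum(weight for _, weight in links)
    return {
        target: d * page_pr * weight / total
        for target, weight in links
    }

# A page with PR 1.0 and three outbound links of differing prominence.
flows = distribute_pagerank(1.0, [
    ("in-content editorial link", 0.70),
    ("sidebar link",              0.25),
    ("footer boilerplate link",   0.05),
])
```

With these assumed weights, the in-content link receives fourteen times the equity of the footer link, even though under the classic model all three would receive an identical third of the available PageRank.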

Several practical implications follow from the reasonable surfer model. The position of a link on the linking page matters. The contextual relevance of the link to the surrounding content matters. Click-through rate matters. A link buried in a 200-link footer passes a tiny fraction of the PageRank that the same backlink would pass from a prominent contextual placement in the body of an article. This is why our guidance in our complete guide to anchor text and link placement emphasises in-content placement over peripheral positions, and why our broader work on link building strategies that actually produce results prioritises editorial placements over directory and profile links.

4.2 Topic-sensitive PageRank

A second major refinement is topic-sensitive PageRank, originally developed in academic research by Taher Haveliwala in 2002 and subsequently incorporated, in some form, into Google’s production ranking. The basic idea is to compute multiple PageRank scores for each page — not a single score representing general authority, but a separate score for each of a small set of topical categories. A page about cardiovascular medicine, for example, might have a high PageRank within the medical topic cluster but a low PageRank in the consumer technology cluster.
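Haveliwala's mechanism is a small change to the classic calculation: the teleportation mass is restricted to pages known to belong to the topic, so the resulting scores measure authority within that cluster. A sketch, with an invented graph and topic labels:

```python
def topic_pagerank(links, topic_pages, d=0.85, iterations=100):
    """Topic-sensitive PageRank sketch: the (1 - d) teleport mass is
    spread only over pages in `topic_pages`, so scores measure
    authority relative to that topical cluster. The graph and topic
    labels below are invented for illustration."""
    pr = {p: 1 / len(links) for p in links}
    for _ in range(iterations):
        new = {}
        for page in links:
            teleport = (1 - d) / len(topic_pages) if page in topic_pages else 0.0
            inbound = sum(
                pr[t] / len(outs) for t, outs in links.items() if page in outs
            )
            new[page] = teleport + d * inbound
        pr = new
    return pr

# A mixed graph: two medical pages and two gardening pages.
links = {
    "cardiology":  ["med-journal"],
    "med-journal": ["cardiology"],
    "roses":       ["compost", "cardiology"],
    "compost":     ["roses"],
}
medical = topic_pagerank(links, topic_pages={"cardiology", "med-journal"})
```

Scored against the medical topic, the gardening pages converge towards zero even though "roses" links into the medical cluster — their authority simply belongs to a different topical PageRank.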

This refinement is essential for combating one of the most common forms of link manipulation: the acquisition of high-authority links from topically irrelevant sources. Under pure classical PageRank, a link from a high-authority gardening blog to a personal injury law firm would pass significant link equity. Under a topic-sensitive system, the same link passes much less, because the gardening blog has no authority in the legal topic cluster. The 2024 leak’s reference to PageRank_NS, the “Nearest Seed” variant, is widely interpreted as a clustering and topical-relevance system that operates on similar principles.

The strategic implication is straightforward. Topical relevance now matters at least as much as raw authority. A DR 60 link from a publication directly relevant to your industry will frequently outperform a DR 80 link from a tangentially related lifestyle site. We treat this question in detail in our guide to white hat link building and what genuinely works, and discuss it again in the context of avoiding penalties in our analysis of toxic backlinks and how Google identifies them.

4.3 Trust and seed-based PageRank

A third significant refinement is the family of trust-based PageRank variants — concepts such as TrustRank, originally proposed in 2004 by researchers at Stanford and Yahoo, and apparently echoed in Google’s PageRank_NS naming convention. The principle is that some pages on the web are known, with high confidence, to be trustworthy: government domains, established universities, the BBC, well-curated directories of reputable information. These pages are designated “seeds.” Trust then propagates outward through the link graph from these seeds, with pages closer to seed pages — measured by link distance — receiving higher trust scores.

This propagation of trust serves as a partial defence against link spam. A page that is twenty link-hops away from any trusted seed, even if it has accumulated thousands of inbound links from PBNs and link farms, scores poorly on trust metrics. A page that is one or two link-hops from multiple trusted seeds, even with a more modest total inbound link count, scores well. This dynamic is one important reason why links from genuinely authoritative publications carry disproportionate weight, and why a small number of high-quality editorial placements often outperform large quantities of cheap directory or paid links.
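A minimal sketch of seed-based trust propagation: trust starts at 1.0 on hand-picked seeds and halves with every link hop away from the nearest one. The decay rate, hop limit, and graph below are illustrative assumptions, not published parameters.

```python
from collections import deque

def trust_scores(links, seeds, decay=0.5, max_hops=5):
    """TrustRank-style sketch: breadth-first search finds each page's
    link distance to the nearest seed; trust is decay ** distance.
    Pages never reached from any seed receive no score at all."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        if dist[page] >= max_hops:
            continue
        for target in links.get(page, []):
            if target not in dist:
                dist[target] = dist[page] + 1
                queue.append(target)
    return {p: decay ** dist[p] for p in dist}

# An invented chain of sites leading away from a trusted seed.
links = {
    "gov-site":      ["university", "news-org"],
    "university":    ["industry-blog"],
    "industry-blog": ["small-business"],
    "news-org":      [],
}
trust = trust_scores(links, seeds={"gov-site"})
```

Under these assumptions, a page three hops from the seed retains an eighth of the seed's trust — which is the intuition behind the twenty-hops-from-anything-trusted page scoring poorly, however many PBN links it accumulates.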

4.4 The seven PageRank variants of 2026

The variants referenced in the leaked documentation, with their inferred functions:

  • RawPageRank — the unadjusted, classical PageRank score for a page based on its inbound link graph. Likely the closest surviving descendant of the original 1998 algorithm.
  • PageRank2 — a modernised PageRank variant whose precise refinements remain undocumented but which presumably incorporates damping and weighting adjustments learned from two decades of production experience.
  • PageRank_NS — “Nearest Seed”: a clustering and topical-relevance variant that appears to evaluate pages within content clusters and to flag low-quality pages by their proximity to known low-quality seeds.
  • FirstCoveragePageRank — the PageRank value at the moment a page is first discovered and indexed by Google. Likely used as a baseline reference for measuring how a page’s authority evolves over time.
  • ToolBarPageRank — the legacy public score from the deprecated Google Toolbar. Apparently still referenced internally, particularly in connection with the NavBoost click-data system, despite the 2016 public retirement.
  • HomepagePageRank — a site-level signal derived from the PageRank of the homepage and considered for every document on the site. Strongly emphasises the strategic importance of the homepage in any link building plan.
  • siteAuthority — although not strictly a PageRank field by name, the leaked documentation references a siteAuthority signal that draws heavily on backlink quality and diversity. Functionally, it is closely related to domain-level link equity.

Two implications follow from this structure. First, PageRank in 2026 is not a single score but a family of related signals, each computed somewhat differently and used for somewhat different purposes inside the broader ranking system. Second, the PageRank of the homepage is treated as a near-universal site-level signal — which means that link building campaigns that ignore the homepage in favour of deep-page acquisition are leaving substantial value on the table. We discuss the strategic implications of this finding at length below.

5. PageRank in 2026: Where It Sits in the Ranking Hierarchy

5.1 PageRank is not the whole algorithm

The most important framing point about PageRank in 2026 is that it is one signal among many — not the dominant signal, and certainly not the entire ranking system. Google has invested heavily, over the past decade, in non-link signals: behavioural metrics, semantic relevance models, neural ranking systems, freshness signals, E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) evaluations, and a great many others. The 2024 leak referenced over fourteen thousand distinct features. PageRank-related variants account for seven of them.

Anyone who tells you in 2026 that link building is the only thing that matters in SEO is wrong. The picture is more nuanced than that. But anyone who tells you that links no longer matter, that PageRank is dead, or that you can rank in competitive niches without serious investment in authority is also wrong. The truth is in the middle. Links matter; they remain a foundational signal; the algorithm rewards them; and they continue to be among the most reliable ways to differentiate sites of comparable on-page quality.

5.2 The current weight of PageRank-style signals

Google’s public position, articulated by various representatives over the years, is that links are one of the top three ranking factors alongside content quality and RankBrain. The 2024 leak does not contradict this; if anything, it reinforces it by showing that link-based signals are still computed in many specialised forms and applied across the ranking pipeline. Quantifying the weight of PageRank precisely is impossible — the leaked documentation describes which features exist, not how they are combined or weighted — but the qualitative picture is clear. Link signals, including PageRank in its various modern forms, are heavyweight ranking factors in 2026, particularly in competitive commercial niches.

5.3 What complements PageRank

Several non-link systems work alongside PageRank to determine final rankings:

  • Content quality and helpfulness signals — including the ongoing series of Helpful Content updates and the underlying quality scoring systems.
  • E-E-A-T evaluation — Google’s framework for assessing experience, expertise, authoritativeness, and trustworthiness, particularly important for YMYL (Your Money or Your Life) topics.
  • Semantic and intent-matching systems — including BERT, MUM, and the neural matching pipelines that interpret query intent.
  • NavBoost and behavioural signals — including click-through data, dwell time, and engagement metrics derived in part from Chrome browser data.
  • Core Web Vitals and technical signals — page experience metrics including loading, interactivity, and visual stability.
  • Freshness and recency signals — particularly for query-deserves-freshness intents.

These systems do not replace PageRank; they complement it. A page with strong link equity but weak content quality will still under-rank a competitor with comparable links and stronger content. A page with strong content quality but no link equity will still struggle to outrank a comparable competitor with both. Sustainable rankings in 2026 require both.

5.4 The interaction with NavBoost

One of the most consequential revelations from the 2024 leak was the description of NavBoost, a system that uses click data — including data from Chrome browser usage — to influence rankings. NavBoost interacts with PageRank in important ways. The leaked documents indicated that Google categorises links into different quality tiers, with click data influencing the tier assignment and, in turn, the PageRank flow that the link contributes.

In other words, links that are actually clicked by real users pass more PageRank than links that are not. This is the reasonable surfer model, operationalised against actual user data rather than estimated heuristically. It has profound implications for link building. A link from a well-trafficked, genuinely useful page in a relevant niche will pass meaningful PageRank because real users actually click it. A link from a high-DR page in a tangentially related niche, embedded in a paragraph no one reads, will pass much less.

This is one of several reasons why we recommend, in our guidance on identifying and assessing competitor backlink profiles, that you weight your evaluation toward links from pages with demonstrated traffic and topical relevance, rather than relying on raw DR or DA scores in isolation.

6. PageRank vs Domain Authority, DR, AS, and Other Proxies

6.1 The key distinction

Because Google’s actual PageRank values have been hidden from public view since 2016, third-party SEO tools have built proprietary metrics designed to approximate them. The most widely used are Moz’s Domain Authority (DA), Ahrefs’ Domain Rating (DR), Semrush’s Authority Score (AS), and Majestic’s Trust Flow and Citation Flow. Each operates on a 0–100 scale; each is logarithmic; each is calculated from each tool’s own crawl of the web link graph.

The critical conceptual distinction is this: these metrics are proxies, not measurements. They do not have access to Google’s internal PageRank values. They estimate what those values are likely to be, based on each tool vendor’s own approximation of the link graph and their own modelling of how authority probably propagates through it. They are useful — sometimes very useful — but they are not the same as PageRank itself.

6.2 What each proxy measures

  • Domain Rating (DR), Ahrefs — strength of a domain’s backlink profile, weighted by linking domain authority. Logarithmic 0–100 scale.
  • Domain Authority (DA), Moz — predictive estimate of how well a domain is likely to rank in Google search. Calibrated against actual SERP data.
  • Authority Score (AS), Semrush — composite score combining link quality, organic traffic, and natural-profile signals. 0–100 scale.
  • Trust Flow / Citation Flow, Majestic — Trust Flow estimates link quality (proximity to trusted seeds); Citation Flow estimates link quantity. 0–100 scales.
  • URL Rating (UR), Ahrefs — the page-level equivalent of DR, estimating the strength of a single page’s backlink profile rather than the whole domain.

Two further points are worth noting. First, these metrics are highly correlated with each other but not identical. A site might have DR 45 in Ahrefs, DA 38 in Moz, and AS 41 in Semrush. The discrepancies reflect differences in crawl depth, link graph coverage, and proprietary weighting. Second, none of these metrics is Google. A site can have a strong DR and rank poorly, or a modest DR and rank well, depending on the many other signals that influence final placement. Use them as directional indicators, not as authoritative ranking predictions. We treat the proper interpretation of these scores in greater depth in our standalone analysis of Domain Authority and what it actually tells you.

7. How to Build PageRank in Practice

7.1 External link acquisition

Every practical recommendation in this section flows from the same underlying principle: PageRank rewards inbound links from high-authority, topically relevant pages. Anything that increases the number of such links pointing to your site, or improves the way PageRank flows through your site once it arrives, will tend to improve your rankings over time. Anything that does not do this — submissions to low-quality directories, comment spam, link wheels, PBNs at scale — is at best wasted effort and at worst actively harmful.

Within this principle, the most reliable categories of link acquisition in 2026 are:

  • Editorial placements in genuinely authoritative publications, earned through original research, distinctive perspectives, or genuinely useful resources.
  • Digital PR campaigns that produce coverage in mainstream and trade media within your niche.
  • Guest posts on high-quality publications that have meaningful traffic, careful editorial standards, and topical relevance to your business.
  • Resource-page placements where your content genuinely deserves to appear among the resources on offer.
  • Broken-link reclamation, where you offer your content as a replacement for outdated or dead links on relevant pages.

We treat each of these categories in detail across the rest of our content. For an overview of the strategy landscape, see our guide to link building strategies that genuinely work in 2026. For the foundational definitions, see our piece on what link building is and why it matters.

7.2 The role of internal linking

If external link acquisition is the most-discussed aspect of PageRank optimisation, internal linking is the most underrated. Once PageRank has flowed into your domain through external backlinks, the internal link structure of your site determines how that PageRank is distributed across your pages. A site with a well-designed internal link architecture will channel link equity from highly linked pages — typically the homepage and a small number of pillar pages — towards the deeper pages where it can support specific commercial or informational rankings. A site with chaotic internal linking will see that equity dissipate, trapped in dead-end pages, orphaned articles, or buried navigation paths.

The classical hub-and-spoke model — in which a small number of authoritative pillar pages link to a larger set of cluster pages on related topics, and those cluster pages link back to their pillar — is widely used because it is mathematically efficient. It concentrates external PageRank in pages that are easy to rank for broad informational terms, and channels it outward to pages that target more specific commercial or transactional intents. Our complete guide to internal linking strategy and how to design it covers this topic in dedicated detail.
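The concentration effect the hub-and-spoke model exploits falls straight out of the original 1998 algorithm. A minimal sketch, using the classic power-iteration formulation on an invented five-page site (the page names and link structure are hypothetical, and production PageRank is far more elaborate than this):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank power iteration over a dict {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

# Toy hub-and-spoke site: the homepage links to the pillar, the pillar
# links to three cluster pages, and each cluster links back to the pillar.
site = {
    "home": ["pillar"],
    "pillar": ["cluster-a", "cluster-b", "cluster-c"],
    "cluster-a": ["pillar"],
    "cluster-b": ["pillar"],
    "cluster-c": ["pillar"],
}
ranks = pagerank(site)
# The pillar accumulates the largest share of internal equity.
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```

Running this, the pillar ends up with by far the highest rank, because every other page channels its equity back to it — which is exactly why the model ranks well for broad pillar-level terms.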

7.3 Anchor text and the reasonable surfer

Because the reasonable surfer model gives more weight to links that are likely to be clicked, the way you describe your links — both as the recipient of inbound links and as the author of outbound and internal links — affects how much PageRank flows through them. Descriptive, contextually relevant anchor text in the body of an article is the strongest signal. Generic anchor text (“click here,” “read more”) is weaker. Exact-match anchor text in unnatural patterns is actively risky, as Google’s spam systems flag manipulative anchor text profiles as a sign of paid or coerced linking.

The current consensus, supported by both the published research and the practical experience of agencies operating at scale, is to maintain a natural-looking anchor text distribution dominated by branded anchors, partial-match anchors, and generic anchors, with only a small fraction of exact-match phrases. We treat this in detail in our complete guide to anchor text in 2026.
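Auditing your own distribution against that consensus is straightforward to automate. A minimal sketch of the kind of bucketing most profile audits use — the brand name, target phrase, bucket rules, and sample anchors below are all hypothetical, and real audit tools use far richer matching:

```python
from collections import Counter

BRAND = "acme"                             # hypothetical brand name
TARGET_PHRASE = "link building services"   # hypothetical money phrase
GENERIC = {"click here", "read more", "here", "this site", "website"}

def classify_anchor(anchor: str) -> str:
    """Bucket an anchor text into the categories used in profile audits."""
    text = anchor.strip().lower()
    if text in GENERIC:
        return "generic"
    if text == TARGET_PHRASE:
        return "exact-match"
    if BRAND in text:
        return "branded"
    if any(word in text for word in TARGET_PHRASE.split()):
        return "partial-match"
    return "other"

# Hypothetical anchors pulled from a backlink export:
anchors = [
    "Acme", "acme.com", "click here", "link building services",
    "guide to link building", "Acme's blog", "read more", "SEO resources",
]
dist = Counter(classify_anchor(a) for a in anchors)
for bucket, count in dist.most_common():
    print(f"{bucket}: {count / len(anchors):.0%}")
```

A healthy profile, on the consensus described above, shows branded and generic buckets dominating and exact-match confined to a small single-digit percentage.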

7.4 Dofollow, nofollow, and PageRank flow

Since 2005, the rel="nofollow" attribute has been used by webmasters to prevent specific outbound links from passing PageRank. Since 2019, Google has treated nofollow as a hint rather than a strict directive: the system may, in some cases, choose to treat a nofollow link as a meaningful signal anyway, particularly if other indicators suggest it represents a genuine endorsement.

In practical terms, dofollow links remain the gold standard for PageRank flow, and a backlink profile composed entirely of nofollow placements will significantly underperform a comparable profile of dofollow placements. However, a small fraction of nofollow links is normal and natural — major social platforms, Wikipedia, large news organisations, and many forum platforms apply nofollow by default — and the absence of any nofollow links in a profile can itself be a signal of unnatural manipulation. Our standalone analysis of dofollow versus nofollow links in 2026 covers the current state of this distinction in depth.

7.5 Link velocity and acquisition patterns

PageRank is, in principle, computed on the current state of the link graph regardless of how that graph was assembled. In practice, however, Google’s spam systems analyse the temporal pattern of link acquisition as a signal of authenticity. A site that goes from zero to two thousand backlinks in two weeks, with no corresponding increase in brand searches, traffic, or coverage, is exhibiting a pattern that the spam systems recognise as artificial. The PageRank flowing from those links may be heavily discounted or suppressed entirely as the system flags the profile for review.

Sustainable link building therefore involves both quality and pacing. We discuss the temporal dimension of link acquisition — what “natural” growth looks like, what triggers algorithmic suspicion, and how to build at scale without crossing those thresholds — in our guide to link velocity and why it matters for SEO.
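The kind of discontinuity described above is easy to surface from a backlink export's first-seen dates. A minimal sketch — the sample data is invented, and the 3x-of-average threshold is an arbitrary illustration, not a known Google parameter:

```python
from collections import Counter
from datetime import date
from statistics import mean

def monthly_velocity(first_seen_dates):
    """Count newly discovered links per calendar month."""
    return Counter((d.year, d.month) for d in first_seen_dates)

def spike_months(counts, factor=3.0):
    """Flag months whose acquisition rate exceeds `factor` times the
    profile's average - the sort of discontinuity worth investigating."""
    avg = mean(counts.values())
    return [m for m, c in sorted(counts.items()) if c > factor * avg]

# Hypothetical first-seen dates from a backlink export:
dates = (
    [date(2026, 1, 5)] * 12 + [date(2026, 2, 10)] * 15 +
    [date(2026, 3, 2)] * 240 +   # sudden burst of new links
    [date(2026, 4, 20)] * 14
)
counts = monthly_velocity(dates)
print(spike_months(counts))  # → [(2026, 3)]
```

A flagged month is not proof of a problem — a viral campaign produces the same shape — but it tells you where to look for links acquired faster than the surrounding brand signals can justify.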

7.6 Auditing existing PageRank flow

Before building new links, it is worth understanding how PageRank currently flows through your site. A backlink audit identifies the inbound links you already have, classifies them by quality and topical relevance, surfaces unnatural patterns or risky links, and clarifies which pages on your site are receiving the bulk of the external link equity. This audit then directly informs the internal linking strategy that will distribute that equity to the pages that need it most.

Our step-by-step guide to conducting a backlink audit in 2026 walks through the practical workflow, and our companion piece on the best link building tools available in 2026 reviews the platforms that automate the data collection.

8. Common Misconceptions About PageRank

“PageRank is dead.”

This is the most common and most mistaken claim in modern SEO commentary. The Toolbar PageRank — the public 0–10 score visible in the Google Toolbar — was retired in 2016. The underlying algorithm was not. The 2024 API leak conclusively demonstrated that multiple PageRank variants run inside Google’s ranking systems today, and Google’s own ranking systems documentation explicitly identifies PageRank as a core ranking system that continues to be used. Anyone claiming in 2026 that PageRank no longer exists is operating from outdated information.

“DR is PageRank.”

Domain Rating (Ahrefs), Domain Authority (Moz), and Authority Score (Semrush) are third-party proxies. They estimate authority based on each vendor’s own crawl of the link graph and each vendor’s own modelling of how authority propagates. They are not Google’s actual PageRank values. They are useful as directional indicators, but they are not authoritative measurements of how Google views your site.

“All links pass the same amount of PageRank.”

This was approximately true in the original 1998 algorithm. It has not been true in production since at least 2004, when the reasonable surfer patent introduced the idea of weighting links by their probability of being clicked. Modern PageRank weights links by position, contextual relevance, anchor text, surrounding content, and behavioural data. A footer link from a high-DR site may pass less PageRank than an in-content link from a moderate-DR site that sits in a relevant editorial paragraph.

“More links is always better.”

This conflates two different concepts. More high-quality, topically relevant, naturally acquired links is generally better. More links of any kind is not. A profile that grows quickly through low-quality directory submissions, paid placements, or PBN links is more likely to attract algorithmic suppression than to improve rankings. Quality, topical relevance, and a natural acquisition pattern matter at least as much as raw count.

“You can sculpt PageRank with nofollow.”

This was a popular tactic between roughly 2005 and 2009, in which webmasters added nofollow to internal links to non-essential pages in an attempt to channel more PageRank to commercially important pages. In 2009, Matt Cutts confirmed publicly that PageRank sculpting in this fashion does not work — adding nofollow does not redirect PageRank to other links on the same page; the PageRank associated with the nofollow link is simply lost. Modern internal linking strategy works by adding or removing links, not by adding nofollow attributes to them.

“Once you have PageRank, it’s permanent.”

PageRank reflects the current state of the link graph. If your inbound links are removed, lose authority themselves, become nofollowed, or move to less prominent positions on their host pages, the PageRank they pass to your site will diminish. Authority is not a one-time deposit. It must be maintained, monitored, and where necessary actively defended through link reclamation, monitoring, and ongoing acquisition. Our guide to toxic backlinks and what to do about them covers the defensive side of authority management in detail.

9. Frequently Asked Questions

Does PageRank still exist in 2026?

Yes. The public Toolbar PageRank score was retired in 2016, but the underlying algorithm continues to operate inside Google’s core ranking systems. The 2024 internal API leak referenced multiple active PageRank variants — RawPageRank, PageRank2, PageRank_NS, FirstCoveragePageRank, ToolBarPageRank — running concurrently. Google’s own ranking systems documentation explicitly identifies PageRank as a continuing core system in 2026.

How can I check the PageRank of my page?

You cannot. Google has not provided public access to PageRank values since the Toolbar API was retired in April 2016. The closest available proxies are Ahrefs’ Domain Rating, Moz’s Domain Authority, and Semrush’s Authority Score. None of these is Google’s actual PageRank, but each provides a directional estimate of a domain’s link-based authority.

Is PageRank the same as Domain Authority?

No. PageRank is Google’s internal algorithm. Domain Authority is a proprietary metric created by Moz that estimates how well a site is likely to rank in Google. They are correlated but distinct. Domain Authority is calibrated against actual SERP performance using Moz’s own model; it is a prediction, not a measurement.

How important is PageRank compared to content quality?

Both are listed by Google representatives as among the top three ranking factors. In practice, the two reinforce each other: a page with strong content but no link equity will struggle to rank in competitive niches, and a page with strong link equity but weak content will under-rank competitors with both. Sustainable rankings require both, with the appropriate balance shifting somewhat by topic, query type, and competitive landscape.

Can a single high-authority backlink outweigh many low-authority links?

In principle, yes. PageRank is logarithmic, so a single link from a genuinely high-authority page passes substantially more equity than many links from low-authority pages combined. In practice, a healthy backlink profile typically combines a small number of strong editorial placements with a broader base of moderate-authority links to look natural to Google’s spam systems. Relying on a single backlink, however authoritative, also exposes you to risk if that link is ever removed.

Do nofollow links pass PageRank?

Since 2019, Google has treated nofollow as a hint rather than a strict directive — meaning that the system may choose to count a nofollow link in some cases. In practical terms, however, dofollow links remain substantially more valuable for PageRank flow, and a backlink profile dominated by nofollow placements will significantly underperform a comparable dofollow-heavy profile. A small fraction of nofollow links is natural; an entirely nofollow profile is not.

How long does a new backlink take to affect rankings?

Google must first crawl the linking page, recognise the new link, and incorporate it into the next iteration of its link graph computation. In practice, this typically takes between several days and several weeks. Significant ranking effects from new links typically appear over a longer horizon — weeks to months — as the link graph propagates and other signals adjust. PageRank is a long-game signal, not a short-term lever.

Do internal links pass PageRank the same way external links do?

The underlying mathematics is identical: PageRank flows through every link, internal or external, in proportion to the linking page’s authority and inversely with the number of outbound links. The strategic role of internal links, however, is different. External links bring new PageRank into your domain. Internal links distribute it. Both matter, and a strong site needs both.

10. Conclusion: PageRank as Strategy, Not Score

PageRank in 2026 is not a number you can look up. It is not a single algorithm. It is not, in any meaningful sense, the public score that Google retired a decade ago. What it is, instead, is a set of related ideas about how authority propagates through the structure of the web — ideas that have been refined, diversified, and embedded ever more deeply into Google’s ranking infrastructure with each passing year.

The right way for a serious practitioner to think about PageRank in 2026 is not as a score to be measured but as a strategic framework for understanding why some pages rank and others do not. Pages with many high-quality, topically relevant inbound links, embedded in prominent editorial positions on pages that real users actually engage with, accumulate authority. Pages without those signals do not. A site that distributes its accumulated authority efficiently through deliberate internal linking ranks more effectively than one that does not. A site that maintains a natural, diversified, authentic-looking backlink profile is treated as more trustworthy than one whose link patterns suggest manipulation.

None of these principles will surprise an experienced SEO. What is worth emphasising is that they are not folklore or supposition. They are derived directly from the architecture of an algorithm whose existence and continued operation are now well documented through Google’s own statements, through the 2024 API leak, and through twenty-five years of careful empirical observation by the broader search community. Building authority on these principles is the most reliable path to sustainable rankings in 2026.

Where to go next. If you are starting from foundational ground, our beginner’s guide to link building provides the broader framing within which this article sits. If you are ready to translate these principles into a campaign, our overview of fifteen link building strategies that genuinely work in 2026 sets out the tactical landscape. And if your immediate priority is to understand and improve the link equity that is already flowing through your site, our guide to internal linking strategy is the natural next step.
