Page history last edited by Mike 2 weeks, 6 days ago


ReasonRank: Algorithms and the Promotion of Good Ideas

Google solved one of the hardest problems in human history: too much information, not enough signal. Before PageRank, the internet was a shouting match. After PageRank, the best pages rose to the top based on who linked to them and why.

We have the same problem with ideas. The internet hasn't given us better thinking. It's given us louder thinking. Viral beats valid. Confident beats correct. The most emotionally satisfying argument wins, not the best-supported one.

ReasonRank changes that. It applies the same logic as PageRank, but to arguments instead of web pages. Just as a page earns authority from the quality of links pointing to it, an idea earns its score from the quality of reasons supporting it.


How the Scoring System Works

ReasonRank doesn't produce a single number from nowhere. It synthesizes a stack of interdependent scores, each measuring a different dimension of an argument's quality. Think of them as successive filters that a claim has to pass through before it earns weight in the final ledger.

Fundamental Scores

  • Truth Scores are the foundation. They combine two independent checks: whether the logic holds (Logical Validity) and whether the facts check out (Verification). An argument can fail on either count, and the system catches both.
  • Linkage Scores test the connection between evidence and conclusion. A lot of bad reasoning fails here: the facts are real, but they don't actually prove what the arguer claims. Linkage asks whether the evidence actually supports this specific conclusion, or just points in the same general direction.
  • Importance Scores separate truth from relevance. Not every true statement matters equally to a given conclusion. Without this filter, a mountain of minor correct points can bury one decisive counterargument simply by outnumbering it.
  • Evidence Scores evaluate the source material itself. A peer-reviewed meta-analysis and a confident tweet aren't equally reliable, and the system treats them accordingly, using a tiered quality framework.
  • Cost or Benefit Likelihood Scores apply specifically to the Automated Cost-Benefit Analysis subsystem. They calculate the probability that a projected cost or benefit will actually occur, based on the strength of the argument trees debating it rather than a single analyst's estimate.
  • Objective Criteria Scores measure performance against standards that don't depend on values or ideology — measurable benchmarks that both sides of a debate can agree to in advance.
  • Confidence Stability Scores track how settled a score is as new arguments arrive. A high score that's been stable under sustained scrutiny means something different from one that bounces around every time a new argument enters. Stability is itself evidence of robustness.
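As a rough illustration of how the fundamental scores might compose, here is a minimal sketch. The page doesn't specify the actual formulas, so the combinations below (weakest-link for truth, multiplicative discounting for linkage and importance) are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Premise:
    logical_validity: float  # does the logic hold? (0..1)
    verification: float      # do the facts check out? (0..1)
    linkage: float           # does the evidence support THIS conclusion? (0..1)
    importance: float        # how much does this point matter here? (0..1)

def truth_score(p: Premise) -> float:
    # An argument can fail on either count, so the weaker check dominates.
    return min(p.logical_validity, p.verification)

def weighted_contribution(p: Premise) -> float:
    # Truth alone is not enough: the premise must also connect to the
    # conclusion (linkage) and matter to it (importance). A pile of true
    # but low-importance points therefore contributes very little.
    return truth_score(p) * p.linkage * p.importance
```

Note how a premise with perfect logic but unverified facts scores no higher than its verification allows, capturing the "fail on either count" behavior described above.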

Administrative Scores

  • Media Truth Scores and Media Genre and Style Scores flag when a source is editorializing, sensationalizing, or misleading, even when the underlying facts are technically accurate. The genre of a source (opinion, investigative journalism, peer-reviewed study) carries information about reliability that raw fact-checking misses.
  • Topic Overlap Scores prevent the same basic point from inflating a score just because ten people said it slightly differently. Repetition is not confirmation. This filter also powers the "Related Pages" feature, surfacing the highest-scoring pages on adjacent topics.
  • Belief Equivalency Scores identify when two differently worded beliefs are making the same underlying claim, so the platform can link them and surface the better-argued version instead of running parallel, redundant debates.
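The Topic Overlap and Belief Equivalency filters can be sketched together as a deduplication pass that keeps only the best-argued instance of each underlying claim. The `same_claim` predicate here is a hypothetical stand-in; whatever the platform actually uses to detect equivalent beliefs (embeddings, entailment checks) is out of scope for this sketch:

```python
from typing import Callable

def deduplicate_points(points: list[tuple[str, float]],
                       same_claim: Callable[[str, str], bool]) -> list[tuple[str, float]]:
    """Collapse restatements of the same point, keeping the highest-scoring one.

    Repetition is not confirmation: ten slightly different phrasings of one
    point should count once, at the strength of the best-argued version.
    """
    kept: list[tuple[str, float]] = []
    # Visit strongest points first so each claim is represented by its
    # best-argued phrasing.
    for text, score in sorted(points, key=lambda p: -p[1]):
        if not any(same_claim(text, seen) for seen, _ in kept):
            kept.append((text, score))
    return kept
```

With this shape, ten restatements of one point collapse to a single entry, while genuinely distinct points survive untouched.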

 

All of these feed upward into Argument Scores, which are built recursively from Sub-Argument Scores — the engine that pulls everything together. When any foundation shifts, every conclusion built on it updates automatically.
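That recursive engine can be sketched in a few lines. The weighting below (a claim's leaf score scaled by the balance of supporting versus opposing sub-argument strength) is an assumption for illustration; the page does not give the real formula:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    base_score: float  # leaf-level score for this claim on its own
    pro: list["Argument"] = field(default_factory=list)
    con: list["Argument"] = field(default_factory=list)

def reason_rank(arg: Argument) -> float:
    """Score a claim from the scores of the arguments beneath it.

    Supporting sub-arguments add strength, opposing ones drain it, and
    each sub-argument's own strength is computed the same way — so a
    change at any leaf propagates upward on the next evaluation.
    """
    support = sum(reason_rank(a) for a in arg.pro)
    attack = sum(reason_rank(a) for a in arg.con)
    total = support + attack
    if total == 0:
        return arg.base_score  # no sub-arguments: the leaf score stands
    # Blend the leaf score with the balance of sub-argument strength.
    return arg.base_score * (support / total)
```

Re-running `reason_rank` after any leaf changes is what makes every conclusion built on a shifted foundation update automatically.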


Why This Is Different From a Better Forum

Most online debate tools optimize for engagement. ReasonRank optimizes for accuracy. The difference is that engagement rewards emotional resonance and tribal confirmation, while accuracy rewards arguments that survive scrutiny from people who disagree with you.

Misinformation doesn't have to be banned or suppressed in this system. It just loses, on the merits, in public, where everyone can see exactly which arguments failed and why. A claim backed by a debunked study, a weak analogy, and a logical fallacy accumulates three separate score penalties and sinks accordingly. The reasoning is visible. Anyone can challenge the scoring. No black box.

That's the infrastructure missing from every political debate, policy discussion, and public controversy right now: not a smarter moderator, but a shared system for asking whose arguments are actually better-supported and showing the math.


 
