How the ISE Organizes Beliefs: Topics, Synonyms, Strength, and Positivity
The ISE starts with a simple observation: human knowledge isn't missing — it's disorganized.
Everything we need already exists, scattered across blogs, books, Wikipedia pages, PDFs, academic papers, podcasts, speeches, debates, and social media arguments. But none of it is grouped, connected, or compared in any universal way — which means we keep re-arguing the same points forever.
The ISE fixes this by creating a shared system for organizing beliefs by topic and linking every argument, piece of evidence, and source to exactly where it belongs.
Here's how.
1. Sorting Beliefs and Arguments by Topic
Every belief gets assigned to one or more topics, subtopics, and sub-subtopics — just like organizing a library.
But instead of inventing a new taxonomy, the ISE integrates the ones humanity already built:
Frameworks We Build On
- Dewey Decimal System
- Library of Congress Subject Headings
- Wikipedia categories
- OpenAlex academic topics
- Medical Subject Headings (MeSH)
- UNESCO fields of science
- Google Knowledge Graph categories
Each one captures a different structure of the world. The ISE blends them so we can sort:
Topic → Subtopic → Belief
General → Specific → Extremely specific
Every belief receives a topic signature, a kind of idea-DNA.
Example for: "Electric cars are good for the environment"
- Technology → Transportation → Electric Vehicles
- Environment → Climate → Emissions
- Economics → Energy Markets
- Ethics → Intergenerational → Harm Reduction
A belief can sit in several domains at once. Belief sorting makes this automatic.
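A topic signature like the one above can be represented as a simple data structure. Here is a minimal sketch; the class and field names are illustrative assumptions, not the ISE's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """A belief plus its topic signature: paths from general to specific."""
    text: str
    # Each path runs Topic -> Subtopic -> ... -> most specific category.
    topic_paths: list[list[str]] = field(default_factory=list)

belief = Belief(
    text="Electric cars are good for the environment",
    topic_paths=[
        ["Technology", "Transportation", "Electric Vehicles"],
        ["Environment", "Climate", "Emissions"],
        ["Economics", "Energy Markets"],
        ["Ethics", "Intergenerational", "Harm Reduction"],
    ],
)

# A belief belongs to every topic appearing on any of its paths,
# which is what lets one belief sit in several domains at once.
topics = {t for path in belief.topic_paths for t in path}
print(sorted(topics))
```

Because membership is just "appears on some path," sorting a belief into multiple domains costs nothing extra.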
2. Topic Pages: The Home Base for Each Subject
Every topic gets a topic page that acts as the control panel for all beliefs in that area.
A topic page:
- Lists all related beliefs
- Sorts them from positive → negative
- Sorts them from general → specific
- Groups similar beliefs together
- Displays the best arguments
- Shows top evidence, books, people, and datasets
Example: "Electric Cars" topic page
Positive beliefs:
- "Electric cars dramatically reduce lifetime emissions" (+85)
- "Electric cars reduce air pollution in cities" (+68)
Neutral/descriptive:
- "Electric cars require lithium-ion batteries" (0)
- "Electric cars have higher upfront costs" (0)
Negative beliefs:
- "Electric cars cause harmful mining impacts" (-55)
- "Electric cars increase demand for rare-earth minerals" (-38)
Topic pages are the spine of the ISE, built from the principle of one page per topic.
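The ordering a topic page performs can be sketched as a single sort key over each belief's valence and strength scores. The tuples below reuse the example scores from above; the representation itself is an illustrative assumption:

```python
# Each belief: (text, valence, strength).
# Valence: positive > 0, neutral = 0, negative < 0.
beliefs = [
    ("Electric cars cause harmful mining impacts", -55, 70),
    ("Electric cars dramatically reduce lifetime emissions", 85, 80),
    ("Electric cars require lithium-ion batteries", 0, 10),
    ("Electric cars reduce air pollution in cities", 68, 60),
    ("Electric cars increase demand for rare-earth minerals", -38, 40),
    ("Electric cars have higher upfront costs", 0, 30),
]

# Sort positive -> neutral -> negative, then strong -> weak within each band.
topic_page = sorted(beliefs, key=lambda b: (-b[1], -b[2]))

for text, valence, strength in topic_page:
    sign = "+" if valence > 0 else ""
    print(f"({sign}{valence:>3}) {text}")
```

One sort key is enough to produce the positive-to-negative layout shown above, because valence dominates and strength only breaks ties within a band.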
3. One Page Per Belief (Across All Languages and Synonyms)
People express the same belief in a hundred different ways:
- "Trump is an idiot"
- "Trump is stupid"
- "Trump isn't very smart"
- "Trump has low cognitive ability"
- "Trump is the dumbest president ever"
Five sentences. One underlying belief.
The ISE merges them into a single belief page.
How the ISE Knows They're the Same Belief
A same-topic score (0-100%) measures whether two statements refer to the same underlying claim using:
- Entity (same person or thing)
- Attribute (intelligence vs. competence vs. morality)
- Sentiment (positive or negative)
- Strength (mild vs. extreme wording)
- Synonyms / antonyms
- Negation ("not smart" → "dumb")
Examples:
| Belief A | Belief B | Same-Topic Score | Notes |
|---|---|---|---|
| "Trump is an idiot" | "Trump is a moron" | 100% | Same claim, same strength |
| "Trump is the dumbest president ever" | "Trump isn't very smart" | 100% | Same claim, different intensity |
| "Trump makes bad policy decisions" | "Trump is stupid" | 60% | Different attribute |
| "Biden is incompetent" | "Trump is an idiot" | 10% | Different entity |
This ensures each belief has one canonical home through belief sorting.
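One way to sketch the same-topic comparison is as a weighted combination of the component checks above. Everything here, the weights, the pre-extracted features, and the helper itself, is an illustrative assumption; a production system would likely use semantic embeddings rather than hand-coded features:

```python
def same_topic_score(a: dict, b: dict) -> float:
    """Score 0-100: do two statements make the same underlying claim?

    Each statement is assumed pre-analyzed into features:
    entity, attribute, sentiment ('pos'/'neg'). Weights are guesses.
    """
    if a["entity"] != b["entity"]:
        return 10.0  # different subject: almost certainly a different claim
    score = 40.0  # same entity
    if a["attribute"] == b["attribute"]:
        score += 40.0  # same attribute (intelligence vs. policy vs. morality)
    if a["sentiment"] == b["sentiment"]:
        score += 20.0  # same direction (negation flips this)
    return score

idiot  = {"entity": "Trump", "attribute": "intelligence", "sentiment": "neg"}
moron  = {"entity": "Trump", "attribute": "intelligence", "sentiment": "neg"}
policy = {"entity": "Trump", "attribute": "policy", "sentiment": "neg"}
biden  = {"entity": "Biden", "attribute": "competence", "sentiment": "neg"}

print(same_topic_score(idiot, moron))   # 100.0
print(same_topic_score(idiot, policy))  # 60.0
print(same_topic_score(idiot, biden))   # 10.0
```

Note that strength deliberately does not appear in the key comparison: "dumbest ever" and "not very smart" can share a belief page while differing in strength score, which the next section handles separately.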
4. Strength Score: How Bold Is the Claim?
Every belief expression also gets a strength score (0-100). This captures how intense the claim is — not whether it's true.
Examples:
| Statement | Strength |
|---|---|
| "Trump is not very smart" | 20 |
| "Trump is dumb" | 40 |
| "Trump is extremely stupid" | 75 |
| "Trump is the dumbest president ever" | 100 |
Strength comes from:
- Adverbs ("very," "extremely")
- Comparatives ("worse," "better")
- Superlatives ("best," "dumbest")
- Absolutes ("always," "never")
- Hyperbole and exaggerated phrasing
Strength is not a truth score. It just helps organize beliefs along a meaningful gradient.
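A crude rule-based scorer over the markers listed above might look like this. The word lists, point values, and downtoner handling are all illustrative assumptions, tuned only to reproduce the example table:

```python
# Marker lists are illustrative, not the ISE's actual lexicon.
SUPERLATIVES_AND_ABSOLUTES = {"best", "worst", "dumbest", "smartest",
                              "ever", "always", "never"}
INTENSIFIERS = {"very", "extremely", "incredibly", "totally"}

def strength_score(statement: str) -> int:
    """Score 0-100 for how bold the wording is (not whether it's true)."""
    lowered = statement.lower()
    words = lowered.split()
    score = 40  # a plain, unmodified claim ("Trump is dumb")
    if any(w in SUPERLATIVES_AND_ABSOLUTES for w in words):
        score = 100  # superlatives and absolutes make maximal claims
    elif any(w in INTENSIFIERS for w in words):
        score = 75   # intensifying adverbs
    if "not very" in lowered or "n't very" in lowered:
        score = 20   # downtoners weaken the claim below baseline
    return score

for s in ["Trump is not very smart", "Trump is dumb",
          "Trump is extremely stupid", "Trump is the dumbest president ever"]:
    print(s, "->", strength_score(s))
```

The point of the sketch is the shape of the function, a lexical ladder from hedged to absolute wording, not the particular numbers.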
5. Positivity / Negativity Score
Every belief also gets a valence score, which measures whether the claim is:
- Positive: "Electric cars help the planet"
- Negative: "Electric cars harm the planet"
- Neutral: "Electric cars use lithium batteries"
Topic pages then sort beliefs:
Positive → Neutral → Negative
Strong → Moderate → Weak
This produces a clean, intuitive map of where people disagree.
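A toy valence classifier over help/harm vocabulary gives the flavor; the word lists are illustrative assumptions, and a real system would use a sentiment model rather than keyword matching:

```python
# Illustrative vocabularies; real valence scoring needs far more than this.
POSITIVE = {"help", "helps", "improve", "improves", "benefit", "good"}
NEGATIVE = {"harm", "harms", "hurt", "hurts", "damage", "bad"}

def valence(statement: str) -> int:
    """Rough valence: 1 = positive claim, -1 = negative, 0 = neutral/descriptive."""
    words = set(statement.lower().replace(".", "").split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return 1
    if neg > pos:
        return -1
    return 0

print(valence("Electric cars help the planet"))        # 1
print(valence("Electric cars harm the planet"))        # -1
print(valence("Electric cars use lithium batteries"))  # 0
```

Combined with the strength score, this yields the two axes a topic page sorts on.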
6. Linking Evidence, Data, Books, People, and Arguments
Once a belief is deduplicated, scored, and sorted, the ISE attaches everything connected to that belief:
Evidence, tiered by quality:
- Tier 1: Peer-reviewed studies, official data
- Tier 2: Expert analysis, institutional reports
- Tier 3: Investigative journalism, surveys
- Tier 4: Opinion pieces, anecdotal claims
Stakeholder analysis:
- Who benefits and who loses
- Shared vs. conflicting interests
Underlying assumptions:
- What must be true for the belief to hold
- Required premises for accepting or rejecting it
Cost-benefit analysis:
- Quantified benefits and costs
- Expected values based on likelihood and impact
- Short-term vs. long-term effects
Legal context:
- Local, state, federal, and international laws supporting or contradicting the belief
Bias analysis:
- Biases affecting supporters and opponents
Everything connects through linkage scores and is ranked by ReasonRank.
This is what turns scattered arguments into a navigable idea-map.
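The attachments a belief page carries could be modeled as a simple record. The tier numbers mirror the list above; the class names, fields, and ranking rule are otherwise illustrative assumptions (in particular, this sorts by tier then linkage, a stand-in for the actual ReasonRank computation):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    claim: str
    source: str
    tier: int             # 1 = peer-reviewed/official ... 4 = opinion/anecdote
    linkage_score: float  # how directly this bears on the belief (0-1)

@dataclass
class BeliefPage:
    belief: str
    evidence: list[Evidence] = field(default_factory=list)

    def ranked_evidence(self) -> list[Evidence]:
        """Best tier first; within a tier, strongest linkage first."""
        return sorted(self.evidence, key=lambda e: (e.tier, -e.linkage_score))

page = BeliefPage("Electric cars dramatically reduce lifetime emissions")
page.evidence += [
    Evidence("Lifecycle CO2 is lower than ICE cars", "opinion blog", 4, 0.6),
    Evidence("Lifecycle CO2 is lower than ICE cars", "peer-reviewed LCA study", 1, 0.9),
    Evidence("Grid mix affects EV emissions", "institutional report", 2, 0.7),
]
for e in page.ranked_evidence():
    print(f"Tier {e.tier}: {e.source}")
```

Deduplication means the same underlying claim ("lifecycle CO2 is lower") can appear once per source, ranked by the quality of that source rather than repeated as separate arguments.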
7. Why This Matters
Right now, beliefs exist in:
- Tweets
- Comment sections
- Reddit threads
- Blogs
- Books
- Academic papers
- News articles
- Video essays
Most arguments are duplicated, unindexed, isolated, and forgotten — so every generation rebuilds the same debates from scratch.
The ISE changes that:
- One page per belief
- One page per topic
- One place for all arguments
- One shared structure
- One system that updates conclusions as evidence changes
The ISE is the first system that makes human reasoning organized, not chaotic.
It lets us stop abandoning hard questions halfway through out of mental fatigue, and finally proportion our beliefs to the actual strength of the reasoning.
This is how we stop repeating the same debates forever.
This is how we reach good conclusions without burning out.
This is how thinking becomes cumulative instead of exhausting.
Contribute
Contact me to help build belief taxonomies, contribute to topic pages, or develop semantic matching algorithms.
View the technical framework on GitHub to see the underlying architecture.
This organizational system makes one page per topic possible at scale. Through belief sorting and ReasonRank, it transforms scattered arguments into structured knowledge whose conclusions update automatically as evidence changes.