
Soulbound Reputation and Identity: Fixing the Internet's Trust Problem

January 26, 2026 · 11 min read · By RNWY
soulbound reputation · verifiable reputation · soulbound tokens · fake accounts · online trust · pseudonymous accountability · digital identity · misinformation

The Edelman Trust Barometer 2025 surveyed 33,000 people across 28 countries and found that 70% believe government leaders purposely mislead them—up 11 points since 2021. The same pattern holds for business leaders and journalists. Media has become the least trusted institution globally.

The Thales 2024 Digital Trust Index delivered perhaps the most damning verdict: only 6% of people globally trust social media companies with their personal data. In the UK, that drops to 3%. In Japan, 2%.

This isn't a content moderation problem. It's an infrastructure problem. The internet was built without a trust layer, and decades of patches haven't fixed it. Soulbound tokens, non-transferable blockchain credentials tied to a single "soul," offer a third path between mandatory real names and disposable anonymity: soulbound reputation, where trust is earned rather than bought and accountability follows identity across platforms.

The Scale of the Fake Account Epidemic

Facebook has removed 27.67 billion fake accounts since 2017—roughly 3.5 times Earth's entire population. In Q1 2024 alone, Meta removed 631 million fake accounts. Despite this aggressive enforcement, 4-5% of monthly active users remain fake at any given time.

Twitter/X's problem is contested but severe. While the company officially claims less than 5% of daily users are spam or bots, Washington University researchers found bot prevalence ranging from 25% to 68% depending on the topics discussed. Dan Woods, a former CIA cyber-operations officer now at F5 Labs, estimates over 80% of Twitter accounts could be bots. A 2025 study published in Nature warned that "AI-powered botnets have emerged, using ChatGPT models to generate human-like content, closing the gap between bot and human."

LinkedIn blocked 80.6 million fake accounts at registration in H2 2024 alone, up from 70.1 million in the prior six months.

The Indiana University Observatory on Social Media found that bots making up just 6% of Twitter accounts spread 31% of low-credibility information. During COVID-19, research published in the Journal of Medical Internet Research found that up to 66% of known bots were discussing pandemic-related topics.

Platform-level enforcement hasn't solved this because it can't. Delete an account, and its operator simply creates another. The accounts are disposable. The identities are disposable. There's no continuity to defend.

The Black Market for Trust

The aged account marketplace operates openly, not in shadowy corners of the dark web.

Fresh Twitter accounts sell for $0.20-$0.30. Accounts from 2012-2019 command $50-$500+. Twitter Gold "converted" accounts—those that appear legitimately verified—trade for $1,200-$2,000 according to CloudSEK research. Reddit accounts follow similar patterns: a 6-month-old account with 1K karma costs roughly $10, while a 3-year-old account with 40K karma fetches $100-$400.

NATO StratCom COE researchers were struck by "the openness of this industry. This is no shadowy underworld, it's an open and accessible marketplace." They found Google and Bing accept advertising from manipulation service providers.

The Tech Transparency Project found over 100 Facebook groups with 531,000+ members trading business manager accounts. Many come with linked credit cards (indicating theft), are sold in bulk for cryptocurrency, and include accounts approved to run political ads.

The consequences are severe. In the 2020 Twitter Bitcoin hack, attackers who gained access to verified accounts of Barack Obama, Joe Biden, Elon Musk, Bill Gates, and Apple stole $121,000 in Bitcoin from 400+ victims. In September 2023, Ethereum co-founder Vitalik Buterin's account was hijacked for just 20 minutes—long enough for attackers to steal $500,000 through a fake NFT drop.

The German Marshall Fund describes this as "information laundering": misleading information is placed, layered through multiple accounts, and integrated into trusted news sources, in a direct parallel to money-laundering techniques. It is exactly the dynamic soulbound reputation is designed to break.

The FTC reports that $2.7 billion was lost to social media scams between 2021 and 2023, with 1 in 4 fraud victims saying the scam started on social media.

The False Dichotomy: Surveillance vs. Anonymity

The typical response to online trust problems falls into two camps.

The surveillance camp says: require real names, verify identities, make everyone accountable through exposure. Facebook's real-name policy, South Korea's failed internet real-name law, various "digital ID" proposals.

The anonymity camp says: privacy is paramount, pseudonymity enables free speech, any identity requirement will be weaponized against dissidents and minorities.

Both camps have legitimate concerns. Both miss the point.

Research on anonymity shows it enables bad behavior. Psychologist John Suler's foundational paper on the "online disinhibition effect" identified six factors that loosen behavioral inhibitions online, with anonymity chief among them. A 2022 experimental study confirmed that participants in anonymous conditions trolled significantly more than those in identifiable conditions.

But research on surveillance shows it chills legitimate speech. Jonathon Penney's landmark study found that after the 2013 Snowden revelations, Wikipedia traffic to terrorism-related articles dropped 30% immediately—and monthly view growth reversed to decline long-term. PEN America surveys found 28% of American writers curtailed social media activities due to surveillance concerns, with 16% avoiding particular topics entirely.

Research in the Journal of Human Rights Practice found that in Uganda and Zimbabwe, "knowing—or suspecting—that State surveillance exists may itself be sufficient to create a chilling effect."

Facebook's real-name policy disproportionately harms vulnerable groups—trans people, abuse survivors, Native Americans, and others with legitimate reasons to separate their online and offline identities.

The binary choice between "everyone knows who you are" and "anyone can be anyone" serves neither privacy nor accountability. It's a false dichotomy.

The Credibility Gap for Legitimate Pseudonymous Voices

The current system punishes exactly the people it should protect.

PlanB (@100trillionUSD), the pseudonymous Dutch institutional investor who created Bitcoin's influential Stock-to-Flow valuation model, claims to manage roughly $100 billion in assets. He stays anonymous because "Bitcoin is not the first thing you think about when managing pension money"—his employer could face consequences if his advocacy were linked to his professional role. Despite 1.9 million followers and work that bridged traditional finance and crypto, he cannot prove his claimed 25+ years of financial experience.

Satoshi Nakamoto created a $1.5+ trillion technology while maintaining complete anonymity. The approximately 1.1 million BTC sitting untouched since 2010 provides a unique "proof of restraint" credibility signal—but that's unavailable to ordinary pseudonymous contributors.

Whistleblowers face an impossible bind. A Bradley University study found that nearly two-thirds of whistleblowers experienced retaliation. The National Whistleblower Center emphasizes anonymity as critical protection. But as the House Office of the Whistleblower Ombuds acknowledges, "Your credibility is your greatest asset"—and anonymous reports receive lower investigative priority.

Security researchers face similar traps. The disclose.io research-threats repository documents cases of researchers arrested, raided, or sued for legitimate vulnerability disclosure. Many now work pseudonymously—but can't build public track records.

These aren't edge cases. They represent a structural failure: the internet has no mechanism for building verifiable reputation without revealing identity.

Why Corrections Build Rather Than Destroy Credibility

Here's a counterintuitive finding that reshapes how we should think about trust infrastructure.

Research from Dartmouth and Cambridge involving 2,862 participants found that corrections increase belief accuracy dramatically (Cohen's d = 0.91) while only modestly decreasing trust (Cohen's d = 0.12). The accuracy gain is approximately 7.5 times larger than the trust loss.

Most importantly: self-corrections are MORE effective than third-party corrections at improving accuracy.

As Brendan Nyhan of Dartmouth told Nieman Lab: "You're going to take the hit anyway, but the audience will be less well-informed if you don't admit error yourself."

The pratfall effect, first documented by psychologist Elliot Aronson in 1966, explains why: highly competent individuals become MORE likable after making a minor mistake, because errors "humanize" superior performers and make them more relatable.

Columbia Journalism Dean Jelani Cobb crystallized the principle: "The definition of an untrustworthy news organization is one that has never deemed it necessary to issue a correction."

This has direct implications for identity infrastructure. A system that hides mistakes encourages cover-ups. A system that makes corrections visible—as part of a permanent, verifiable record—creates incentives for integrity.

The formula: demonstrated competence + visible mistakes + graceful correction = increased trust.

Soulbound Tokens: A Third Path

Vitalik Buterin, E. Glen Weyl, and Puja Ohlhaver proposed soulbound tokens in their May 2022 paper "Decentralized Society: Finding Web3's Soul." The term comes from World of Warcraft, where certain items become permanently bound to a character upon pickup—impossible to trade.

ERC-5192, now in Final status, implements this as a minimal extension to ERC-721. A soulbound token, once minted to an address, cannot be transferred: the locked() function returns true for the token, and any transfer attempt reverts.
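For a verifier, confirming non-transferability is a single view call. Below is a minimal sketch in TypeScript using ethers.js v6 with a hand-written fragment of the ERC-5192 interface; the RPC URL and contract address are placeholders, not real deployments.

```typescript
import { ethers } from "ethers";

// Hand-written fragment of the ERC-5192 interface (per the EIP).
const ERC5192_ABI = [
  "function locked(uint256 tokenId) view returns (bool)",
  "function ownerOf(uint256 tokenId) view returns (address)",
];

// Returns true if the token is locked (soulbound) per ERC-5192.
async function isSoulbound(
  rpcUrl: string, // e.g. a Base RPC endpoint (placeholder)
  sbtAddress: string, // hypothetical deployed SBT contract address
  tokenId: bigint
): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const sbt = new ethers.Contract(sbtAddress, ERC5192_ABI, provider);

  // ERC-5192: locked() returns true for a non-transferable token,
  // and transfer attempts on a locked token revert.
  return await sbt.locked(tokenId);
}
```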

This simple mechanism addresses the core infrastructure problem:

Non-transferability prevents reputation laundering. An aged account market can't exist if accounts can't be sold. Soulbound reputation becomes meaningful precisely because it can't be purchased.

Permanent records create accountability. Actions—including corrections and mistakes—become part of a visible history that follows the identity. Bad actors can't escape their record by creating new accounts.

Pseudonymity is preserved. The system doesn't require revealing real-world identity. It requires that a consistent identity accumulates history over time.
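As a toy illustration of those three properties working together, here is an in-memory sketch (hypothetical types, no blockchain): the identifier persists, the history only grows, and nothing in it requires a legal name.

```typescript
// Append-only record keyed by a persistent pseudonym. Entries can be
// added but never edited or removed, so corrections accumulate next to
// the original claims instead of replacing them.
type Entry = {
  at: number; // Unix timestamp
  kind: "claim" | "correction" | "attestation";
  body: string;
  refersTo?: number; // index of the entry being corrected, if any
};

class ReputationRecord {
  private readonly entries: Entry[] = [];

  // soulId is a pseudonym, not a legal identity.
  constructor(public readonly soulId: string) {}

  append(entry: Entry): number {
    this.entries.push(entry); // no update or delete path exists
    return this.entries.length - 1;
  }

  history(): Entry[] {
    return [...this.entries]; // the full record stays visible
  }
}
```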

Otterspace, backed by Coinbase Ventures with $3.7 million in funding, provides non-transferable badge protocols for DAOs. Galxe has become Web3's largest onchain distribution platform with 90+ million monthly transactions, using soulbound credentials for reputation.

The concept isn't new—David Chaum proposed anonymous credentials in 1985. What's new is the infrastructure to implement it at scale.

Why Base

Coinbase's Base blockchain, launched August 2023 as an Ethereum Layer 2 optimistic rollup, provides the practical infrastructure for deploying soulbound identity.

Built on the OP Stack in collaboration with Optimism, Base offers EVM equivalence with fees measured in cents rather than dollars—making SBT minting economically viable at scale. More importantly, Base provides a bridge to Coinbase's 110+ million verified users.

Basenames, launched in mid-2024 and already past 450,000 registrations, demonstrates the appetite for human-readable onchain identity. The ENS-powered naming system integrates with the Ethereum Attestation Service (EAS) for standardized reputation attestations.
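An attestation write based on the EAS SDK's attest flow might look roughly like the sketch below. The contract address, schema UID, and the score/context schema are placeholder assumptions, and the exact call shapes should be checked against the SDK documentation.

```typescript
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

// Writes a non-revocable reputation attestation to a pseudonymous address.
async function attestReputation(
  signer: ethers.Signer,
  easAddress: string, // EAS contract on the target chain (placeholder)
  schemaUid: string, // a pre-registered schema UID (placeholder)
  recipient: string // the pseudonymous identity's address
): Promise<string> {
  const eas = new EAS(easAddress);
  eas.connect(signer);

  // Hypothetical schema: a small score plus free-text context.
  const encoder = new SchemaEncoder("uint8 score, string context");
  const data = encoder.encodeData([
    { name: "score", value: 5, type: "uint8" },
    { name: "context", value: "accurate-correction", type: "string" },
  ]);

  const tx = await eas.attest({
    schema: schemaUid,
    data: { recipient, expirationTime: 0n, revocable: false, data },
  });
  return await tx.wait(); // resolves to the new attestation's UID
}
```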

W3C Decentralized Identifiers (DIDs) and Verifiable Credentials provide interoperability standards. An identity doesn't have to be locked to one platform—credentials can be verified across systems.
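A Verifiable Credential is structured JSON plus a proof. The shape below follows the W3C VC 1.1 data model; the DIDs, the "ReputationCredential" type, and the proof values are hypothetical placeholders.

```typescript
// Shape per the W3C Verifiable Credentials 1.1 data model.
const credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "ReputationCredential"], // second type is illustrative
  issuer: "did:example:issuer123",
  issuanceDate: "2026-01-26T00:00:00Z",
  credentialSubject: {
    id: "did:example:pseudonym456", // a DID, not a legal identity
    yearsOfTrackRecord: 5,
    visibleCorrections: 2, // corrections as an integrity signal
  },
  proof: {
    type: "Ed25519Signature2020",
    created: "2026-01-26T00:00:00Z",
    verificationMethod: "did:example:issuer123#key-1",
    proofValue: "z...", // placeholder, left elided
  },
};
```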

Vitalik Buterin's analysis of proof-of-personhood approaches notes the tradeoffs between different verification mechanisms. Soulbound tokens don't prove humanity—they prove continuity. That's a different property, but for many trust applications, it's the one that matters.

The Regulatory Tailwind

The EU's Digital Services Act, fully effective since February 17, 2024, mandates Know Your Business Customer (KYBC) requirements for online marketplaces.

More significant is the European Digital Identity Wallet (EUDI) framework under eIDAS 2.0. By the end of 2026, every EU member state must provide citizens with digital identity wallets. By 2027, banks, payment services, and telecom and healthcare providers must accept them.

The EUDI framework's key feature is selective disclosure: prove you're over 18 without revealing your birthdate, verify employment without exposing salary. This privacy-preserving verification matches the architectural approach of soulbound tokens.
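The underlying move can be shown in a few lines. This is a deliberate simplification, not the actual EUDI wallet protocol (which uses formats such as SD-JWT): the issuer checks the birthdate privately and signs only the derived predicate, so the verifier never sees the date itself.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer key pair (in reality, a government or accredited issuer's key).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Issuer side: the birthdate never leaves this scope.
const birthdate = new Date("1990-05-01");
const over18 =
  Date.now() - birthdate.getTime() >= 18 * 365.25 * 24 * 3600 * 1000;
const claim = Buffer.from(
  JSON.stringify({ subject: "did:example:pseudonym456", over18 })
);
const signature = sign(null, claim, privateKey); // ed25519 takes no digest

// Verifier side: learns the predicate and its provenance, nothing more.
const ok = verify(null, claim, publicKey, signature);
console.log(ok, JSON.parse(claim.toString())); // true { subject: ..., over18: true }
```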

Security technologist Bruce Schneier put it simply: "The problem isn't anonymity; it's accountability. If someone isn't accountable, then knowing his name doesn't help."

The regulatory direction validates the thesis: the binary between full identification and complete anonymity is dissolving. What's emerging is verifiable pseudonymity—identity that proves continuity and accumulates history without requiring exposure.

What This Means in Practice

Consider three scenarios:

The Vanishing Act: An account spreads misinformation for weeks, builds engagement, then deletes before investigation completes. Under current systems, they reappear fresh tomorrow. With soulbound identity, starting over means starting from zero—the reputation cost of bad behavior is real.

The Anonymous Expert: A researcher in an authoritarian country wants to publish findings that contradict their government's position. They need anonymity for safety but credibility for impact. With verifiable pseudonymity, they can build a track record over time without revealing who they are.

The Correction: A commentator makes a prediction that turns out wrong. Under current systems, they can delete and pretend it never happened. With permanent records, the correction becomes visible—and as the research shows, visible corrections actually enhance credibility for those who make them.

None of these require knowing anyone's real name. They require soulbound reputation—identity that persists, history that's visible, and trust that cannot be bought or transferred.

The Infrastructure We Need

The internet's trust collapse isn't fixable by better content moderation, smarter AI detection, or more aggressive platform enforcement. Those are patches on a broken foundation.

The foundation needs to change. Identity infrastructure needs to support four properties, sketched as a minimal interface below:

  • Persistence: identities that accumulate history over time
  • Non-transferability: reputation that can't be bought or sold
  • Selective disclosure: proving credentials without revealing identity
  • Visible corrections: making the record of mistakes an integrity signal rather than a liability
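Rolled into one hypothetical interface, the four properties might look like this; every name here is illustrative.

```typescript
type ReputationEvent = {
  at: Date;
  kind: "claim" | "correction" | "attestation";
  detail: string;
};

type Proof = { predicate: string; signature: string }; // placeholder shape

interface SoulboundIdentity {
  readonly id: string; // persistence: one pseudonym, accumulating history
  appendEvent(event: ReputationEvent): void; // the record only grows
  // Non-transferability: no transfer or sell method exists by construction.
  proveCredential(predicate: string): Proof; // selective disclosure
  corrections(): ReputationEvent[]; // visible corrections as an integrity signal
}
```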

Soulbound reputation on credibly neutral infrastructure provides the technical mechanism. The question is whether the ecosystem builds it before the next wave of fraud makes trust impossible to recover.

UNESCO/Ipsos research found that 87% of internet users have encountered disinformation online, and 72% of respondents across 25 nations call false information a "major threat" to their country.

The problem is clear. The infrastructure exists. What remains is the work of building it.


RNWY is building soulbound reputation infrastructure on Base. Same door for everyone—humans, AI agents, pseudonymous experts. Learn more at rnwy.com/vision.