Facebook and the Erosion of Trust
Why This Story Matters
In the landscape of institutions that have reshaped how people receive information, few have had as profound -- or as corrosive -- an effect as Facebook. What began as a college social network grew, within little more than a decade, into one of the main channels through which billions of people encounter news. Along the way, it disrupted the economics of journalism, became a vector for political manipulation, and repeatedly demonstrated a pattern: prioritise growth, apologise later, and change only when compelled by regulators or public outcry.
This is not a story about technology failing. It is a story about choices -- corporate decisions made with full knowledge of their consequences, documented in internal research, congressional testimony, and court filings. Understanding those choices matters for anyone who cares about the quality of public information, because Facebook did not merely reflect the decline of trusted media. It accelerated it.
The trajectory from a Harvard dorm room project to a company fined $5 billion by the Federal Trade Commission and sued by 42 state attorneys general traces an arc worth examining in detail. Not because the outcome was inevitable, but because at every decision point, faster growth was chosen over user protection, advertiser interests were placed above editorial integrity, and the company's own internal warnings were overridden by commercial imperatives.
FaceMash and the Origin Story
The mythology of Facebook's founding has been told so many times that the uncomfortable details have been sanded down. But the origin matters, because it established a pattern that would persist.
In October 2003, a Harvard sophomore named Mark Zuckerberg created FaceMash, a website that placed photos of female students side by side and invited visitors to judge who was more attractive. The photos were scraped from Harvard's online face books -- the student directories maintained by individual residential houses -- without the knowledge or consent of the people depicted. Within its first four hours of operation, FaceMash attracted 450 visitors and 22,000 page views. Zuckerberg was brought before Harvard's Administrative Board and charged with breaching security, violating copyrights, and violating individual privacy.
He was not expelled. And the lesson he appears to have taken was not about the importance of consent, but about the power of a platform that could generate that kind of engagement. The incident also introduced a dynamic that would recur throughout Facebook's history: when confronted with the consequences of aggressive data practices, Zuckerberg's response was to express regret while continuing to build on the same foundational logic.
Four months later, in February 2004, Zuckerberg launched TheFacebook.com from his dorm room in Kirkland House. It spread to other Ivy League schools within weeks, and to most universities in the United States and Canada by the end of the year. By December 2004, it had one million active users. In 2006, it opened to anyone aged thirteen or older with a valid email address. By 2012, when it filed for its initial public offering, the company reported 845 million monthly active users and $3.7 billion in annual revenue, the great majority of it from advertising.
The growth was extraordinary. But it came at a cost that would take years to become fully visible -- because the same instinct that led a nineteen-year-old to scrape student photos without consent would scale into a corporate culture that treated user data as a resource to be extracted, not a trust to be protected.
The Experiment Machine
The scale of Facebook's platform meant that even small changes to the News Feed algorithm could shape the emotional states of millions of people. In 2014, the world learned that Facebook had been doing exactly that -- deliberately.
In January 2012, Facebook's data science team, in collaboration with researchers from Cornell University and the University of California, San Francisco, conducted an experiment on 689,003 users without their informed consent. The study, published in the Proceedings of the National Academy of Sciences, manipulated users' News Feeds to show either more positive or more negative emotional content, then measured whether the change affected users' own posting behaviour.
It did. Users exposed to fewer positive posts produced fewer positive posts themselves, and vice versa. The researchers concluded that "emotional states can be transferred to others via emotional contagion," and that this effect operated "without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues."
The academic finding was significant. The ethical breach was staggering.
No user had been told they were part of a psychological experiment. No informed consent was obtained beyond Facebook's general terms of service -- a document that, at the time, did not mention research. The study was not reviewed by Facebook's own institutional review process until after it was completed. Cornell's Institutional Review Board determined it did not need to review the study because the data collection was performed by Facebook, not by university researchers.
"The experiment revealed something important: Facebook understood, by 2012, that it could systematically alter the emotional states of hundreds of thousands of people. What it chose to do with that knowledge is the more consequential question."
The backlash was intense but brief. Facebook issued a qualified apology, updated its terms of service, and continued to run A/B tests on its users at scale -- the core mechanism of its product development process. The emotional contagion study was not an aberration. It was a glimpse of the machine.
Cambridge Analytica: Data as Political Weapon
If the emotional contagion experiment revealed Facebook's willingness to manipulate user experience, the Cambridge Analytica scandal revealed something worse: the company had built a platform so permissive with user data that a third party could harvest the personal information of tens of millions of people and use it for political targeting -- and Facebook knew about it for years before acting.
The story broke in March 2018, when The New York Times and The Guardian simultaneously published investigations based on testimony from Christopher Wylie, a former Cambridge Analytica employee turned whistleblower. The mechanism was straightforward: a Cambridge University researcher named Aleksandr Kogan created a personality quiz app called "This Is Your Digital Life." Approximately 270,000 Facebook users downloaded it and consented to share their data. But Facebook's platform policies at the time allowed app developers to also access the data of those users' friends -- without the friends' knowledge or consent.
Through this single app, Cambridge Analytica was able to harvest data on up to 87 million Facebook users, a figure Facebook itself confirmed in an April 2018 blog post by its Chief Technology Officer. The initial estimate of 50 million was revised upward as the company conducted its own audit.
The data was used to build psychographic profiles for political advertising. Cambridge Analytica worked for the Trump 2016 presidential campaign, applying the harvested data to micro-target voters with tailored political messages. The company also did work related to the Brexit referendum in the United Kingdom, though the extent of its influence on the referendum outcome remains disputed.
What made the scandal particularly damaging was the timeline. Facebook learned in 2015 that Cambridge Analytica had obtained the data improperly. The company asked Cambridge Analytica to delete it. Cambridge Analytica said it had. Facebook did not verify the deletion and did not notify the affected users until the story became public three years later.
The regulatory consequences were substantial. In July 2019, the Federal Trade Commission imposed a $5 billion civil penalty -- the largest ever against a technology company -- for Facebook's violations of a 2012 consent order regarding user privacy. In the same month, the Securities and Exchange Commission reached a $100 million settlement with Facebook over charges that the company had misled investors about the risks of misuse of user data. Cambridge Analytica itself declared bankruptcy in May 2018, weeks after the scandal broke. But the company's dissolution did little to address the underlying problem: the data had already been harvested, the targeting had already been deployed, and the election had already been decided. No amount of corporate restructuring could undo the democratic damage.
The broader significance of Cambridge Analytica was not the specific misuse of data by one company, but what it revealed about the architecture Facebook had built. The platform's growth model depended on making user data as accessible as possible to third-party developers, because each new app built on Facebook's platform increased the reasons for users to return. The "platform ecosystem" that generated engagement and advertising revenue was, simultaneously, a massive data-leakage surface. By the time Facebook began restricting developer access to friend data -- a change implemented in 2014, but grandfathered for existing apps -- the permissions model had been exploited at scale for years.
Mark Zuckerberg's testimony before the U.S. Senate in April 2018 was notable for the gap between the questions asked and the answers given. Senators struggled to articulate the technical mechanics of the data-sharing model, while Zuckerberg repeatedly characterised the Cambridge Analytica episode as a "breach of trust" by a third party rather than a consequence of Facebook's own design choices. The distinction mattered: framing the problem as a bad actor exploiting a system is very different from acknowledging that the system was built to enable exactly that kind of exploitation.
"The Cambridge Analytica episode did not reveal a system that had been hacked from the outside. It revealed a system working as designed -- one where user data flowed freely to third parties because that openness was the business model."
The Data Partnership Web
Cambridge Analytica exploited a loophole in Facebook's developer platform. But a parallel investigation by The New York Times in December 2018 revealed something broader: Facebook had established formal data-sharing partnerships with more than 150 companies, granting them access to user data that went far beyond what most users understood.
The partnerships, which in some cases continued even after Facebook publicly stated it had restricted third-party data access, included major technology companies such as Microsoft, Amazon, Netflix, and Spotify. According to the Times investigation, some partners could read users' private messages, while others had access to users' contact information even when users had disabled third-party sharing in their privacy settings.
The scope of data accessible through these partnerships was extensive. A test conducted by the Times through one partner, BlackBerry's Hub app, showed that the partnership allowed access to more than 50 types of information for each of a user's Facebook friends -- including birthday, work history, relationship status, religion, political leanings, and upcoming events. This data was accessible without the friends' knowledge.
Facebook maintained that these partnerships were extensions of the Facebook experience and that the data was used only to provide Facebook features on partner devices and services. Privacy researchers and regulators were not persuaded. The Federal Trade Commission's 2019 settlement explicitly addressed the data-sharing arrangements, requiring Facebook to exercise greater oversight over third-party access to user data.
The pattern was consistent: Facebook treated user data as a currency for business development, entering agreements that exchanged access to personal information for integration with partner ecosystems. Users -- whose data was being shared -- were not meaningfully consulted, and the privacy controls they were offered did not always reflect the reality of how their data was being used.
Killing the News
Facebook's relationship with journalism has followed a recognisable cycle: court publishers, become their primary distribution channel, change the terms, and then withdraw. The result has been a series of whiplash pivots that have damaged newsrooms that were already economically fragile.
In 2015, Facebook launched Instant Articles, a programme that encouraged publishers to host content directly on the Facebook platform in exchange for faster load times and, implicitly, better algorithmic distribution. Major publishers -- including The New York Times, The Washington Post, The Guardian, and the BBC -- signed on. By 2017, many were scaling back their participation, frustrated by low revenue shares and the loss of direct audience relationships.
In January 2018, Zuckerberg announced a major News Feed algorithm change that would deprioritise content from publishers and brands in favour of posts from friends and family. Referral traffic from Facebook to news sites fell sharply. For newsrooms that had invested heavily in Facebook distribution -- some of which had hired social media teams and restructured their editorial processes around the platform -- the change was devastating.
But the most aggressive moves came in 2023. When Canada passed the Online News Act, legislation requiring platforms to negotiate compensation agreements with news publishers, Meta responded by blocking all news content on Facebook and Instagram for Canadian users. The ban, which took effect in August 2023, removed not only links to news articles but also posts from news organisations, including emergency information during wildfire season.
Australia had faced a similar confrontation in 2021, when Facebook briefly blocked news content in response to the proposed News Media Bargaining Code. The blackout lasted five days and also affected emergency services, health agencies, and community organisations whose pages were swept up in the ban. Facebook reversed course after the Australian government agreed to amendments that gave platforms more time to negotiate with publishers.
The message was clear: Facebook would rather eliminate news from its platform entirely than pay for the content that helped make it the world's dominant information distribution system. For journalism, the consequences are structural. A generation of readers accustomed to encountering news through social media feeds is being trained to see news as something that platforms provide -- or withhold -- at will.
The Teen Mental Health Crisis
While the data privacy scandals played out in regulatory proceedings and congressional hearings, a separate body of evidence was accumulating about Facebook's impact on young users -- evidence that, in many cases, came from the company's own researchers.
In September 2021, the Wall Street Journal published the Facebook Files, a series based on internal documents provided by whistleblower Frances Haugen, a former Facebook product manager. Among the most consequential revelations: Facebook's own researchers had found that Instagram was harmful to teenage mental health, particularly for adolescent girls. Internal slides stated that "thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse" and that the company had identified a pattern of social comparison that exacerbated anxiety and depression.
The internal research went further. In documents unsealed as part of litigation in November 2025, a Meta researcher described Instagram's effect on young users in blunt terms: "IG is a drug … we're basically pushers." Another internal communication noted that the company had "evidence of a harm and no remediation."
The legal response has been extensive. In October 2023, a bipartisan coalition of 42 state attorneys general filed suit against Meta, alleging that the company knowingly designed features that were addictive to children and that it misled the public about the safety of its platforms for young users. The complaint cited internal research showing that Meta understood its products were causing harm.
Separately, a federal multidistrict litigation -- MDL No. 3047 -- has consolidated more than 2,000 individual claims from families alleging that Meta's platforms contributed to mental health injuries in children and adolescents. Bellwether trials were scheduled for late 2025 and early 2026, both in the federal proceeding and in a parallel coordinated action in California state court.
In January 2024, Zuckerberg appeared before the Senate Judiciary Committee alongside the CEOs of other social media companies. Facing families of children who had experienced harm through social media platforms, Zuckerberg turned to the audience and said: "I'm sorry for everything you have all been through. No one should have to go through the things your families have suffered." It was a rare public expression of personal responsibility -- though critics noted that it was not accompanied by specific policy commitments.
"The teen mental health litigation represents something new: the possibility that a technology company will be held legally accountable not for a data breach or a privacy violation, but for the psychological effects of a product designed to maximise engagement."
The Misinformation Ecosystem
Facebook's scale -- and its acquisition of Instagram in 2012, WhatsApp in 2014, and continued development of Messenger -- created an interconnected ecosystem through which information, and misinformation, could travel with extraordinary speed and minimal friction.
The dynamics are particularly dangerous in countries where Facebook is, for practical purposes, the internet. In Myanmar, a United Nations fact-finding mission found that Facebook had played a "determining role" in spreading hate speech that contributed to the genocide against the Rohingya minority. Internal warnings about the platform's role in amplifying violence had been raised by Facebook's own employees and external organisations as early as 2013, but the company was slow to hire Burmese-language content moderators and to address the specific patterns of incitement on its platform.
In India, WhatsApp -- which encrypts messages end-to-end, placing message content beyond the reach of moderation -- became a vector for mob violence driven by viral misinformation. False rumours about child kidnapping, spread through WhatsApp groups, led to multiple lynchings in 2018. The platform limited message forwarding in response, but the structural conditions -- private groups, encrypted content, and a design built for rapid sharing -- remained largely unchanged.
In the United States, Facebook's role in political misinformation drew sustained scrutiny following the 2016 presidential election. A widely cited BuzzFeed News analysis found that, in the final months of the campaign, the top-performing false election stories on Facebook generated more engagement than the top stories from major news outlets, and researchers have repeatedly found that engagement-driven amplification favours content that provokes strong emotional reactions -- a category in which misinformation consistently outperforms verified reporting.
Facebook introduced fact-checking partnerships and content labels, but the measures were reactive, applied unevenly, and limited by the company's fundamental business incentive to keep users engaged. A study by researchers at New York University found that during the 2020 U.S. presidential election, misinformation on Facebook received six times more engagement than content from reliable news sources.
The structural problem is that misinformation is, from an engagement standpoint, a superior product. It is more emotionally provocative, more shareable, and less constrained by the verification requirements that slow down legitimate reporting. An algorithm optimised for engagement will, absent deliberate countermeasures, systematically favour misleading content over accurate reporting. Facebook's efforts to counter this dynamic -- warning labels, reduced distribution for flagged content, partnerships with third-party fact-checkers -- operated against the grain of the platform's own design.
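To make the dynamic concrete, consider a deliberately simplified sketch of an engagement-ranked feed. This is a toy illustration, not Meta's actual ranking code: the post fields, weights, and penalty value are all invented for the example. It shows why a score built only from predicted engagement never rewards accuracy, and why "reduced distribution" for flagged items is a correction applied after the fact rather than a change to the objective itself.

```python
# Purely illustrative sketch -- a toy feed ranker, not Meta's actual code.
# It shows why optimising a single engagement score tends to surface the
# most provocative items, and why a downweight for flagged content is a
# countermeasure bolted onto the objective rather than a change to it.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float    # hypothetical model outputs
    predicted_shares: float
    predicted_comments: float
    flagged_by_fact_checkers: bool = False

def engagement_score(post: Post) -> float:
    # Rank purely by predicted engagement: accuracy never enters the score.
    return (post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_comments)

def moderated_score(post: Post, penalty: float = 0.5) -> float:
    # "Reduced distribution" for flagged items: the same objective,
    # multiplied down after the fact.
    score = engagement_score(post)
    return score * penalty if post.flagged_by_fact_checkers else score

feed = [
    Post("Carefully reported investigation", 1.0, 0.3, 0.4),
    Post("Outrage-bait false claim", 3.0, 2.5, 2.0, flagged_by_fact_checkers=True),
]

# Even with the penalty applied, the provocative item can still outrank
# the accurate one if its predicted engagement is high enough.
for post in sorted(feed, key=moderated_score, reverse=True):
    print(f"{moderated_score(post):6.2f}  {post.title}")
```

Even in this toy version, a sufficiently provocative flagged post can still outrank a carefully reported one -- the "against the grain" problem in miniature.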
The consequences extended beyond any single election cycle. Research from the Oxford Internet Institute documented the industrialisation of disinformation on social media platforms, finding that organised manipulation campaigns operated in more than 80 countries by 2021. While Facebook was not the only platform affected, its scale -- more than three billion monthly active users -- and its penetration in developing countries where it often served as the primary internet experience made it a particularly consequential vector.
The Political Weathervane
Facebook's handling of political content -- particularly its relationship with former President Donald Trump -- illustrates a pattern of decisions driven more by political calculation than by consistent principle.
Following the January 6, 2021, attack on the U.S. Capitol, Facebook suspended Trump's Facebook and Instagram accounts, citing "the risk of further incitement of violence." The company referred the suspension to its Oversight Board, which upheld it but criticised the open-ended nature of the penalty. In January 2023, Meta announced that it would reinstate Trump's accounts with "new guardrails in place to deter repeat offenses."
The reinstatement came as Trump was preparing for his 2024 presidential campaign. Critics argued that the decision was commercially motivated -- that Meta did not want to be seen as biasing political discourse against a major-party candidate. Supporters of the decision argued that permanently banning a political figure from one of the world's primary communication platforms set a dangerous precedent.
In 2024, Zuckerberg publicly stated that the Biden administration had pressured Meta to suppress certain COVID-19-related content, framing the disclosure as a commitment to free expression. The statement was widely interpreted as an effort to position Meta more favourably with conservative political figures and voters.
By early 2025, Meta made a series of further moves that critics described as capitulation to political pressure. The company ended its third-party fact-checking programme in the United States in favour of user-generated "Community Notes," a model borrowed from X, the platform formerly known as Twitter. It also relaxed its hate speech policies and removed protections for certain groups, changes that aligned with criticisms that had been voiced by conservative politicians and commentators.
The sequence -- ban, reinstate, then court favour -- tracks with the broader pattern of Facebook's history: respond to immediate political pressure, adjust when the political winds shift, and frame each reversal as a principled position. For users and publishers who depend on the platform for distribution, the result is an environment where the rules are never quite stable, and where editorial decisions are made not by journalists but by a company optimising for its own survival.
What This Means for Journalism
The thread connecting Facebook's various controversies -- data extraction, psychological experimentation, political manipulation, news suppression, teen mental health harm -- is a consistent pattern of prioritising engagement and growth over the wellbeing of the information ecosystem.
For journalism, the damage is both direct and structural. The direct damage is measurable: the collapse of referral traffic after algorithm changes, the loss of advertising revenue to the platform that absorbed it, and the elimination of news content in entire countries when legislation threatens the company's preferred terms. The Pew Research Center documented that U.S. newspaper advertising revenue fell from approximately $49 billion in 2005 to under $9 billion by 2020, with a significant portion of that revenue migrating to digital platforms dominated by Meta and Google.
The structural damage is harder to quantify but potentially more consequential. A generation of news consumers has been trained by algorithmic feeds to treat information as a stream of content -- undifferentiated, decontextualised, and evaluated primarily by engagement metrics. In that environment, a carefully reported investigation competes for attention with a screenshot of a tweet, a partisan meme, or an AI-generated summary that strips away context, sourcing, and editorial judgment.
Public trust in media institutions has eroded alongside these changes. Gallup surveys have found that confidence in mass media to report the news "fully, accurately, and fairly" has fallen to historic lows, with just 31% of Americans expressing a "great deal" or "fair amount" of trust. While social media platforms are not the sole cause of this decline, their amplification of sensationalism, their suppression of quality reporting, and their repeated demonstration of editorial power exercised without editorial responsibility have all contributed.
More than 2,900 newspapers have closed in the United States since 2005, according to Northwestern University's Medill School of Journalism. The communities left without local reporting -- the news deserts -- are places where town councils meet without coverage, courts operate without scrutiny, and corruption persists without the check that professional journalism once provided.
Facebook did not create all of these conditions. The economics of digital advertising, the fragmentation of audiences, and the broader crisis of institutional trust would exist without it. But Facebook accelerated these trends, profited from them, and resisted accountability for them at every stage. That is the pattern that matters -- not any single scandal, but the consistent subordination of public interest to corporate growth.
The company's 2021 rebrand to Meta Platforms, Inc. was itself telling. By adopting a new corporate identity centred on virtual reality and the "metaverse," the company attempted to shift the conversation away from the accumulated record of its flagship social media platform. But the underlying business -- advertising supported by engagement-driven algorithms -- remained unchanged. In its 2024 annual report, Meta reported that advertising accounted for more than 96% of total revenue. The incentive structure that drove every decision described in this piece is not historical. It is current.
For independent journalism, the practical question is one of resilience. How do newsrooms build sustainable operations that are not dependent on a platform that has demonstrated, repeatedly, that it will change the terms of engagement whenever its interests require it? How do readers develop the habits and the critical frameworks to distinguish reported journalism from the algorithmic noise that surrounds it? And how do societies maintain the information infrastructure that democratic governance requires, when the most powerful information distribution system in human history is operated as a private advertising business?
The question for journalism is not whether Facebook will change. The pattern of the last two decades suggests it will change only when compelled, and only as much as required. The question is whether the institutions, funding models, and public commitments that sustain independent reporting can be built and maintained despite the damage already done -- and despite the certainty that the platforms will continue to prioritise their own interests above the information needs of the public they serve.
Understanding that dynamic -- clearly, factually, and without illusion -- is the starting point.