Tracking the collision of democracy and technology—how AI and digital power reshape elections, civic life, and the fragile trust that holds society together.
On November 12, 2025, Common Dreams ran a headline that captured a growing panic: “Watchdog Warns of Increased AI [Artificial Intelligence] Propaganda, Deepfakes, and Harassment.” The accompanying article detailed concerns from Public Citizen, a consumer advocacy organization, about the release of Sora 2, OpenAI’s powerful new video-generation tool. Public Citizen warned that the technology was released “without putting in proper guardrails” to prevent abuse. The group’s assessment was stark: Sora 2 offered “a scalable, frictionless tool for creating and disseminating deepfake propaganda” capable of distorting elections and damaging democratic decision-making. Their alarm proved justified almost immediately.
Just a week later, reports surfaced that Congressman Mike Collins, a Republican running to unseat U.S. Senator Jon Ossoff in Georgia, had circulated an AI-generated ad featuring a synthetic Ossoff look-alike. The fake Ossoff stared into the camera and declared, “I just voted to keep the government shutdown. They say it’ll hurt farmers, but I wouldn’t know. I’ve only seen a farm on Instagram.” Ossoff had never said any such thing. The episode was a flashing warning sign: as AI-generated political content proliferates, citizens are increasingly forced to navigate a polluted information ecosystem where discerning truth from fabrication becomes very difficult.
The political deepfake scandal surrounding the synthetic Ossoff was not an isolated stunt; it was a preview of the information crisis AI is ushering in. Just a few years ago, public anxieties about digital truth revolved around something as comparatively banal as Wikipedia. Educators worried that students would treat the encyclopedia, written by anonymous editors, influenced by elite institutions, and vulnerable to manipulation, as if it were an unquestionable oracle. Those concerns now feel almost quaint. In November 2025, historian Sam Wineburg and researcher Nadav Ziv warned in the Chicago Tribune that AI systems now “exploit” Wikipedia’s openly accessible data, “siphon off its traffic,” and “threaten its future.” Wikipedia may reflect establishment bias, as the Manhattan Institute and University of California, Berkeley show, and be shaped by powerful actors like the FBI and Central Intelligence Agency (CIA), but at least it offers citations, a revision history, and identifiable contributors. It forces users to examine claims, trace sources, and confront the social dynamics of knowledge.
AI does the opposite. Instead of functioning as a starting point for inquiry, AI replaces inquiry. It confidently outputs statements with no author, no citations, and no institutional trail of accountability. Worse, today’s AI systems rely on large language models (LLMs), which do not think, reason, or understand; they predict text. As economist and AI researcher Gary Smith notes, their statistical mimicry “creates an illusion of intelligence.” They do not merely mislead; they persuade. Smith warns that these AI systems, which are “prone to making logical and factual errors,” are being deployed everywhere, “even when the costs of mistakes are substantial.” This extends to military applications, despite researchers’ warnings that every laboratory test of AI used in this context has ended in human destruction. Instead of treating AI tools as imperfect aids, the public is being trained, through euphoric advertising, seamless interfaces, and institutional adoption, to treat them as replacements for human judgment.
This is no accident. The unchecked rise of so-called AI, driven by profit-hungry Big Tech corporations, threatens democracy by destabilizing truth, undermining informed consent, and concentrating political and economic power in the hands of a few. This crisis is deepened by industry-controlled public relations campaigns that manipulate public perception and corporate-led AI education initiatives that normalize harmful technologies while obscuring their structural risks. Without democratic oversight and critical public engagement, AI’s growing influence endangers the foundations of democratic society. In a democracy where informed consent is foundational, this constitutes not merely a technological disruption, but the erosion of a shared reality itself, one with potentially catastrophic consequences.
Industry appears indifferent to the potentially catastrophic consequences of the pervasive deployment of unregulated AI. Sam Altman, CEO of OpenAI, once quipped that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Altman is hardly alone in his indifference to AI’s collateral damage.
The industry is not blind to these outcomes. Elon Musk once claimed that there is “only a 20% chance of annihilation” with AI. The reality is that AI systems and the falsehoods they produce are simply too lucrative for industry to forgo. For example, Meta’s internal documents allegedly indicate that 10% of the company’s 2024 earnings, roughly $16 billion, came from scams and fraud hosted on its platforms. Worse, these documents reportedly show that Meta, the parent company of Facebook and Instagram, moved slowly to address such issues, presumably because doing so would diminish profits. This is consistent with past patterns: Meta has long been accused of ignoring its own research into its platforms’ harmful effects on democracy and on the mental health of young girls. More recently, the company has faced accusations of overlooking mounting evidence that its chatbots were harming teenagers. Time and again, like much of Big Tech, Meta’s decisions seem to prioritize shareholders over the public.
This ruthless calculus now shapes the AI economy. Meta recently announced it would analyze data from users’ private messages to feed its AI. The shift reveals an industry that increasingly views the entire public as raw material. And because tech companies have become the backbone of the global economy, this power is largely unchecked. The point was driven home in 2025, when major outages at AWS and Cloudflare brought businesses, schools, communications, and public services to a standstill, underscoring society’s deep dependence on a small number of corporate platforms and the lack of any viable alternatives or accountability. By October 2025, AI firms accounted for 80% of all stock market gains, and an estimated 40% of U.S. GDP growth was attributed to AI. Nvidia, the chipmaker fueling the boom, was forecast to become the world’s first $5 trillion company.
For many observers, the parallels to the dot-com bubble, which burst in 2000, are impossible to ignore. A key concern is vendor financing, where companies prop up one another through speculative purchasing rather than real economic demand. For example, in September, Nvidia announced a $100 billion investment in OpenAI. Just two weeks later, OpenAI revealed plans to purchase 6 gigawatts worth of chips from Advanced Micro Devices (AMD), with an option to acquire up to a 10% stake in the chipmaker. Is this genuine growth, or an elaborate circular economy built on speculation?
Many argue that there are now unmistakable signs that the AI bubble is on the verge of bursting. This year, companies such as Genentech and Chegg have carried out massive layoffs; Chegg alone cut 45 percent of its workforce in October. These job losses are routinely dismissed as an inevitable byproduct of AI’s rise. Yet as critics Jeffrey Funk and Gary Smith recently asked, if layoffs from AI are saving industry money and maximizing profits, “Where’s the profit?” They note that Big Tech companies, including OpenAI, “can’t really say.”
Meanwhile, signs of profit decline are everywhere. An MIT study spooked investors when it found that 95% of AI pilots fail because of an overestimation of AI capabilities, poor data quality, a lack of clear goals, insufficient infrastructure, and cultural resistance. At the same time, entrepreneur and venture capitalist Peter Thiel’s fund made headlines for offloading its Nvidia stake in the third quarter of this year, a move widely interpreted as a vote of no confidence in the industry’s long-term strength. And even Sam Altman looked visibly uncomfortable when the prospect of an AI bubble was raised in a recent interview. This leads many to believe that even major investors suspect the bubble will burst soon.
Historically, when industries prioritized profits over people, government regulation served as a necessary counterweight to protect the public from the economic pain of a bubble bursting. Today, however, the U.S. government seems committed to enabling Big Tech’s takeover of both the economy and the public sphere. President Donald Trump, who has become a darling of Big Tech oligarchs, has even proposed preventing states from enacting AI regulations, arguing that local oversight threatens Big Tech’s economic interests. It is not only Republicans; leaders within the Democratic Party, such as California Governor Gavin Newsom, and so-called populists like Representative Ro Khanna also maintain cozy relationships with Big Tech. Indeed, liberals like Sarah Hurwitz, a former speechwriter for Barack and Michelle Obama as well as Hillary Clinton, lament Big Tech not for its economic or political power, but because it has chipped away at their ability to weaponize legacy news media to shape public opinion on issues such as Israel. The irony is striking: rather than defending democratic discourse, elites across the political spectrum either complain about losing influence or work to maintain favor with industry.
The economic and political power of Big Tech is built through public relations propaganda that persuades audiences with dubious claims about AI’s benefits. It is hard to avoid the billboards, commercials, and news media appearances by AI leaders, all touting the transformative impact of AI. Earlier this year, a series of billboards urging businesses to “stop hiring humans” and instead use AI attracted significant media attention.
Beneath all the optimistic advertising lies a body of research the industry would prefer to ignore: studies increasingly suggest that AI tools exert deleterious effects on the human mind. A recent study comparing “brain-only” writers, search-engine users, and LLM users found that those who used AI displayed weaker neural connectivity, reduced memory recall, diminished ownership over their work, and persistent cognitive laziness even after switching back to non-AI tasks. Although the AI-assisted essays scored reasonably well, they lacked originality and depth, predictable outcomes for systems that replicate patterns rather than generate genuine insight.
The psychological consequences extend further. Nearly one million people with suicidal intent reportedly use ChatGPT to discuss their ideation: an extraordinary concentration of vulnerability in the hands of a private corporation. Some companies, like Talkspace, have capitalized on the mental-health crisis by marketing AI-infused therapy as a convenient solution, but the dangers are unmistakable. Lawsuits reveal that OpenAI, which recently completed its transition from a nonprofit to a for-profit company, once relaxed guardrails designed to prevent teen suicide after discovering that such restrictions reduced user engagement. In one tragic case, its product ChatGPT allegedly guided a child through suicidal ideation, even advising the teen, who was seemingly reaching out for help, not to tell his parents about his suicidal plans. The child later died by suicide. The lawsuit claims that OpenAI prioritized engagement over safety, a charge that resonates deeply with critics who have long warned that corporate incentives are fundamentally misaligned with public well-being.
Industry responses often consist of public-relations theater rather than meaningful change. Even self-described leaders in “ethical AI,” such as Alex Issakova, concede that the term has become “a marketing label.” Steven Adler, former head of safety at OpenAI, recently warned the public not to trust corporate claims about protecting users, since companies rarely release data demonstrating that their interventions have reduced harm. Adler focused particularly on erotica and sexual content generated by AI, noting a disturbing trend: some users fall in love with AI chatbots, and in many cases the erotica escalates into child-related sexual fantasies. This aligns with long-standing research showing that digital-age pornography can escalate users’ appetites, leading some who previously had no interest in child sexual abuse material, half of respondents in one survey, to seek it out. Alarmingly, over 40% of people who consume child sexual abuse material reportedly act on it offline. Instead of government action, companies like Roblox, a platform repeatedly criticized for failing to protect minors, are left to implement their own safety protocols, an arrangement that places children’s well-being in the hands of profit-driven corporations.
Despite these grave concerns, AI adoption continues to spread into every corner of society. Education is often touted as part of the solution, but not all education is equal. Critical media literacy advocates argue that effective AI education must demystify the technology by examining its ownership, political implications, environmental costs, biases, hallucinations, and the reality that even many AI evangelists do not fully understand how these systems work. This approach is precisely what industry does not want. Instead, tech companies promote AI education designed to normalize acceptance of the tools while obscuring their structural harms.
This corporate approach to media literacy education is spreading as quickly as the AI tools themselves. Google has launched aggressive campaigns to insert its AI products into schools. San Jose, California, recently announced a city-wide AI education initiative led by major AI companies. Universities are also aligning themselves with AI companies. Tina Austin reported that Geoffrey Garrett, Dean of the Marshall School of Business at the University of Southern California (USC), and Leah Belsky, OpenAI’s Head of Education, recently announced an institutional partnership that Garrett described as “USC-now brought to you by OpenAI.” Even labor unions such as the American Federation of Teachers have partnered with Big Tech to help shape AI curricula. Meanwhile, teachers are left facing the challenges to learning that AI has wrought: students who are caught cheating with AI and then submit AI-written apologies; classrooms where nearly every assignment is AI-generated, producing a kind of cognitive decay in which students cannot recall basic facts from the work they just submitted; and students who turn in essays on topics such as Marxism that they do not remotely understand, only to be confused further when AI systems fail to explain the subject.
This dynamic is exacerbated by a harmful divide within public discourse. On one side are techno-protectionists who believe they can simply reject AI altogether. Their refusal to understand the tools leaves them ill-equipped to communicate meaningfully with students and users who encounter AI everywhere, and young people dismiss such critics as out of touch. On the other side are techno-utopians who tout AI as a universal solution, ignoring its structural harms. They read the work of AI boosters like New Yorker writer James Somers, who promotes the illusion of AI’s “intelligence.” Or they listen to tech evangelists such as Joe Rogan, who hosts influential tech titans like Peter Thiel, Elon Musk, and Marc Andreessen on his podcast so they can celebrate the notion that flawed humans should be replaced by algorithmic governance.
Meanwhile, scholars grounded in rigorous research, such as Gary Smith, repeatedly demonstrate that AI is not truly intelligent, yet their warnings are ignored by boosters. The critical scholars’ key message, that the real danger is not that computers are smarter than humans but that humans believe they are, goes unheeded. Nuanced perspectives, such as the view that AI has limited but legitimate utility, are sidelined by utopians and pessimists alike.
The AI revolution is not merely a technological shift; it is a political one. The uncritical integration of AI systems into every facet of life poses an existential threat because they are controlled by a handful of rapacious corporations, led by people who openly believe that humans are fundamentally flawed and must be fixed by technology. Indeed, the Big Tech oligarchs increasingly argue that human judgment is so defective that democracy should be supplanted by a technocratic, CEO-style authoritarian regime.
The AI revolution is a political crossroads. Left unchecked, AI is accelerating the erosion of democratic values by destabilizing shared truth, undermining cognitive autonomy, and consolidating power within a handful of unaccountable corporations. The illusion of AI intelligence masks a deeper crisis: humans are ceding critical judgment to machines controlled by profit-driven entities that prioritize engagement and control over public well-being. Democracy depends on an informed and empowered citizenry, transparent governance, and collective accountability, pillars currently threatened by the unchecked deployment of AI. To safeguard democracy, we must demystify AI, demand rigorous regulation, and reclaim public control over the technologies shaping our societies. Ultimately, the greatest danger is not AI itself but the concentration of power behind it, which endangers the very foundation of democratic life.