AI and the War for Truth: How Deepfakes and Cybersecurity Threaten Global Security

By Jay #Thought Leader

We live in an age where technology allows the creation of increasingly realistic fake videos and audio, known as deepfakes. At the same time, cyberattacks and hacking grow ever more sophisticated. Together, these trends threaten to undermine public trust and global security in profoundly concerning ways. 

In particular, the upcoming elections in the US and UK will likely see renewed attempts to spread disinformation and sow social discord through online propaganda. Deepfakes and data breaches could enable false narratives that manipulate voters on a mass scale. Preventing this interference and rebuilding confidence in democracy presents an urgent challenge.

The Risks of Deepfakes

Deepfake technology uses artificial intelligence (AI) to digitally impose one person’s likeness onto video or audio of another. The results appear strikingly credible to the undiscerning eye. Although deepfakes originated in the creation of celebrity pornography, their potential uses have grown far more troubling. Recent examples include the Taylor Swift deepfakes that recently went viral and the Morgan Freeman deepfake video made in 2021 by Bob de Jong.

Skillfully edited deepfakes could depict politicians saying or doing things they never did. The resulting fake videos or other media could spread rapidly across social media, gaining credence through repeated sharing. Even simple text captions or audio overlays can powerfully alter context to mislead.

This capacity to fabricate evidence gives deepfakes incredible power for deception. They can ruin reputations through slander. They can sway election outcomes through disinformation. In the most dystopian scenarios, they could even provoke global conflict by depicting false flag attacks. The risks for societal damage are enormous.

The Threat of Cybersecurity Breaches

Meanwhile, the past decade has seen cyberattacks grow vastly more prevalent and sophisticated. Hacking endangers individuals by stealing passwords, financial data, or personal information. For governments and institutions, data breaches may expose citizens’ private records, embarrassing internal communications, or confidential strategies. 

In particular, cybersecurity failures around elections represent a troubling threat. If timed strategically, the leak of candid emails or internal polling data could quickly sabotage campaign efforts. Attackers could also plant false records in voter registration databases in hopes of skewing turnout demographics.

Data stolen by hackers might enable deepfake creators to produce more compelling fraudulent videos. Personal information can help these algorithms model facial expressions and vocal tones accurately. The combination of deepfakes and cyberattacks is, therefore, profoundly synergistic for spreading disinformation.  

Eroding Public Trust

In many ways, the actual content or truth of deepfakes hardly matters. Their steadily improving realism—and the accompanying awareness of their existence—serves to undermine public trust. Where embarrassing videos might once have ruined careers, scepticism now grants politicians some benefit of the doubt. Yet that same doubt also extends to genuine evidence.

This growing uncertainty represents a significant victory for propaganda efforts. It allows partisan supporters to dismiss reporting they dislike as “fake news.” In this post-truth environment, debates frequently revert to pure ideological conflict untethered from facts. Even without specific deepfakes, the background uncertainty grants bad-faith actors space for disinformation campaigns.

Elections in Peril

The greatest imminent threat is that these dynamics will severely compromise electoral integrity in the upcoming votes across Western democracies. The US presidential election in November 2024 and future British parliamentary elections likely face unprecedented volumes of misinformation and foreign interference. However, researchers have encountered difficulties investigating and reporting on such interference, owing to perceptions of bias and pressure from outside parties.

Online propaganda firms—potentially with state sponsorship—can wield hacked materials, voter data, demographically targeted ads, and deepfakes together for powerful psychological operations. They might craft false videos depicting opposing politicians in racist tirades or financial crimes. They could suppress voter turnout in key districts through misinformation, as with the AI-generated robocall that impersonated President Biden ahead of the 2024 New Hampshire primary.

In a race with razor-thin margins, even subtle effects might shift outcomes. This situation represents an enormous danger for destabilising societal trust. Millions may contest election results seen as tainted by propaganda. The disputes roiling US politics since 2020 could quickly intensify under these conditions. There are no easy governance solutions for placating cries of election fraud rooted in widespread disinformation.   

The Resulting Dangers  

The threats around deepfakes ultimately tie back to global security far beyond just election outcomes. Eroded public trust in leaders, scepticism of media sources, and loss of social cohesion make societies vastly more vulnerable to internal or external threats. Where facts grow uncertain, extreme partisan identities tend to dominate discourse instead.  

This is fertile ground for dangerous demagogic politics and authoritarian takeovers that subordinate the rule of law to ideological zealotry. It also weakens national solidarity and undercuts public support for security interventions against foreign threats. When citizens cannot even agree on basic facts due to party-led tribal identities, rallying behind collective self-defence grows impossible. The fractures across Western European and North American democracies severely threaten the postwar alliance against Russian expansionism.

Moreover, the destabilisation of powerful states encourages more brazen policies from authoritarian regimes seeking to redefine global order. As internal chaos distracts and paralyses Washington, leaders in Moscow or Beijing may intensify external pressures against Eastern European or Asian allies. The territorial seizures and refugee crises likely to follow increase the long-term risk of outright kinetic wars. Allowing deepfakes and hacking to undermine electoral integrity is thus terrifyingly reckless in terms of global security.

Rebuilding Confidence in Truth  

The challenges around rebuilding public confidence and social cohesion already feel overwhelming. However, the first step must be defensive efforts toward securing elections against propaganda interference. Governments should invest heavily in cybersecurity around voter records and tallying infrastructure. Major social media platforms must cooperatively flag or remove deepfakes and demonstrably false news around campaign issues. 
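For illustration only, here is a minimal sketch of how cooperative flagging might work under the hood, assuming platforms share perceptual hashes of media already judged to be manipulated. The registry entries, threshold, and demo image below are hypothetical placeholders, not any platform’s actual system.

```python
# Minimal sketch of cooperative flagging via shared perceptual hashes.
# Registry entries, the distance threshold, and the demo image are placeholders.

from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

# Hypothetical registry of perceptual hashes contributed by participating
# platforms for media already judged to be manipulated.
KNOWN_MANIPULATED_HASHES = [
    imagehash.hex_to_hash("f0e1d2c3b4a59687"),  # placeholder entry
]

# Tolerance for re-encoding, resizing, and other minor alterations.
MAX_HAMMING_DISTANCE = 8


def flag_if_known_manipulated(image: Image.Image) -> bool:
    """Return True if the upload closely matches a known manipulated item."""
    upload_hash = imagehash.phash(image)
    return any(
        (upload_hash - known) <= MAX_HAMMING_DISTANCE
        for known in KNOWN_MANIPULATED_HASHES
    )


if __name__ == "__main__":
    # A blank image stands in for an incoming user upload.
    demo_upload = Image.new("RGB", (256, 256), color=(128, 128, 128))
    if flag_if_known_manipulated(demo_upload):
        print("Flagged for review: matches known manipulated media.")
    else:
        print("No match against the shared registry.")
```

A real system would pair hash-matching with human review, provenance signals, and appeal processes, but the core idea of a shared registry is the same.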

Restoring trust in formal institutions also requires major transparency reforms around campaign funding and messaging supervision. Legislators might even consider restrictions on micro-targeted ads for campaign issues to limit divide-and-conquer demographic manipulation. Acknowledging the scale of cybersecurity threats and deepfake potentials could help informed citizens identify and resist propaganda efforts.

Of course, eliminating all partisan bias is impossible after years of rising polarisation. But fact-focused analytical journalism around concrete policy measures represents one essential antidote. Media institutions should emphasise neutral systemic discussions—how proposed reforms might hamper or assist election security and integrity—rather than ideologically skewed speculation on particular leaders or parties.

Over the longer term, investments in societal cohesion and civic trust-building also feel essential, although, again, difficult. Governments must enhance economic equity, educational quality, and social mobility to give citizens hope for personal advancement. Psychological studies show perceptions of “fairness” strongly colour public trust in institutions. People must see leaders as working for the general welfare under a shared social contract.

Finally, regulating AI ethics and introducing passport-style verification requirements represent more structural approaches. Legally mandating truthful disclosure of synthetic media would alert citizens to deepfakes rather than allowing their covert spread. Meanwhile, identity certificates would hamper the anonymous accounts most useful for malicious geopolitical hacking or propaganda efforts. However, these structural shifts raise major countervailing concerns around censorship overreach and privacy rights.
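As a rough sketch of what enforceable provenance disclosure could look like technically, the example below signs the hash of a media file with a publisher’s key so that platforms and citizens can check whether it has been altered since publication. The keys, media bytes, and overall scheme are simplified assumptions rather than a description of any existing standard.

```python
# Minimal sketch of signed media provenance: a publisher signs the hash of a
# file at publication, and anyone can later verify that it is unchanged.
# Keys, media bytes, and the scheme itself are simplified assumptions.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

# Stand-in for the raw bytes of a published campaign video or audio clip.
media_bytes = b"...raw bytes of a hypothetical campaign clip..."
digest = hashlib.sha256(media_bytes).digest()

# Publisher side: a keypair (in practice bound to a verified identity)
# signs the digest at publication time.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(digest)

# Platform or citizen side: recompute the digest from the file as received
# and check the signature. Any undisclosed edit breaks verification.
received_digest = hashlib.sha256(media_bytes).digest()
try:
    public_key.verify(signature, received_digest)
    print("Provenance verified: media unchanged since it was signed.")
except InvalidSignature:
    print("Verification failed: media altered or signature invalid.")
```

In practice, such keys would need to be bound to verified identities, which is exactly where the censorship and privacy concerns noted above arise.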

In summary, the policy trade-offs are complex and multifaceted, with potential unintended consequences in both directions. But deepfakes and election hacking now pose such serious internal and global security dangers that some mix of extraordinary transparency, media responsibility, cybersecurity investment, and civic cohesion initiatives feels essential. We must rebuild public confidence that votes appropriately empower leaders to address society’s most significant problems rather than undercut the foundations of shared truth and social solidarity. The risks of continued erosion are now existential.
