Deepfakes and Democracy: Navigating the New Reality of AI in Political Communication


11 May 2024

8 Min Read

Professor Dr Ong Kian Ming (Academic Contributor), The Taylor's Team (Editor)


In the era of rapid technological advancement, artificial intelligence (AI) has revolutionised various sectors, including media and communication. Among the most disconcerting innovations is the development of 'deepfakes'—highly realistic and often convincing video and audio recordings that are generated through AI and machine learning techniques. These digital fabrications are capable of altering faces, mimicking voices, and fabricating actions of public figures with alarming precision.

 

As these technologies become more accessible and their outputs more indistinguishable from reality, the implications for political communication are profound. The potential to spread misinformation and manipulate elections is unprecedented, posing a significant challenge to democratic processes and the integrity of public discourse.

The Evolution of Deepfakes

Deepfakes first entered the public lexicon a few years ago, but the underlying technology has evolved from decades of research in AI and computer graphics. Initially, deep learning techniques were applied primarily in benign contexts, such as improving visual effects in films and enabling real-time translation services. However, as these technologies became more sophisticated and user-friendly, their potential for misuse became apparent.

 

The term 'deepfake' itself is a portmanteau of 'deep learning' and 'fake', which succinctly captures the essence of these creations—synthetic media generated by neural networks, a key component of AI. The first widely acknowledged use of deepfake technology emerged on the internet in late 2017, when a user on Reddit demonstrated the capability to swap celebrities' faces into videos. This initial display, although crude, marked a pivotal moment in the understanding of how AI could be manipulated for creating hyper-realistic fake videos.


Since then, the technology has progressed at a breakneck pace. Early deepfakes were relatively easy to spot due to their blurry or mismatched facial features and unnatural voice synthesis. But recent advancements have all but eliminated these telltale signs, with current iterations producing videos that can be nearly indistinguishable from genuine footage. Developers have leveraged vast datasets of facial imagery and vocal recordings, improving algorithms to the point where they can replicate minute details and idiosyncrasies of individual expressions and speech patterns.

 

The proliferation of deepfake technology has been further accelerated by the accessibility of deep learning tools and the democratisation of AI knowledge. Today, creating a convincing deepfake no longer requires extensive expertise in computer science. Software tools equipped with user-friendly interfaces are available freely or at low cost, enabling virtually anyone with a computer and internet access to create deepfakes.

 

This dramatic evolution presents a dual-edged sword. While the entertainment industry has found creative and innovative uses for deepfake technology, such as de-aging actors or resurrecting historical figures for educational purposes, the potential for harm in malicious hands is significant. Political deepfakes have begun to surface, showing fabricated speeches and actions of political leaders, which could have severe implications for public opinion and election integrity.

The Impact on Political Communication

Political communication is particularly vulnerable to deepfake exploitation due to the abundant availability of footage and images of politicians online. This wealth of material makes it easier for malicious actors to generate convincing deepfakes with minimal effort. The advancements in technology now require only a small number of images to create a deepfake, lowering the barrier for those seeking to mislead or manipulate public opinion.

 

Consider the case in the United States in 2019, where a subtly altered video of Speaker Emerita Nancy Pelosi, slowed down to make her appear intoxicated and incoherent during a speech, went viral.

Image showing the differences between original and altered video for Nancy Pelosi

Image obtained from The New York Times

In Gabon, during a political crisis in 2018, a deepfake video of President Ali Bongo was circulated to portray him as healthy and in control amidst rumours about his illness. The intention was to stabilise public opinion and quell political unrest, demonstrating how such technologies can be employed to craft political narratives or stabilise a regime.

Deepfake video depicting former President of Gabon, Ali Bongo

Image obtained from ResearchGate

Vladimir Putin, the President of Russia, has also been the target of deepfake content. In June 2023, a deepfake video of Mr Putin was broadcast on several hacked television channels under the caption 'emergency appeal of the president'. In the video, the fabricated Mr Putin claimed that Ukraine's army had entered three Russian border regions and declared martial law in those regions.

 

In December 2020, Britain's Channel 4 aired a deepfake video of Queen Elizabeth II delivering an alternative Christmas message. This deepfake was created as a satirical piece to demonstrate the potential of synthetic media and highlight the dangers of misinformation in the digital age. In the video, the deepfake Queen discussed several controversial topics and even performed a dance on her desk.


All these incidents force us to confront critical questions: How might deepfakes influence voter behaviour if individuals cannot distinguish real statements from fabricated ones? What are the potential consequences on diplomatic relations if deepfakes depicting offensive or controversial statements by political leaders are mistakenly believed to be genuine? These scenarios highlight the potential for deepfakes to undermine trust not only in political figures but also in the media and public institutions that uphold democracy.

The Role of Social Media and Technology Companies

Social media platforms and technology companies play a pivotal role in the dissemination and control of deepfakes, positioning them at the forefront of the fight against digital misinformation. Their policies and the technologies they employ significantly influence how deepfakes are detected, moderated, and, ultimately, impact public discourse.


Policy Implementation and Challenges

 

Major social media platforms such as Facebook, Twitter, and YouTube have begun to implement policies specifically targeting deepfakes. For instance, Twitter has adopted a policy that labels tweets containing synthetic and manipulated media to inform users of the content's authenticity. Similarly, Facebook has partnered with third-party fact-checkers to downrank and label false content, including deepfakes, reducing its visibility and spread.

 

However, these policies are not without challenges. One significant limitation is the varying definitions of what constitutes a deepfake across different platforms, which can lead to inconsistencies in how these videos are handled. Not all synthetic content will necessarily be removed, especially in an era where synthetic media is gaining traction.


Technological Innovations and Limitations

 

Technology companies are investing in AI-driven solutions to better detect deepfakes. These include developing more sophisticated machine learning models that can analyse video frames and audio to spot inconsistencies that may indicate manipulation. Despite these advancements, the technology is in a constant race against deepfake creators who continuously refine their methods to evade detection. This cat-and-mouse dynamic presents a persistent challenge, raising questions about the long-term efficacy of current technological solutions.
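The analysis described above can be illustrated with a deliberately simplified sketch. Real detection systems rely on deep neural networks trained on large labelled datasets; the toy example below only shows the underlying idea of scoring temporal consistency between consecutive video frames. All data, function names, and thresholds here are illustrative assumptions, not any company's actual detection method.

```python
# Toy illustration of frame-level inconsistency analysis.
# Real detectors use deep networks over pixels and audio; this sketch
# only demonstrates the idea that manipulated or spliced footage can
# disturb the motion statistics between consecutive frames.
# Frames are modelled as flat lists of pixel intensities (synthetic data).

def frame_difference(frame_a, frame_b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def temporal_inconsistency_score(frames):
    """Variance of frame-to-frame differences.

    Genuine footage tends to change smoothly, so the differences are
    fairly uniform; a spliced or generated segment can introduce an
    abrupt jump that inflates this variance.
    """
    diffs = [frame_difference(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

# Synthetic example: steady "genuine" motion vs. an abrupt "spliced" jump.
genuine = [[i + t for i in range(16)] for t in range(10)]
spliced = genuine[:5] + [[i + t + 40 for i in range(16)] for t in range(5)]

print(temporal_inconsistency_score(genuine))   # 0.0: perfectly smooth motion
print(temporal_inconsistency_score(spliced))   # larger: the splice stands out
```

In practice, deepfake creators can smooth such artefacts away, which is precisely the cat-and-mouse dynamic the paragraph above describes: each detectable statistical cue eventually gets engineered out of the next generation of fakes.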


Strategies for Mitigation

As the threat of deepfakes continues to permeate political communication, developing robust strategies to mitigate their impact is crucial. These strategies must be multi-faceted, combining technological solutions, regulatory frameworks, and public education to address the complexities posed by AI-generated misinformation.


Technological Solutions

 

One of the primary lines of defence against deepfakes involves advancing detection technologies. AI researchers and tech companies are continuously working on developing more sophisticated algorithms that can identify subtle cues in videos and audio files that typically go unnoticed by human viewers. These cues might include irregular blinking patterns, unusual lip movements, or inconsistencies in skin texture. However, as detection methods improve, so do the techniques to circumvent them, necessitating an ongoing commitment to technological innovation.
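One of the cues mentioned above, irregular blinking, can be sketched as a simple plausibility check. Early deepfakes often blinked rarely or not at all, because training photos mostly showed subjects with open eyes. In a real system, blink timestamps would come from an eye-landmark detector running over the video; here they are supplied directly, and the "normal" interval range is an illustrative assumption rather than a clinical figure.

```python
# Toy blink-rate plausibility check, illustrating the "irregular blinking"
# cue. Blink timestamps (in seconds) would come from an eye-landmark
# detector in practice; here they are hand-supplied synthetic data.

def blink_intervals(timestamps):
    """Gaps (in seconds) between consecutive detected blinks."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def blinking_looks_natural(timestamps, lo=1.0, hi=12.0):
    """Return True if the average blink interval falls inside an assumed
    typical human range. Footage with very rare (or absent) blinking was
    a telltale sign of early deepfakes."""
    if len(timestamps) < 2:
        return False  # too few blinks over a clip is itself suspicious
    intervals = blink_intervals(timestamps)
    avg = sum(intervals) / len(intervals)
    return lo <= avg <= hi

print(blinking_looks_natural([0.0, 3.1, 7.4, 11.0]))  # True: plausibly human
print(blinking_looks_natural([0.0, 55.0]))            # False: suspiciously rare
```

A single cue like this is far too weak on its own, which is why production detectors combine many such signals inside learned models, and why, as the paragraph notes, each published cue is quickly engineered away by deepfake creators.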


Regulatory and Legal Approaches

 

On the regulatory front, governments worldwide are beginning to recognise the need for legislation that specifically addresses the creation and distribution of deepfakes. For instance, in January 2024, the United States proposed the No AI FRAUD Act, which sets up a federal framework to safeguard individuals against AI-generated fakes and forgeries. This bill makes it illegal to create a 'digital depiction' of any person, living or dead, without their consent. Similarly, the European Union is exploring amendments to digital laws that would require social media platforms to take greater responsibility for the content they host, including deepfakes. Effective legislation must balance the need to protect public discourse from harmful content while preserving freedom of expression and innovation.

Public Education and Media Literacy

 

Equipping the public with the skills to identify deepfakes is another critical aspect of mitigation. This involves extensive public education initiatives and the integration of media literacy into school curriculums. People need to learn how to critically evaluate the content they consume online, understand the nature of deepfakes, and recognise their potential impact. Public awareness campaigns can also play a role in educating citizens about the existence and dangers of deepfakes, promoting a more discerning consumption of digital content.


Collaborative Initiatives

 

Recognising the limitations of acting alone, some tech companies have called for collaborative efforts involving industry peers, policymakers, academics, and civil society to combat deepfakes more effectively. Initiatives like the Deepfake Detection Challenge, spearheaded by Facebook, are designed to foster innovation and encourage the development of new detection technologies through open collaboration and competition. These partnerships not only amplify efforts to address the deepfake problem more comprehensively but also foster the development of standardised practices for detecting and reporting deepfakes, sharing best practices, and coordinating responses across borders.

Professor Dr Ong Kian Ming

 

Politicians and political parties must be proactive in managing the challenge of deepfakes. Steps include: (i) maintaining a consistent political and policy message, so that if a video surfaces of a politician saying something 'uncharacteristic', the public will immediately suspect that it is not genuine; (ii) being ready to counter any deepfake with clarification statements and video responses to limit the damage and virality; and (iii) working with the regulatory authorities and the main social media channels so that deepfakes can be reported and removed from these platforms quickly. Deepfakes cannot be prevented, but with strategic thinking and timely action, their damage can be significantly limited.


Professor Dr Ong Kian Ming

School of Law and Governance

Looking Forward

As we navigate the complexities introduced by deepfakes in political communication, the future presents both significant challenges and opportunities for the preservation of democratic values. The rapid advancement of AI technologies points to an increasingly sophisticated array of synthetic media, which promises exciting possibilities for enhancing public engagement and transparency but also raises concerns about potent misinformation campaigns that could undermine democratic processes.

 

The future of AI in political communication is at a critical crossroads between innovation and regulation. As these tools become more integrated into daily life and political processes, the challenge will be to harness them for positive ends while vigilantly guarding against abuses. The dual-edged nature of deepfakes becomes a pivotal focus in this discourse, underscoring the need for rigorous development of detection technologies, thoughtful legislation, comprehensive public education, and collaborative efforts.

 

As we continue to explore and understand the implications of deepfakes, our commitment to maintaining open, informed, and respectful discourse remains crucial. This commitment ensures that as the capabilities of AI grow, they are matched by our collective wisdom in guiding their use for the common good, securing a healthy democracy for future generations.

As we navigate the crossroads of innovation and regulation, the landscape of communication is rapidly transforming, especially with the advent of sophisticated AI tools. These changes are not just technological—they redefine how we engage with information and each other in our democratic societies. Whether you're passionate about political or communication studies, discover how we can empower you to become a key player in this pivotal movement. Book an appointment with us today to learn more.
