
In Focus: How Does AI Threaten the Safety of the US Presidential Election?

Before the Indonesian elections on February 14th last year, a widely circulated video surfaced of the late Indonesian president Suharto defending the political party he once led. The fabricated video, created using artificial intelligence techniques to clone his face and voice, garnered over 4.7 million views on the X platform alone.

But this incident wasn't isolated. In Pakistan, fake content emerged of former Prime Minister Imran Khan announcing that his party would boycott the national elections. Meanwhile, in the United States, a fake voice call imitating President Joe Biden circulated, urging voters in New Hampshire not to vote in the state's presidential primaries.

Fabricated videos of leaders and politicians created with artificial intelligence have thus become increasingly common since the start of this year, which is expected to see the largest wave of elections in history. Reports indicate that at least 60 countries are holding elections this year, with more than four billion people expected to vote, making deepfake videos a significant concern in many of them.

How does AI threaten US presidential elections?

This raises several crucial questions: Can artificial intelligence truly enable various nations to manipulate American elections to advance their agendas? Could we witness interference more extensive than in 2016? And are there feasible solutions to counter these threats?

First, here are some statistics on the prevalence of deepfakes:

The use of deepfake technology in videos worldwide increased tenfold from 2022 to 2023. In the Asia-Pacific region specifically, deepfake videos surged by 1530% over the same period.

Furthermore, online media platforms, including news websites, live-streaming services, social media, and digital advertising, experienced the sharpest spike in identity-related fraud, up 274% between 2021 and 2023. Sectors such as professional services, healthcare, transportation, and video gaming were also targeted by identity-related fraud.

According to CrowdStrike's Global Threat Report for 2024, with the large number of elections slated for this year, it is highly probable that state-linked actors from China, Russia, and Iran will orchestrate disinformation campaigns to manipulate presidential elections in several countries to further their interests.

Second, will some countries use AI to manipulate U.S. elections?

Microsoft has warned that China is using artificial intelligence to try to manipulate the upcoming presidential elections in the United States, South Korea, and India, following a trial run during Taiwan's presidential election in January. The warning came in a report published by the Microsoft Threat Analysis Center (MTAC) in early April.

The Microsoft report indicated that China employed fake accounts on social media platforms to disseminate AI-generated content throughout 2023, attempting to influence the American public by stirring controversy on various topics. These included:
  • Alleging that the wildfires on Maui Island in August 2023 were deliberately ignited by the US government to test a weather warfare weapon.
  • Spreading conspiracy theories about the derailment of a train carrying molten sulfur in Kentucky in November 2023, accusing the US government of intentionally causing the incident and concealing information about it, linking the train derailment to the events of September 11 and theories of cover-up at Pearl Harbor.
  • Accusing the United States of intentionally poisoning water supplies in some countries to maintain water dominance, as part of a multilingual campaign primarily focused on the Japanese government's decision to release treated wastewater from the Fukushima nuclear plant into the Pacific Ocean.
Additionally, some campaigns exploited immigration policies and racial tensions in the United States. Microsoft experts noted in the report that the impact of China's AI-generated content on American public opinion was marginal, but they warned that this could change during the upcoming elections as China's attacks grow more sophisticated.

Third: The dangers of AI and deepfakes:

Numerous experts have raised alarms about the profound dangers posed by deepfakes, stressing that the primary threats may not originate from foreign entities or rival nations, but rather from actors within the affected countries themselves. These could include opposition parties or extremist groups from both the right and left wings, contaminating the information landscape and hindering people's ability to access accurate information or form informed opinions about specific parties or candidates.

Simon Chesterman, Singapore's inaugural director of AI governance, warned of the detrimental impact that the spread of fake content, especially regarding alleged scandals, could have on a candidate's reputation and their ability to attract voters. Despite governments possessing tools to combat misinformation, Chesterman highlighted the challenge they face in keeping up with the rapid dissemination of misleading information online.

Drawing attention to the swift propagation of fake content falsely attributed to singer Taylor Swift, Chesterman underscored the urgency of addressing the rapid spread of misinformation online, noting that efforts to counter this trend are often inadequate and delayed, diminishing their effectiveness.

Fourth, are there possible solutions to avoid AI threats to elections?

Twenty leading tech companies, including Google, Microsoft, Amazon, Meta, Adobe, IBM, and emerging AI company OpenAI, along with social media giants like Snapchat, TikTok, and X, have recently announced a joint agreement to combat the use of AI-driven deepfakes to manipulate democratic elections worldwide.

Some experts describe this agreement as a crucial first step, but its effectiveness will depend on execution, as tech companies will need a multifaceted approach to adopt different measures across their platforms.

Chesterman stated, "Determining the content permissible across social media platforms is a challenging task, and companies may take months to make decisions. Therefore, we should not rely solely on the good intentions of these companies, and regulations should be put in place, setting expectations for these companies."

In pursuit of this goal, companies have launched the Coalition for Content Provenance and Authenticity (C2PA), a global initiative aiming to establish an open standard for tracking the origin of online content and ensuring its accuracy. Viewers will be presented with verified information, such as content creators, creation time and location, and whether generative AI was used in creating the content.

Member companies of the C2PA include Adobe, Google, Microsoft, Intel, Sony, Arm, BBC, and Truepic. OpenAI announced in February that images generated by its DALL-E 3 AI tool would include C2PA metadata.
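To make the provenance idea concrete: in JPEG files, C2PA's metadata is embedded in JUMBF boxes carried inside APP11 marker segments. The following is a minimal, simplified sketch in Python (not the official C2PA toolchain; the function names are my own) that walks a JPEG's marker segments and checks whether any APP11 segment mentions the "c2pa" label. A real verifier would parse the full JUMBF structure and cryptographically validate the signed manifest; this only detects that provenance metadata appears to be present.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Return the payload of each APP11 (0xFFEB) segment in a JPEG stream."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; bail out of the header area
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            break  # start-of-scan: entropy-coded image data follows
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11, where JUMBF/C2PA boxes are carried
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments

def has_c2pa_hint(jpeg_bytes: bytes) -> bool:
    """Heuristic: does any APP11 segment contain the 'c2pa' JUMBF label?"""
    return any(b"c2pa" in seg for seg in find_app11_segments(jpeg_bytes))
```

An image that passes this check merely claims provenance; trusting the claim still requires validating the manifest's signature chain, which is the part of the standard that makes tampering detectable.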

Sam Altman, co-founder and CEO of OpenAI, emphasized the company's focus on ensuring its technologies are not used to manipulate the upcoming US presidential elections. He highlighted how its role differs from that of social media or news platforms, stating, "Our platform is used to create content, while other platforms are used to distribute it, so we need to work with them."

Ultimately, Chesterman notes that while technology is part of the solution, the larger and more critical part lies with users, who are still unprepared. He also emphasizes the importance of public education, stating, "We need to continue efforts to raise awareness and consciousness when the public receives information. Users need to be more vigilant; in addition to fact-checking information when something seems highly suspicious, users also need to verify important parts of information before sharing them with others."


By: Kar, online content writer and chartered accountant.