
Six ways artificial intelligence can make political campaigns more deceptive than ever

Campaigns are now rapidly adopting artificial intelligence to write and produce ads and to solicit donors. The results are impressive, but the technology can also be misused to target voters with personalized 'fake news' content

By David Clemson, Adjunct Professor, Grady College of Journalism and Mass Communication, University of Georgia

Political polarization. Illustration: depositphotos.com

Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, the Democratic candidate for the US presidency, John Kerry, aired an ad claiming that his Republican opponent, George W. Bush, "says that sending jobs abroad makes sense for America." Bush never said such a thing.

The next day, Bush responded with an ad stating that Kerry "supported raising taxes" more than 350 times. That, too, was a false claim. These days, the internet is awash with misleading political ads. The ads often masquerade as polls and carry misleading clickbait headlines.

Campaign fundraising solicitations are also rife with deception. An analysis of 317,366 political emails sent during the 2020 US election found that deception was the norm. For example, a campaign manipulates recipients into opening the emails by lying about the sender's identity, by using subject lines that trick the recipient into thinking the sender is replying to the donor, or by claiming the email is "not asking for money" when it is. Both Republicans and Democrats do it.

Campaigns are now rapidly adopting artificial intelligence to write and produce ads and to solicit donors. The results are impressive: Democratic campaigns found that fundraising letters written by AI were more effective than letters written by humans at producing personalized text that persuades recipients to click and donate.

A super PAC supporting Ron DeSantis featured an AI-generated imitation of Donald Trump's voice in this ad.

And AI has advantages for democracy, such as helping staff organize emails from constituents or helping government officials summarize testimony.

But there are concerns that AI will make politics more deceptive than ever.

Here are six things to look out for. I base this list on my own experiments testing the effects of political deception. I hope voters can learn what to expect and what to watch out for, and become more skeptical, ahead of the next US election.

Custom-made fake campaign promises

My research on the 2020 presidential election found that voters' choice between Biden and Trump was driven by their perceptions of which candidate "offers realistic solutions to problems" and "says out loud what I am thinking," based on 75 items in a survey. These are the two most important qualities a candidate needs in order to project a presidential image and win.

AI chatbots, such as OpenAI's ChatGPT, Microsoft's Bing Chat and Google's Bard, can be used by politicians to generate customized campaign promises that mislead voters and donors.

Currently, when people scroll through a news feed, the articles are logged in their computer history, which is tracked by sites such as Facebook. The user is labeled liberal or conservative, and also tagged as holding certain interests. Political campaigns can then place an ad in that person's feed in real time, with a tailored headline.

Campaigns can use AI to develop a repository of articles written in different styles and making different campaign promises. Campaigns could then embed an AI algorithm in the process – courtesy of automated prompts already put together by the campaign – to generate fake, custom-tailored campaign promises at the end of an ad posing as a news story or a donor solicitation.

ChatGPT, for example, could hypothetically be prompted to add material based on text from the most recent articles the voter has been reading online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments have shown that if a presidential candidate can match the tone of his word choices to a voter's preferences, the politician will seem more presidential and more trustworthy.

Exploiting the tendency to believe one another

Humans tend to automatically believe what they are told. They have what scholars call a "truth default." They even fall victim to lies that are implausible on their face.

In my experiments I found that people who are exposed to a presidential candidate's deceptive messaging believe the untrue statements. Given that text generated by ChatGPT can shift people's attitudes and opinions, it would be relatively easy for AI to exploit voters' truth default when bots stretch the limits of credibility with claims even more implausible than humans would imagine.

More lies, less accountability

Chatbots such as ChatGPT are prone to making things up that are factually inaccurate or totally nonsensical. AI can produce misinformation, delivering false statements and misleading ads. While even the most unscrupulous human campaign operative still has a modicum of accountability, AI has none. OpenAI acknowledges flaws in ChatGPT that lead it to provide biased information, disinformation and outright false information.

If campaigns distribute AI-generated messages without any human filter or moral compass, the lies could get worse and spiral out of control.

Coaxing voters to cheat on their candidate

A New York Times columnist had a lengthy conversation with Microsoft's Bing chatbot. Eventually, the bot tried to get him to leave his wife. "Sydney" told the reporter repeatedly, "I'm in love with you," and, "You're married, but you don't love your partner... you love me. ... Actually, you want to be with me."

Imagine millions of such encounters, but with a bot trying to convince voters to leave their candidate for another candidate.

AI chatbots can exhibit partisan bias. For example, they currently skew much more to the political left – holding liberal leanings, expressing 99% support for Biden – with far less diversity of opinion than the general population.

In 2024, Republicans and Democrats will have the opportunity to fine-tune models that inject political bias and even chat with voters to sway them.

Candidate image manipulation

AI can alter images. So-called "deepfake" videos and photos are common in politics, and they are highly advanced. Donald Trump has used AI to create a fake photo of himself down on one knee, praying.

Images can also be doctored with greater precision to influence voters more subtly. In my research, I have found that a speaker's appearance can be just as influential – and deceptive – as what someone actually says. My research also revealed that Trump was perceived as "presidential" in the 2020 election when voters thought he seemed "sincere." And getting people to think you "seem sincere" through your nonverbal appearance is a deceptive tactic that is more convincing than saying things that are actually true.

Using Trump as an example, let's say he wants voters to see him as sincere, trustworthy and likable. Certain alterable features of his appearance make him seem insincere, untrustworthy and unlikable: He bares his lower teeth when he speaks and rarely smiles, which makes him seem threatening.

The campaign could use AI to tweak a photo or video of Trump to make him appear smiling and friendly, which would make voters think he is more reassuring and a winner, and ultimately sincere and believable.

Evading blame

AI provides campaigns with added deniability when they mess up. Typically, if politicians get in trouble they blame their staff. If staffers get in trouble they blame the intern. If interns get in trouble, they can now blame ChatGPT.

A campaign might shrug off missteps by blaming an inanimate object notorious for making up lies. When the Ron DeSantis campaign tweeted deepfake photos of Trump hugging and kissing Anthony Fauci, staffers did not even acknowledge the misdeed, nor did they respond to reporters' requests for comment. No human needed to, it seemed, if a robot could hypothetically take the fall.

Not all of AI's contributions to politics are potentially harmful. AI can aid voters politically, for example by helping educate them about issues. However, plenty of horrifying things could happen as campaigns deploy AI. I hope these six points will help you prepare for, and avoid, deception in ads and donor solicitations.

For the original article in The Conversation

