
A new study warns that artificial intelligence must be better understood and managed to prevent it from being used to deepen social divisions

Artificial intelligence and algorithms have the ability and are currently being used to exacerbate radicalisation, increase polarisation, and spread racism and political instability, according to an academic researcher from Lancaster University.

A dangerous combination of artificial intelligence and cyber. Illustration: depositphotos.com

A new paper discusses the complex role of artificial intelligence in society, highlighting its potential both to benefit and to harm. It investigates the contribution of artificial intelligence to national security, its role in exacerbating social problems such as radicalisation and polarisation, and the importance of understanding and managing its risks.


Joe Burton, professor of international security at Lancaster University, argues that artificial intelligence and algorithms are more than just tools used by national security agencies to thwart malicious activity online. He suggests they can also fuel polarisation, radicalisation and political violence, thereby becoming a threat to national security in their own right.

Furthermore, he says, securitisation processes (presenting technology as an existential threat) have been crucial to how artificial intelligence has been designed and used, and to the harms it has produced.

Professor Burton's article was recently published in the journal Technology in Society.

"Artificial intelligence is often described as a tool to be used to counter violent extremism," says Professor Burton. "Here is the other side of the debate."

The article examines how AI has been portrayed throughout its history and in depictions in media and popular culture, and explores modern examples of AI having polarising and radicalising effects that have contributed to political violence.

Artificial intelligence in warfare and cyber

The article cites the classic film series The Terminator, which depicted a holocaust perpetrated by a 'sophisticated and malignant' artificial intelligence, as doing more than anything else to shape public awareness of artificial intelligence and the fear that machine consciousness could lead to disastrous consequences for humanity - in this case, nuclear war and a deliberate attempt to exterminate the human race.

"This lack of trust in machines, the fears associated with them, and their link to biological, nuclear and genetic threats to humanity have contributed to the desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to exploit its positive potential," writes Professor Burton.

Sophisticated UAVs, such as those used in the war in Ukraine, are now capable of full autonomy, including functions such as target detection and identification. (Editor's note: the IDF uses suicide drones in Gaza and Lebanon, but it is always humans who control the drones.)

And while there has been a broad and influential campaign, including at the United Nations, to ban 'killer robots' and keep humans in the loop when it comes to life-or-death decisions, the acceleration and integration of AI into armed UAVs has continued at a rapid pace, he says.

In cyber security - the security of computers and computer networks - artificial intelligence is used extensively, most commonly in the field of (dis)information and online psychological warfare.

The Putin government's interference in the US election process in 2016 and the ensuing Cambridge Analytica scandal showed the potential of artificial intelligence, combined with large amounts of data (including from social media), to create political effects aimed at polarising, encouraging extremist beliefs and manipulating identity groups. They demonstrated the power and potential of artificial intelligence to divide societies.

During the COVID-19 pandemic, artificial intelligence was seen as a positive tool for tracking and tracing the virus, but this also raised concerns regarding privacy and human rights.

The article examines artificial intelligence technology itself, arguing that problems exist in the design of artificial intelligence, the data it relies on, how it is used, and its outcomes and effects.

The article concludes with a strong message for researchers working in the fields of cyber security and international relations. "Artificial intelligence is certainly capable of changing societies for the better, but it also presents risks that must be better understood and managed," writes Professor Burton, an expert in cyber conflict and emerging technologies and part of the university's Science and Defense Initiative.

"Understanding the divisive effects of the technology at all stages of its development and use is clearly essential," Professor Burton explains. "Researchers working in cyber security and international relations have an opportunity to build these factors into the emerging AI research agenda and to avoid treating AI as a politically neutral technology."

"In other words, the security of artificial intelligence systems, and how they are used in international geopolitical struggles, should not override concerns about their social effects," the researcher concludes.

