New Delhi, Nov 1: International psychological security has become a separate area of study after scientists examined the threats that the malicious use of AI can pose to society in general and to specific areas of human activity.
For the first time, a team of researchers from the Russian Presidential Academy of National Economy and Public Administration (RANEPA) and the Russian Foreign Ministry’s Diplomatic Academy (DA) has studied and classified the possible threats of artificial intelligence from the perspective of international psychological security (IPS).
Dangers of Malicious Use of AI
Much has been written on the threats that artificial intelligence (AI) can pose to humanity. Today, this topic is among the most discussed issues in scientific and technical development. Although so-called Strong AI, characterised by independent systems thinking and possibly self-awareness and willpower, is still far from reality, various upgraded versions of Narrow AI are now completing specific tasks that seemed impossible just a decade ago.
The positive uses of AI, in healthcare for instance, are already undoubtedly beneficial. But in the hands of terrorists or other criminal organisations, increasingly cheap and sophisticated AI technology could become more dangerous than nuclear weapons.
Scientists from different countries are now studying the threats that the malicious use of AI can pose to society in general or in specific areas of human activity, such as politics, the economy, military affairs and so forth. However, the threats posed directly to IPS have never been narrowed down to a separate area of study before.
Meanwhile, the prospect of AI being used to destabilise international relations through high-tech information and psychological warfare against people is clearly becoming a greater danger. The researchers proposed a new classification of threats from the malicious use of AI based on a range of criteria, including territorial coverage and the speed and form of propagation. Once applied, this classification can help scientists find ways to counter these threats and develop tools to respond to them. (UNI)