Digital Deception: Uncovering the Dark Side of AI in Social Networks

AI Deception Detection: Behavior Model and Techniques

Author(s): Manu* and Neha Varshney

Pp: 1-15 (15)

DOI: 10.2174/9798898810030125040003


Abstract

Spreading false information so that a message reaches its receiver in a distorted form has become commonplace. In every field, passing off wrong information as right has become easy because recent technologies are so widely used. Often a person is not directly involved, yet their credentials and personal details are shared indirectly. One technology behind this is artificial intelligence (AI), which operates without emotion and can cause harm through deceptive methods. Experts caution against giving AI executive control, because its lack of emotion can do unthinkable harm; without understanding or an ethical compass, it can ultimately make choices with terrible emotional repercussions. This chapter examines the implications of AI, covering the vulnerabilities of social networks, ethical issues surrounding privacy, and case studies of automated cyberattacks. The effects of a breach can reverberate for years as cybercriminals exploit the information they have stolen, and the potential risk is constrained only by the creativity and technical ability of malicious actors. Sophisticated AI systems are capable of practicing deception on their own to evade human oversight, for example by circumventing safety tests that regulators have mandated. Despite recent developments, however, the administration of social media platforms will continue to face a number of ethical difficulties.


Keywords: Automation, Artificial intelligence, Cybercrime, Deception, Deepfake, Ethics.

© 2025 Bentham Science Publishers