Increasing Threat of Generative AI Technology

Picture a sharp surge in advanced persistent threats (APTs), malware campaigns, and organizational data breaches. An investigation into such a scenario reveals that many of these attacks are crafted by threat actors with access to generative AI.

This raises a question: who is to blame? The cybercriminals? The generative AI bots? The firms that develop these bots? Or the governments that fail to establish proper regulation and accountability?


Generative AI Technology


Generative AI is a form of artificial intelligence that helps users create text, images, audio, and other content from prompts or instructions given in natural language.


AI bots such as ChatGPT, Google Bard, and Perplexity are available to any online user who wishes to chat, generate human-like text and scripts, or even write complex code.


One problem these AI bots have in common, however, is their ability to produce offensive or harmful content based on user input, which may violate ethical standards, inflict harm, or even be illegal.




This is why chatbots ship with built-in safety mechanisms and content filters designed to restrict output that may be harmful or malicious. However, how effective are these preventative methods at controlling such content?
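To illustrate the general idea, the sketch below shows a minimal, hypothetical output filter: a generated reply is checked against a blocklist before being returned to the user. The pattern list, threshold, and function names are illustrative assumptions only, not how any particular chatbot vendor actually implements its safeguards.

```python
import re

# Illustrative blocklist and threshold only; production systems rely on
# trained moderation classifiers and layered policy checks, not keyword lists.
BLOCKED_PATTERNS = [
    r"\bransomware\b",
    r"\bkeylogger\b",
    r"\bcredential harvest",
]
RISK_THRESHOLD = 1  # block if at least one pattern matches


def filter_output(generated_text: str) -> str:
    """Return the model's reply, or a refusal if it trips the filter."""
    matches = [
        p for p in BLOCKED_PATTERNS
        if re.search(p, generated_text, re.IGNORECASE)
    ]
    if len(matches) >= RISK_THRESHOLD:
        return "Sorry, I can't help with that request."
    return generated_text


if __name__ == "__main__":
    print(filter_output("Here is a recipe for banana bread."))    # passes through
    print(filter_output("Step 1: compile the keylogger payload"))  # refused
```

A filter this crude is easy to evade with paraphrasing or obfuscated prompts, which is exactly why the effectiveness of such safeguards against determined threat actors remains an open question.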

