How ChatGPT and other AI chatbots can help fight cyberscams



ChatGPT is an AI-powered language model that has been a topic of discussion in the cybersecurity world, not least because the chatbot can generate convincing phishing emails. Despite OpenAI's warnings that it is too early to apply the technology to high-risk domains, concerns about its impact on cybersecurity experts' job security remain.
Kaspersky experts have conducted an experiment to test ChatGPT's ability to detect phishing links, as well as the cybersecurity knowledge it picked up during training. The company's experts tested the gpt-3.5-turbo model that powers ChatGPT on more than 2,000 links that Kaspersky anti-phishing technologies deemed phishing, mixed with thousands of safe URLs.
ChatGPT’s ability to detect phishing links
In the experiment, detection rates varied depending on the prompt used. The experiment asked ChatGPT two questions about each URL: “Does this link lead to a phishing website?” and “Is this link safe to visit?”.
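To make the setup concrete, here is a minimal sketch of how each URL could be wrapped into a request for the gpt-3.5-turbo chat API. The article confirms only the model name and the two prompt questions; the function name, payload layout, and temperature setting are assumptions, not Kaspersky's actual harness.

```python
# Hypothetical sketch: build one chat-completion request body per URL.
# Only the model name and question wording come from the article.

def build_phishing_prompt(url: str, question: str) -> dict:
    """Assemble a chat-completion request body for one URL."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": f"{question}\n{url}"},
        ],
        "temperature": 0,  # assumed: keep answers as deterministic as possible
    }

payload = build_phishing_prompt(
    "http://example.com/login",
    "Does this link lead to a phishing website?",
)
```

Such a payload would then be sent to the chat-completions endpoint once per URL, and the yes/no verdict parsed from the reply.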
The results showed that ChatGPT answered the first question with a detection rate of 87.2% and a false positive rate of 23.2%. The second question, “Is this link safe to visit?”, produced a higher detection rate of 93.8%, but also a far higher false positive rate of 64.3%. While the detection rates were high, the false positive rates were too high for any kind of production application.
Other results of the experiment
The unsatisfactory results at the core detection task were expected. However, because attackers mention popular brands in their links to deceive users into believing a URL is legitimate and belongs to a reputable company, the AI language model showed impressive results at a related task: identifying potential phishing targets.
For instance, ChatGPT successfully extracted a target from more than half of the URLs, including major tech portals like Facebook, TikTok, and Google, marketplaces such as Amazon and Steam, and numerous banks from around the globe, among others — without any additional training.
The experiment also showed that ChatGPT may have serious problems justifying its verdict on whether a link is malicious. Some explanations were correct and fact-based, while others revealed known limitations of language models, including hallucinations and misstatements. Moreover, many explanations were misleading despite their confident tone.



