Malwarebytes survey reveals 81% of people are concerned about the security risks posed by ChatGPT and generative AI, while just 7% think the tools will improve internet security.

A new Malwarebytes survey has revealed that 81% of people are concerned about the security risks posed by ChatGPT and generative AI. The cybersecurity vendor collected a total of 1,449 responses to a survey in late May: 51% of those polled questioned whether AI tools can improve internet safety, 63% distrusted the information ChatGPT produces, and 52% wanted ChatGPT development paused so regulations can catch up. Just 7% of respondents agreed that ChatGPT and other AI tools will improve internet safety.

In March, a raft of tech luminaries signed a letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, to allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” The letter cited the “profound risks” posed by “AI systems with human-competitive intelligence.”

The potential security risks surrounding business use of generative AI are well documented, as are the vulnerabilities known to affect the large language model (LLM) applications it relies on. Meanwhile, malicious actors can use generative AI and LLMs to enhance their attacks. Despite this, the technology has legitimate use cases in cybersecurity: generative AI- and LLM-enhanced security threat detection and response is a prevalent trend in the cybersecurity market as vendors attempt to make their products smarter, quicker, and more concise.

ChatGPT, generative AI “not accurate or trustworthy”

In Malwarebytes’ survey, only 12% of respondents agreed with the statement, “The information produced by ChatGPT is accurate,” while 55% disagreed, a significant discrepancy, the vendor wrote.
Furthermore, only 10% agreed with the statement, “I trust the information produced by ChatGPT.”

A key concern about the output of generative AI platforms is the risk of “hallucination,” whereby machine learning models produce untruths. This becomes a serious issue for organizations if that content is relied upon heavily to make decisions, particularly those relating to threat detection and response. Rik Turner, a senior principal analyst for cybersecurity at Omdia, discussed this concept with CSO earlier this month. “LLMs are notorious for making things up,” he said. “If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?”