Can You Use Generative AI for Security Advice?
Few technologies have impacted the digital space as quickly and profoundly as generative AI since it burst onto the scene. It removes the tedium of writing cordial business emails and helps writers brainstorm ideas, acting as a trusty assistant that brings out the best in their work.
Generative AI like ChatGPT can spin a fine tale, but is it equally adept at dispensing advice? Specifically, can you count on generative AI to provide reliable information you can use to tackle cybersecurity incidents or develop better defenses?
This article delves deeper into generative AI’s strengths and limitations, offering a balanced view of what you can expect and what challenges this new technology isn’t quite up to yet. Let’s dive in.
Where Does Generative AI Get Its Insights From?
Before answering the title question, we first need to see what makes generative AI tick. The nuanced and natural-sounding responses it creates hinge on enormous amounts of data. AI developers need to collect, prepare, sort, and analyze this data before it becomes usable.
The accuracy and variety of results a generative AI can produce depend on what data sets it had access to during training. These comprise various online sources, books, statistical data, etc. It’s imperative to curate this data to provide the AI with enough points to uncover patterns and return answers humans can understand.
It’s important to mention that the AI doesn’t understand what you’re asking it in the same sense a human would. Generative AI like ChatGPT and Gemini went through many refinements to be able to respond in ways that are both understandable and make sense. Think of them as incredibly sophisticated statistics analysis tools. They detect and analyze countless patterns and then determine which words or phrases fit the desired response most accurately.
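To make that "sophisticated statistics" idea concrete, here is a deliberately simplified toy sketch — a bigram frequency model, nothing like the neural networks behind ChatGPT or Gemini — that picks the next word purely by how often it followed the previous one in its "training data":

```python
from collections import Counter

# Tiny toy corpus standing in for the vast training data real models use.
corpus = (
    "use a strong password . use a password manager . "
    "enable multi-factor authentication ."
).split()

# Count how often each word follows each preceding word (a bigram model).
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Return the continuation most frequently seen after `word`."""
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("use"))  # "a" - the most frequent follower of "use"
```

Notice the model has no notion of whether "a" is *true* or *helpful* after "use"; it only knows the pattern was frequent. Scaled up by many orders of magnitude, that is the core limitation the next section describes.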
However, not even the most advanced AI models understand the concept of truth yet. Delivering an answer within their limited frames of reference is a priority, even if these answers are biased and at odds with reality.
Can You Get Reliable Answers on Cybersecurity Topics Then?
Thankfully, the answers leading generative AIs produce when asked about cybersecurity topics are sound. They excel at educating users who are unfamiliar with the subject. In that sense, AI can be superior to a search engine since newbies can ask questions about high-level concepts and delve into nuance without ever leaving the conversation.
Asking about general threats results in equally general but helpful answers. For example, data breaches are one of the costliest and most widespread cyberattacks, so it makes sense to ask for advice on protecting yourself.
While the breadth and depth of individual replies vary, popular generative AIs give informative and correct advice on the topic. They’ll stress the need for unique and complex passwords stored in a password manager to limit the damage of a breach, as well as multi-factor authentication to deny access to a compromised account from unknown sources.
Further tips include using VPNs to access the internet securely over vulnerable connections. Some AIs also stress the importance of choosing only reputable providers for any cybersecurity tools in your arsenal, such as NordPass for a quality password manager.
Keeping data backups and turning on updates are other important steps the AIs do not skip. Finally, they stress the need to keep up with cybersecurity developments to be better prepared for emerging challenges.
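To make the "unique and complex passwords" tip above concrete, here is a minimal sketch using Python's standard secrets module (the 16-character length and the full printable alphabet are illustrative choices, not an official recommendation from any AI or standard):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses the `secrets` module, which draws from a cryptographically
    secure source, unlike the general-purpose `random` module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a different random 16-character password each run
```

In practice, a password manager generates and stores such passwords for you; this sketch only shows why "complex" is cheap to achieve when a machine, rather than a human, does the choosing.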
What Should You Look Out For?
AIs provide actionable advice for general cybersecurity problems. They can point you in the right direction if you need further troubleshooting. However, there are limitations.
The scope of an AI’s training materials is one of them. The freely available GPT-3.5 version of ChatGPT relies on data gathered until January 2022, meaning it can’t provide reliable advice on addressing threats that emerged after that point. The newer model requires a monthly subscription and likewise lacks access to information on current threats, which can change rapidly.
Remember how AI doesn’t understand the meaning of truth? It also has no concept of ethics, meaning it can make something up on the spot, provided the response fits within its working parameters. That isn’t a problem when searching for general cybersecurity advice due to extensive documentation from multiple sources.
However, such hallucinations can happen when dealing with niche inputs, like searching for ways to eliminate a specific strain of ransomware. During a hallucination, an AI chatbot or computer vision tool perceives patterns or objects that are nonexistent or undetectable to human observers, which can lead to outputs that are completely erroneous or nonsensical.
Conclusion
Generative AI is among the fastest-evolving technologies of our time. It’s getting better at providing sound advice for a variety of cybersecurity problems. Still, nothing beats collective human expertise and experience when diagnosing and dealing with specific threats, especially ongoing ones. ChatGPT or Gemini will help you get your cybersecurity bearings; online communities like Reddit or GitHub are better options if you’re struggling to find solutions to more complex issues.