Google Expands AI Efforts into Cybersecurity

Google is turning its artificial intelligence efforts toward cybersecurity. As the industry looks for practical uses of generative AI beyond creating fake photos, the company plans to use the technology to simplify threat reports and strengthen cyber defenses.

In a blog post, Google announced its new product, Google Threat Intelligence, which combines the expertise of its Mandiant cybersecurity unit and VirusTotal threat intelligence with the Gemini AI model. The product is built on the Gemini 1.5 Pro large language model (LLM), which Google says can significantly cut the time needed to reverse-engineer malware. For instance, Gemini 1.5 Pro, released in February, took only 34 seconds to analyze the code of the WannaCry ransomware, which crippled hospitals, businesses, and other organizations worldwide in 2017, and to identify its kill switch.

While that rapid analysis showcases the LLM’s ability to read and write code, Gemini can also summarize threat reports in natural language, giving companies easily digestible assessments of potential risks so they can respond proportionately, neither overreacting nor underreacting to a threat.

Google Threat Intelligence aims to give a broader view of the cybersecurity landscape, surfacing potential threats before an attack lands. It draws on a vast network of information to help users decide which threats to prioritize. Mandiant’s human experts track malicious groups and work with companies to head off attacks, while the VirusTotal community regularly shares threat indicators.

In 2022, Google acquired Mandiant, the cybersecurity firm that uncovered the 2020 SolarWinds cyberattack against the U.S. federal government. Google now plans to use Mandiant’s experts to assess security vulnerabilities around AI projects: through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and assist in red-teaming efforts. And while AI models can help identify and mitigate threats, they are themselves susceptible to attacks such as “data poisoning,” in which attackers slip bad data into the material a model ingests so that it can no longer respond correctly to certain prompts.

Google isn’t alone in merging AI and cybersecurity. Microsoft recently launched Copilot for Security, powered by GPT-4 and a cybersecurity-specific AI model, which allows professionals to query information about threats. While the effectiveness of these AI-driven cybersecurity tools is yet to be fully proven, it is encouraging to see AI being used for more practical applications beyond creating viral images like the “swaggy Pope.”

Source: The Verge
