AI Dominates RSA as Excitement and Questions Surround Its Potential in Cybersecurity Tooling

Artificial intelligence (AI) tools were a hot topic at this year’s RSA Conference in San Francisco. The potential of generative AI in cybersecurity tooling has generated excitement among security professionals, but questions remain about its practical use and the trustworthiness of the data used to build AI models.

“We are in the first inning of AI’s impact,” MK Palmore, cybersecurity strategy advisor and director at Google Cloud and Cyversity, told Infosecurity.

“The company I work for is moving in a direction that shows we recognize the value and usage of AI and how it can positively impact the industry,” he added.

But, like many others, Palmore acknowledged that there is still much more to come in the development of AI.

“I have no doubt that everything will change and be affected. As always, as these large language models (LLMs) and AI evolve and become available, we will adapt to this new paradigm. To do that, we have to pivot,” he said.

Dan Lohrmann, Field CISO at Presidio, agreed that we are at the dawn of AI in cybersecurity.

“I think we’re at the beginning of the game, but I think it’s going to be a game-changer,” Lohrmann said. Speaking about the tools on the RSA show floor, he said AI will transform the majority of these products.

“I think the offense and defense will change, for example, how red teams and blue teams work,” he said.

However, he noted that there is still a long way to go in terms of streamlining the tools security teams use, pointing to several tools that already integrate AI.

Adding AI to Security Tools

Numerous companies at RSA 2023 highlighted how they are using generative AI in their security tools. For example, Google launched Sec-PaLM, a security-focused LLM that powers its new generative AI security tooling.

Sec-PaLM builds on Mandiant’s frontline intelligence on vulnerabilities, malware, threat indicators, and attacker behavioral profiles.

Read more: Google Cloud Brings Generative AI to Security Tools as LLMs Reach Critical Mass

Google Cloud’s Director of User Experience, Steph Hay, said that LLMs have finally reached a critical mass, becoming able to contextualize information in ways that weren’t possible before. “We now have truly generative AI,” she said.

Meanwhile, Mark Ryland, director of the Office of the CISO at Amazon Web Services, highlighted how generative AI can be used to improve threat detection.

“We are very focused on meaningful data and minimizing false positives. The only way to do this effectively is through machine learning, which is at the core of our security services,” he said.

The company recently announced Amazon Bedrock, a new service for building generative AI applications on AWS. Bedrock provides API access to foundation models (FMs) from AI21 Labs, Anthropic, Stability AI, and Amazon.

Additionally, Tenable announced a generative AI security tool designed specifically for the research community.

The announcement included a report on how generative AI is changing security research, exploring how LLMs can reduce complexity and achieve efficiencies in research areas such as reverse engineering, code debugging, web app security, and visibility into cloud-based tools.

The report notes that LLM tools such as ChatGPT are evolving at “breakneck speed.”

Regarding AI tools for cybersecurity platforms, Tenable CSO Bob Huber told Infosecurity: “I think what you can do with these tools is have your own database. For example, if you’re penetration testing something and your target is X, what vulnerabilities might it have? Usually that’s a manual process where you have to go and search for it. [AI] helps you accomplish these things faster.”

He added that he has seen some companies building on open-source LLMs, but noted that this requires guardrails because the data those LLMs are built on isn’t always validated or accurate. An LLM built on an organization’s own data, he pointed out, is much more reliable.

Huber said he is concerned about how hooking into publicly available LLMs such as GPT affects security. It is important for security practitioners to know the risks, but he noted that not enough time has passed for people to fully understand those risks when it comes to generative AI.

All of these tools aim to make defenders’ jobs easier, but Ismael Valenzuela, vice president of threat research and intelligence at BlackBerry, points out the limitations of generative AI.

“Like any tool, it can be used by defenders and attackers alike. But the best way to describe these generative AI tools is as assistants. It’s clear they can speed things up for both sides, but will they revolutionize everything? Probably not,” he said.

Additional reporting by James Coker
