34% Of Organisations Are Using Or Implementing AI Application Security Tools: Gartner

Businesses should consider an enterprise-wide strategy for AI trust, risk, and security management.

(Source: rawpixel.com/Freepik)

A survey by research and consulting firm Gartner found that 34% of organisations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the risks that accompany generative AI. Over half (56%) of respondents said they are also exploring such solutions.

The survey was conducted among 150 IT and information security leaders at organisations where generative AI or foundation models were in use, planned for use, or under exploration. Among the respondents, 26% said they are currently implementing or using privacy-enhancing technologies, 25% ModelOps, and 24% model monitoring.

“IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management),” said Avivah Litan, distinguished VP analyst at Gartner.

“AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and must be a continuous effort, not a one-off exercise to continuously protect an organisation,” she said.

Responsibility Of Securing Generative AI Falls On IT

While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organisation’s generative AI security and risk management efforts, only 24% said they own this responsibility.

Among the respondents who do not own the responsibility for generative AI security and/or risk management, 44% reported that it ultimately rested with IT. For 20% of respondents, the responsibility lay with their organisation’s governance, risk, and compliance departments.

Top-Of-Mind Risks

Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using generative AI.

  • 58% of the respondents are concerned about incorrect or biased outputs.

  • 57% of the respondents are concerned about leaked secrets in AI-generated code.

“Organisations that don’t manage AI risk will witness their models not performing as intended, which, in the worst case, can cause human or property damage. This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical, or biased outcomes. AI malperformance can also cause organisations to make poor business decisions,” said Litan.
