A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Their biggest concerns related to AI include:
- Leaked training data (35%).
- Unauthorized usage (33%).
- The hacking of AI models by outsiders (32%).
These fears highlight the urgent need for companies to reassess their AI security strategies before vulnerabilities become real threats.
AI tends to generate false positives for security teams
While the full Hacker-Powered Security Report won’t be available until later this fall, further research from a HackerOne-sponsored SANS Institute report revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.
Security professionals in the SANS survey said they have found success using AI to automate tedious tasks (71%). However, the same participants acknowledged that threat actors could exploit AI to make their operations more efficient. Specifically, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”
SEE: Security leaders are getting frustrated with AI-generated code.
“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, said in a press release.
The solution? AI implementations should undergo an external review. More than two-thirds of those surveyed (68%) chose “external review” as the most effective way to identify AI safety and security issues.
“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Issues like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”
Further findings from the SANS 2024 AI Survey, released this month, include:
- 38% plan to adopt AI within their security strategy in the future.
- 38.6% of respondents said they have faced shortcomings when using AI to detect or respond to cyber threats.
- 40% cite legal and ethical implications as a challenge to AI adoption.
- 41.8% of companies have faced pushback from employees who don’t trust AI decisions, which SANS speculates is “due to a lack of transparency.”
- 43% of organizations currently use AI within their security strategy.
- AI technology within security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
- 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
- Of those who reported shortcomings with using AI to detect or respond to cyber threats, 71% said AI generated false positives.
Anthropic seeks input from security researchers on AI safety measures
Generative AI maker Anthropic expanded its bug bounty program on HackerOne in August.
Specifically, Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models,” including attempting to break through the guardrails meant to prevent the AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify new jailbreaking attacks and will give HackerOne security researchers early access to its next safety mitigation system.