Study Shows 70% of Security Teams Misuse AI – Do You, Too?
Techopedia, May 14, 2024
https://www.techopedia.com/study-shows-70-per-cent-of-security-teams-misuse-ai

Organizations, and especially cybersecurity and compliance teams, are rapidly discovering that adopting generative AI is unlike adopting any other technology: it presents unique risks and threats that are difficult to understand in depth.

As big AI companies continue to push performance, releasing more powerful versions one after another, the latest being OpenAI's rollout of GPT-4o, studies warn that a perfect artificial intelligence security storm is approaching.

Techopedia spoke to experts to understand what security professionals struggle with, how leaders should work to close this knowledge gap, and whether AI frameworks and laws help.

Key Takeaways