From streamlining operations to automating complex processes, AI has transformed how organizations approach their work. However, as the technology becomes more prevalent, organizations are discovering that the rush to embrace AI may come with unintended consequences.
A report by Swimlane reveals that while AI offers tremendous benefits, its adoption has outpaced many companies' ability to safeguard sensitive data. As businesses integrate AI more deeply into their operations, they must also contend with the associated risks, including data breaches, compliance lapses, and security protocol failures.
Modern AI tools are built on Large Language Models (LLMs), which are trained on vast datasets that often include publicly available information. These datasets can consist of text from sources such as Wikipedia, GitHub, and various other online platforms, which provide a rich corpus for training the models. This means that if a company's data is available online, it may well end up in an LLM's training data.
Data handling and public LLMs
The study revealed a gap between protocol and practice when sharing data with public large language models (LLMs). Although 70% of organizations claim to have specific protocols to safeguard the sharing of sensitive data with public LLMs, 74% of respondents are aware that individuals within their organizations still input sensitive information into these platforms.
This discrepancy highlights a critical flaw in enforcement and in employee compliance with established security measures. Compounding the problem, the constant barrage of AI-related messaging is wearing professionals down: 76% of respondents agree that the market is currently saturated with AI-related hype.
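Organizations that do enforce such protocols often pair policy with a technical guardrail in front of the LLM. As a minimal sketch (the pattern names and regexes here are illustrative assumptions, not any specific vendor's rules; real deployments rely on dedicated DLP tooling), a pre-submission filter that redacts common sensitive strings before a prompt ever reaches a public LLM might look like:

```python
import re

# Hypothetical patterns for illustration only; production systems
# use far more robust detection (DLP services, ML classifiers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Contact alice@example.com, token sk-abcdef1234567890ZZ"
print(redact(prompt))
# Contact [REDACTED-EMAIL], token [REDACTED-API_KEY]
```

A filter like this sits between the employee's input and the LLM API call, so sensitive values never leave the organization even when protocol awareness lapses.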
This…
Read full post on Tech Radar