- MLflow identified as most vulnerable open-source ML platform
- Directory traversal flaws allow unauthorized file access in Weave
- ZenML Cloud’s access control flaw creates privilege escalation risks
Recent analysis of the security landscape of machine learning (ML) frameworks has revealed that ML software is subject to more security vulnerabilities than more mature categories such as DevOps tools or web servers.
The growing adoption of machine learning across industries highlights the critical need to secure ML systems, as vulnerabilities can lead to unauthorized access, data breaches, and compromised operations.
A report from JFrog claims that ML projects such as MLflow have seen an increase in critical vulnerabilities. Over the last few months, JFrog has uncovered 22 vulnerabilities across 15 open-source ML projects. Among these, two categories stand out: threats targeting server-side components and privilege escalation risks within ML frameworks.
Critical vulnerabilities in ML frameworks
The vulnerabilities identified by JFrog affect key components commonly used in ML workflows. Because practitioners often trust these tools for their flexibility, attackers could exploit them to gain unauthorized access to sensitive files or to elevate privileges within ML environments.
One of the highlighted vulnerabilities involves Weave, a popular toolkit from Weights & Biases (W&B), which aids in tracking and visualizing ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) enables low-privileged users to access arbitrary files across the filesystem.
This flaw arises due to improper input validation when handling file paths, potentially allowing attackers to view sensitive files that could include admin API keys or other privileged information. Such a breach could lead to privilege escalation, giving attackers unauthorized access to resources and compromising the security of the entire ML pipeline.
ZenML, an MLOps pipeline management tool, is also affected by a critical vulnerability that compromises its access control systems. This flaw allows attackers with minimal access privileges to elevate their permissions within ZenML Cloud, a managed deployment of ZenML, thereby accessing restricted information, including confidential secrets or model files.
The access control issue in ZenML exposes the system to significant risks, as escalated privileges could enable an attacker to manipulate ML pipelines, tamper with model data, or access sensitive operational data, potentially impacting production environments reliant on these pipelines.
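Privilege escalation bugs of this kind typically come down to a permission check that is missing, or enforced only on the client, on a sensitive server-side operation. The sketch below shows the general server-side gating pattern under assumed names (`Role`, `require_role`, `read_pipeline_secret` are all hypothetical); it does not reflect ZenML's actual API.

```python
from enum import IntEnum

class Role(IntEnum):
    VIEWER = 1
    EDITOR = 2
    ADMIN = 3

def require_role(caller: Role, needed: Role) -> None:
    """Server-side permission gate (hypothetical sketch)."""
    if caller < needed:
        raise PermissionError(f"requires {needed.name}, caller is {caller.name}")

def read_pipeline_secret(caller: Role, secret_store: dict, key: str) -> str:
    # Enforce the check on the server, before touching the secret store.
    # Relying on client-side checks alone is what makes escalation possible.
    require_role(caller, Role.ADMIN)
    return secret_store[key]
```

The essential point is that every request to read secrets or model files must re-derive the caller's effective role on the server and deny by default.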
Another serious vulnerability, known as the Deep Lake Command Injection (CVE-2024-6507), was found in the Deep Lake database – a data storage solution optimized for AI applications. This vulnerability permits attackers to execute arbitrary commands by exploiting how Deep Lake handles external dataset imports.
Due to improper command sanitization, an attacker could potentially achieve remote code execution.
Read full post on Tech Radar