Critical Flaw in AI Python Package Can Lead to System and Data Compromise

A critical vulnerability recently discovered in a Python package used by AI application developers can allow arbitrary code execution, putting systems and data at risk.

The issue, discovered by researcher Patrick Peng (aka retr0reg), is tracked as CVE-2024-34359 and has been dubbed Llama Drama. Cybersecurity firm Checkmarx on Thursday published a blog post describing the vulnerability and its impact.

CVE-2024-34359 involves Jinja2, a Python template rendering engine mainly used for generating HTML, and llama_cpp_python, a package used for integrating AI models with Python applications.

Llama_cpp_python uses Jinja2 to process model metadata, but failed to apply the sandboxing safeguards Jinja2 provides, enabling template injection attacks.

“The core issue arises from processing template data without proper security measures such as sandboxing, which Jinja2 supports but was not implemented in this instance,” Checkmarx explained.
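To illustrate the class of safeguard Checkmarx is referring to, the sketch below renders the same attacker-style template string twice: once with Jinja2's default Environment, which allows unrestricted attribute access, and once with Jinja2's sandboxed environment, which refuses it. The template is a generic template-injection probe used here only for illustration, not the payload from the report, and the snippet is independent of llama_cpp_python itself.

    from jinja2 import Environment
    from jinja2.exceptions import SecurityError
    from jinja2.sandbox import ImmutableSandboxedEnvironment

    # Attacker-controlled template text, standing in for metadata shipped with a model.
    malicious_template = "{{ ''.__class__.__mro__[1].__subclasses__() }}"

    # Default environment: dunder attribute lookups are allowed, so the template can
    # walk from a plain string literal to arbitrary Python classes, a stepping stone
    # to arbitrary code execution.
    print(Environment().from_string(malicious_template).render()[:80], "...")

    # Sandboxed environment: the same lookup is blocked at render time.
    try:
        ImmutableSandboxedEnvironment().from_string(malicious_template).render()
    except SecurityError as exc:
        print("Blocked by sandbox:", exc)

Run as-is, the first render prints a long list of Python classes reachable from the template, while the second raises a SecurityError, which is the kind of restriction the researchers say was missing.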

According to the security firm, the vulnerability can be exploited for arbitrary code execution on systems that use the affected Python package. The company found that more than 6,000 AI models hosted on the Hugging Face AI community that rely on llama_cpp_python and Jinja2 are impacted.

“Imagine downloading a seemingly harmless AI model from a trusted platform like Hugging Face, only to discover that it has opened a backdoor for attackers to control your system,” Checkmarx said.

The vulnerability has been patched with the release of llama_cpp_python 0.2.72.
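For developers who want to check whether a local installation predates the fix, the hedged sketch below reads the installed package version with Python's importlib.metadata and compares it against the patched 0.2.72 release; it assumes a plain numeric version string such as "0.2.72".

    from importlib.metadata import PackageNotFoundError, version

    PATCHED = (0, 2, 72)  # first llama_cpp_python release with the fix for CVE-2024-34359

    try:
        installed = version("llama-cpp-python")
    except PackageNotFoundError:
        print("llama-cpp-python is not installed in this environment")
    else:
        # Assumes a plain numeric version string, e.g. "0.2.72".
        parts = tuple(int(p) for p in installed.split(".")[:3])
        if parts < PATCHED:
            print(f"{installed} predates the fix; upgrade, e.g. "
                  "pip install --upgrade 'llama-cpp-python>=0.2.72'")
        else:
            print(f"{installed} includes the fix")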
