The image was deployed, then the IDS lit up. Why we audit every image.

A client image was rolled out; minutes later the IDS lit up. How we audit the software supply chain so things like this don't reach production any more.
It is one of the most expensive scenarios in modern IT, and at the same time one of the most common: a container image runs through the pipeline, goes live in the cluster, and a few minutes later the IDS lights up. Inside the container, traffic appears that no-one expected and that has nothing to do with the team's own code. What follows is rarely a spectacular crisis with overnight on-call. More often it's the quiet post-mortem in the days that follow: where did this come from, how did it get in, and why didn't the pipeline flag it earlier?
We see this pattern regularly in intake conversations with mid-market IT teams. It's almost always the result of a blind spot that looks harmless on its own but, in aggregate, makes the whole supply chain attackable: the team's own build output is checked thoroughly, while the code the pipeline pulls in from outside (base images, dependencies, sidecars, Helm charts) is taken as a given. With that, the trust boundary quietly shifts from your repository to dozens of third-party registries, hundreds of external maintainers and everything that happens inside their build pipelines.
Why this is no longer an isolated incident
Each of the layers named (base image, package dependency, sidecar, Helm chart) is a handover point at which third-party code enters your infrastructure. Attacks against these handover points are getting more professional, not more spectacular: typo-squatting in package managers, compromised maintainer accounts, manipulated post-install scripts, covertly inserted crypto miners in base images. None of these is a zero-day drama. Together they're the reason the IDS will, at some point, report something no-one requested.
What “auditing” means in practice
In the supply-chain world, auditing no longer means “antivirus scan on the artefact”. It means making every stage of provenance verifiable and checking it repeatedly on the way into production. Concretely, we're talking about four building blocks that work together in a mature DevSecOps pipeline:
Signature verification via Sigstore/cosign. Every image allowed into production carries a cryptographic signature tied to a known identity. No valid signature, no deployment. Period. The admission controller in the cluster enforces this rule without exception.
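To make that tangible, here is a minimal sketch of such a gate as a pipeline step that shells out to cosign, assuming keyless signing via an OIDC identity. The image reference and signer identity are placeholders, not values from a real setup:

```python
import subprocess
import sys

# Placeholders: replace with your registry path and the identity your CI signs with.
IMAGE = "registry.example.com/team/app:1.4.2"
IDENTITY = "https://github.com/example-org/app/.github/workflows/release.yml@refs/tags/v1.4.2"
ISSUER = "https://token.actions.githubusercontent.com"

# "No valid signature, no deployment": cosign exits non-zero if verification
# fails, and we fail the pipeline stage with it.
result = subprocess.run(
    ["cosign", "verify",
     "--certificate-identity", IDENTITY,
     "--certificate-oidc-issuer", ISSUER,
     IMAGE],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(f"signature verification failed for {IMAGE}:\n{result.stderr}", file=sys.stderr)
    sys.exit(1)
print(f"{IMAGE}: signature verified")
```

The same check runs a second time inside the cluster at admission, so bypassing the pipeline doesn't help an attacker.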
SBOM, Software Bill of Materials. Every build artefact carries a machine-readable list of its components. If a new CVE appears tomorrow, you know within minutes which of your running workloads are affected. Without an SBOM you'll know in days, or not at all.
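A sketch of what “within minutes” can look like, assuming syft as the SBOM generator and CycloneDX as the format; the image names and the affected package are placeholders:

```python
import json
import subprocess

# Placeholders: the images of your currently running workloads.
WORKLOADS = [
    "registry.example.com/team/app:1.4.2",
    "registry.example.com/team/worker:2.0.1",
]

def sbom_components(image: str) -> set[str]:
    """Generate a CycloneDX SBOM with syft and return its components as name@version."""
    out = subprocess.run(
        ["syft", image, "-o", "cyclonedx-json"],
        capture_output=True, text=True, check=True,
    ).stdout
    sbom = json.loads(out)
    return {f"{c['name']}@{c.get('version', '?')}" for c in sbom.get("components", [])}

# "A new CVE appears tomorrow": which running workloads ship the affected package?
affected_package = "openssl@3.0.7"  # placeholder for the package named in the advisory
for image in WORKLOADS:
    if affected_package in sbom_components(image):
        print(f"AFFECTED: {image}")
```

In a real setup the SBOMs are generated once at build time and stored, so this query runs against an inventory instead of re-scanning images on demand.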
SLSA provenance. The origin of an artefact is documented in machine-readable form: which commit, which runner, which parameters. That's the difference between “we believe the image comes from us” and “we can prove it comes from us”.
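That proof can itself be checked mechanically. A sketch using cosign's attestation verification, assuming provenance in the SLSA v0.2 predicate format (field names differ slightly in v1); image and identity are placeholders:

```python
import base64
import json
import subprocess

IMAGE = "registry.example.com/team/app:1.4.2"  # placeholder
IDENTITY = "https://github.com/example-org/app/.github/workflows/release.yml@refs/tags/v1.4.2"
ISSUER = "https://token.actions.githubusercontent.com"

# Verify the SLSA provenance attestation attached to the image.
out = subprocess.run(
    ["cosign", "verify-attestation", "--type", "slsaprovenance",
     "--certificate-identity", IDENTITY,
     "--certificate-oidc-issuer", ISSUER,
     IMAGE],
    capture_output=True, text=True, check=True,
).stdout

# cosign prints the verified DSSE envelopes as JSON; the in-toto statement
# sits base64-encoded in the "payload" field.
for line in out.splitlines():
    if not line.strip().startswith("{"):
        continue
    envelope = json.loads(line)
    statement = json.loads(base64.b64decode(envelope["payload"]))
    predicate = statement["predicate"]
    # "Which commit, which runner": both live in the predicate.
    print("builder:", predicate["builder"]["id"])
    print("source:", predicate.get("invocation", {}).get("configSource", {}))
```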
Continuous re-checking. An image that is clean today is potentially a CVE candidate in six weeks. Our pipeline therefore checks running workloads daily against updated vulnerability databases, not only at deploy time.
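Conceptually this is a small scheduled job, not a big system. A sketch, assuming trivy as the scanner and kubectl access to the cluster:

```python
import subprocess

def running_images() -> set[str]:
    """Collect the image references of all pods currently running in the cluster."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "--all-namespaces",
         "-o", "jsonpath={.items[*].spec.containers[*].image}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

# Re-scan every running image against today's vulnerability database.
# --exit-code 1 makes trivy exit non-zero on findings, which the job reports.
for image in sorted(running_images()):
    result = subprocess.run(
        ["trivy", "image", "--severity", "CRITICAL",
         "--ignore-unfixed", "--exit-code", "1", image],
    )
    if result.returncode != 0:
        print(f"new critical finding in running workload: {image}")
```

Run daily from a CronJob, this turns “clean at deploy time” into “still clean today”.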
Why this isn't optional
Five years ago the arguments for these measures were primarily technical. Today they are also regulatory. NIS2 requires essential and important entities to manage supply-chain risk in a verifiable way. The Cyber Resilience Act makes software vendors responsible for the security of their products throughout their lifecycle. In the end, both demand the same thing: you must be able to prove what is in your software and how it got there. Without signatures, SBOM and provenance, that proof cannot be produced, at least not in a form that holds up in front of an auditor.
How we set this up
In our DevSecOps as a Service operation, supply-chain security is not an add-on but part of the standard setup. We integrate cosign, SBOM generation and vulnerability scans into the existing pipeline, attach an admission controller to the cluster and build a simple dashboard that shows what is currently running and whether it is allowed to run there. That isn't revolutionary re-engineering; it's a few disciplined guardrails that you set up cleanly once and then enforce consistently.
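For the admission side, ready-made policy engines do the job; the decision logic itself is simple enough to show. A stripped-down sketch of a validating webhook that rejects pods with unsigned images; the identity values are placeholders, TLS and initContainers are omitted for brevity:

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholders: the identity your build pipeline signs with.
IDENTITY = "https://github.com/example-org/app/.github/workflows/release.yml@refs/heads/main"
ISSUER = "https://token.actions.githubusercontent.com"

def image_is_signed(image: str) -> bool:
    """True if cosign verifies a signature from the expected identity."""
    return subprocess.run(
        ["cosign", "verify",
         "--certificate-identity", IDENTITY,
         "--certificate-oidc-issuer", ISSUER,
         image],
        capture_output=True,
    ).returncode == 0

class AdmissionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The API server sends an AdmissionReview; we answer allow or deny.
        review = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        pod = review["request"]["object"]
        images = [c["image"] for c in pod["spec"]["containers"]]
        allowed = all(image_is_signed(img) for img in images)
        body = {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {"uid": review["request"]["uid"], "allowed": allowed},
        }
        if not allowed:
            body["response"]["status"] = {"message": "image signature missing or invalid"}
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# Plain HTTP for readability; the Kubernetes API server requires HTTPS,
# so a real webhook terminates TLS in front of this handler.
HTTPServer(("", 8443), AdmissionHandler).serve_forever()
```

In practice, policy engines such as Kyverno or the Sigstore policy-controller provide this gate off the shelf, so the sketch illustrates the logic, not a recommendation to build your own.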
And yes, on the first run the setup almost always finds something. An old base image, a forgotten sidecar, a long-deprecated dependency. That isn't a failure of your team; it's the result of an architecture in which no-one treated the supply chain as a product. As soon as it is one, it also gets maintained.
If something is on fire right now
If you're reading this because an IDS alarm has just gone off at your organisation, you don't need a consulting offer, you need immediate help. Kai Ole Hartwig, the founder of Moselwal, advises personally and 1:1 on acute security questions under his offering OnlyOle. That's the fast route to an experienced contact, without agency overhead and without a proposal phase. Afterwards, we set up a sound supply-chain pipeline structurally through Moselwal, in most cases within a few weeks.
![Office scene with three monitors, red warning accents and a window view of the Moselle in evening light](/fileadmin/_processed_/9/8/csm_1f9eb86ca04c63cb88f2e4f310316127e203cd729d7750ffad4cfad4bb076389_3bcd304b03.jpg)
Let's talk structurally
If you want to set your pipeline up so that the IDS has less to do in future, a sober conversation about your current state and the next two or three steps is worthwhile. 30 minutes, no pitch. We look at your build chain together and show you where the fastest wins are and what can responsibly come later.
Frequently asked questions
What clients ask us most often about supply-chain security — answered openly.
We only use official base images. Isn't that enough?
Official base images are better than random community builds, but they are not a guarantee. They still contain third-party dependencies, they are occasionally compromised, and they age — an image that is clean in January may have new CVEs by April. Signature verification, SBOM and continuous re-checking therefore apply to official images too, not only to external ones.
What does an image scan cost per deploy?
Measured in time: usually under a minute per build. Measured in licences: surprisingly little, because the mature tools (cosign, Trivy, Syft) are open source and only enterprise add-ons cost money. The bigger item is the one-off integration, not ongoing operation. We typically budget a five-figure setup effort and a significantly lower monthly running cost.
What happens if the scan trips — does that block our releases?
Yes, on hard findings. That's the point of a gate. At the same time we agree with you up front what counts as “hard” and what doesn't: critical CVEs without an available fix block the release. Medium findings with a defined remediation deadline pass through as warnings. Informational findings are logged but don't slow anything down. What matters is that these lines are drawn together before the first stop, not in the heat of a blocked release day.
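One way to encode those lines in the pipeline, assuming trivy as the scanner; the thresholds here are illustrative, the real ones come from that up-front agreement:

```python
import subprocess
import sys

IMAGE = sys.argv[1]  # e.g. registry.example.com/team/app:1.4.2 (placeholder)

def trivy_scan(severities: str, gate: bool) -> int:
    """Scan the image; with gate=True, findings make trivy exit non-zero."""
    return subprocess.run(
        ["trivy", "image",
         "--severity", severities,
         "--exit-code", "1" if gate else "0",
         IMAGE],
    ).returncode

# Medium and high findings: reported in the log, the release continues.
trivy_scan("MEDIUM,HIGH", gate=False)

# Critical findings: hard gate, the release stops here.
if trivy_scan("CRITICAL", gate=True) != 0:
    print("release blocked: critical findings", file=sys.stderr)
    sys.exit(1)
```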
Can we retrofit this without rebuilding the pipeline completely?
Yes. As a rule we don't rebuild, we hook in. cosign and Trivy integrate as additional pipeline stages into almost any CI environment. The admission controller in the cluster runs alongside existing deployments. The first results often appear after two or three days, the full coverage after two or three weeks — depending on how much unexpected legacy the first scan turns up.
How does this relate to NIS2 and CRA — is it the same?
NIS2 and CRA address the same area from different angles. NIS2 requires affected entities to have traceable risk management for their supply chain. The CRA requires product manufacturers to ensure security across the full lifecycle of their software, including updates. The technical building blocks — signatures, SBOM, provenance, re-checking — satisfy central requirements of both. Set this up cleanly once and you discharge both obligations in one motion.