After patch wave comes continuity: what CISA's “CI Fortify” advisory means for German mid-market organisations

Matte-black server rack with two separate sections: the upper section with active status LEDs, the lower section with neatly separated patch cables, in cool studio light. A metaphor for isolation and continuity as a planned dual discipline.

After the NCSC patch-wave warning in April, the next government-level call to prepare arrives, this time from the US. CISA puts two disciplines at the centre: isolation and recovery. What that means in practice, and why it's relevant even without KRITIS status.

The 90-second summary

With the “CI Fortify” initiative, CISA has published crisis-planning guidance for operators of critical infrastructure. The background is the sober assessment that nation-state actors have positioned themselves in critical infrastructure over the past years — with the option, in a crisis, of disrupting operational technology or taking out telecommunications. CISA puts two disciplines at the centre: isolation (proactive separation of third-party and business networks to protect OT systems) and recovery (continuing essential service delivery in a degraded communications environment, rather than complete outage). The goal isn't “no cyber incident,” it's “the incident doesn't stop essential operations.” For German mid-market organisations — KRITIS-regulated or not — it's a reminder that patch discipline is only half of resilience. The other half is the question: what keeps running when the network breaks?

What the dual discipline of isolation and recovery means in practice

What CISA is saying with “CI Fortify”

The US cyber-security agency is primarily addressing operators of water utilities, transportation, energy, and telecommunications. The position is unusually direct: nation-state actors have already established access in many critical infrastructures. The question is no longer whether a targeted cyber incident will come — but whether the affected organisation can keep its essential service delivery running anyway.

CISA puts two strategic planning objectives at the centre:

  • Isolation. Proactive separation of critical operational-technology systems from third-party and business networks. That means the OT layer must be able to keep running in a crisis without a connection to the rest of the internet — planned, documented, drilled.
  • Recovery. Essential service delivery in a degraded communications environment. If phone, internet, or cloud services fail, water supply still has to work, electricity transport still has to function, critical patient records still have to be reachable.

The goal isn't zero incidents. The goal is sustained essential service delivery despite the incident.

Why this isn't just a US KRITIS topic

Three bridges make this relevant for the German mid-market too.

First, KRITIS-Dachgesetz and NIS2. The German implementation now obliges a significantly expanded set of organisations to take comparable preparatory measures. Anyone classified as a KRITIS operator, “particularly important entity,” or “important entity” faces, at the core, exactly these two tasks: system isolation in an incident scenario, plus demonstrable continuity of essential services.

Second, supply-chain exposure. A mid-market organisation that isn't itself KRITIS but supplies a KRITIS organisation will be drawn into the same discipline through contract clauses over the coming quarters. Audit requirements, documented separation paths, crisis communication: these become standard inclusions in B2B contracts with critical sectors.

Third, your own business operations. Even without KRITIS status, the question “what keeps running if our ERP, our cloud, or our telco provider goes away for 72 hours?” is one every organisation should be able to answer today. The answer “then everything stops” is no longer a viable position commercially.

Concrete architecture implications

Isolation in practice means: OT networks aren't “somehow behind a firewall,” they're in documented zones with defined transitions. Data flow from the office network into the OT network is restricted, directed, and auditable. Third-party maintenance access is temporary, with short-lived credentials and an audit trail. The separation configuration is tested — not just theoretically documented but drilled at least once a year.
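The zone model above can be sketched as a deny-by-default transition table: every permitted flow is an explicit, auditable entry rather than an implicit default. The zone names, protocols, and the `MaintenanceGrant` helper below are hypothetical illustrations, not taken from the CISA advisory:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical zone model: traffic between zones is deny-by-default;
# only documented transitions with a named protocol are permitted.
ALLOWED_FLOWS = {
    ("office", "ot-dmz"): {"https"},    # office reaches the OT DMZ, nothing deeper
    ("ot-dmz", "ot-core"): {"opc-ua"},  # only the DMZ relays into the OT core
}

@dataclass
class MaintenanceGrant:
    """Temporary third-party access backed by a short-lived credential."""
    vendor: str
    zone: str
    expires_at: datetime

def flow_allowed(src: str, dst: str, protocol: str) -> bool:
    """True only for explicitly documented zone transitions."""
    return protocol in ALLOWED_FLOWS.get((src, dst), set())

def grant_valid(grant: MaintenanceGrant, now: datetime) -> bool:
    """Maintenance access expires automatically; no standing credentials."""
    return now < grant.expires_at
```

In a real deployment the firewall enforces this; the value of keeping the table as data is that the office-to-OT-core path simply has no entry, so it can be audited and drilled as a documented absence rather than an assumed one.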

Recovery in practice means: critical services have a definition of what “minimum-service” means. For that, prepared configurations exist that keep running without external cloud dependencies, without current software updates, and without internet connectivity. Backup data is stored separately from the production environment, ideally with at least one variant offline. The restart sequence is documented, and the team has worked through the plan once in an exercise.
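A minimum-service definition and a restart sequence can live in the same structure, which keeps the runbook testable instead of purely descriptive. The service names, dependencies, and the helper functions below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    minimum_service: str              # what "degraded but essential" means
    depends_on: list = field(default_factory=list)
    needs_internet: bool = False      # must it survive loss of connectivity?

SERVICES = [
    Service("auth", "local accounts only, no external SSO"),
    Service("records", "read-only access to critical records",
            depends_on=["auth"]),
    Service("portal", "customer-facing portal", depends_on=["records"],
            needs_internet=True),
]

def restart_order(services):
    """Order services so that every dependency starts first."""
    order, seen = [], set()
    by_name = {s.name: s for s in services}
    def visit(s):
        for dep in s.depends_on:
            visit(by_name[dep])
        if s.name not in seen:
            seen.add(s.name)
            order.append(s.name)
    for s in services:
        visit(s)
    return order

def offline_capable(services):
    """The subset that keeps running with no internet connectivity."""
    return [s.name for s in services if not s.needs_internet]
```

The drill then has a concrete pass/fail criterion: start the services in `restart_order` and confirm that everything in `offline_capable` actually delivers its declared minimum service with connectivity cut.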

Neither discipline is a tool question; both are a matter of practice. The tools you already have (firewalls, IAM, backup solutions) are often enough. What's usually missing is the documented configuration, the tested drill, and the clear escalation route.

How we handle this ourselves

A recommendation without our own practical experience is hollow advice. We use what we build.

In our own infrastructure, critical services — code repositories, build pipeline, customer-data storage — are organised in documented zones with defined transitions. Backups follow a 3-2-1 strategy (three copies, two media types, one off-site, of which one is offline). The build pipeline can build, sign, and roll out code from the repos without external API access — last tested in a quarterly drill in which we simulated cutting off an external cloud provider for 24 hours.
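The 3-2-1 rule is mechanical enough to check in code rather than in a spreadsheet. The copy list below is a hypothetical example of the shape such a setup takes, not our actual configuration:

```python
def satisfies_3_2_1(copies):
    """Check a backup set against the 3-2-1 rule:
    at least 3 copies, on 2 media types, 1 off-site
    (and, per the stricter variant used here, 1 offline)."""
    media = {c["media"] for c in copies}
    offsite = [c for c in copies if c["offsite"]]
    offline = [c for c in copies if c["offline"]]
    return (len(copies) >= 3 and len(media) >= 2
            and len(offsite) >= 1 and len(offline) >= 1)

# Hypothetical backup inventory
COPIES = [
    {"media": "disk", "offsite": False, "offline": False},  # production NAS
    {"media": "disk", "offsite": True,  "offline": False},  # replicated off-site
    {"media": "tape", "offsite": True,  "offline": True},   # offline tape
]
```

A check like this can run in CI against the real backup inventory, so a decommissioned tape rotation fails a pipeline instead of being discovered during an incident.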

The restart sequence for our critical customer stacks is in the repo as a runbook, with version status and last test date. The plan isn't spectacular — it's reliable.

When CISA published CI Fortify, it triggered no internal scramble for us. Not because we're heroes, but because the discipline was already built in.

Three depths of action for your stack

Short-term (this week):

  • Inventory: which of your business processes are indispensable for the basic service delivery to your customers? Which can be down for 24, 48, 72 hours, which can't? Write that list down, even roughly.
  • Map external dependencies: which cloud providers, SaaS tools, external APIs are critical in which processes? Where are the single points of failure?
  • Check backup status: when was the last successful restore exercise? If the answer is “I don't know,” that's the first task.
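The three short-term tasks boil down to a small inventory you can keep in version control: processes, tolerable downtime, and external dependencies. The process names, hour figures, and dependency labels below are invented placeholders:

```python
from collections import Counter

# Hypothetical inventory: process -> (max tolerable downtime in hours,
#                                     critical external dependencies)
PROCESSES = {
    "order intake": (24, ["saas-crm"]),
    "production":   (4,  ["erp", "licence-server"]),
    "invoicing":    (72, ["erp"]),
}

def single_points_of_failure(processes):
    """External dependencies that more than one process relies on."""
    counts = Counter(dep for _, deps in processes.values() for dep in deps)
    return sorted(d for d, n in counts.items() if n > 1)

def most_urgent(processes):
    """Processes ordered by how little downtime they tolerate."""
    return sorted(processes, key=lambda p: processes[p][0])
```

Even this rough a version answers the two questions the checklist asks: which process to protect first, and which shared dependency deserves a fallback plan before the others.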

Medium-term (next quarter):

  • Document or introduce network segmentation — critical systems in their own zones, with clear transitions, with auditable data flow.
  • Crisis communication plan: who decides what when normal communication (email, Slack, phone) fails? Which alternative channels are prepared?
  • First drill: simulate cutting off an external dependency and see how far the stack runs without it. Feed the lessons back into the plan.

Strategic (next year):

  • Take KRITIS / NIS2 compliance seriously — even if formal classification ends up borderline, the required measures often make sense without formal obligation.
  • Modernise supplier clauses: which continuity requirements do you write into B2B contracts yourself? Which must your suppliers meet?
  • Tabletop exercise with management and IT: walk through a shared crisis scenario once, instead of hoping it'll be improvised in a real incident.

Frequently asked questions on CISA CI Fortify and continuity discipline

When does external help make sense?

When an internal stress test shows that your continuity plans are more hope than plan, or when KRITIS / NIS2 classification is upcoming and the required measures aren't feasible internally in the available time. A three-week CI/CD security audit plus a continuity architecture review delivers an honest assessment of where you stand, with a concrete action catalogue. After that you decide whether to implement yourself or with support.

How realistic is the risk of a geopolitically motivated cyber incident?

More realistic than most mid-market organisations assume. CISA and the German security agencies have been publishing regular notices for years about state actors with access in Western critical infrastructure. The risk isn't “guaranteed to happen next week,” it's “will be tested several times over the next two to five years.” Preparation is the only lever organisations have in their own hands.

What's the difference from patch-wave discipline?

Patch wave is vulnerability management: closing software gaps quickly and systematically before they're exploited. CI Fortify is continuity management: if an incident happens anyway (through a zero-day, an insider, a geopolitical stress test), basic operations keep running. The two disciplines complement each other; neither replaces the other. Anyone who only patches isn't prepared for day X. Anyone who only does continuity effectively invites every incident.

We have backups — isn't that enough?

Backups are half of recovery. The other half is verified restore capability. When did you last run a full backup restore in a test environment? Who was involved, how long did it take, what didn't work? If those answers aren't readily at hand, the backups are objects of hope rather than instruments of resilience.

We're not a KRITIS operator — does this really concern us?

Directly from a regulatory standpoint, probably not. Indirectly, probably more than you think today. Supplier contracts with KRITIS organisations will include extended continuity requirements over the coming quarters. Cyber insurance increasingly demands documented disaster-recovery plans. And independently of that: a 72-hour outage of your critical business processes is an expensive matter even without regulatory pressure. The CISA discipline is good operations hygiene, KRITIS-regulated or not.

When your continuity plan is more hope than discipline

Patches alone don't protect you against every scenario. If you find while reading this that the question “what keeps running when the network breaks?” is currently open in your organisation, a 30-minute first call is the lowest-friction next step: no pitch, no sales funnel, an honest situation check.

Talk to us