Apache HTTP/2 CVE-2026-23918: when two frames topple the webserver pool — what 2.4.67 actually changes

On 4 May 2026 the Apache Software Foundation disclosed a double-free in mod_http2: CVE-2026-23918, CVSS 8.8, triggerable with a single TCP connection and exactly two HTTP/2 frames — no authentication, no special headers, no specific URL. Patch in Apache 2.4.67. A working public PoC for RCE on x86_64 has been demonstrated.

What has changed? A DoS vector with minimal effort and a demonstrated, if non-trivial, RCE in a component that runs as reverse proxy or LAMP webserver in many TYPO3, Sylius, and Symfony stacks. Who is affected? Every Apache installation up to and including 2.4.66 with mod_http2 enabled — the default in practically every modern distribution config. What's on today's list? Deploy 2.4.67, or temporarily disable HTTP/2.

Two paper-thin frames overlap slightly on concrete, a small mirror of water beneath them; from the seam a red thread runs into the water; a brass loupe and three stamps frame the scene in cool northern light.

The 90-second summary

On 4 May 2026 the Apache Software Foundation published CVE-2026-23918 — a double-free in the stream cleanup path of mod_http2 (file h2_mplx.c). Trigger: a HEADERS frame immediately followed by a RST_STREAM with non-zero error code on the same stream. One TCP connection, two frames — the MPM worker crashes, every request pending on that connection is dropped, Apache respawns, the attacker repeats. DoS runs sustained with minimal traffic.

On x86_64 a working RCE proof-of-concept has been demonstrated: a crafted h2_stream structure is placed at the freed virtual address via mmap reuse, its pool cleanup function redirected to system(). Debian-packaged Apache and the official Apache httpd Docker images using the APR mmap allocator are the relevant RCE target. Patch: Apache 2.4.67. No KEV entry as of 10 May, but a public PoC is available.

What CVE-2026-23918 actually means — and how we work it for customer stacks

What the vulnerability is

CVE-2026-23918 is a double-free in the HTTP/2 stream cleanup path. The bug sits in h2_mplx.c, the central multiplexer file of mod_http2. When an HTTP/2 stream is terminated by RST_STREAM with a non-zero error code while the HEADERS are still in flight ("early reset"), the stream structure enters a state in which two different cleanup paths free the same pool allocations.

The DoS path is trivial: a TCP connect, a HEADERS frame, a RST_STREAM frame, the MPM worker terminates. Apache has a worker pool that can respawn, but all other requests held on the same worker are dropped in the process. The attacker repeats — the pattern produces sustained service outage with minimal traffic.
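To make "two frames" concrete, here is a minimal sketch of what that attacker traffic looks like on the wire — only the hex-encoded frame headers per the RFC 7540 layout (3-byte length, 1-byte type, 1-byte flags, 4-byte stream id), with no connection opened and no header block included. This is an illustration of the pattern, not a proof of concept.

```shell
# Illustrative only: the HEADERS + RST_STREAM trigger pattern as raw
# HTTP/2 frame headers in hex. Type 0x01 = HEADERS, 0x03 = RST_STREAM.

frame_header() {  # args: length type flags stream-id -> 9-byte header as hex
  printf '%06x%02x%02x%08x' "$1" "$2" "$3" "$4"
}

# HEADERS on stream 1 with END_HEADERS (0x04); header block omitted here
headers_frame=$(frame_header 0 1 4 1)

# RST_STREAM on the same stream; 4-byte payload = non-zero error code CANCEL (0x08)
rst_frame=$(frame_header 4 3 0 1)$(printf '%08x' 8)

echo "HEADERS:    $headers_frame"
echo "RST_STREAM: $rst_frame"
```

Twenty-two bytes of payload after the TLS and HTTP/2 connection preface — that is the entire per-iteration cost of the DoS loop the text describes.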

The RCE path is non-trivial but demonstrated. After the first free of the h2_stream structure, an attacker can place a crafted structure at the freed virtual address via mmap reuse, redirect its pool cleanup callback to system(), and trigger code execution on the second free. The precondition is a certain heap-layout predictability — which the APR default mmap allocator (in Debian packages and the official httpd Docker images) provides.

As of 10 May 2026 no active exploitation in the wild is documented. There's no CISA KEV entry. There's a working public PoC. The picture is that of a running vulnerability that has to be patched before someone automates it for mass scanning — not an escalation emergency.

Where Apache sits in German Mittelstand stacks

Apache shows up in three roles in the German Mittelstand:

  1. Classic LAMP stack for TYPO3, often with mod_php or php-fpm via mod_proxy_fcgi. Apache is the webserver receiving frontend requests — and therefore the direct target.
  2. Reverse proxy in front of Sylius/Symfony applications on php-fpm or Roadrunner. Apache handles TLS termination, HTTP/2 multiplexing, mod_rewrite rules, often authentication layers via mod_auth_*.
  3. Internal service frontend in containerized stacks, when teams stayed on Apache for historical reasons instead of moving to nginx or Caddy. Apache sits behind an external load balancer, but the vulnerability remains exploitable on the lateral path.

For all three roles: mod_http2 has been the default for years. If you haven't actively disabled HTTP/2, it's active.

What we concretely recommend

First — and immediately: deploy Apache 2.4.67. Debian stable, Ubuntu LTS, and the RHEL family have the patch in their update streams; the official httpd Docker image is available with tag 2.4.67. A reload via apachectl graceful is enough — a full stop isn't required because the patch sits in the HTTP/2 module, not in the worker lifecycle.

Second — if the patch jump doesn't fit this week (change window, test pipeline, customer signoff): temporarily disable HTTP/2. a2dismod http2 on Debian/Ubuntu, or reduce Protocols h2 h2c http/1.1 to Protocols http/1.1. The performance loss is measurable but acceptable for most German SME workloads. The attack vector disappears immediately.
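The Protocols reduction can be scripted for a fleet. A minimal sketch, assuming a Debian-style config layout — the `strip_h2` helper and the `/tmp/site.conf` file name are illustrative, not part of any Apache tooling:

```shell
# Rewrite any Protocols directive to HTTP/1.1 only, preserving indentation.
# Run against a copy first; file names here are illustrative.
strip_h2() {
  sed -E 's|^([[:space:]]*)Protocols[[:space:]].*|\1Protocols http/1.1|' "$1"
}

# Demo on a stand-in vhost fragment:
printf 'ServerName example.org\nProtocols h2 h2c http/1.1\n' > /tmp/site.conf
strip_h2 /tmp/site.conf
```

After editing the real vhost files, validate and reload: `apachectl configtest && systemctl reload apache2` — or skip the sed entirely and run `a2dismod http2` on Debian/Ubuntu, which unloads the module altogether.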

Third — structural: check whether your CI/CD rotates Apache image tags. We regularly see pipelines that reference the official httpd:2.4 image but never re-pull it. A :2.4.67 pin after successful testing, or a Wolfi-/Chainguard-based variant with nightly rebuild, is the robust answer. If you sit on :latest, you're patched within 24 hours — provided the pipeline actually pulls.
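A pipeline gate for this is a one-liner. A hedged sketch — the helper name `at_least_2_4_67` is ours, and in a real pipeline the candidate version would come from the image itself (e.g. `docker run --rm httpd:2.4 httpd -v`):

```shell
# CI gate: succeed only if the detected httpd version is >= 2.4.67.
# sort -V does the semantic version comparison.
at_least_2_4_67() {
  [ "$(printf '%s\n2.4.67\n' "$1" | sort -V | head -n1)" = "2.4.67" ]
}

# In a pipeline, wire in the real version, e.g.:
#   ver=$(docker run --rm httpd:2.4 httpd -v | sed -n 's|.*Apache/\([0-9.]*\).*|\1|p')
at_least_2_4_67 "2.4.66" && echo "2.4.66: patched" || echo "2.4.66: vulnerable"
at_least_2_4_67 "2.4.67" && echo "2.4.67: patched" || echo "2.4.67: vulnerable"
```

Failing the build on a vulnerable version is exactly the "rebuild trigger" the self-built-image paragraph below asks for.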

Fourth — detection. Apache logs don't necessarily show the DoS path because the worker crashes before writing the request line. What can be observed: anomalous crash frequency in the Apache error log ("Child ... terminated by signal"), an unusual cluster of RST_STREAM frames in HTTP/2 capture (if available), connections that send a RST immediately after a HEADERS frame. A simple alert on SIGSEGV respawns in the systemd journal catches the DoS path reliably.
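The crash-frequency signal reduces to counting signal-terminated workers per window. A hedged sketch — `crash_count` is our helper, the sample lines stand in for real `journalctl -u apache2` output, and the patterns assume the stock Apache "AH00052 ... exit signal" and systemd "terminated by signal" phrasings:

```shell
# Count worker terminations by signal in a slice of error-log/journal output.
# Adjust the patterns to your MPM and log format.
crash_count() {
  grep -cE 'exit signal|terminated by signal' "$1"
}

# Sample input standing in for: journalctl -u apache2 --since "-5 min"
cat > /tmp/journal.sample <<'EOF'
[core:notice] AH00052: child pid 4242 exit signal Segmentation fault (11)
[mpm_event:notice] AH00489: Apache/2.4.66 (Unix) configured
[core:notice] AH00052: child pid 4251 exit signal Segmentation fault (11)
EOF

crash_count /tmp/journal.sample
```

Feed the count into whatever alerting you already run; anything above the baseline respawn rate during a short window is worth a look.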

What we deliberately don't recommend

We don't recommend relying on an upstream WAF. CVE-2026-23918 is layer-7 specific and triggers at the HTTP/2 protocol level; a WAF inspecting HTTP/1.1 and forwarding to Apache only helps if it terminates HTTP/2 itself (Cloudflare, AWS CloudFront, an nginx in front). If you run such an architecture, Apache in the backend is somewhat less acute, but the patch remains operational obligation.

We equally don't recommend switching from Apache to nginx wholesale. The situation is manageable with a point-release jump; a migration is its own architectural decision, not a Patch Tuesday reflex. If you're migrating anyway for other reasons, you can use the timing — but the CVE alone isn't a sufficient migration reason.

Who is most affected

TYPO3 customers on classic LAMP stacks on Debian stable or RHEL family. Apache runs with mod_http2 as default, often behind a thin load balancer without its own HTTP/2 termination. Patch jump this week.

Sylius multi-tenant hosting with Apache as reverse proxy in front of multiple shop instances. The DoS path here doesn't hit just one shop but all tenants on the same Apache worker pool. Patch priority is correspondingly high.

SMEs with self-built container images based on httpd:2.4 with no automatic rebuild. These images often sit on the same layer hash for long stretches; without an explicit rebuild trigger the stack stays in the vulnerable state.

Conclusion

CVE-2026-23918 isn't a sensation, but it's one of the first running vulnerabilities since the early 2024 mod_http2 cluster that every Apache operator in the German Mittelstand should know. The patch is available, the mitigation is trivial, the public PoC is published. What's missing is the mass-scan wave — and experience says that arrives in weeks, not months.

The question isn't whether you can survive without running 2.4.67. It's whether your pipeline can pull this point-release jump within a week — or whether someone has to push it through manually today.

Personal context and technical detail on Apache patch discipline and Wolfi-/Chainguard-based image rotation: ole-hartwig.eu.

Frequently asked questions on CVE-2026-23918 and the mod_http2 patch status

We already terminate HTTP/2 in front of Apache (Cloudflare, nginx, ALB) — are we still affected?

Yes, but far less acutely. If Cloudflare, nginx or an AWS ALB terminates HTTP/2 in front and Apache only sees HTTP/1.1 from the reverse proxy, the DoS path and the RCE path are closed — the frames never reach mod_http2. But: the patch remains an operational duty because (a) architectures shift (e.g. when the CDN contract ends), (b) lateral paths can speak HTTP/2 (service mesh, K8s ingress behind the CDN), and (c) audit answers like "we have a vulnerable Apache but the CDN protects us" are not answers an insurer or NIS-2 supervisor accepts. Patch to 2.4.67 — the mitigation is reassurance, not a substitute.

We run on RHEL 9 / Debian stable — when does the backport land?

With a disclosure date of 4 May 2026, distribution backports are expected within the first week — the RHEL family typically within 5–7 days via httpd errata, Debian within 7–10 days via security.debian.org, Ubuntu within 3–5 days via ubuntu-security. Check daily with apt list --upgradable | grep apache2 or yum check-update httpd until the patch is in the stream. If you cannot wait or need the patch today: the official httpd:2.4.67 Docker image is ready and can be put in front of the actual backend as a reverse-proxy layer.

How do we detect a DoS attempt if the Apache error log logs nothing?

The worker crashes before writing the request line to access.log — that observation is correct. Three detection paths independent of Apache’s own log: (1) systemd journal for SIGSEGV respawns: journalctl -u apache2 -p err --since "1 hour ago" shows worker crashes with signal code. (2) a Falco or auditd rule that captures apache2/httpd process exits with non-zero signal. (3) a light Prometheus exporter setup (apache_exporter) that surfaces BusyWorkers/IdleWorkers oscillations — a sustained attack shows up as sustained respawning without matching request volumes at the frontend LB.

Does it pay to switch fully to nginx or Caddy now?

Not from this CVE alone. A point-release bump from 2.4.66 to 2.4.67 is a pure patch-level update without API breakage; a webserver migration is a 5–20 person-day project with its own risks (mod_rewrite rules, .htaccess strategy, TYPO3- and Sylius-specific config paths). If Apache as webserver is already on the architecture roadmap for migration, the CVE can be the trigger to set the timing. If not: patch 2.4.67, continue the routine, run the migration on its own planning logic.

We use the official httpd Docker image — how do we check that our tag is patched?

docker image inspect httpd:2.4 --format '{{.Config.Labels}}' shows the org.opencontainers.image.version label on current images. For direct verification: docker run --rm httpd:2.4 httpd -v prints the Apache version. If it reports 2.4.66, re-pull immediately with docker pull httpd:2.4, then restart the container. Caveat: if you reference the image in your own build (FROM httpd:2.4), the pull only takes effect if the pipeline also sets --pull always or rotates the image tag. Best practice: FROM httpd:2.4.67 as an explicit pin after successful testing — or a Wolfi/Chainguard-based apache-httpd image with nightly rebuilds.

How do we cleanly fall back to HTTP/1.1 via Protocols if we can’t patch until next week?

In the global server config or per VirtualHost: Protocols http/1.1. That removes h2 and h2c from ALPN negotiation, the TLS handshake negotiates HTTP/1.1, and the entire vulnerable path is shut off. On Debian/Ubuntu, additionally a2dismod http2 && systemctl reload apache2 — the most robust variant because the module is not loaded in the first place. Performance note: page loads with many small assets become 5–15 percent slower; for typical TYPO3 or Sylius pages with persistent HTTP/1.1 connections and CDN caching this is in the tolerance band. For a reverse proxy in front of an SPA it may become noticeable — then patch directly instead.

Before the public PoC turns into an automated wave — let's talk about your Apache pipeline.

We audit your Apache hosts against CVE-2026-23918 and pull 2.4.67 cleanly.

You give us access to your webserver hosts — we audit the Apache build state with SBOM inventory, deploy the Protocols fallback to HTTP/1.1 as a short-term stopgap mitigation, pull 2.4.67 in your maintenance window, validate with a controlled H2 frame test before and after each step, and hand back an audit-ready report.

This is the operational routine behind DevSecOps as a Service and the External IT Department — webserver operations, not advisory PDFs.

Book an appointment directly