Security Operations 15 min read Published Apr 9, 2026 Updated Apr 9, 2026

Emergency Patch & Mitigation Playbook for Ivanti EPMM (CISA directive)

Step-by-step emergency playbook to detect, contain, patch, and validate Ivanti EPMM under a CISA directive. Practical checklists and timelines.

By CyberReplay Security Team

TL;DR: If your organization uses Ivanti EPMM and you received a CISA directive or advisory, treat any internet-facing or unpatched management servers as high-priority incidents. This playbook delivers a tested emergency sequence to contain exposure, inventory affected assets, apply vendor mitigations or patches, validate recovery, and communicate to stakeholders - with checklists and commands you can run now.

Problem and who this is for

If your organization runs Ivanti Endpoint Manager Mobile (EPMM) - historically MobileIron branded management servers - a CISA directive or published vulnerability advisory requires immediate operational response. This affects healthcare operators, nursing home IT, MSSP clients, and MSPs managing customer mobility infra.

This Ivanti EPMM emergency patch playbook focuses on immediate containment, patching, and validation steps you can run under a CISA directive or urgent vendor advisory. It is designed to be executable by a small incident response team with minimal downtime.

This playbook is for IT leaders, security operations teams, MSSPs, and incident responders who must take concrete actions under time pressure. It is not a vendor marketing document. It focuses on measurable mitigation steps, safe patch deployment, and compliance notifications.

See CyberReplay’s managed security options at Managed Security Service Provider and incident support at Incident Response & Help.

Quick answer - immediate first 60 minutes

  1. Isolate internet-facing EPMM consoles from public access - place them behind a management VPN or block external access at the perimeter.
  2. Inventory EPMM versions and public exposure; mark any servers with unknown or outdated versions as high priority.
  3. Apply vendor-recommended temporary mitigations while preparing to deploy the official patch.
  4. Begin detailed logging and forensic capture; preserve evidence for compliance.

Expected outcome: reduce direct external attack surface within 60 minutes and create a prioritized patch queue for full remediation within 24-72 hours, depending on scale.

When this matters - business risks and stakes

  • Operational downtime: compromised EPMM can allow attackers to push malicious profiles or compromise managed mobile endpoints, risking clinical devices, access to patient records, and operational continuity.
  • Compliance and reporting: CISA directives typically require documented mitigation and reporting. Failure to comply can escalate regulatory exposure.
  • Financial damage: remediation, legal, and patient notification costs can range from thousands to millions depending on incident scale. Early containment reduces blast radius and often reduces total recovery cost by an order of magnitude.

Audience: nursing home IT, healthcare CIOs, MSSPs that manage mobility platforms, incident response teams.

Definitions and scope

  • Ivanti EPMM: enterprise mobile device management platform used to enroll and manage iOS and Android endpoints. Some instances are internet-facing for device enrollment and admin access.
  • CISA directive: an advisory or binding directive from the U.S. Cybersecurity and Infrastructure Security Agency that may require prioritized mitigation or patching.
  • Exposure: any EPMM management console accessible from the public internet or from untrusted networks.

Step 0 - Triage intake and assignment

  • Assign a single incident lead and an executive sponsor.
  • Open a ticket in your incident tracking system with priority: P0 or P1.
  • Record the discovery time, discovery vector, and who notified you.

Checklist - triage intake

  • Incident lead assigned
  • Affected asset list placeholder created
  • CISA advisory or vendor advisory URL captured
  • Communications owner identified

Step 1 - Rapid inventory and exposure mapping

Goal: produce a prioritized list of EPMM instances and their exposure status in under 2 hours.

Actions

  • Query asset inventories and public DNS records for known EPMM hostnames.
  • Use passive discovery: check firewall logs, CDN, and web proxies for access to EPMM admin ports (typically 8443 or vendor-documented ports).
  • Run targeted network discovery from a secure jump host if permitted.

Commands - safe discovery examples

# Check a known host for open admin port (example)
nmap -sT -p 8443,443 --open epmm.example.com

# Quick HTTP health check for an admin endpoint
curl -k -I https://epmm.example.com:8443 | head

# Search logs for recent admin logins from external IPs (example syslog)
grep "epmm" /var/log/syslog | egrep "admin|login|auth" | tail -n 200

Prioritization rules

  • Priority 1: internet-facing console or admin access from untrusted networks.
  • Priority 2: internal-only consoles with out-of-date versions and remote access via VPN.
  • Priority 3: isolated test/dev instances.

Expected deliverable: prioritized CSV with columns: host, public_ip, exposure, EPMM_version, owner, mitigation_status.
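As a minimal sketch of producing that deliverable, the loop below builds the CSV from a simple space-delimited host list. The `hosts.txt` input file, its field layout, and the placeholder values (`unknown`, `pending`) are assumptions for illustration; adapt them to whatever your asset inventory actually exports.

```shell
# Hypothetical input: one "host exposure version owner" record per line.
cat > /tmp/hosts.txt <<'EOF'
epmm1.example.com public 11.8.0 netops
epmm2.example.com vpn 12.1.0 netops
EOF

# Emit the header row, then one CSV row per host.
{
  echo "host,public_ip,exposure,EPMM_version,owner,mitigation_status"
  while read -r host exposure version owner; do
    echo "${host},unknown,${exposure},${version},${owner},pending"
  done < /tmp/hosts.txt
} > /tmp/epmm-inventory.csv

cat /tmp/epmm-inventory.csv
```

Resolve `public_ip` from DNS or firewall data in a second pass, and update `mitigation_status` as containment steps complete.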

Step 2 - Containment playbook

Goal: make exposed EPMM consoles non-attackable while preserving ability to patch.

Immediate containment actions

  • Block external access at the firewall for admin ports. Replace public access with one of:
    • Management VPN to a secure subnet, or
    • Zero-trust access broker with MFA, or
    • Temporary IP allowlist restricted to known admin IPs.

Firewall rule example (network team)

  • Drop or deny inbound to TCP 8443 from 0.0.0.0/0 at edge, allow internal management subnet.
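One possible shape for that rule, shown here in nftables syntax as a sketch only: the table/chain names, interface assumptions, and the `10.20.0.0/24` management subnet are placeholders, and your edge device (often a dedicated firewall appliance) will have its own syntax.

```
# Placeholder values - adapt subnet and policy to your environment.
# Allow the internal management subnet, drop everyone else on TCP 8443.
table inet epmm_edge {
  chain inbound {
    type filter hook input priority 0; policy accept;
    ip saddr 10.20.0.0/24 tcp dport 8443 accept   # management subnet only
    tcp dport 8443 drop                           # everything else
  }
}
```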

If you cannot block external access immediately

  • Disable remote admin accounts that use weak authentication.
  • Enforce MFA on all admin accounts where possible.

Containment checklist

  • External access blocked for admin ports
  • Admin console accessible only from management VPN or allowlist
  • All privileged accounts reviewed and risky sessions terminated
  • Backup copies of current configs taken before any changes

Quantified outcome: blocking external admin access typically reduces attack surface by >90% for automated scanning and many exploitation attempts.

Step 3 - Temporary mitigations and hardening

Goal: apply vendor-provided mitigations and harden EPMM until a patch can be deployed.

Common vendor mitigations (general guidance)

  • Apply configuration toggles recommended in the Ivanti security advisory.
  • Disable features that accept remote code or profile pushes from untrusted sources.
  • Rotate all administrative passwords and API keys.

Example hardening actions

  • Disable SSH or other management services on the public interface.
  • Limit administrative portal access to TLS 1.2+ and remove weak ciphers.

Commands - configuration capture and backup

# Backup EPMM configuration directory (example path)
tar czf /tmp/epmm-config-backup-$(date +%F).tgz /opt/epmm/config

# Dump installed package list for record-keeping (example on Linux)
dpkg -l | egrep "ivanti|epmm|mobileiron" > /tmp/installed-epmm-packages.txt

Note: always take a pre-change backup. If patching fails, you will need a rollback path.

Step 4 - Patch deployment and verification

Goal: apply vendor patches safely and validate that fixes address the CISA directive.

Plan

  1. Confirm the vendor advisory and download official patches only from Ivanti’s verified site: Ivanti Security Advisories. Use the advisory and this playbook together when mapping timelines and verification steps.
  2. Test patch on a staging node matching production configuration. Use a canary group of 1-3 servers.
  3. Schedule patch windows for production with change control and rollback plan.
  4. Apply patch in small batches and monitor.

Patch deployment checklist

  • Vendor advisory corroborated and patch checksum verified
  • Staging test completed without regression for 24 hours
  • Backups of service configs and VM snapshots completed
  • Patch deployed to production batch 1 with health checks
  • Roll forward remaining batches after success
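Checksum verification from the checklist above can be sketched as follows. The file path is hypothetical, and `EXPECTED` here is a stand-in: in practice you copy the published hash from Ivanti's advisory page rather than computing it from the downloaded file.

```shell
# Stand-in patch file for illustration; use the real download in practice.
echo "patch payload" > /tmp/epmm-patch.bin

# In a real run, paste the hash published in the vendor advisory here.
EXPECTED=$(sha256sum /tmp/epmm-patch.bin | awk '{print $1}')

# Hash the file you actually downloaded and compare.
ACTUAL=$(sha256sum /tmp/epmm-patch.bin | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK - safe to stage"
else
  echo "checksum MISMATCH - do not deploy" >&2
fi
```

Record both hash values in the incident ticket so the verification step is auditable later.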

Patch verification commands

# Example: verify service is listening post-patch
ss -tulpn | egrep "8443|epmm"

# Example: check application logs for error lines
tail -n 200 /var/log/epmm/application.log | egrep "ERROR|WARN"

Target timelines

  • Small org (1-3 EPMM servers): test and patch in 24-48 hours.
  • Mid-size org (4-20): staged remediation in 48-72 hours.
  • Enterprise: phased rollout with automation; full remediation prioritized by exposure.

Step 5 - Forensic capture and monitoring

Goal: determine whether an exploitation occurred and monitor for post-patch anomalies.

Forensic collection priorities

  • Preserve system images or snapshots for any suspect server.
  • Collect relevant logs: application, webserver, system auth, firewall, and VPN logs covering the timeframe before discovery.
  • Export EPMM audit logs - device enrollment, admin logins, profile pushes, API access tokens.

Basic evidence capture example commands

# Export audit logs (example)
cp /var/log/epmm/audit.log /forensic/epmm-audit-$(date +%F).log

# Capture established connections involving the EPMM admin port
netstat -anp | egrep ":8443" | egrep "ESTABLISHED" > /forensic/epmm-connections-$(date +%F).txt
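To demonstrate evidence integrity later, record a SHA-256 manifest of everything you collect and re-verify it before handing files to investigators. The `/tmp/forensic` directory and sample file below are placeholders for your real evidence store.

```shell
# Placeholder evidence directory and sample file for illustration.
mkdir -p /tmp/forensic
echo "sample audit line" > /tmp/forensic/epmm-audit-sample.log

# Record a SHA-256 manifest of all collected log files.
( cd /tmp/forensic && sha256sum *.log > manifest.sha256 )

# Later, verify nothing has changed since collection.
( cd /tmp/forensic && sha256sum -c manifest.sha256 )
```

Store the manifest alongside the evidence (and a copy elsewhere) as part of chain-of-custody documentation.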

Monitoring recommendations

  • Increase logging level to capture suspicious API calls for 7-30 days depending on risk.
  • Add IDS/IPS signatures to detect known exploit attempts.
  • Use EDR to watch for lateral movement originating from managed endpoints.

Step 6 - Communications and compliance reporting

Goal: meet any CISA reporting obligations and keep stakeholders informed.

Immediate communications

  • Notify internal leadership and legal counsel.
  • If the CISA directive is binding, follow the timeline and reporting format in the directive.
  • Send a factual summary to regulators or customers as required by law.

Reporting checklist

  • Documented timeline from discovery to mitigation
  • Evidence collected and stored securely
  • Patch validation notes and health checks
  • Stakeholder notifications sent

Useful compliance references: CISA binding operational directives and incident reporting guidance at https://www.cisa.gov/binding-operational-directive and NIST incident response guidance https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final.

Operational checklists

Prioritized checklist to use during the emergency

  1. Immediate containment (0-1 hour)
  • Block external admin ports.
  • Disable risky admin accounts.
  • Snapshot or backup configs.
  2. Inventory and triage (1-3 hours)
  • Build prioritized host list.
  • Classify exposure.
  3. Mitigate (3-12 hours)
  • Apply temporary config changes.
  • Rotate credentials and revoke stale API keys.
  4. Patch (24-72 hours)
  • Test patch on staging.
  • Roll out in batches.
  5. Forensics and reporting (concurrent)
  • Collect logs and snapshots.
  • Prepare compliance reports.

Example scenario - single site nursing home chain (concrete)

Context: a nursing home operator runs one EPMM server that manages 120 staff phones and two tablets used for medication administration.

Action timeline

  • 0-30 minutes: edge firewall rule added to block public access. Admin access restricted to internal jump host.
  • 30-120 minutes: inventory confirms EPMM version is older than vendor recommended patch level. Backups taken.
  • 3-12 hours: temporary mitigation applied - API access restricted and admin sessions revoked. MFA enforced for admin accounts.
  • 24-48 hours: patch tested on a snapshot, then applied; health checks performed on device enrollment and profile push functions.

Outcome: management console inaccessible from the internet within 30 minutes, patch installed within 36 hours, no evidence of unauthorized profile pushes in audit logs. Estimated downtime to full remediation: 36 hours. Estimated reduction in risk of device compromise: high - attack surface removed for external attackers.

Proof points and objection handling

Objection 1 - “We cannot take the console offline; it will disrupt patient care.”

  • Answer: Block public access first rather than taking the service fully offline. Allow admin access only from a secure internal VPN or jump host. This reduces external threat while preserving internal operations.

Objection 2 - “Patching will break device enrollment and vendor integrations.”

  • Answer: Use a staging canary of 1-3 servers and test critical workflows (device check-in, profile push, certificate enrollment) for 24 hours before wide rollout. Maintain a rollback snapshot and a communication window with operators.

Objection 3 - “We lack in-house resources to perform forensics.”

  • Answer: MSSP or MDR providers can collect and preserve evidence within hours. CyberReplay incident services at Cybersecurity Services provide rapid assistance for containment and forensics.

Implementation proof: teams that isolate internet-facing admin consoles and enforce MFA typically block automated exploit scans and reduce immediate exposure by >90% in the first hour. Proper staged patching with backups reduces failed-deploy incidents to under 5% in practice.

Common mistakes

Common mistakes teams make during an emergency EPMM patch response include:

  • Failing to document the exact EPMM build and configuration before making changes. Without pre-change snapshots or config dumps, rollback is difficult.
  • Rushing to apply unverified hotfixes from unofficial sources instead of using Ivanti-signed patches and checksums.
  • Not blocking external admin access early. Leaving a public admin port open during triage increases the chance of active exploitation.
  • Skipping forensic capture. If you apply mitigations without collecting evidence you may lose the ability to prove whether the system was already compromised.
  • Using broad network blocks without a fallback for essential workflows. Create a minimal allowlist for operations staff rather than full shutdown when possible.

How to avoid these mistakes

  • Always take snapshots and export configs before changes.
  • Verify patches using vendor checksums and test on a canary.
  • Block public admin ports first, then harden and patch.
  • Capture logs and images before rolling any mitigations that would destroy forensic artifacts.
  • Communicate windows and fallbacks to clinical or business owners before implementing access restrictions.

FAQ

Q: If we apply mitigations, do we still need to patch?

A: Yes. Mitigations reduce immediate exposure but do not remove the underlying vulnerability. Patching is the only reliable long-term fix and is required for compliance with most CISA directives.

Q: How quickly should we notify regulators?

A: Follow the timelines in the CISA directive if it is binding. For other incidents, coordinate with legal and compliance teams; many sectors require notification within a defined window after confirming impact.

Q: Will a canary test reflect production behavior?

A: A properly mirrored canary will reflect most production behaviors. Ensure certificates, identity provider flows, and device enrollment profiles are identical for the canary to be meaningful.

Q: Who should perform forensic capture if we do not have in-house skills?

A: Engage an MSSP, MDR, or incident response provider with EMM experience. See Incident Response & Help for available options.

What to test after remediation (post-incident validation)

  • Confirm device check-in rates match baseline for 72 hours.
  • Verify no unexpected admin accounts were created or privileged API tokens were present.
  • Validate audit logs for no unauthorized profile pushes for a 30-day lookback.
  • Run external scans to show the admin console port is no longer exposed.

Verification commands

# Run from an external vantage point to confirm the admin port is no longer exposed
nmap -Pn -p 8443 epmm.example.com

# Check the audit log for profile push events
grep -i "profile push" /var/log/epmm/audit.log | tail -n 100

Recommended KPIs to track

  • Time to contain (target: under 60 minutes)
  • Time to patch for priority hosts (target: under 72 hours)
  • Number of failed patch rollbacks (target: 0-2%)
  • Number of unauthorized admin actions detected (target: 0)
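The first KPI can be computed directly from timestamps recorded in the incident ticket. A minimal sketch, assuming GNU `date` and hypothetical timestamp values:

```shell
# Hypothetical ISO-8601 timestamps from the incident ticket.
DISCOVERED="2026-04-09T14:00:00Z"
CONTAINED="2026-04-09T14:45:00Z"

# Convert to epoch seconds and report the delta in minutes.
t0=$(date -u -d "$DISCOVERED" +%s)
t1=$(date -u -d "$CONTAINED" +%s)
echo "time-to-contain: $(( (t1 - t0) / 60 )) minutes"
```

Tracking this per incident makes the "under 60 minutes" target measurable rather than aspirational.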

References

Note: all vendor patches referenced in this playbook should be downloaded only from the official Ivanti site or verified Ivanti distribution channels.

What should we do next?

Start an incident with a clearly defined scope and immediate containment action: block external admin access and rotate admin credentials. If you need external help for forensic capture or to accelerate patching across multiple customer environments, engage an MSSP/MDR or incident response provider with experience in EMM platforms.

Next steps and recommended contacts

If internal capacity is limited, line up external help early: identify an MSSP, MDR, or incident response provider with EMM experience before the next advisory lands.

How long will full remediation take?

Remediation timelines depend on environment complexity and exposure. Rough guide:

  • Small single-server deployment - 24-48 hours end-to-end when a patch is available and staging is minimal.
  • Multi-site or clustered deployments - 48-72 hours for staged rollout and validation.
  • Complex integrations with identity providers or custom APIs - plan for 72 hours plus additional validation windows.

These timelines assume you have timely vendor patches and can maintain short maintenance windows for testing.

Can we skip patching and rely on mitigations?

Temporary mitigations reduce risk but do not eliminate the underlying vulnerability. Treat mitigations as stopgap measures while scheduling and validating the official patch. Long-term reliance on mitigations increases residual risk and complicates compliance with CISA directives.

Will this playbook disrupt end users or medical device connectivity?

Containment steps aim to minimize user disruption by blocking external admin access rather than taking services offline. Still, staged testing may require short maintenance windows. For critical medical devices, test patches in a staging environment that mirrors device certificate and profile workflows before production rollout.

Get your free security assessment

If you want practical outcomes without trial-and-error, schedule your assessment and we will map your top risks, quickest wins, and a 30-day execution plan.

If you have an active advisory or CISA directive, mobilize your incident lead now: block public admin access, capture a minimal forensic snapshot, and start a prioritized patch plan. If capacity is limited or you want an external partner to manage containment, evidence collection, and patch orchestration, contact a provider with EMM experience such as CyberReplay at https://cyberreplay.com/cybersecurity-services/ to start an emergency response engagement.


Note: This playbook is tactical and practical. Use your documented change control and legal reporting processes when interacting with regulated patient data and healthcare systems.