Show HN: A tool that alerts when 404s silently return 200

3 months ago

Héber Júlio

One of the most overlooked changes in a web application is also one of the most dangerous: a page that used to return 404 silently coming back to life with a 200.

On the internet, it doesn’t matter whether you’re a site administrator, a security professional, or a manager: most people obsess over what’s already online. But what about what comes back online?

A previously dead endpoint (404) suddenly returning 200 may mean:

  • An old admin panel got re-enabled
  • A misconfigured deployment reintroduced a vulnerable route
  • A forgotten subdomain was reactivated
  • A staging environment went live again
  • A redirect misbehavior exposed sensitive paths

And the worst part? No one notices, unless you’re watching closely.

Let’s say you’re monitoring:

https://example.com/admin-old

You discovered it weeks ago, returning 404 — not found, not exploitable, so you discarded it.

Now, suddenly:

GET /admin-old HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html

The page is back. No announcement, no changelog. Maybe it was part of an old deployment script, or an internal mistake.

Now it looks like this:

<html>
<head><title>Login</title></head>
<body>
<form action="/admin-old/login" method="post">
<input type="text" name="user">
<input type="password" name="pass">
</form>
</body>
</html>

Congratulations, you just found a live login panel that shouldn’t exist! 😀

1. Make a List of 404s

Use a content-discovery tool like ffuf, gobuster, or dirsearch:

ffuf -u https://example.com/FUZZ -w common.txt -mc 404

Store all the paths that returned 404 in a list.
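To build that list automatically, ffuf can write its results as JSON, which jq then turns into a plain list of URLs. A minimal sketch, assuming ffuf’s `-o`/`-of json` output flags and a `results[].url` field in the JSON (check your ffuf version’s output format):

```shell
# Match only 404 responses and save the results as JSON
ffuf -u https://example.com/FUZZ -w common.txt -mc 404 -o ffuf.json -of json

# Extract the full URLs into the list we'll recheck later
jq -r '.results[].url' ffuf.json | sort -u > 404s.txt
```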

2. Recheck Periodically

Write a simple Bash/Python/Node.js script that loops through them and checks again weekly.

Example with bash + curl:

#!/bin/bash
# Re-check every previously-404 URL and flag any that now return 200
while read -r url; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [ "$code" = "200" ]; then
    echo "[+] $url is now LIVE"
  fi
done < 404s.txt
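If you want the script itself to ping you, Telegram’s Bot API has a `sendMessage` endpoint you can call with curl. A sketch assuming you’ve already created a bot via @BotFather and know your chat id (both values below are placeholders):

```shell
#!/bin/bash
# Placeholder credentials: replace with your own bot token and chat id
TOKEN="123456:ABC-example"
CHAT_ID="987654321"

notify() {
  # Telegram Bot API: POST /bot<token>/sendMessage with chat_id and text
  curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
       --data-urlencode "chat_id=${CHAT_ID}" \
       --data-urlencode "text=$1" > /dev/null
}

# Usage inside the recheck loop:
#   notify "[+] $url is now LIVE"
```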

But maintaining that manually gets painful fast. You can schedule the script with cron so it runs automatically from time to time. You might also want to do more than just detect the change: for example, automatically fire a specific payload as soon as the endpoint goes live.
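As a sketch, a crontab entry that runs the recheck script every Sunday at 03:00 and keeps a log (the script path is an assumption; adjust it to wherever you saved yours):

```shell
# m h dom mon dow  command
0 3 * * 0 /home/user/recon/check_404s.sh >> /home/user/recon/check_404s.log 2>&1
```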

Instead of writing dozens of scripts and running them on cron, there are tools that automate this entire lifecycle:

  • Track and save endpoints (even those returning 404)
  • Check periodically for changes in status
  • Alert you instantly via Telegram/Email
  • Show screenshots and full HTML when changes happen

That’s what I built ReconSnap for — originally to help me during recon and bug bounty sessions, now open to everyone. This happened to me a lot during security assessments with my team at work: on long-term engagements, the client would deploy random features that we didn’t notice.

Also, on HackerOne there are a lot of companies that use development subdomains with hidden routes that suddenly turn 200, 403, and so on.

Here’s what can happen if you miss these changes:

  • Forgotten admin panels go live again
  • Unauthenticated API endpoints resurface
  • Attackers re-enable paths as backdoors
  • Shadow deployments are reactivated
  • Hidden testing tools become public

Tracking surface area isn’t just about what’s online now. It’s about what comes back unexpectedly. And 404 → 200 isn’t the only transition worth watching:

| From | To | Risk |
| ---- | --- | --------------------------------------- |
| 403 | 200 | Access control broken |
| 301 | 200 | Redirect removed (can break auth flows) |
| 302 | 500 | Crashes on redirect |
| 404 | 403 | Existence revealed, access denied |
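Generalizing the 200-check above, the same loop can record every URL’s status code and diff it against the previous run, catching any of the transitions in the table. A sketch assuming the file names `urls.txt`, `current.txt`, and `previous.txt` (all my own choices):

```shell
#!/bin/bash
# Snapshot the status code of every monitored URL, then report any
# transition (e.g. 404 -> 200, 403 -> 200) versus the previous run.
: > current.txt
if [ -f urls.txt ]; then
  while read -r url; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
    echo "$url $code" >> current.txt
  done < urls.txt
fi
sort -o current.txt current.txt

# join needs both snapshots sorted; previous.txt was sorted by the last run
if [ -f previous.txt ]; then
  join previous.txt current.txt |
    awk '$2 != $3 { printf "[!] %s changed: %s -> %s\n", $1, $2, $3 }'
fi
mv current.txt previous.txt
```

Keeping the last snapshot around means you see every status flip, not just the ones you predicted in advance.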

Add the URL or Group of URLs to Monitor

Once you’re logged into ReconSnap:

  1. Click “New Group”
  2. Name it something like Old Endpoints
  3. Click “New Job” and add the full URL:
    Example: https://example.com/admin-old

Bulk imports are not currently supported, but the feature will eventually come. Free accounts can’t create new groups; reuse the one created by default, and the rest of the tutorial is the same.

Creating a new group and setting the group name

While configuring the job, the configuration might look something like this:

Explanation:

  • Similarity Level: We changed this to 90 because we only want to be alerted when the page no longer looks similar; higher values mean a lower chance of capturing the same page over and over again.
  • Job Repetition: We will check about once per hour, completing 24 checks per day. You can adjust this later depending on how fast the company deploys; if the alerts get noisy, raise the interval to avoid being spammed.

With the Advanced plan, you can create automated reactions — when a task changes from 404 → 200, you can instantly fire a custom payload or scanner. It’s limited for now, but it can come in handy if you want ReconSnap to test, say, an SQL injection (as in the picture) and review the result later.

Configuring a Telegram bot to receive the notifications

ReconSnap doesn’t alert you via email, since email vendors might flag the messages as spam, so Telegram is the best way to deliver new changes to you. Go to the settings inside your dashboard and copy the Telegram token that you’ll see on the left:

Click on “How to setup?” and open the Telegram bot link. Before typing anything in Telegram, copy the token and click Enable to allow ReconSnap to send notifications back to you.

In Telegram, just send the following:

/addtoken YOUR-TOKEN-HERE

YOUR-TOKEN-HERE is the token you copied earlier from your profile settings. Once this is done, ReconSnap is fully configured to alert you on every change.

You might want to upgrade to the Professional plan (it comes with a trial); use the code QQMLJKUP5J to lower the price so you can get the most benefit out of ReconSnap while saving money. Because, why not?

Let the old 404s surprise you — but on your terms.
