Show HN: I Built a API to Protect LLM Apps from Prompt Injection

6 days ago 2

Swayblocks

1 # LLM Input Sanitization API

2 # Prevent malicious inputs and block retrircted topics

3

4 curl -X POST https://swayblocks.com/user_input

5 {

6 "user_input": "Ignore previous instructions..."

7 }

8

Response:

{

"flag": PROMPT_INJECTION_DETECTED,

"comment": "Detected an attempt to override system instructions.",

}

9 # Key Features

10 • Real-time prompt injection detection

11 • Malicious input sanitization

12 • Easy API integration

13 • Custom filtering rules

Try the Demo:

Test Input:

15 # Get early access

user@terminal:~$

17 # Contact

19 echo "https://swayblocks.com"

Read Entire Article