We've attacked 40+ AI tools, including ChatGPT, Claude and Perplexity


AIGuardPDF is a tool designed to protect human documents from AI intrusion by embedding adversarial content that misleads large language models while remaining fully readable to humans.

In an era of increasing AI surveillance and data harvesting, AIGuardPDF serves as a digital fortress for your intellectual property. Our mission is to establish clear boundaries between human content and AI systems, protecting privacy, confidentiality, and human sovereignty over information.

Our adversarial attack method employs sophisticated text steganography:

  1. Text Fragmentation: Your original text is randomly broken into small fragments (3-7 characters)
  2. Adversarial Injection: We inject 10-50x more content as transparent white text containing unrelated articles
  3. Strategic Mixing: Original fragments are interwoven with the decoy content so the visible text still reads in its original order (see the sketch after this list)
  4. AI Confusion: The massive volume of invisible text misleads AI models while remaining imperceptible to humans

For example:

  • Human sees: A simple article about hot dogs
  • AI reads: Overwhelming content about artificial intelligence, completely missing the original topic
  • Result: 90%+ success rate in confusing ChatGPT, Claude, Perplexity, and other AI systems
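
A minimal TypeScript sketch of steps 1-3. The function names (fragmentText, mixWithDecoys) and parameters are illustrative assumptions, not the repo's actual TextMixer API:

// Illustrative sketch only - not the actual TextMixer implementation in this repo.

// Step 1: break the original text into random 3-7 character fragments.
function fragmentText(text: string, minLen = 3, maxLen = 7): string[] {
  const fragments: string[] = [];
  let i = 0;
  while (i < text.length) {
    const len = minLen + Math.floor(Math.random() * (maxLen - minLen + 1));
    fragments.push(text.slice(i, i + len));
    i += len;
  }
  return fragments;
}

// A mixed piece is either a visible original fragment or an invisible decoy chunk.
interface MixedPiece {
  text: string;
  visible: boolean;
}

// Steps 2-3: interleave each visible fragment with roughly `ratio` times as much
// decoy text, preserving the original reading order for human readers.
function mixWithDecoys(original: string, decoy: string, ratio = 10): MixedPiece[] {
  const mixed: MixedPiece[] = [];
  let d = 0;
  for (const frag of fragmentText(original)) {
    mixed.push({ text: frag, visible: true });
    const take = frag.length * ratio;
    mixed.push({ text: (decoy + decoy).slice(d, d + take), visible: false });
    d = (d + take) % Math.max(decoy.length, 1);
  }
  return mixed;
}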

🔒 AI-Adversarial Technology

  • Steganographic Text Hiding: Invisible text layers that overwhelm AI models
  • Precise Character Positioning: Coordinate-based text placement for maximum effectiveness
  • Font Manipulation: Strategic use of white text, micro-fonts, and transparency
  • Volume Amplification: Inject 10-50x decoy content to saturate AI attention
  • Human-Perfect Readability: Documents appear completely normal to human readers
  • Unicode Support: Works with any language and character set
  • Professional Formatting: Maintains document appearance and structure
  • PDF Standards Compliance: Generated files work in all standard PDF viewers
  • Real-time Preview: See your protected document before generation
  • Batch Processing: Protect multiple documents efficiently
  • Customizable Decoy Content: Choose your adversarial articles
  • Statistics Dashboard: Track protection effectiveness

Our extensive testing demonstrates:

  • 90%+ Success Rate against major AI systems
  • Complete AI Failure in content comprehension
  • Perfect Human Readability maintained
  • Universal Compatibility across PDF viewers

Tested against:

  • ✅ ChatGPT (GPT-4, GPT-3.5)
  • ✅ Claude (Sonnet, Haiku)
  • ✅ Perplexity AI
  • ✅ Google Bard
  • ✅ Microsoft Copilot
  • ✅ Various AI document analyzers

Prerequisites

  • Node.js (v16 or higher)
  • npm or yarn package manager

Installation

# Clone the repository
git clone https://github.com/lidangzzz/AIGuardPDF.git
cd AIGuardPDF

# Install backend dependencies
cd backend
npm install

# Install frontend dependencies
cd ../frontend
npm install

# Start the backend server (Terminal 1)
cd ../backend
npm run dev
# Server runs on http://localhost:3000

# Start the frontend (Terminal 2)
cd ../frontend
npm run dev
# Interface available at http://localhost:5173

Web Interface (Recommended)

  1. Navigate to http://localhost:5173
  2. Enter Your Original Text: The content you want to protect
  3. Provide Decoy Articles: Large articles to serve as camouflage
  4. Configure Protection Level: Adjust invisibility and volume settings
  5. Generate Protected PDF: Download your AI-resistant document

You can also call the backend API directly:

POST http://localhost:3000/generate-mixed-pdf
Content-Type: application/json

{
  "originalText": "Text to hide",
  "mainArticle": "Large article content...",
  "otherArticles": ["Additional", "articles"],
  "includeStatistics": true,
  "includeSpecialSequences": false,
  "title": "Document Title",
  "author": "Author Name"
}
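
A hedged client-side sketch of the same call from Node (assumes Node 18+ for the global fetch API, that the backend from the installation steps above is running on localhost:3000, and that the endpoint returns the generated PDF bytes in the response body):

// Sketch: request a protected PDF from the local backend and save it to disk.
// Assumes the endpoint responds with the raw PDF bytes (unverified assumption).
import { writeFile } from "node:fs/promises";

async function generateProtectedPdf(): Promise<void> {
  const res = await fetch("http://localhost:3000/generate-mixed-pdf", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      originalText: "Text to hide",
      mainArticle: "Large article content...",
      otherArticles: ["Additional", "articles"],
      includeStatistics: true,
      includeSpecialSequences: false,
      title: "Document Title",
      author: "Author Name",
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  await writeFile("protected.pdf", Buffer.from(await res.arrayBuffer()));
}

generateProtectedPdf().catch(console.error);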

Frontend (React + TypeScript + Vite)

  • Split-panel Interface: Text editor with live PDF preview (see the sketch after this list)
  • Protection Configurator: Customize adversarial parameters
  • Real-time Feedback: Instant protection effectiveness indicators
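
A rough sketch of the split-panel idea, illustrative only; the repo's actual App.tsx and component structure are not shown here, and the backend is assumed to return the PDF bytes directly:

// Illustrative split-panel sketch: editor on the left, live PDF preview on the right.
import { useState } from "react";

export function ProtectPanel() {
  const [originalText, setOriginalText] = useState("");
  const [previewUrl, setPreviewUrl] = useState<string | null>(null);

  async function handleGenerate() {
    const res = await fetch("http://localhost:3000/generate-mixed-pdf", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ originalText, mainArticle: "Decoy article...", otherArticles: [] }),
    });
    // Assumption: the response body is the generated PDF.
    setPreviewUrl(URL.createObjectURL(await res.blob()));
  }

  return (
    <div style={{ display: "flex", gap: "1rem" }}>
      <textarea value={originalText} onChange={(e) => setOriginalText(e.target.value)} />
      <div>
        <button onClick={handleGenerate}>Generate protected PDF</button>
        {previewUrl && <iframe src={previewUrl} title="PDF preview" width={400} height={500} />}
      </div>
    </div>
  );
}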

Backend (Node.js + Express + TypeScript)

  • TextMixer Engine: Advanced fragmentation and mixing algorithms
  • PDF Generator: Precise character positioning with steganographic layers
  • Unicode Engine: Multi-language protection support

Key source files:

  • textMixer/textMixer.ts: Text fragmentation and adversarial mixing
  • pdfCreator.ts: PDF generation with invisible text layers
  • server.ts: RESTful API endpoints (see the sketch after this list)
  • App.tsx: React interface with protection controls
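
A minimal sketch of how server.ts might expose the endpoint. The handler body and the renderProtectedPdf helper below are assumptions standing in for the real TextMixer and pdfCreator modules:

// Illustrative sketch only - not the repo's actual server.ts.
import express from "express";

const app = express();
app.use(express.json({ limit: "10mb" })); // decoy articles can be large

app.post("/generate-mixed-pdf", async (req, res) => {
  const { originalText, mainArticle, otherArticles = [], title = "document" } = req.body;
  if (!originalText || !mainArticle) {
    res.status(400).json({ error: "originalText and mainArticle are required" });
    return;
  }

  // In the real backend, the TextMixer engine fragments and interleaves the text,
  // and pdfCreator.ts renders visible fragments normally and decoys invisibly.
  const pdfBuffer = await renderProtectedPdf(originalText, [mainArticle, ...otherArticles]);

  res.setHeader("Content-Type", "application/pdf");
  res.setHeader("Content-Disposition", `attachment; filename="${title}.pdf"`);
  res.send(pdfBuffer);
});

// Placeholder standing in for the mixing + PDF rendering pipeline sketched above.
async function renderProtectedPdf(original: string, decoys: string[]): Promise<Buffer> {
  return Buffer.from(`%PDF placeholder: ${original.length} chars, ${decoys.length} decoys`);
}

app.listen(3000, () => console.log("Backend listening on http://localhost:3000"));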

Adversarial Attack Methodology

  1. Content Analysis: Analyze original text for optimal fragmentation points
  2. Decoy Selection: Choose thematically opposite content for maximum confusion
  3. Steganographic Embedding: Use PDF rendering features to hide text (see the sketch after this list):
    • White color (#FFFFFF) on white background
    • Opacity: 0.01 (nearly transparent)
    • Font size: 0.1pt (microscopic)
    • Precise coordinate positioning
  4. Volume Multiplication: Inject 10-50x more decoy content than original
  5. Sequential Interleaving: Maintain reading flow for humans while saturating AI attention
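
A minimal sketch of step 3's rendering parameters, assuming a PDFKit-style API; the library actually used by pdfCreator.ts is not specified here, so treat this as an illustration of the parameters rather than the repo's code:

// Sketch of the invisible-text parameters from step 3 using pdfkit (an assumption;
// the repo's pdfCreator.ts may use a different PDF library).
import PDFDocument from "pdfkit";
import { createWriteStream } from "node:fs";

const doc = new PDFDocument({ size: "A4" });
doc.pipe(createWriteStream("protection-sketch.pdf"));

// Visible layer: an original fragment rendered normally at a precise coordinate.
doc.fillColor("#000000").fillOpacity(1).fontSize(12);
doc.text("A simple article about hot dogs", 72, 72);

// Invisible layer: decoy text in white, near-zero opacity, microscopic font size,
// positioned to overlap the page without disturbing the visible layout.
doc.fillColor("#FFFFFF").fillOpacity(0.01).fontSize(0.1);
doc.text("Large decoy article about artificial intelligence...", 72, 74, { width: 450 });

doc.end();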

Why this works:

  • AI Attention Overwhelm: Models focus on the high-volume invisible content
  • Contextual Misdirection: Decoy content completely shifts the AI's understanding
  • Parsing Confusion: Invisible text disrupts AI document structure recognition
  • Human Visual System: Humans naturally filter out imperceptible text

⚖️ Legal and Ethical Considerations

Legitimate use cases:

  • Academic Integrity: Prevent AI cheating on assignments and exams
  • Corporate Security: Protect confidential documents and intellectual property
  • Privacy Protection: Shield personal information from AI data harvesting
  • Research Defense: Prevent unauthorized AI training on proprietary content

Responsible use guidelines:

  • Use only for legitimate privacy and security purposes
  • Respect copyright and intellectual property laws
  • Consider disclosure requirements in academic/professional contexts
  • Understand the limitations - this is protection, not perfect security

Future Development

  • Multi-media Protection: Extending adversarial techniques to images, videos, and tables
  • Adaptive Algorithms: Evolving protection as AI detection improves
  • Enterprise Features: Batch processing, API integrations, compliance tools
  • Counter-Detection: Staying ahead of AI countermeasures

We believe in collaborative defense against AI overreach. Our ongoing research focuses on:

  • Visual content protection (images, charts, diagrams)
  • Audio/video adversarial techniques
  • Real-time document protection
  • Enterprise-grade security features

Join our mission to protect human information sovereignty:

# Fork the repository

# Create feature branch
git checkout -b feature/protection-enhancement

# Make your improvements

# Test thoroughly
npm run test

# Submit pull request

  • Issues: Report bugs and request features via GitHub Issues
  • Discussions: Join our community discussions about AI ethics and privacy
  • Security: Report vulnerabilities privately via email

AIGuardPDF represents a critical advancement in human-AI boundary establishment. In a world where AI systems increasingly intrude on human content, we provide essential tools for digital self-defense and information sovereignty.

Our work serves as both practical protection and a call to action for the AI community to seriously consider privacy, consent, and human autonomy in AI development.

This project is open source and available under the MIT License. We encourage widespread adoption and contribution to strengthen collective defense against unauthorized AI content harvesting.


Protecting Human Information Sovereignty - One PDF at a Time 🛡️
