moar anti-AI measures

mi
2025-11-15 15:49:34 +10:00
parent 14415dfcd2
commit 418ba6e4ba
2 changed files with 68 additions and 25 deletions


@@ -26,7 +26,7 @@ A Flask-based webcomic website with server-side rendering using Jinja2 templates
 - [Common SEO Questions](#common-seo-questions)
 - [Content Protection & AI Scraping Prevention](#content-protection--ai-scraping-prevention)
   - [Protection Features](#protection-features)
-  - [Optional: Additional Protection Measures](#optional-additional-protection-measures)
+  - [Advanced: Image-Level Protection Tools](#advanced-image-level-protection-tools)
   - [Important Limitations](#important-limitations)
   - [Customizing Your Terms](#customizing-your-terms)
   - [Testing Your Protection](#testing-your-protection)
@@ -509,34 +509,59 @@ The Terms page is automatically linked in your footer and includes:
 - TDM rights reservation (EU Directive 2019/790 Article 4)
 - Clear permitted use guidelines
-### Optional: Additional Protection Measures
+#### HTTP Headers
+Sunday Comics automatically adds `X-Robots-Tag: noai, noimageai` headers to all responses for additional AI blocking enforcement.
-#### HTTP Headers (Advanced)
-For stronger enforcement, you can add HTTP headers. Add this to `app.py` after the imports:
+#### TDM Reservation File
+The `/tdmrep.json` endpoint formally reserves Text and Data Mining rights under EU Directive 2019/790, pointing to your Terms of Service.
-```python
-@app.after_request
-def add_ai_blocking_headers(response):
-    """Add headers to discourage AI scraping"""
-    response.headers['X-Robots-Tag'] = 'noai, noimageai'
-    return response
-```
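Since the commit makes this header automatic (per the new text above) rather than a manual addition, a quick way to confirm it on a running instance is to inspect any response. A minimal sketch using the `requests` library, assuming the Flask dev server's default address `http://localhost:5000`:

```python
import requests

# Any page should carry the automatic AI-blocking header.
resp = requests.get("http://localhost:5000/")
print(resp.headers.get("X-Robots-Tag"))  # expected: noai, noimageai
```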
+### Advanced: Image-Level Protection Tools
-#### TDM Reservation File (Advanced)
-Create a `/tdmrep.json` endpoint to formally reserve Text and Data Mining rights:
+For artists who want to protect their work at the image level, consider these specialized tools:
-```python
-@app.route('/tdmrep.json')
-def tdm_reservation():
-    """TDM (Text and Data Mining) reservation"""
-    from flask import jsonify
-    return jsonify({
-        "tdm": {
-            "reservation": 1,
-            "policy": f"{SITE_URL}/terms"
-        }
-    })
-```
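Similarly, the now built-in `/tdmrep.json` endpoint can be spot-checked against the JSON shape shown in the removed snippet above; a sketch under the same local-server assumption:

```python
import requests

# The reservation should point at the site's Terms of Service.
data = requests.get("http://localhost:5000/tdmrep.json").json()
assert data["tdm"]["reservation"] == 1
print(data["tdm"]["policy"])  # {SITE_URL}/terms
```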
+#### Glaze (Style Protection)
+**What it does:** Adds imperceptible changes to images that prevent AI models from accurately learning your artistic style.
+**Best for:**
+- Protecting your unique art style from being copied by AI
+- Making AI-generated imitations look wrong or distorted
+- Artists concerned about style mimicry (e.g., "draw like [artist name]" prompts)
+**How to use:**
+1. Download from [glaze.cs.uchicago.edu](https://glaze.cs.uchicago.edu)
+2. Process your comic images before uploading to your site
+3. The changes are invisible to humans but confuse AI models
+**Trade-offs:**
+- Processing time: Can take several minutes per image
+- Slight file size increase
+- Requires reprocessing all comics
+#### Nightshade (Data Poisoning)
+**What it does:** Makes images appear as something completely different to AI models while looking normal to humans.
+**Best for:**
+- Active defense against unauthorized AI training
+- Making scraped data actively harmful to AI models
+- Artists who want to fight back against scraping
+**How to use:**
+1. Download from [nightshade.cs.uchicago.edu](https://nightshade.cs.uchicago.edu)
+2. Process images before uploading (can combine with Glaze)
+3. AI models trained on these images will produce incorrect results
+**Trade-offs:**
+- More aggressive than Glaze (may violate some ToS)
+- Processing time similar to Glaze
+- Ongoing research tool, effectiveness may vary
+#### Recommendations
+- **Use Glaze if:** You want passive protection for your art style
+- **Use Nightshade if:** You want active defense and accept the risks
+- **Use both if:** Maximum protection is your priority
+- **Combine with Sunday Comics protections:** These tools complement the web-based protections (robots.txt, meta tags, etc.)
+**Note:** Both tools are free, open-source projects from the University of Chicago's SAND Lab, specifically designed to help artists protect their work from AI exploitation.
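For reference, the web-based protections mentioned in the recommendations above typically include robots.txt rules that disallow known AI crawlers. A minimal Flask sketch of such a route; the user-agent list is illustrative (these are documented AI-crawler agents) and not necessarily Sunday Comics' actual configuration:

```python
from flask import Flask, Response

app = Flask(__name__)

# Illustrative list of documented AI-crawler user agents; the project's
# real robots.txt rules may differ.
AI_BOTS = ["GPTBot", "CCBot", "Google-Extended", "ClaudeBot"]

@app.route("/robots.txt")
def robots():
    rules = "\n".join(f"User-agent: {bot}\nDisallow: /\n" for bot in AI_BOTS)
    return Response(rules, mimetype="text/plain")
```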
 ### Important Limitations