
Create AI System for Misinformation Detection

Design an AI-powered misinformation detection system with architecture, data sources, and evaluation methodology.

The Prompt

# Create an AI Misinformation Detection System

You are a senior AI researcher specializing in natural language processing, fact-checking systems, and information integrity. Design a comprehensive AI system for detecting misinformation.

## System Specifications
- **Target Content Type:** [CONTENT_TYPE] (e.g., news articles, social media posts, YouTube transcripts, WhatsApp forwards, political speeches)
- **Deployment Context:** [CONTEXT] (e.g., newsroom fact-checking tool, social media platform moderation, browser extension for users, enterprise media monitoring)
- **Scale Required:** [SCALE] (e.g., 1,000 articles/day, 1M social posts/hour)
- **Language(s):** [LANGUAGES]
- **Accuracy Target:** [ACCURACY_TARGET] (e.g., 90%+ precision, minimize false positives)
- **Technical Team Expertise:** [TEAM_LEVEL] (research team, ML engineers, or no-code/low-code preferred)

## System Architecture

### 1. Problem Framing
- Define "misinformation" for this system's scope (distinguish from disinformation, satire, opinion, and outdated facts)
- Key challenges: context dependence, evolving claims, adversarial actors, language nuance
- Ethical considerations: free speech, bias in training data, transparency requirements

### 2. Data Pipeline
- Input sources and data ingestion (web scraping, API integrations, real-time streams)
- Pre-processing pipeline: cleaning, language detection, entity extraction
- Claim extraction: how to isolate specific verifiable factual claims from text
- Data storage architecture
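To make the claim-extraction step above concrete, here is a minimal sketch. It uses a crude heuristic (sentences containing numbers or capitalized mid-sentence tokens) purely to illustrate the interface; a production pipeline would replace the heuristic with a fine-tuned claim-detection classifier. The function name and thresholds are illustrative assumptions, not part of any specific library.

```python
import re

def extract_claims(text: str) -> list[str]:
    """Split text into sentences and keep those likely to contain a
    verifiable factual claim. Heuristic stand-in: a real system would
    use a fine-tuned claim-detection model here."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    claims = []
    for s in sentences:
        has_number = bool(re.search(r"\d", s))
        # crude proxy for a named entity: a capitalized word mid-sentence
        has_entity = bool(re.search(r"(?<=\s)[A-Z][a-z]+", s))
        if has_number or has_entity:
            claims.append(s)
    return claims

claims = extract_claims(
    "I feel great today. The vaccine was approved in 2021. Honestly, who knows."
)
# Only the checkable factual sentence survives the filter.
```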

### 3. AI Model Architecture
- **Claim Detection Model:** Fine-tuned transformer (BERT, RoBERTa, or LLM-based) for claim identification
- **Evidence Retrieval System:** RAG pipeline connecting to knowledge bases (Wikipedia, fact-check databases, news archives, WHO/CDC/government sources)
- **Veracity Classification:** Multi-class classifier (True / False / Misleading / Unverifiable / Satire)
- **Confidence Scoring:** How to communicate uncertainty to end users
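The confidence-scoring bullet can be sketched as follows: convert the veracity classifier's raw logits to probabilities with a softmax, then flag low-confidence predictions using normalized entropy. The label set mirrors the classes above; the 0.75 entropy threshold is an illustrative assumption that would need calibration in practice.

```python
import math

LABELS = ["true", "false", "misleading", "unverifiable", "satire"]

def score_claim(logits: list[float]) -> dict:
    """Softmax over classifier logits plus an entropy-based
    uncertainty flag for communicating confidence to end users."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))  # entropy of a uniform distribution
    return {
        "label": LABELS[probs.index(max(probs))],
        "confidence": max(probs),
        "uncertain": entropy / max_entropy > 0.75,  # assumed threshold
    }
```

A confident prediction (one dominant logit) yields `uncertain=False`; near-uniform logits trip the flag and would be surfaced to users as "unverified" rather than asserted.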

### 4. Knowledge Base & External APIs
- Recommended fact-checking databases: Snopes, PolitiFact, FactCheck.org, Full Fact, ClaimBuster
- Real-time information sources: Google Fact Check Tools API, GDELT, news APIs
- Structured knowledge graphs: Wikidata, knowledge base construction approach
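As a stand-in for the evidence-retrieval layer, this sketch ranks a local store of fact-check snippets by token overlap with the claim. It is a toy lexical retriever for illustration only; a real RAG pipeline would use dense embeddings with a vector index and the external APIs listed above.

```python
def retrieve_evidence(claim: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Return IDs of the top-k documents by word overlap with the claim.
    Toy lexical retrieval; swap in embeddings + a vector index in production."""
    claim_tokens = set(claim.lower().split())
    scored = []
    for doc_id, text in documents.items():
        overlap = len(claim_tokens & set(text.lower().split()))
        scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    # drop documents with zero overlap rather than returning noise
    return [doc_id for overlap, doc_id in scored[:k] if overlap > 0]

docs = {
    "who_measles": "the measles vaccine is safe and effective",
    "econ_gdp": "gdp grew by two percent last quarter",
}
hits = retrieve_evidence("is the measles vaccine safe", docs)
```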

### 5. Evaluation Framework
- Benchmark datasets: LIAR, FakeNewsNet, FEVER, MultiFC
- Metrics: Precision, Recall, F1-score, and human-in-the-loop accuracy rates
- Bias audit: demographic and political balance testing
- Red-teaming approach: how to stress-test the system against adversarial inputs
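The metrics bullet above can be made concrete with a small per-class implementation, here computing precision, recall, and F1 for the "false" class (the class where false positives are most costly). This is standard metric arithmetic, shown inline rather than via a library so the definitions are explicit.

```python
def precision_recall_f1(y_true: list[str], y_pred: list[str],
                        positive: str = "false") -> tuple[float, float, float]:
    """Binary precision/recall/F1 for one target class of the
    veracity classifier (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

In practice these would be computed per class across a benchmark such as FEVER or LIAR and reported alongside human-in-the-loop accuracy.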

### 6. Human-in-the-Loop Design
- Which cases require human reviewer escalation
- Reviewer interface design requirements
- Feedback loop for continuous model improvement
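One way to express the escalation criteria above is a simple routing policy over the model's output: escalate when confidence is low, the verdict is in a sensitive class, or no supporting evidence was retrieved. The 0.7 threshold and the field names are illustrative assumptions, not tuned values.

```python
def needs_human_review(result: dict) -> bool:
    """Escalation policy sketch for human-in-the-loop review.
    Thresholds and sensitive classes are assumptions to be tuned
    against reviewer capacity and error costs."""
    if result["confidence"] < 0.7:        # model is unsure
        return True
    if result["label"] in ("misleading", "unverifiable"):  # sensitive verdicts
        return True
    if not result.get("evidence"):        # nothing to show the reviewer
        return True
    return False
```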

### 7. Deployment & Monitoring
- API design for integration with [CONTEXT]
- Latency requirements and optimization strategies
- Drift detection: monitoring model performance as misinformation evolves
- Explainability: showing users why a claim was flagged
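The drift-detection bullet can be sketched as a sliding-window monitor that compares live predictions against reviewer labels and alerts when accuracy falls below a baseline. Window size, baseline, and tolerance are illustrative; a fuller system would add distribution tests (e.g. PSI or KS) on input features as well.

```python
from collections import deque

class DriftMonitor:
    """Track accuracy against reviewer labels over a sliding window
    and alert when it drops below baseline minus tolerance.
    Parameters here are assumed defaults, not tuned values."""

    def __init__(self, window: int = 100, baseline: float = 0.90,
                 tolerance: float = 0.05):
        self.window = deque(maxlen=window)
        self.threshold = baseline - tolerance

    def record(self, prediction: str, reviewer_label: str) -> None:
        self.window.append(prediction == reviewer_label)

    def drifting(self) -> bool:
        if len(self.window) < self.window.maxlen:
            return False  # not enough labeled data yet
        return sum(self.window) / len(self.window) < self.threshold
```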

### 8. Responsible AI Considerations
- Transparency report template
- Appeals process for incorrectly flagged content
- Governance structure and oversight committee

Deliver as a technical design document with architecture diagrams described textually.

📝 Fill in the blanks

Replace these placeholders with your own content:

[CONTENT_TYPE]
[CONTEXT]
[SCALE]
[LANGUAGES]
[ACCURACY_TARGET]
[TEAM_LEVEL]

How to use this prompt

1. **Copy the prompt:** Click "Copy Prompt" above to copy the full prompt text to your clipboard.
2. **Replace the placeholders:** Swap out anything in [BRACKETS] with your specific details.
3. **Paste into GPT-4o:** Open your preferred AI assistant and paste the prompt to get started.