Verification Resources

Tools and databases for conducting due diligence on AI platforms and their leadership, organized into four categories:

  • Regulatory & Background Checks
  • Security Certifications
  • Technical Research
  • Business Intelligence

Frameworks & Standards

Global AI governance frameworks that inform our manifesto principles.

EU AI Act

The world's first comprehensive AI regulation, establishing risk-based categories, transparency requirements, and accountability obligations for AI systems.

  • Risk-based classification (unacceptable, high, limited, minimal)
  • Transparency and documentation requirements
  • Human oversight mandates
  • Data governance standards
Official resource →

OECD AI Principles

International standards promoting trustworthy AI that respects human rights and democratic values while enabling meaningful challenge of AI-based outcomes.

  • Inclusive growth and sustainable development
  • Human-centered values and fairness
  • Transparency and explainability
  • Robustness, security, and safety
  • Accountability and redress
Official resource →

Montreal Declaration

A declaration for responsible AI development emphasizing human well-being, autonomy, privacy, and democratic participation in AI governance.

  • Well-being and autonomy
  • Privacy and consent
  • Solidarity and democratic participation
  • Equity and diversity
  • Prudence and responsibility
Official resource →

Asilomar AI Principles

23 principles addressing AI research, ethics, and longer-term safety concerns, developed by AI researchers and thought leaders.

  • Research culture and funding
  • Ethics and values alignment
  • Longer-term safety considerations
  • Avoiding harmful use and AI arms races
Official resource →

NIST AI Risk Management Framework

A comprehensive framework for identifying, assessing, and managing AI risks throughout the AI system lifecycle.

  • Governance structures
  • Risk mapping and measurement
  • Risk management approaches
  • Continuous monitoring
Official resource →
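The framework's four core functions (Govern, Map, Measure, Manage) can be illustrated with a minimal risk-register sketch. The schema below is a hypothetical simplification for illustration only, not an official NIST data model; field names and the 1-to-5 severity scale are assumptions.

```python
# Illustrative sketch: tracking an AI risk through the four NIST AI RMF
# functions. The schema is a hypothetical simplification, not NIST's own.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "govern"    # policies and accountability structures
    MAP = "map"          # identify context and potential risks
    MEASURE = "measure"  # assess and track identified risks
    MANAGE = "manage"    # prioritize and respond to risks

@dataclass
class RiskEntry:
    description: str
    function: RmfFunction
    severity: int                 # 1 (low) to 5 (critical), an assumed scale
    mitigations: list = field(default_factory=list)

def open_risks(register: list, min_severity: int = 3) -> list:
    """Return entries at or above a severity threshold, supporting the
    continuous-monitoring activity the framework calls for."""
    return [r for r in register if r.severity >= min_severity]
```

For example, filtering a register containing a severity-4 "data drift" entry and a severity-2 documentation gap would surface only the former for active management.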

OWASP Top 10 for LLMs

Security-focused guidance addressing vulnerabilities specific to large language model applications and generative AI systems.

  • Prompt injection vulnerabilities
  • Insecure output handling
  • Training data poisoning
  • Model denial of service
  • Supply chain vulnerabilities
Official resource →
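Two of the risks above can be sketched in a few lines: escaping model output before embedding it in HTML (a mitigation for insecure output handling) and a crude heuristic for instruction-override phrasing (a naive prompt-injection check). The function names and patterns below are illustrative assumptions, not taken from the OWASP document, and pattern matching alone is not a reliable injection defense.

```python
# Illustrative sketch only: naive guards for two OWASP LLM risks.
# Names and regex patterns are hypothetical, not from the OWASP list itself.
import html
import re

# Crude heuristics for instruction-override phrasing in user input.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that resembles a prompt-injection attempt."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

def render_model_output(raw_output: str) -> str:
    """Escape model output before embedding it in an HTML page,
    mitigating insecure output handling."""
    return html.escape(raw_output)
```

In practice, injection defenses combine input and output filtering with privilege separation and human approval for sensitive actions; the regex check here only illustrates the category of risk.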

Further Reading

Articles, reports, and research on AI ethics, fraud patterns, and responsible AI development.

AI Ethics & Governance

  • UNESCO Recommendation on the Ethics of AI (2021) – the first global standard on AI ethics
  • AI Index Report (Stanford HAI) – annual comprehensive tracking of AI progress and governance
  • Responsible AI: Best Practices for Creating Trustworthy AI Systems (Microsoft) – documentation of Microsoft's AI principles

AI Security & Risk

  • Gartner: AI TRiSM Framework – AI Trust, Risk, and Security Management
  • Adversarial Machine Learning: A Taxonomy and Terminology (NIST) – technical publication on ML threats
  • Prompt Injection: The New Security Frontier – research on LLM-specific vulnerabilities

Due Diligence & Fraud Prevention

  • AI Snake Oil (Arvind Narayanan) – understanding what AI can and cannot do
  • FTC: Keep Your AI Claims in Check – regulatory guidance on AI marketing claims
  • Due Diligence for AI Acquisitions – technical evaluation methodologies

Contribute Your Knowledge

Know a resource that should be here? Propose additions through GitHub or share them with the community.