Research Tools
Verification databases, regulatory resources, and frameworks to help you evaluate AI platforms thoroughly.
Join the Community
This manifesto evolves through collective wisdom. Connect with others committed to promoting trustworthy AI practices.
GitHub
Propose principle refinements, contribute case studies, and collaborate on documentation through pull requests.
github.com/iknowmyllm →#IKnowMyOwnLLM
Share experiences, discuss principles, and connect with the community on social media.
Join the conversation →Sign the Manifesto
Add your name to show commitment to trustworthy AI principles and join the community of signatories.
Sign now →Verification Resources
Tools and databases for conducting due diligence on AI platforms and their leadership.
Regulatory & Background Checks
- FINRA BrokerCheckSecurities industry background checks
- SEC EDGARCompany filings and financial documents
- PACERFederal court records search
- Better Business BureauBusiness complaints and ratings
Security Certifications
- SOC Report VerificationVerify SOC 2 auditor credentials
- ISO Certification SearchVerify ISO 27001 and other certifications
- CSA STAR RegistryCloud security assurance registry
Technical Research
- Google ScholarAcademic papers and citations
- arXiv AI PapersPreprint AI/ML research
- USPTO Patent SearchVerify patent claims
- Papers With CodeML benchmarks and implementations
Business Intelligence
- CrunchbaseStartup funding and company data
- GlassdoorEmployee reviews and insights
- OpenCorporatesGlobal company registry database
Frameworks & Standards
Global AI governance frameworks that inform our manifesto principles.
EU AI Act
The world's first comprehensive AI regulation, establishing risk-based categories, transparency requirements, and accountability obligations for AI systems.
- Risk-based classification (unacceptable, high, limited, minimal)
- Transparency and documentation requirements
- Human oversight mandates
- Data governance standards
OECD AI Principles
International standards promoting trustworthy AI that respects human rights, democratic values, and enables meaningful challenge of AI-based outcomes.
- Inclusive growth and sustainable development
- Human-centered values and fairness
- Transparency and explainability
- Robustness, security, and safety
- Accountability and redress
Montreal Declaration
A declaration for responsible AI development emphasizing human well-being, autonomy, privacy, and democratic participation in AI governance.
- Well-being and autonomy
- Privacy and consent
- Solidarity and democratic participation
- Equity and diversity
- Prudence and responsibility
Asilomar AI Principles
23 principles addressing AI research, ethics, and longer-term safety concerns, developed by AI researchers and thought leaders.
- Research culture and funding
- Ethics and values alignment
- Longer-term safety considerations
- Avoiding harmful use and AI arms races
NIST AI Risk Management
A comprehensive framework for identifying, assessing, and managing AI risks throughout the AI system lifecycle.
- Governance structures
- Risk mapping and measurement
- Risk management approaches
- Continuous monitoring
OWASP Top 10 for LLMs
Security-focused guidance addressing vulnerabilities specific to large language model applications and generative AI systems.
- Prompt injection vulnerabilities
- Insecure output handling
- Training data poisoning
- Model denial of service
- Supply chain vulnerabilities
Further Reading
Articles, reports, and research on AI ethics, fraud patterns, and responsible AI development.
AI Ethics & Governance
- UNESCO Recommendation on the Ethics of AI (2021)First global standard on AI ethics
- AI Index Report - Stanford HAIAnnual comprehensive AI progress and governance tracking
- Responsible AI: Best Practices for Creating Trustworthy AI SystemsMicrosoft AI principles documentation
AI Security & Risk
- Gartner: AI TRiSM FrameworkAI Trust, Risk, and Security Management
- Adversarial Machine Learning: A Taxonomy and TerminologyNIST technical publication on ML threats
- Prompt Injection: The New Security FrontierResearch on LLM-specific vulnerabilities
Due Diligence & Fraud Prevention
- AI Snake Oil - Arvind NarayananUnderstanding what AI can and cannot do
- FTC: Keep Your AI Claims in CheckRegulatory guidance on AI marketing claims
- Due Diligence for AI AcquisitionsTechnical evaluation methodologies
Contribute Your Knowledge
Know a resource that should be here? Propose additions through GitHub or share with the community.
