AI Transparency & Responsible AI Policy
Effective Date: March 8, 2026
1. AI Transparency
Diagnosaur uses AI models to assist with symptom analysis, differential diagnosis suggestions, clinical reasoning patterns, and medical research insights.
AI outputs are probabilistic and may be incomplete, incorrect, outdated, or hallucinated. No output is guaranteed to be accurate, complete, or fit for any particular purpose.
2. Human Oversight
Diagnosaur is a decision-support tool, not a decision-maker. Qualified professionals must review and verify all AI-generated outputs before clinical use.
3. Responsible AI Principles
- Safety
- Transparency
- Accountability
- Privacy protection
- Bias mitigation
- Responsible data usage
4. Bias Monitoring
AI models may reflect biases from source data. Diagnosaur monitors and improves model behavior to reduce demographic, clinical, and geographic bias.
5. Safe Use Requirements
- Do not rely on AI outputs without verification by a qualified professional.
- Do not use AI outputs as the sole basis for emergency or time-critical decisions.
- Use Diagnosaur only in accordance with applicable medical and legal standards.
6. AI Training Disclosure
Models may be trained or improved with publicly available medical knowledge, licensed datasets, synthetic datasets, and anonymized interaction patterns. Diagnosaur does not intentionally use identifiable patient data for model training except where lawfully permitted.
7. Continuous Improvement
Diagnosaur may use anonymized and aggregated usage signals to improve reasoning quality, safety, and system performance.