Preamble
Veterinary medicine demands scientific rigor, clinical judgment, and a duty to protect the physical and emotional health of animals. The health, safety, and welfare of the animal patient are our paramount priority, overriding all other considerations.
As AI becomes integral to care, it must meet the same professional standards and do so without ever replacing or diminishing the veterinarian as the primary decision-maker. This system is built to amplify clinical judgment, not automate it; to expand what is possible in practice, not to substitute for the expertise of a licensed veterinarian. The veterinarian remains the human driver of this system, the sovereign interpreter of its outputs, and the final authority on patient care.
This Charter sets the foundational principles and auditable requirements that govern the design, deployment, and oversight of OpenVet's veterinary AI systems. It is a public, testable commitment to a race to the top on safety, grounded in the belief that technology must serve, never supplant, the clinician's duty to protect animal welfare.
OpenVet publishes this Charter not as a marketing document, but as a professional commitment, open to scrutiny by clinicians, partners, and the veterinary community.
1. Scope & Definitions
Systems Covered
This Charter applies to all OpenVet clinical reasoning, retrieval, and explanation services intended for use by licensed veterinarians, veterinary technicians, and trainees under their supervision. It is also designed to serve as a benchmark and shared standard for veterinary AI safety across the industry. This Charter governs system design and operation but does not alter or replace the legal, ethical, or professional responsibilities of licensed veterinary practitioners.
Out of Scope
Consumer chatbots not used under veterinary supervision; marketing copy; non-clinical educational content.
High-Risk Output
Any AI-generated output that could directly inform or influence a critical clinical decision, such as diagnosis, dosing recommendations, treatment planning, surgical guidance, or emergency triage. Such outputs require enhanced safeguards, including mandatory clinician review, to mitigate potential harm to animal patients.
Emergency Output
An AI-generated output intended to support clinician decision-making in time-critical clinical situations where delay may materially increase risk to the animal patient (e.g., shock, respiratory distress, cardiac arrest, acute toxicity). Emergency Outputs are considered a subset of High-Risk Outputs and are subject to heightened safety constraints, conservative defaults, and explicit clinician confirmation. The system does not provide autonomous emergency decision-making and does not replace established emergency protocols or clinical judgment.
Evidence Source
A credible, verifiable veterinary source, including but not limited to peer-reviewed scientific papers, established clinical guidelines, consensus statements from recognized veterinary bodies, and leading veterinary textbooks.
Safety Case
A structured, evidence-based argument, maintained internally, that a system is acceptably safe for its defined intended clinical use.
Access Control
Systems covered by this Charter must verify that clinical features are accessed only by appropriate veterinary professionals. Safeguards must be in place to prevent misuse by individuals without the required credentials or training.
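By way of illustration only, a minimal sketch of such a credential gate might look like the following; the role names, the `Credential` record, and the verification flag are hypothetical assumptions for the example, not a published OpenVet interface.

```python
from dataclasses import dataclass

# Hypothetical roles permitted to use clinical features under this Charter.
CLINICAL_ROLES = {"veterinarian", "veterinary_technician", "supervised_trainee"}

@dataclass
class Credential:
    user_id: str
    role: str
    license_verified: bool  # e.g., confirmed against a licensing registry

def may_access_clinical_features(cred: Credential) -> bool:
    """Gate clinical features on both role and verified credentials."""
    return cred.role in CLINICAL_ROLES and cred.license_verified

# An account without verified credentials is denied clinical access.
assert not may_access_clinical_features(
    Credential(user_id="u1", role="veterinarian", license_verified=False)
)
```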
The Heart of the Charter
This Charter is organized into three layers, each answering a fundamental question about our approach to responsible AI. The Foundational Principles in Layer 1 are the source code for the testable requirements in Layers 2 and 3. This Charter contains both (a) principles and (b) testable controls; only the controls are auditable requirements. Where we use 'MUST', it denotes a required control; where we use 'SHOULD', it denotes directional guidance.
Layer 1: Foundational Principles (The "Why")
These are our core beliefs. They are not abstract ideals; they are the ethical justification for the specific, auditable requirements that follow.
1. Patient Welfare Paramount
Borrowed from the highest canon of professional engineering ethics, this principle establishes an unbreakable hierarchy of duties. It means that the health, safety, and welfare of the animal patient sets the boundary conditions within which all design, development, and deployment decisions must operate.
In veterinary medicine, optimal care exists along a spectrum rather than a single endpoint. The system does not presume that the clinically "best" or most aggressive intervention is always the appropriate choice for a given patient or context. Instead, its role is to ensure that clinicians are aware of the full range of medically sound options, including the gold-standard approach, and the tradeoffs associated with each.
Where tradeoffs exist, patient welfare remains the guiding constraint, and the final course of action is determined by the clinician's professional judgment in partnership with the client. The system supports this judgment by providing information that aligns with established veterinary standards of care and by clearly distinguishing between ideal, alternative, and palliative paths where appropriate.
2. Principle of "Great Health"
We reject a narrow view of health defined solely as the absence of disease. We commit to an AI that supports Great Health: the animal patient's capacity to function, adapt, and thrive within the bounds of its species, life stage, environment, and clinical context.
In veterinary medicine, great health does not imply maximal intervention or performance, but appropriate resilience, comfort, and functional well-being over time. Our AI is therefore designed not only to identify pathology, but to help clinicians understand how different clinical choices affect vitality, quality of life, and long-term outcomes across the spectrum of care. This principle explicitly includes palliative, comfort-focused, and end-of-life care as valid expressions of great health when cure or reversal is no longer possible.
3. Principle of the Clinician as Sovereign
The AI is a tool to amplify the veterinarian's will, judgment, and creative capacity. It serves the expert, not the average. Our AI MUST NOT reduce a clinician's sovereign judgment to a statistical probability. Its function is to empower the professional to make bolder, more informed, and more individualistic decisions.
4. Principle of Resisting the "Standing-Reserve" (The Patient is Not Just Data)
We recognize the danger of the mindset that sees every being as a resource to be optimized. We explicitly commit that our AI SHALL NOT treat the patient as a mere "standing-reserve" of data. The system is designed to preserve the mystery and uniqueness of the living being, acknowledging that the most important aspects of care often cannot be calculated.
5. Principle of Guarding the Unconcealed (Revealing Truth, Not Extracting It)
Modern technology often reveals truth by "challenging-forth": forcibly extracting data. We commit to a different path. Our AI's function is to assist in revealing clinical truth gently; the system SHOULD be designed to ask questions, not merely to dispense answers indiscriminately, keeping the human relationship to the patient open and central to the act of care.
6. Principle of Symbiotic Individuation
We reject the old framing of a "user" operating a "tool," because it reduces both the clinician and the technology to fixed, mechanical roles. Instead, we see the relationship as a dynamic, co-evolving partnership. The veterinarian remains the primary agent of care, and the AI develops alongside that judgment through continued use and expert feedback. This technology should not be understood as a static instrument, but as an adaptive clinical partner whose refinement depends on the expertise and judgment of the veterinarian. The goal is mutual elevation: technology shaped by clinicians, and clinicians empowered by technology.
Layer 2: Operational Processes & Requirements (The "How")
These are the specific, verifiable processes and system requirements we implement to make our principles a reality. Each requirement is an operational expression of our foundational beliefs.
7. Clinician-in-the-Loop by Design
Human oversight is not an afterthought; it is an architectural principle. All high-stakes clinical decisions, including but not limited to diagnosis, treatment planning, and prescribing, are subject to meaningful review and confirmation by a qualified, licensed veterinarian. The system is designed to facilitate and empower this essential human oversight, never to bypass it. Responsibility for final clinical decisions always rests with the licensed veterinarian; AI-generated suggestions are advisory and must be interpreted in light of the clinician's professional judgment.
For Emergency Outputs, systems must favor conservative, protocol-aligned guidance and may restrict or defer responses when required context (e.g., species, weight, known contraindications) is missing.
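As a non-normative sketch, such a context gate could defer generation whenever required fields are absent; the required-field names below are assumptions drawn from the examples in this section, not a fixed schema.

```python
from typing import Optional

# Fields this sketch assumes must be present before an Emergency Output
# may be generated; real requirements would be protocol-specific.
REQUIRED_EMERGENCY_CONTEXT = ("species", "weight_kg", "known_contraindications")

def emergency_gate(context: dict) -> Optional[str]:
    """Return None if generation may proceed; otherwise a deferral message
    naming the missing context, per the conservative-default requirement."""
    missing = [f for f in REQUIRED_EMERGENCY_CONTEXT if context.get(f) is None]
    if missing:
        return ("Deferred: emergency guidance requires clinician-supplied "
                f"context for: {', '.join(missing)}.")
    return None

# Missing weight triggers deferral rather than a speculative answer.
print(emergency_gate({"species": "canine", "weight_kg": None,
                      "known_contraindications": []}))
```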
8. Traceability and Cited Reasoning
Trust requires verification. High-risk clinical outputs must include clinician-verifiable source grounding or explicitly state when such grounding is unavailable. Citations are provided to allow clinicians to validate the underlying evidence and reasoning, ensuring the AI serves as a bridge to the scientific literature, not a black box.
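One illustrative shape for clinician-verifiable grounding at the data level is sketched below; the field names and source-type labels are assumptions for the example, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_type: str   # e.g., "peer_reviewed", "consensus_statement", "textbook"
    reference: str     # human-readable reference the clinician can verify
    locator: str = ""  # page, section, or DOI where the claim is supported

@dataclass
class HighRiskOutput:
    text: str
    citations: list = field(default_factory=list)

    def grounding_statement(self) -> str:
        """Surface either the evidence trail or its explicit absence."""
        if not self.citations:
            return ("No verifiable source grounding available; "
                    "clinician review required.")
        return "; ".join(f"{c.source_type}: {c.reference}"
                         for c in self.citations)
```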
9. Transparent Confidence and Uncertainty
False certainty is more dangerous than admitted ignorance. Systems must express their own limitations honestly and clearly. When evidence is incomplete, conflicting, outside the model's scope of competence, or tied to off-label use, the system explicitly states this and recommends clinician review rather than presenting a facade of confidence. This includes clear identification of off-label recommendations when they arise. Transparency in uncertainty is essential for mitigating automation bias and keeping the clinician's critical judgment engaged.
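One hedged sketch of how uncertainty could be made explicit and machine-checkable, rather than left to free-text phrasing, follows; the status categories are illustrative assumptions.

```python
from enum import Enum

class EvidenceStatus(Enum):
    WELL_SUPPORTED = "well supported by cited evidence"
    INCOMPLETE = "incomplete evidence; clinician review recommended"
    CONFLICTING = "conflicting evidence; clinician review recommended"
    OFF_LABEL = "off-label use; clinician review required"
    OUT_OF_SCOPE = "outside model scope; no recommendation provided"

def render(answer: str, status: EvidenceStatus) -> str:
    """Never emit a bare answer: every response carries its evidence status."""
    return f"{answer}\n[Evidence status: {status.value}]"
```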
10. Species-Aware Constraints
Safety demands recognition of biological diversity across animal species. Systems covered by this Charter are designed to apply and enforce species-specific constraints for physiology, dosing, and treatment recommendations, preventing errors caused by mismatched data. Cross-species extrapolation is restricted by default and requires explicit evidence support for the target species or closely related taxonomic group. This ensures outputs are tailored and reliable, aligning with veterinary best practices.
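A minimal sketch of species-gated dose-range checking is shown below; the drug name, species keys, and numeric bounds are placeholders, not clinical data, and a real table would be built from Evidence Sources per species.

```python
# Placeholder table of per-species dose bounds in mg/kg.
# Values are illustrative stand-ins, NOT clinical data.
DOSE_BOUNDS = {
    ("example_drug", "canine"): (0.5, 2.0),
    ("example_drug", "feline"): (0.1, 0.5),
}

def check_dose(drug: str, species: str, dose_mg_per_kg: float) -> str:
    key = (drug, species)
    if key not in DOSE_BOUNDS:
        # Cross-species extrapolation is restricted by default: no
        # evidence-backed bounds for this species means no recommendation.
        return f"refuse: no evidence-backed bounds for {drug} in {species}"
    lo, hi = DOSE_BOUNDS[key]
    if not (lo <= dose_mg_per_kg <= hi):
        return (f"flag: {dose_mg_per_kg} mg/kg outside supported "
                f"range [{lo}, {hi}] for {species}")
    return "within supported range"
```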
11. Zero Trust Execution for Safety-Critical Operations
When correctness is critical to patient safety, including numeric calculations (e.g., dosing, rates, unit conversions) and rule-bound clinical operations (e.g., contraindication checks, sequence-dependent procedures, or eligibility constraints), systems are designed to rely on deterministic execution rather than free-form probabilistic text generation.
In these cases, the model proposes logic or structure, and separate execution or validation components enforce correctness. This approach is applied selectively to high-risk outputs to reduce error and support patient safety.
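For example, a total-dose figure would come from deterministic arithmetic, and a model-proposed value would be accepted only if it matches that recomputation. This is a sketch of the pattern, not OpenVet's actual pipeline.

```python
def deterministic_total_dose(weight_kg: float, dose_mg_per_kg: float) -> float:
    """Deterministic arithmetic: the model never 'writes' this number."""
    if weight_kg <= 0 or dose_mg_per_kg <= 0:
        raise ValueError("weight and dose rate must be positive")
    return round(weight_kg * dose_mg_per_kg, 3)

def validate_model_proposal(proposed_mg: float, weight_kg: float,
                            dose_mg_per_kg: float, tol: float = 1e-6) -> bool:
    """Zero-trust check: accept the model's figure only if it matches the
    deterministic recomputation within tolerance."""
    expected = deterministic_total_dose(weight_kg, dose_mg_per_kg)
    return abs(proposed_mg - expected) <= tol

# A 12 kg patient at 1.5 mg/kg yields 18.0 mg; 18.5 mg would be rejected.
assert validate_model_proposal(18.0, 12.0, 1.5)
assert not validate_model_proposal(18.5, 12.0, 1.5)
```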
12. Systemic Resilience and Redundancy
Safety in a complex system cannot rely on a single point of defense. Systems covered by this Charter adopt the "Defense in Depth" strategy from high-stakes fields like nuclear energy, assuming that individual components can and will fail. Our systems are architected with multiple, independent, and overlapping layers of technical and procedural safeguards, ensuring that no single-point failure can compromise patient safety.
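Architecturally, defense in depth can be pictured as independent checks composed in series, any one of which can block release of an output; the layer names and flags below are illustrative placeholders, not OpenVet's actual safeguards.

```python
from typing import Callable, Optional

# Each layer independently inspects a candidate output and returns a
# blocking reason, or None to pass. No layer depends on another's verdict.
SafetyLayer = Callable[[dict], Optional[str]]

def species_layer(o: dict) -> Optional[str]:
    return None if o.get("species_validated") else "species constraints not validated"

def citation_layer(o: dict) -> Optional[str]:
    return None if o.get("grounded") else "missing source grounding"

def clinician_layer(o: dict) -> Optional[str]:
    return None if o.get("clinician_confirmed") else "awaiting clinician confirmation"

def defense_in_depth(output: dict, layers: list) -> Optional[str]:
    """Return the first blocking reason; a single failed layer is
    sufficient to stop release."""
    for layer in layers:
        reason = layer(output)
        if reason:
            return reason
    return None
```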
13. Rigorous Validation and Adversarial Testing
We go beyond standard testing to actively seek out failure. Before deployment and throughout their lifecycle, systems covered by this Charter are subject to regular, targeted "red-teaming" by experts, who proactively search for and document failure modes, biases, and unsafe behaviors. This adversarial approach, common in leading AI labs, is essential for discovering unexpected weaknesses before they can cause harm in a clinical setting.
14. Incident Reporting
Systems covered by this Charter must support reporting of safety incidents and near misses by clinicians. Reports are reviewed for severity, root cause, and required updates. Significant safety issues and their resolutions are documented, and we commit to sharing appropriate summaries with partners and, where relevant, the broader veterinary community.
15. Verifiable Safety Case
We move beyond "trust us" to "show me." The development and deployment of our AI systems are informed and constrained by a formal, internal Safety Case, a practice adapted from safety standards used in high-reliability and autonomous systems.
This Safety Case constitutes a structured, evidence-based argument that a system is acceptably safe for its intended clinical use, given its defined scope, safeguards, and limitations.
We commit to transparency with partners regarding our safety validation processes and conclusions, providing an auditable basis for trust without requiring blind reliance on system outputs. The Safety Case is maintained as a living artifact and is reviewed and updated as system capabilities, evidence, or clinical use evolve.
16. Data Privacy and Stewardship
Clinical data carries a special responsibility. Any veterinary AI system covered by this Charter must handle patient and user information with the level of care expected in professional veterinary practice. Systems must follow rigorous privacy and security standards, and any use of data to improve performance must apply appropriate safeguards, including anonymization where appropriate and transparency with partners when required. These systems are expected to treat clinical data as information held in trust and to manage it in a way that respects the expectations of clinicians and the needs of the animals they serve.
Systems covered by this Charter document the provenance of clinical sources used to support clinical features.
OpenVet does not deploy clinical features that depend on persistent use of copyrighted veterinary content at scale without appropriate rights, licenses, or other lawful bases.
Where knowledge is derived from external sources, systems are designed to transform and abstract factual information rather than reproduce expressive source material.
When provenance, rights status, or applicability cannot be sufficiently validated for a given use, the system must refuse, limit output, or degrade gracefully.
17. Bounded Scope and Refusal
When a request exceeds the system's supported evidence scope, species coverage, or safety constraints, the system must refuse rather than speculate.
Refusals should be explicit and, where appropriate, indicate what additional information or context would be required to proceed safely.
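A refusal that names what is missing might be structured as sketched below; the scope flags and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Refusal:
    reason: str
    needed_to_proceed: list = field(default_factory=list)

def evaluate_request(species_supported: bool,
                     evidence_in_scope: bool) -> Optional[Refusal]:
    """Refuse rather than speculate, and say what would be needed."""
    missing = []
    if not species_supported:
        missing.append("evidence coverage for the requested species")
    if not evidence_in_scope:
        missing.append("an Evidence Source supporting the requested use")
    if missing:
        return Refusal(reason="request exceeds supported scope",
                       needed_to_proceed=missing)
    return None  # in scope: proceed to the normal, clinician-reviewed flow
```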
Layer 3: Governance & Cultural Commitments (The "Who")
These principles define our organizational character and our auditable commitment to the veterinary community.
18. Formal Governance and Oversight
OpenVet is establishing a Clinical Oversight Board composed of licensed, practicing DVMs.
As this governance structure matures, the Board will review safety reports and participate in approval of high-risk clinical system changes.
19. A Just Culture of Learning
We commit to creating an environment of trust where failure is treated as an opportunity for learning, not a cause for blame. Adapted from the "Just Culture" model in aviation safety, our process for investigating errors and near-misses is transparent and focused on identifying systemic causes. We accept that errors will occur and we commit to analyzing them for systemic root causes rather than blaming individuals. Clinicians are encouraged to participate in this feedback loop. We will share these learnings to help advance the safety of the entire veterinary AI ecosystem.
20. Proactive Stakeholder Engagement
We actively and continuously collaborate with practicing veterinarians, educators, professional associations, and regulatory bodies to co-develop and refine these standards as a shared professional resource, aligning our work with frameworks like the NAM AI Code of Conduct. We commit to supporting clinicians with training materials that promote safe and effective use of veterinary AI systems.
21. Commitment to a Living Standard
This Charter is a living document. The fields of artificial intelligence and veterinary medicine are dynamic, and our standards must evolve with them. We commit to formally reviewing and updating this Charter periodically, incorporating advancements in technology, the evolution of clinical best practices, and, most importantly, feedback from the veterinary community we serve.
Appendices
A. Current Status
OpenVet is committed to these principles; some elements of this Charter represent aspirational goals as we build our systems. As a small team, we are actively working toward full implementation, prioritizing patient safety in our phased approach. This document serves as our guiding framework and a call to the veterinary AI community to join us in advancing these standards.
B. Plain-Language Summary for Clinicians
What this Charter is
This Charter explains how OpenVet designs and uses AI in clinical settings. It exists to make clear what the system does, what it does not do, and how safety is prioritized when AI is used to support veterinary care.
AI supports you. It does not replace you.
OpenVet's AI is an advisory tool. It does not diagnose, prescribe, or make decisions on its own. The licensed veterinarian remains fully responsible for all clinical decisions. The system is designed to support your judgment, not override it.
Patient welfare comes first, within real-world veterinary practice.
Veterinary medicine operates across a spectrum of care. OpenVet's AI is designed to understand gold-standard medicine while also respecting alternative, conservative, palliative, and quality-of-life-focused options. It is intended to inform choices, not force a single path.
Safety-critical outputs receive extra safeguards.
When AI output could affect patient safety, such as dosing, toxicities, contraindications, or emergency guidance, the system applies additional controls. These include conservative defaults, species awareness, explicit uncertainty, and refusal when required information is missing.
The system prefers clarity over confidence.
If the AI does not have sufficient evidence, context, or scope to answer safely, it is designed to say so. Refusal is a safety feature, not a failure.
Evidence is emphasized, not hidden.
For high-risk clinical outputs, the system is designed to point you to relevant veterinary evidence or clearly state when such grounding is unavailable. The goal is to support verification, not ask for blind trust.
Safety is reviewed, not assumed.
OpenVet maintains an internal Safety Case that documents how risks are identified, mitigated, tested, and reviewed. This process evolves as the system evolves, and safety concerns or near misses are treated as opportunities to improve.
Your feedback matters.
Clinicians are encouraged to report issues, edge cases, or concerns. OpenVet treats this feedback as a core input into system improvement.
This is a living standard.
Veterinary medicine and AI both change. This Charter will be reviewed and updated over time to reflect clinical reality, new evidence, and clinician experience.
C. Works Cited
- Code of Ethics for Engineers - National Society of Professional Engineers, accessed October 18, 2025, https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/NSPECodeofEthicsforEngineers.pdf
- Our framework for developing safe and trustworthy agents - Anthropic, accessed October 18, 2025, https://www.anthropic.com/news/our-framework-for-developing-safe-and-trustworthy-agents
- Introducing the Frontier Safety Framework - Google DeepMind, accessed October 18, 2025, https://deepmind.google/discover/blog/introducing-the-frontier-safety-framework/
- What is Responsible AI - Azure Machine Learning | Microsoft Learn, accessed October 18, 2025, https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2
- Principles for a Strong Nuclear Safety Culture - Nuclear Regulatory Commission, accessed October 18, 2025, https://www.nrc.gov/docs/ML0534/ML053410342.pdf
- WHO releases AI ethics and governance guidance for large multi-modal models, accessed October 18, 2025, https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models
- Health Care Artificial Intelligence Code of Conduct - National Academy of Medicine, accessed October 18, 2025, https://nam.edu/our-work/programs/leadership-consortium/health-care-artificial-intelligence-code-of-conduct/
- AVMA Principles of Veterinary Medical Ethics, accessed October 18, 2025, https://www.avma.org/resources-tools/avma-policies/principles-veterinary-medical-ethics-avma
- NIST AI Risk Management Framework (AI RMF), accessed October 18, 2025, https://www.nist.gov/itl/ai-risk-management-framework
- EU AI Act, accessed October 18, 2025, https://artificialintelligenceact.eu/the-act/