Our Mission

To ensure AI-powered patient communication is accurate, ethical, clinically aligned, and safe across every healthcare setting.

Who We Serve

We support organizations responsible for building, selecting, approving, or integrating AI-based patient-facing communication tools, including:

Health systems & hospital networks

Compliance & risk officers

Clinical operations & digital strategy teams

AI vendors seeking certification alignment

Our Vision

A healthcare ecosystem where every AI-generated patient message meets verified clinical standards and improves—not compromises—care quality.

AI Care Standard™ and PatientAI Collaborative™ Frequently Asked Questions

I. THE WHY

  1. What problem is the AI Care Standard™ trying to solve?

Patient-facing AI introduces risks that traditional clinical or operational AI does not, including:

  • Confident but incorrect guidance

  • Poorly timed or emotionally inappropriate messages

  • Lack of disclosure that AI is involved

  • Confusion that leads to unsafe choices or missed care

These risks are already appearing at scale, yet governance remains inconsistent. The AI Care Standard™ was created to close that gap.

II. THE WHAT (AND WHAT IT’S NOT)

2. What is the AI Care Standard™?

The AI Care Standard™ is a ground-breaking framework that sets clear expectations for safe, accurate, and clinically responsible AI-driven patient communication. 

Developed by a cohort of health system leaders and patient safety experts, the Standard pairs Core Pillars for responsible AI communication with an Evaluation Framework organizations can use to assess any patient-facing tool or technology prior to adoption.

The Standard is grounded in a simple premise: when AI delivers health guidance, reminders, or explanations, it can shape safety, trust, and real-world decisions.

3. What are the Core Pillars of the AI Care Standard™?

The Core Pillars define the essential behaviors and safeguards required for patient-facing AI. They address safety, accuracy, clarity, transparency, inclusivity, autonomy, and accountability—forming the foundation for responsible AI-driven patient communication.

4. What is the Evaluation Framework?

The Evaluation Framework operationalizes the AI Care Standard™.

It translates the Core Pillars into a structured assessment organizations can use to evaluate patient-facing AI systems, identify risks and gaps, and strengthen internal oversight.

5. Is this a certification or an enforcement program?

No. The AI Care Standard™ is not a regulatory certification or compliance stamp.

It is a practical governance standard and assessment framework intended to support responsible deployment.

III. WHO IT’S FOR

6. Who is the AI Care Standard™ for?

The Standard is intended for organizations that design, deploy, oversee, or govern AI systems that communicate with patients.

This includes health systems, digital and AI teams, clinicians, patient experience leaders, compliance teams, and vendors building patient-facing AI tools. While vendors are an important audience, the Standard remains vendor-neutral and is not an endorsement mechanism.

7. What kinds of AI systems does this apply to?

The Standard applies regardless of technology type—generative AI, rules-based automation, or machine learning—if the output reaches patients or shapes communication.

IV. WHO BUILT IT (AND HOW)

8. How was the AI Care Standard™ developed?

The Standard was developed through:

  • Expert interviews across clinical, operational, and patient experience roles

  • Multiple structured cohort working sessions

  • Stress-testing against real patient communication scenarios and edge cases

  • Iterative drafting and peer review

The process and inputs are documented to ensure transparency and credibility.

9. Who developed the AI Care Standard™?

The Standard was developed by a multidisciplinary cohort known as the PatientAI Collaborative™, which is composed of senior clinicians, health system executives, digital and AI leaders, and patient experience authorities from leading healthcare organizations.

Each cohort member serves in a role directly responsible for patient communication at scale, AI governance, and/or technology oversight, bringing firsthand operational experience to the Standard’s development.

A complete list of PatientAI Collaborative™ Founding Members is available at AICareStandard.com.

10. What is the PatientAI Collaborative™?

The PatientAI Collaborative™ is a volunteer, multi-stakeholder initiative formed to advance responsible, patient-centered use of AI in healthcare communication. Participation is entirely voluntary, with cohort members contributing their time and expertise without financial compensation.

The Collaborative brings together clinicians, health system operators, digital and AI leaders, patient experience executives, and governance experts directly responsible for how AI-driven communication is designed, deployed, and overseen in real-world care settings.

Its purpose is to establish shared expectations, surface emerging risks, and develop practical, implementation-ready guidance grounded in real practice. The AI Care Standard™ is the Collaborative’s first formal output.

V. GOVERNANCE, INDEPENDENCE, AND TRUST

11. How is the AI Care Standard™ governed today?

The AI Care Standard™ is stewarded by the PatientAI Collaborative™ cohort members, including clinicians and healthcare operators responsible for patient engagement, digital platforms, AI strategy, and governance. 

Cohort members participate on a voluntary, unpaid basis and retain full authority over the content and outcomes of the Standard.

Vital funded the engagement of Wheel+Dow to independently lead the Collaborative’s work, including convening the cohort, structuring deliberations, and synthesizing input into the AI Care Standard™. This structure was intentionally designed to separate funding from influence and ensure no single organization directed the Standard’s conclusions.

Neither Vital nor Wheel+Dow owns the AI Care Standard™, controls its outcomes, or commercializes its use. All substantive decisions and final content were determined by the cohort through a structured, multi-stakeholder process.

12. Is the AI Care Standard™ tied to any single organization, vendor, or regulator?

No. The AI Care Standard™ is independent and vendor-neutral.

It does not endorse tools or companies and is not owned by any health system, payer, or regulatory agency. Cohort members contribute based on expertise and operational roles, not as formal representatives of their employers.

13. Is the AI Care Standard™ or the PatientAI Collaborative™ a nonprofit or formal organization?

Not yet.

Several long-term governance options have been explored, including nonprofit formation or alignment with an established national organization, but no permanent structure has been selected.

The priority is to support real-world adoption first, then formalize a structure based on demonstrated need.

14. What was and is Vital’s role?

Vital serves as a neutral convener and operational partner to the PatientAI Collaborative™.

In addition to financially supporting coordination and launch execution, Vital contributed subject-matter expertise informed by its experience designing and operating patient-facing AI systems in real-world healthcare settings. Vital undertook this initiative as part of its corporate responsibility and commitment to bringing AI solutions to the world that lift up humanity and do no harm.

Vital does not own the AI Care Standard™, control its conclusions, or commercialize its use. All substantive decisions and final content were determined by the cohort.

15. What was and is Wheel+Dow’s role?

Wheel+Dow was engaged to design and establish the PatientAI Collaborative™ and to independently lead the development, governance structure, and execution of the AI Care Standard™ from inception through launch.

This work included shaping the Collaborative’s charter and operating model, establishing the program architecture, guiding cohort decision-making, and synthesizing multidisciplinary expert input into a national, operational Standard for patient-facing AI communication.

Wheel+Dow serves in a neutral, advisory leadership capacity and does not own the AI Care Standard™, determine its conclusions, or represent it as a commercial product. Authority and final decisions remained with the cohort.

VI. HOW IT WORKS IN PRACTICE

16. Does the Evaluation Framework produce a score?

Yes. The Framework generates a structured score reflecting alignment with the AI Care Standard™ across domains.

The score is intended for internal governance and improvement—not public certification or regulatory determination.

17. Is there a cost to using the Evaluation Framework?

No. The Framework is currently available at no cost to encourage adoption and consistent self-assessment.

The PatientAI Collaborative’s hope is that this will become the universal standard adopted globally to guide and protect all organizations using patient-facing AI communication.

18. How can organizations adopt it today?

Organizations can begin by:

  • Reviewing the Core Pillars

  • Utilizing the Evaluation Framework internally to assess patient-facing readiness

  • Identifying high-risk patient communication use cases

  • Embedding the Standard into governance and oversight workflows 

  • Requiring patient-facing AI vendors to submit their Evaluation Framework results as a step in the RFP process and/or in advance of adoption conversations

19. What is out of scope for the AI Care Standard™?

The Standard does not address:

  • Internal systems used only by staff, with no patient exposure

  • Administrative automation

  • Model training methodologies or benchmarking

  • AI tools used only internally by clinicians

These areas matter, but they are addressed by other frameworks. The AI Care Standard™ is intentionally focused on patient-facing communication.

20. What if someone has feedback or suggestions for improving the AI Care Standard™?

The AI Care Standard™ is intended to evolve as patient-facing AI, clinical practice, and governance expectations mature.

Organizations and individuals are encouraged to share feedback, implementation insights, and suggested refinements through the Contact Us button on the AI Care Standard website. Input will be reviewed by the Collaborative and considered as part of future updates, informed by real-world use, emerging risks, and operational experience.

All revisions are guided by the same principles that shaped the initial Standard: patient safety, clarity, accountability, and practical applicability in real healthcare settings.

VII. THE BIGGER PICTURE

21. How does the AI Care Standard™ relate to existing regulations and frameworks?

The Standard complements emerging regulatory and assurance efforts (CMS, ONC, Joint Commission, and others).

It does not replace legal or clinical requirements. Instead, it helps organizations operationalize expectations for AI-driven patient communication.

22. What is the long-term vision?

The long-term goal is to establish a shared national baseline for safe, trustworthy AI communication with patients—before failures force fragmented regulation or reactive enforcement.