
AMA releases an AI governance toolkit for health systems

The American Medical Association published a STEPS Forward® module, “Governance for Augmented Intelligence,” developed with Manatt Health. It’s positioned as an eight-step playbook for health systems to stand up AI governance from the C-suite down, and it includes model policy language, worksheets, and sample forms. Physicians can complete the module for CME credit.

What’s inside (in plain language)

AMA’s eight pillars are: set executive accountability and structure; form a cross-functional working group; assess existing policies; draft a system-wide AI policy; define project intake, vendor vetting and risk assessment; update implementation processes; establish ongoing oversight/monitoring; and prepare organizational readiness (training, communications, change management). These are spelled out in the AMA’s news explainer, which links to the full toolkit on the AMA Ed Hub.
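To make the eight-step structure concrete, here is a minimal sketch of how a health system might track its progress through the pillars. This is purely illustrative: the AMA module ships worksheets and policy templates, not code, and the `GovernanceTracker` class and pillar strings below are my own assumptions.

```python
from dataclasses import dataclass, field

# The eight AMA pillars, paraphrased from the toolkit's structure.
PILLARS = [
    "Executive accountability and structure",
    "Cross-functional working group",
    "Assessment of existing policies",
    "System-wide AI policy",
    "Project intake, vendor vetting, and risk assessment",
    "Updated implementation processes",
    "Ongoing oversight and monitoring",
    "Organizational readiness",
]

@dataclass
class GovernanceTracker:
    """Tracks which of the eight pillars an organization has completed."""
    completed: set = field(default_factory=set)

    def complete(self, pillar: str) -> None:
        # Reject typos so the checklist stays aligned with the pillars.
        if pillar not in PILLARS:
            raise ValueError(f"Unknown pillar: {pillar}")
        self.completed.add(pillar)

    def remaining(self) -> list:
        # Pillars still open, in the toolkit's original order.
        return [p for p in PILLARS if p not in self.completed]

tracker = GovernanceTracker()
tracker.complete("System-wide AI policy")
print(len(tracker.remaining()))  # → 7
```

The point of the sketch is simply that the pillars form a checklist with a defined order and a finish line, which is what distinguishes this toolkit from a principles memo.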

AMA also highlights physician uptake trends (nearly 70% reported using some form of AI in 2024, up from 38% in 2023) to argue that formal governance can’t wait.


How it fits with the broader rulebook

The toolkit’s structure maps cleanly onto the NIST AI Risk Management Framework—NIST’s core functions are “Govern, Map, Measure, Manage”—so health systems can align local policy with a nationally recognized risk model. In the U.S. regulatory stream, ONC’s HTI-1 final rule adds transparency and risk-management requirements for AI-based “decision support interventions” inside certified EHRs; AMA’s policy templates help organizations operationalize those expectations on the provider side.
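One way to check the claimed alignment is a simple crosswalk from each AMA step to the NIST AI RMF function it most directly supports. The mapping below is my own rough illustration, not an official correspondence published by AMA or NIST:

```python
# NIST AI RMF 1.0 core functions.
NIST_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

# Illustrative crosswalk (an assumption, not an official mapping):
# each AMA step tagged with one NIST function.
CROSSWALK = {
    "Executive accountability and structure": "Govern",
    "Cross-functional working group": "Govern",
    "Assess existing policies": "Govern",
    "System-wide AI policy": "Govern",
    "Intake, vendor vetting, risk assessment": "Map",
    "Update implementation processes": "Manage",
    "Ongoing oversight/monitoring": "Measure",
    "Organizational readiness": "Govern",
}

# Sanity check: every tag is a recognized NIST function.
assert set(CROSSWALK.values()) <= NIST_FUNCTIONS
```

Even this crude tagging makes the alignment visible: most of the AMA's early steps live under "Govern," with the later operational steps spreading across "Map," "Measure," and "Manage."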

Why this matters (risk, compliance, and contracts)

For executives, the practical value is that AMA provides ready-to-edit policy language and vendor-evaluation scaffolding. That shortens the path to defensible decisions on model selection, validation, monitoring, documentation, disclosure to patients, and staff training—pressure points in audits, payer reviews, and litigation. Pairing AMA’s governance steps with NIST’s categories makes it easier to prove a repeatable process across intake, bias/risk assessment, human-in-the-loop controls, incident response, and post-deployment monitoring—evidence you’ll want on file if an AI-assisted decision is challenged.
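The "evidence on file" point can be made tangible with a sketch of what a single audit-evidence record might look like. All field names and values here are hypothetical; they are not drawn from the AMA toolkit, HTI-1, or any regulation:

```python
import json
from datetime import date

# Hypothetical audit-evidence record for one deployed model.
# Every key and value below is illustrative, invented for this sketch.
record = {
    "model": "sepsis-risk-v2",                  # assumed model name
    "intake_approved": "2025-01-15",            # governance intake sign-off
    "bias_assessment": "completed",             # bias/risk review status
    "human_in_the_loop": True,                  # clinician reviews each alert
    "incident_contact": "ai-governance@example.org",
    "last_monitoring_review": str(date(2025, 6, 1)),
}

# Serialized records like this form the repeatable paper trail
# an auditor, payer, or litigator would ask to see.
print(json.dumps(record, indent=2))
```

The design choice worth noting is that each record covers the full lifecycle named above (intake, bias assessment, human oversight, incident response, monitoring), so a reviewer can reconstruct the process for any one model from a single document.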

Bottom line

This is not another high-level principles memo—the AMA/Manatt module delivers operational artifacts (policies, forms, checklists) and offers CME so clinical leaders can push adoption and governance in tandem. If your organization already tracks to NIST AI RMF and is preparing for HTI-1 obligations, this toolkit gives you a physician-centric wrapper that should play well with compliance, procurement, and clinical leadership.