Important notice:

The Accreditation Council for Continuing Medical Education (ACCME) issued a system-wide update on the use of artificial intelligence (AI) in accredited CME, noting that while AI can support education, it also introduces risks if not properly managed. Accredited providers remain fully responsible for all learner-facing content, including AI-generated material, and must ensure it meets standards for accuracy, independence, and clinical relevance.

Key areas of focus:

  • Content Responsibility: All AI-generated content must be accurate, evidence-based, and free of commercial bias 
  • Oversight and Validation: AI tools require pre-use validation, clinical oversight, and ongoing monitoring 
  • Separation and Transparency: Education must remain free of promotion, and AI use should be disclosed to learners 
  • Data Security and Compliance: Use secure, private systems; store data in compliant environments (e.g., HIPAA, FERPA, GDPR); follow evolving laws 
  • Governance and Risk Management: Use institutionally approved platforms, conduct routine audits, avoid open-access tools for sensitive activities, and maintain alignment with accreditation standards 
  • Generative AI Risks: Open or public generative AI tools introduce added regulatory, intellectual property, and confidentiality risks; responsible use in CME requires ongoing oversight, transparency, and accountability

Important links:
  • Accreditation Council for Continuing Medical Education
  • Guidance on the Responsible Use of Artificial Intelligence (AI) in Accredited Continuing Education (CE) - ACCME