Creating an AI Community of Practice and Learning Together
MedEdPearls April 2026: From Isolated Experiments to a Living AI Community of Practice
Everyone talks about integrating AI into medical and health professions education, including curricula, coaching, assessment, and evaluation. Yet most efforts stay isolated, siloed in departments, or dependent on a handful of enthusiasts. The real gap is not the tools; it is connected learning. What turns AI experimentation into sustained adoption is not another email announcement or one-off workshop; it is a living Community of Practice (CoP) in which educators, clinicians, informaticians, and learners learn with and from each other over time. That is how institutions move from AI buzz to AI capacity. CoPs are groups of people who share a concern or passion and who deepen their knowledge through regular interaction and shared practice, not just presentations, fostering both individual and collective learning.
5 Practical Steps for Building an AI Community of Practice at Your Institution
1. Clarify the Domain and Mission
CoPs thrive on a meaningful shared domain, which gives members a reason to return and invest in learning together. For example, at Mayo Clinic, a generative AI CoP focused not on specific tools per se, but on practical educational use cases and policies, which helped the group stay relevant as tools evolved. Start by defining the shared focus, such as what AI in medical and health professions education means for your institution.
- Draft a simple charter stating the domain or focus, such as ethical AI use or evidence-based pedagogical AI use
- Decide what counts as success in 6–12 months, and how this complements institutional goals, such as learner outcomes
- Invite diverse stakeholders to define the charter, such as faculty developers, curriculum leaders, clinicians, librarians, and IT professionals
- Share your charter with institutional leadership to gain support
2. Build the Community through Structured Engagement
Interaction beats simple notification about AI advancements; the social fabric of a CoP is what sustains it. CoPs work when members engage regularly in shared activities and build a shared repertoire of resources and practices, so design that structure deliberately.
- Identify a champion or team leader, not necessarily a subject expert, who keeps the momentum going, handles logistics, and oversees planning and scheduling
- Invite interested parties
- Schedule consistent meetings with rotating facilitators
- Consider rotating leadership to avoid burnout and provide low-stakes roles, such as note taker or resource curator, to share ownership
- Combine structured discussions around case presentation or peer demos, for example, with time for questions and problem-solving
- Keep communication open between sessions
3. Make Practice Visible with Shared Artifacts
A CoP is not just a social group; it is about generating practical knowledge. For example, instead of one faculty member keeping AI prompts for their own use, make them part of a shared toolbox that other members can search, adapt, and improve. Additionally, learning what colleagues do with AI can inspire future uses.
- Collect and curate AI use cases, prompts that worked, ethical checklists, evaluation rubrics, lesson templates, and implementation lessons learned
- Create a shared repository, such as an institutionally approved virtual drive that all CoP members can access, and curate it collaboratively
- Consider linking created artifacts to curricular outcomes and accreditation standards to increase uptake
4. Blend Theory and Practical Reflection
Connect what the group does with why it matters educationally and ethically. Reflection fosters deeper learning and helps participants articulate why certain practices matter. Group discussions also invite different perspectives.
- Frame discussions with foundational learning theories, and ground decisions about AI use and regulation in evidence
- Make time to reflect on both successes and tensions, such as when an AI tool raises unanticipated bias or workflow concerns
- Ask whether a given AI use is helping or harming teaching, learning, and efficiency
5. Evaluate, Iterate, and Share
A thriving CoP evolves. Evaluating the CoP's impact signals seriousness and connects it to broader institutional goals, such as AI literacy expectations, ethical frameworks, or educational effectiveness.
- Collect simple metrics on attendance, artifact contributions, reported changes in practice, or pilot outcomes
- At least quarterly, ask, “What is working? What is not? What should we add or drop?”
- Share outcomes beyond your group at institutional forums, grand rounds, or national meetings
Technologies change. Tools emerge, wane, and pivot. Policies evolve. What endures is the human capacity to learn together amid complexity. Building a CoP is not a one-time event; it is a culture of collective inquiry, shared practice, and ongoing refinement of both tools and pedagogy. By tending to the social architecture as diligently as the technology, institutions do not just keep pace with AI; they shape it wisely, ethically, and with educational integrity. Continuous individual and collective learning is not optional. It is essential in an ever-changing world that depends on human connection and collaboration.
#MedEdPearls are developed monthly by the Health Professions Educator Developers on Educational Affairs. Previously, #MedEdPearls explored similar topics, including using AI for pedagogical excellence, delivering an integrated curriculum, and using rapid prototyping.
About the MedEdPearls Author
MedEdPearls is a collaborative, peer-reviewed, monthly brief intended to provide practical tips and strategies for medical and health professions educators to enhance teaching and learning.
Stacey Pylman, PhD
- Director of Continuing Medical Education, Office of Medical Education Research and Development (OMERAD), Michigan State University College of Human Medicine
- Assistant Professor, Michigan State University College of Human Medicine
- Jean Bailey, PhD – Virginia Commonwealth University School of Medicine
- Carrie Bowler, EdD, MS, MLSCM (ASCP) – Mayo Clinic School of Continuous Professional Development
- Kristina Dzara, PhD, MMSc (Educators ’16; Assessment ’16; HCE 2.0 ’17) – Saint Louis University School of Medicine
- Shanu Gupta, MD, SFHM – University of South Florida Morsani College of Medicine and Tampa General Hospital
- Jennifer Hillyer, PhD – Northeast Ohio Medical University
- Larry Hurtubise, PhD, MA (HCE 2.0 '16) – The Ohio State University
- Anna Lama, EdD, MA – West Virginia University School of Medicine
- Machelle Linsenmeyer, EdD, NAOME (Assessment ’07) – West Virginia School of Osteopathic Medicine
- Skye McKennon, PharmD, BCPS, ACSM-GEI – Washington State University Elson S. Floyd College of Medicine
- Rachel Moquin, EdD, MA – Washington University School of Medicine
- Stacey Pylman, PhD – Michigan State University College of Human Medicine
- Leah Sheridan, PhD – Northeast Ohio Medical University
- Lonika Sood, MBBS, MHPE – Washington State University Elson S. Floyd College of Medicine
- Mark Terrell, EdD – Lake Erie College of Osteopathic Medicine
- Stacey Wahl, PhD – Virginia Commonwealth University School of Medicine