The Human Side of AI Innovation in Health Care

Effective AI leadership in health care centers on people, trust, and systems—requiring leaders who pair clinical insight and ethical judgment with the ability to integrate technology thoughtfully into real-world workflows.

As artificial intelligence becomes increasingly embedded in health care delivery, the challenge is no longer whether AI can be built, but whether organizations can adopt it responsibly and effectively. According to Roger Daglius Dias, MD, PhD, MBA, director of research and innovation at the STRATUS Center for Medical Simulation and program co-director of Harvard Medical School’s Leading AI Innovation in Health Care certificate program, meaningful AI innovation requires leaders who can look beyond technology and focus on people, systems, and trust.

Drawing on his experience at the intersection of clinical medicine, research, and health care innovation, Dias emphasizes that AI leadership is ultimately about understanding real clinical problems and designing solutions that fit within complex organizational environments.

Becoming Fluent Across Disciplines

Health care professionals seeking to lead AI innovation must develop what Dias describes as a kind of multilingual fluency. Leaders need to be conversant in clinical medicine, scientific research, health care management, business, and emerging technology.

“The most essential skill is deep clinical workflow knowledge paired with systems thinking,” he explains. Effective leaders understand not just what AI can do, but also what problems need solving for patients, clinicians, hospitals, and health systems at large.

Beyond technical literacy, Dias highlights the importance of intellectual humility and ethical reasoning. AI leaders must recognize the limitations of algorithms, navigate the ethical implications of data-driven decision-making, and translate effectively between clinicians and data scientists. Curiosity and a lifelong learning mindset are also critical, particularly a willingness to challenge assumptions about how health care has always been delivered while remaining grounded in evidence-based practice.

Putting People and Processes First

When organizations rush to deploy new technology without considering the human impact, AI initiatives often struggle to gain traction. Dias stresses that successful implementation follows a clear order of priorities.

There are always three critical components for any AI implementation in health care: people, then processes, then technology, and their importance follows exactly that order.

— Roger Daglius Dias

Human factors come first. Leaders must consider who will need to change their workflow, what existing structures may resist change, and whether new solutions align with organizational values and strategy. To surface potential challenges early, Dias often encourages teams to conduct pre-mortem analyses, imagining that a project has failed and working backward to identify the organizational, cultural, or regulatory barriers that may have been overlooked.

He also emphasizes early collaboration with compliance, legal, and frontline staff. Rather than acting as gatekeepers, these stakeholders should be engaged as co-designers. This reframes the conversation from “Can we build this?” to “Should we build this, and can our organization absorb it successfully?”

Evaluating Organizational Readiness for AI

Before adopting or scaling AI-driven solutions, leaders must assess whether their organizations are prepared. Dias points to three dimensions of readiness.

First is data infrastructure. Organizations must ask whether their data is accessible, standardized, and of sufficient quality. Second is governance, including clear processes for validating algorithms, monitoring performance, and addressing bias. Third is organizational culture, particularly whether there is psychological safety for experimentation and learning from failure. 

Dias recommends starting with a readiness assessment that includes stakeholder interviews, technical audits, and workflow analyses. Preparation often involves building data and AI literacy across teams, establishing governance committees with diverse representation, creating feedback loops between end users and developers, and piloting solutions in controlled environments before scaling. 

Above all, he notes, organizations must invest in change management, as the human side of AI adoption is frequently the most challenging.

Why Trust and Empathy Matter in AI Leadership

Despite rapid advances in technology, Dias underscores that innovation in health care is fundamentally about people.

“Active listening helps leaders understand the real pain clinicians face,” he says, including workarounds, frustrations, and unmet needs that AI might address. Empathy allows leaders to acknowledge concerns about job displacement, deskilling, or algorithmic bias rather than dismissing them.

Building trust requires transparency about AI’s capabilities and limitations, meaningful involvement of frontline staff in design decisions, and a clear commitment to using innovation in service of patient care rather than efficiency alone. When trust is present, clinicians and staff are more willing to experiment, provide honest feedback, and speak up when systems are not working as intended. 

Grounding Innovation in Human Experience

As AI continues to shape the future of health care, Dias offers a reminder that technology alone does not drive transformation. Sustainable innovation depends on leaders who understand systems, respect the people within them, and remain focused on real clinical needs.

Ultimately, the success of AI in health care will be determined not by the sophistication of algorithms, but by how thoughtfully they are designed, implemented, and trusted by the professionals who use them every day.