Artificial intelligence (AI) is permeating every aspect of 21st century human life. It is now deeply embedded in sectors such as logistics, business, engineering and finance. AI is highly pertinent to medicine, assisting in the reading of radiology, pathology and endoscopy images, suggesting diagnoses, recommending therapies and surgical decisions, and even predicting outcomes and survival. While AI’s usefulness and potential impact on medicine are undeniable, real-world deployment has been slow relative to other disciplines. A key challenge hindering adoption is the complex issue of “trust”.
Any AI model must be operable within its intended domain to be effective. Its operability is contingent first upon informed trust, and then acceptance, by stakeholders. Given the democratisation of AI tools and the open availability of data, anyone can develop AI models with unknown biases and unknown utility. A poor development strategy and a lack of rigour (both technical and operational) can easily erode trust, or worse, cost human lives. These trust considerations extend beyond the technicalities of the AI model itself: social, ethical and legal considerations hold the key to successful deployment and must be examined deeply.
The dimensions of trust and human values have been investigated in prior studies, and established frameworks and guidelines are now in place. Yet we are still not seeing widespread adoption, nor rich data being generated on user experiences in medical settings. This is likely because trustworthiness has not yet been translated into readiness (as an analogy, a good product may be on the market, but that does not mean people are willing to try it yet). Hence, this study will examine three thrusts of adoption readiness via case studies on AI developments across the National Healthcare Group (NHG) medical cluster. We will first identify issues and shortfalls in these three thrusts in late-phase case studies, develop interventions to address them, and then evaluate these interventions for efficacy in early-phase case studies. This will, in turn, help us “find the wheel in the driver’s seat” as we navigate and negotiate the deployment of next-generation medical practice in the age of AI.
[1] Compliance with standards. How ready are we to comply with AI governance standards (with emphasis on MOH’s AIHGle)?
[2] Engagement with stakeholders. How ready are people to use AI in healthcare? To what extent do users trust AI in healthcare? Is AI accepted by patients? And is enough being done to facilitate AI-stakeholder interactions?
[3] Integration into the ecosystem. How ready is the medical ecosystem for embedding AI?
Health is a high-stakes frontier, and so examining how ready we are to trust AI prior to and during its deployment is critical. Through this project, we will produce insights into real-world deployment issues concerning the relevance, applicability and compliance of prevailing governance guidelines. We will produce a framework for auditing compliance and engaging stakeholders, and formulate a model for integrating AI protocols into the medical ecosystem in clinical practice. All of these will be crucial in building stakeholders’ trust in the use of AI tools for the future of medicine.