HealthTechX Asia

Beyond the AI hype: Healthcare CIOs call for measured approach to AI

Cindy Peh

AI investments need to align with clinical and operational needs, and integrate well with existing workflows

The excitement around Artificial Intelligence (AI) in healthcare showed little sign of slowing in 2025. With over US$18 billion invested, AI accounted for 46% of all healthcare investment for the year, and the average deal size for an AI startup stood at US$83 million. As of July 2025, over 1,250 AI-enabled medical devices had received the Food and Drug Administration (FDA) stamp of approval in the US, up from 950 a year earlier.

This momentum is driven in part by heightened interest amongst healthcare providers. One study estimates that over half of Asia-Pacific’s healthcare providers plan to invest in GenAI solutions within the next two years. In the US, 22% of healthcare organisations held paid commercial licences for AI applications in 2025, a 10-fold increase from 2023.

The past year also saw the rise of Agentic AI – autonomous AI systems that can execute complex tasks with minimal human involvement – which has quickly become a focal point of industry discussion.

Keeping clear-eyed amid the AI hype

But some healthcare chief information officers (CIOs) have called for calm amid the hype. At the CXO 2025 Forum held in the Philippines, speakers emphasised the need for new AI technologies to demonstrate clear impact relevant to their business context – whether in terms of efficiency, safety, or profitability.

“Amid the global excitement surrounding AI, healthcare must remain thoughtful and grounded. AI holds enormous potential, but its true impact will come from disciplined, responsible integration into clinical and operational workflows,” said Frank Vibar, CIO of Asian Hospital and Medical Center (AHMC) in the Philippines.

“Before AI can thrive, hospitals must invest in strong data foundations, interoperability, governance, ethical oversight, rigorous validation, and meaningful clinician readiness. CIOs must serve as stewards, ensuring that AI remains safe, secure, explainable, and aligned with clinical realities.”

A Menlo Ventures analysis noted that leading US health organisations, such as Mayo Clinic and Kaiser Permanente, take a similarly measured approach to AI by ‘stacking early wins’: prioritising proven systems that can be deployed at scale and deliver rapid value. Quick wins help ‘generate momentum’ for sustained adoption and long-term transformation.

Choosing healthcare AI solutions that balance value and risk

Ashok Bajpai, Chief Transformation Officer of Indonesia’s Pertamina Bina Medika IHC hospital chain, similarly highlighted the importance of balancing AI’s benefits against its costs and risks in the healthcare setting.

While AI in lower-risk areas such as imaging and radiology has already demonstrated benefits in accuracy, efficiency and quality of care – and hence presents a solid case for adoption – its use in areas such as diagnosis comes with greater complexity and risk.

Providers also face challenges in selecting the right solution in a crowded and rapidly evolving technology marketplace, said Bajpai. It is not sustainable for hospitals to continually adopt multiple AI solutions, each requiring doctors to be trained and familiarised, only for these solutions to become obsolete. A subscription-based AI platform offering access to multiple models, while keeping the technology easy to use, could be a more viable approach, he suggested.

Holistic evaluation of healthcare AI tools

Vibar emphasised that any assessment of an AI tool needs to start with the fundamental question: What problem are we trying to solve?

“If the need is real and the evidence is strong, only then do we begin the evaluation,” he said. “And even then, we ask more questions. Can clinicians understand how the model works? Can it integrate smoothly into the hospital environment? Can it support care without adding risk or cognitive burden?” 

This sentiment was echoed by Dr Sujoy Kar, Chief Medical Information Officer of India’s Apollo Hospitals.

“We have learnt in quality management a long time back that we do not fix a solution that is not broken. That applies to AI as well: don’t introduce AI into a workflow that already works well.”

He also cautioned against adopting AI simply because peers are doing so. “A tool that works well for the hospital across the street may not work for yours,” he said, pointing out that each organisation’s processes, priorities, and challenges are vastly different. AI tools, therefore, should be evaluated based on an individual hospital’s specific needs, rather than prevailing market trends.

Over at AHMC, a structured AI evaluation framework assesses five key areas of potential AI tools, namely:

  • Clinical validity: Reliable performance based on real-world data
  • Patient safety: Explainability, transparency, and clear escalation pathways
  • Data governance: Strong controls, minimal data movement, and robust privacy protections
  • Workflow fit: Seamless integration with existing processes
  • Sustainability: Demonstrable clinical and operational value over time
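One way to make such a framework concrete is as a simple gating rubric in which every area must clear a minimum bar before a tool is considered. The sketch below is purely illustrative, not AHMC’s actual process: the criterion names mirror the list above, but the 0–5 scale, the passing threshold, and the example tool are invented for illustration.

```python
# Illustrative sketch only: a hypothetical gating rubric modelled on the five
# areas described above. Criterion names come from the article; the scale,
# threshold, and example tool are assumptions made for this example.
from dataclasses import dataclass

# The five evaluation areas named in the article.
CRITERIA = [
    "clinical_validity",   # reliable performance based on real-world data
    "patient_safety",      # explainability, transparency, escalation pathways
    "data_governance",     # strong controls, minimal data movement, privacy
    "workflow_fit",        # seamless integration with existing processes
    "sustainability",      # clinical and operational value over time
]

@dataclass
class AIToolAssessment:
    name: str
    scores: dict  # criterion -> score on a hypothetical 0-5 scale

    def passes(self, minimum: int = 3) -> bool:
        """Every area must clear the bar; a strong score in one
        area cannot offset a weak score in another."""
        return all(self.scores.get(c, 0) >= minimum for c in CRITERIA)

# Hypothetical candidate tool under review.
candidate = AIToolAssessment(
    name="triage-support-model",
    scores={
        "clinical_validity": 4,
        "patient_safety": 3,
        "data_governance": 4,
        "workflow_fit": 2,   # poor integration fails the whole assessment
        "sustainability": 3,
    },
)
print(candidate.name, "recommended" if candidate.passes() else "set aside")
```

The all-criteria gate, rather than a weighted average, is a deliberate assumption here: it reflects the view expressed in the quotes that anything adding risk or complexity should be set aside, however well it scores elsewhere.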

“As CIO, my responsibility is to champion the tools that make care safer, faster, and more humane — and to set aside anything that adds noise, complexity, or risk,” Vibar added.

“At the end of the day, our standard is simple: If an AI solution doesn’t improve decisions, lighten workloads, or strengthen the trust between patient and clinician, it doesn’t belong in our hospital.”
