Podcast with MedAsian

Innovation Is Easy, Governance Is Hard: AI’s Next Challenge in Healthcare (Part I)

Rex

  • Surge drivers: Advances in deep learning and large language models, mounting pressure from aging populations and rising costs, plus heavy investment and government backing are converging to make AI a practical, system-shaping force in healthcare.


  • Market size: The global AI healthcare market could approach $190 billion by 2030, with China’s medical large-model segment alone surpassing RMB 100 billion, touching the full value chain from drug discovery to post-care.


  • Evolution & applications: AI has moved from rigid expert systems to narrow deep-learning tools to today’s multimodal models that reason across imaging, records, and genomics. It now spans clinical decision support, drug R&D, public health, and hospital operations.


  • Key risks: Data privacy, biased outcomes from unrepresentative training data, “black box” opacity, and unclear accountability when AI is involved in clinical decisions.


  • Governance as the real bottleneck: Technology is advancing faster than regulation. Without clear rules on responsibility, safety, and transparency, even the most advanced AI cannot win trust or achieve sustainable adoption.


  • Regional governance models: The US relies on adaptive, market-driven frameworks (e.g., FDA) but faces fragmentation; the EU enforces strict, rights-based rules via the AI Act, prioritizing safety; China takes a state-led, rapid-scaling approach with evolving regulatory oversight.


  • Core message: For AI to responsibly transform healthcare, governance must catch up with innovation—ensuring ethics, accountability, and trust are built into the system from the start.
SPEAKER_00

Welcome to Podcast with MedAsian. I'm Rex, your host. Over the next two episodes, we're tackling a topic that's red hot right now, but also one that doesn't get nearly enough serious attention: artificial intelligence in healthcare, and more specifically, governing it. Everyone's talking about the tech, but the real bottleneck, the one that decides whether these tools actually reach patients safely, is regulation, policy, and trust. To guide us through this, I'm joined by my colleague Felix, Director of Government Affairs at MedAsian, who leads a lot of our policy analysis and government affairs work here. Felix, great to have you. Thanks, Rex. Good to be here. Let's start with the big picture. The headlines are full of AI breakthroughs, and it feels like we've hit a real inflection point. But I want to go deeper than the hype. What's actually driving this global surge in AI healthcare right now? Are we looking at a technology story, a market story, or something else entirely?

SPEAKER_01

So the surge we're seeing in AI healthcare is not just happening randomly. It's really a few big things all coming together at once. First, the technology has just improved a lot, especially with things like deep learning and large language models. AI can now handle really complex medical data, whether that's imaging, doctors' notes, or research papers. And because of that, it's finally becoming something that actually works in real clinical settings, not just in theory. Second, the healthcare system itself is kind of pushing this forward. You've got aging populations, more chronic disease, rising costs, and at the same time, not enough doctors and specialists in key areas. So AI is no longer just about efficiency. People are starting to see it as a real way to scale and fill some of those gaps. And then the third piece is money and policy. Governments are investing in digital health, and you've got a ton of capital, from venture capital to big tech, flowing into this space. When you pull all of that together, better tech, real demand, and serious investment, that's what's turning AI healthcare from something promising into something that's actually reshaping the system right now.

SPEAKER_00

The numbers being tossed around are enormous. That kind of figure can either signal a genuine transformation or just a bubble. Cutting through the noise, how should we actually think about the scale of this opportunity? Is China's story mirroring the global one, or is it playing out entirely differently?

SPEAKER_01

If you look at this from a market perspective, AI in healthcare is honestly one of the fastest growing spaces out there right now. Some estimates say the global market could hit close to $190 billion by 2030, growing at something like 35 to 40 percent annually. And that's not just normal growth. That's a signal that something much bigger is happening. It's really changing how healthcare gets delivered, how it's paid for, and how systems are run. If you zoom in on China, you see a very similar story. The momentum there is just as strong, especially around medical large models. That segment alone is expected to grow past RMB 100 billion in the next few years. And it's not just market demand driving that. There's also a very clear push at the national level to make AI a core part of key industries, including healthcare. Large models have really accelerated that trend. What's especially interesting here is how broad the impact is. This isn't just about one part of healthcare. It cuts across the entire value chain, from drug discovery and clinical trials all the way to diagnostics, treatment, and even post-care management. So AI isn't just creating new opportunities on the edges. It's actually reshaping how the whole system works.

SPEAKER_00

A lot of people assume AI in medicine started maybe five or ten years ago, but you've traced a much longer arc. Can you walk us through the major phases we've gone through to reach today's large models? I'm especially interested in what changed at each stage, not just the tech, but how the role of AI shifted.

SPEAKER_01

If you zoom out a bit, AI in healthcare has really gone through three major phases, and each one is a pretty big leap forward. The first phase goes way back, starting in the 1960s, with what were called expert systems. These were basically rule-based programs that tried to mimic how doctors think using if-this-then-that logic. But they were pretty limited. They couldn't really adapt, they struggled with uncertainty, and they depended heavily on predefined rules. So they worked in narrow cases, but weren't very flexible in real-world medicine. The second phase kicked off around 2012 with the rise of deep learning. That's when AI really started to break through, especially in areas like medical imaging. You had models that could detect things like cancer or diabetic eye disease at levels comparable to specialists. But the catch was they were very narrow. Each model was built for one specific task and couldn't really be applied beyond that. Now we're in the third phase, which is where things get really interesting. With large models and multimodal AI systems, they can now combine different types of data, like medical records, images, even genetic data, and actually reason across them. So instead of just doing one thing well, they're starting to act more like general assistants. Not replacing doctors, but helping across multiple parts of care and even supporting research in a much more integrated way.

SPEAKER_00

So we're no longer in a world of single-use tools. That makes me want to map the landscape. If you walk through the healthcare system today, clinical care, research, public health, operations, where is AI actually showing up now, and where is it having the most impact? Is this a case of a few hotspots, or has it really permeated the whole chain?

SPEAKER_01

I mean, at this point, AI is pretty much everywhere in healthcare. It's not just one or two use cases anymore. In clinical settings, it's being applied for things like diagnosis, treatment planning, and even assisting in surgery. It can analyze medical images, read signals like ECGs, and help doctors quickly identify which cases need urgent attention. So, in a lot of ways, it's helping to improve both the speed and accuracy of healthcare. On the research side, AI is really starting to change how drugs get developed. It can sift through massive data sets to identify potential drug targets, model how molecules interact, and even help design clinical trials. The big upside here is time and cost. AI has the potential to significantly shorten the drug development cycle and make the whole process more efficient. And beyond that, AI is also showing up in public health and day-to-day healthcare operations. It can help track disease spread, predict outbreaks, and support better resource allocation. On the administrative side, it's streamlining things like scheduling, billing, and patient flow. When we step back, what's really interesting is how broad this all is. AI isn't just a tool in healthcare anymore. It's starting to become part of the system's core infrastructure.

SPEAKER_00

When something becomes infrastructure, the risks stop being theoretical. Privacy, bias, accountability. These words get thrown around a lot, but in healthcare, the stakes are life and death. I want to get specific. What are the risks that genuinely worry you when you look at how fast AI is being deployed? And where are the fault lines that haven't been addressed yet?

SPEAKER_01

When you look at the risks around AI healthcare, it's not just one issue. It's a pretty complex mix of challenges. One of the biggest concerns is data privacy and security. These systems rely on huge amounts of sensitive health data. So if something goes wrong, like a breach or misuse, it can have very real consequences for both patients and healthcare organizations. Another big issue is bias and transparency. If an AI model is trained on data that isn't representative, it can lead to biased outcomes, meaning some patient groups might get less accurate or even unfair treatment. On top of that, a lot of these systems operate like black boxes. So even doctors may not fully understand how a decision was made. And if you can't explain it, it's a lot harder to trust it. Then there's a bigger system level question around accountability. Once AI starts playing a role in clinical decisions, it's not always clear who's responsible if something goes wrong. Is it the doctor who used the tool? The hospital? The company that built the AI? That gray area is still being worked out, and it raises some pretty important legal and ethical questions.

SPEAKER_00

You've made a strong claim before that governance might actually be more important than the technology itself in healthcare. That's a provocative idea, especially for an industry that loves to chase the next shiny tool. Make the case: why is governance the thing that will determine whether AI actually gets used at scale, rather than the quality of the algorithms?

SPEAKER_01

At the end of the day, in healthcare, governance is what really determines whether AI can actually be applied in practice. This isn't like other industries. These are life and death decisions, so trust and accountability matter a lot. You can have the most advanced AI system in the world, but if it doesn't meet regulatory standards or doctors don't feel comfortable using it, it's just not going to get adopted. Governance is basically what sets the rules of the game. It answers questions like: who's responsible if something goes wrong? How are risks being managed? How do we know the system is actually making sound decisions? It makes sure that AI isn't just technically impressive, but also ethically sound and compliant with the law. Without that structure, you end up with a lot of uncertainty, and that can slow innovation down instead of speeding it up. And if you think long-term, governance is really what makes AI in healthcare sustainable. The technology is going to keep evolving quickly, no question. But governance is what makes sure that progress actually translates into better outcomes for patients and society. So it's not something that holds innovation back. It's what makes responsible innovation possible in the first place.

SPEAKER_00

So if governance is the linchpin, let's get concrete. What are the core governance challenges that make this so difficult to get right? I'm thinking about the gap between how fast AI evolves and how slowly regulation moves. But I'd also point to the problem that AI errors can scale in ways human errors don't. What's on your shortlist of governance headaches?

SPEAKER_01

One of the biggest challenges here is just how out of sync technology and regulation are. AI can evolve in a matter of months, sometimes even faster, but regulatory frameworks usually take years to catch up. That gap creates a gray zone where new technologies are already being used, and there aren't always clear rules for how they should be governed. Another issue is that AI can actually amplify risk once it's built into the system. In traditional healthcare, mistakes tend to be more isolated. But with AI, if something goes wrong, like a flawed prediction or what people call hallucinations in large models, it can scale across many patients or settings pretty quickly. So the impact of errors becomes much bigger, and the stakes are a lot higher. And then there are some more structural challenges. It's still not always clear who's responsible when AI is involved, there aren't consistent regulatory pathways across markets, and the rules can vary a lot from country to country. All of that not only adds risk, but also makes it harder for companies to scale solutions globally.

SPEAKER_00

Let's make this tangible by looking at real regulatory philosophies. The US is often held up as the innovation first model where the FDA tries to enable without crushing. But how does that actually work in practice for AI in healthcare? What's the underlying logic and where does it strain?

SPEAKER_01

So the United States adopts a market-driven, innovation-first approach to AI healthcare governance. Rather than creating an entirely new regulatory system, it builds on existing frameworks, particularly through agencies like the FDA, while introducing adaptive tools to accommodate AI's unique characteristics. A key feature of the US model is flexibility. It emphasizes lifecycle regulation, real-world performance monitoring, and collaboration with industry stakeholders. This allows innovation to move quickly while maintaining a level of oversight. The system also relies heavily on self-regulation and industry standards. But this approach comes with challenges, too, including regulatory fragmentation across federal and state levels. While it fosters innovation, it can create inconsistencies in how rules are applied, increasing compliance complexity for companies.

SPEAKER_00

With the AI Act now in place, how does the EU's approach to AI in healthcare differ from what you've just described in the US, and what trade-offs come with that model?

SPEAKER_01

So the European Union takes a pretty different approach when it comes to AI, especially in healthcare. The focus is really on safety, transparency, and accountability, and they tend to enforce that through comprehensive laws like the AI Act. Compared to the United States, the EU is definitely more cautious. For higher-risk AI systems, which includes a lot of healthcare applications, there are strict requirements that need to be met before anything can actually be deployed. That means going through detailed assessments around safety, fairness, and whether the system can be explained and understood. The goal here is to build trust and make sure AI is applied in an ethical way. But there's also a trade-off. This kind of approach can slow things down and add more regulatory burden, especially for startups or smaller companies trying to innovate quickly.

SPEAKER_00

That brings us to China. The playbook here feels different, more top-down, more aligned with national industry strategy. Where does China actually stand today in its AI healthcare governance and what's still under construction? How do you see this play into China's broader ambitions in digital health?

SPEAKER_01

China's approach to AI in healthcare is pretty different. It's much more driven from the top down. The government has made AI healthcare a national priority with strong policy support and a clear push to tie it into the broader digital economy. So there's a very intentional strategy behind how this space is developing. On the ground, that's translated into pretty fast progress. China has moved quickly to build out regulatory frameworks and, just as importantly, to actually roll out real-world applications. There's a big focus on scaling, getting AI into hospitals and into clinical workflows, and that's supported by access to large data sets and a more centralized system for coordination. That being said, it's still a work in progress. There are ongoing challenges around refining the regulatory system, getting different stakeholders more aligned, and making sure there's a strong feedback loop between policy and what's actually happening in practice. How China addresses those gaps will really shape how sustainable this growth is over the long term.

SPEAKER_00

Felix, thank you. This has been a really clear tour through the landscape, from the forces driving the surge to the market scale, the risks, and the very different governance approaches taking shape in the US, Europe, and China. What I'm taking away is that the technology is racing ahead, but the rules of the road are still being written. And they are being written very differently, depending on where you stand. In part two, we'll dig deeper into what those differences mean for companies trying to navigate this space and what MedAsian sees on the horizon. To our listeners, thanks for tuning in. I'm Rex. Join us next time.