

Abhinav Mathur on AI Failures, Bias & Future of Trustworthy AI

September 22, 2025


The recent fireside chat at Masters’ Union featured Abhinav Mathur, a seasoned data and AI leader with experience at Genpact, the Clinton Health Access Initiative, and Afin. In conversation with UG students, Mathur explored why so many AI projects fail, how bias creeps into algorithms, and where the future of explainable and trustworthy AI lies.

He drew from his career spanning public health, financial services, and AI consulting, sharing examples that grounded the hype in reality. The session went beyond buzzwords, showing students that innovation succeeds only when regulation, trust, and problem-solving align.

 

Why AI Projects Struggle in Real-World Adoption

Mathur began with an unfiltered perspective: most AI projects don’t fail because of technology; they fail because of poor framing.

The reasons are structural. Organisations often chase AI trends without defining a real use case, or they underestimate how regulatory shifts can halt progress overnight. In trying to replicate the success of global AI leaders, Indian firms sometimes overpromise and under-deliver.

 

The challenges in adoption:

  1. Vague AI use cases lead to poor outcomes

  2. Regulatory shifts disrupt AI execution

  3. Overhyped AI quick wins overshadow real adoption

 

Bias Challenges in AI Models and Data

Bias in AI isn’t abstract; it shows up in ways that damage trust. Mathur cited examples like Amazon’s hiring algorithm, which penalised women, and image recognition tools that mislabelled people of colour.

For him, the problem is deeper than faulty datasets. Bias amplifies inequalities already present in society. Without accountability in how AI is designed and tested, algorithms inherit prejudice and scale it.

 

The reality of bias:

  1. Algorithmic bias driven by flawed datasets

  2. AI models amplifying existing inequalities

  3. Lack of accountability in AI design processes

 

How Explainable AI Builds Trust

Students pressed Mathur on how companies can bridge the trust gap in AI. His answer: explainability matters more than sophistication.

Explainable AI (XAI) focuses on making outputs interpretable for both decision-makers and end-users. While leaders may be willing to trust models, frontline professionals such as doctors, teachers, and bankers won’t rely on what they can’t understand. Transparency is the bridge.
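To make that idea concrete, here is a minimal, hypothetical sketch of what an interpretable-by-design model can look like in practice. It was not part of the session; the loan-approval framing, feature names, and data are invented for illustration, and it assumes Python with scikit-learn and NumPy installed. A shallow decision tree is trained and its learned rules are printed, so a reviewer can trace exactly why each decision was made.

```python
# Hypothetical illustration (not from the session): an interpretable-by-design
# model whose decisions can be read back as plain rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny invented dataset: [income_lakhs, credit_score, existing_loans]
X = np.array([
    [12, 780, 0],
    [ 4, 560, 2],
    [ 9, 700, 1],
    [ 3, 500, 3],
    [15, 820, 0],
    [ 6, 610, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = rejected

# A shallow tree keeps the decision logic small enough to read end to end
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are the explanation: a banker can trace exactly
# which thresholds led to an approval or a rejection.
print(export_text(model, feature_names=["income_lakhs", "credit_score", "existing_loans"]))
```

Printing rules is only the simplest form of explainability, but it captures the point: people adopt what they can inspect.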

 

The role of explainable AI:

  1. Explainable AI bridges leadership and end-user trust

  2. Interpretable AI reduces the black box problem

  3. Transparency in AI outcomes drives adoption

 

AI Use Cases in Healthcare and Public Health

Mathur’s most compelling stories came from his work in healthcare and development. As part of the Clinton Health Access Initiative, he used AI to model the spread of HIV and later worked on COVID-19 analytics.

But the real opportunities, he argued, are in unsexy but vital problems like predicting medicine shortages or detecting fraud in healthcare claims. Instead of flashy applications like reading X-ray scans, India needs AI that strengthens systems.

 

The healthcare applications:

  1. AI for healthcare supply chain optimisation

  2. Predicting medicine shortages with AI

  3. Detecting fraud in healthcare systems

 

Navigating AI Innovation and Regulation Together

Innovation, Mathur stressed, can’t ignore regulation. Many fintech and healthtech startups have tried to exploit loopholes, only to collapse when compliance caught up.

INDmoney, a company he referenced, has chosen a different path — building credibility by staying within the regulatory perimeter. For Mathur, that’s the only way AI companies can compound trust over time.

 

The balance between growth and compliance:

  1. Aligning AI with regulatory frameworks

  2. Building credibility through compliance

  3. Designing AI beyond short-term loopholes

Would You Trust This AI?

In an interactive segment, Mathur challenged students with scenarios: would they trust an AI to predict exam scores, act as a judge, or decide medical treatment?

The exercise revealed how comfort with AI varies depending on stakes. Students were open to AI avatars in entertainment but sceptical of AI in courts or healthcare. For Mathur, this underlined why context is critical in AI adoption: not every problem should be solved by machines.

 

Student reflections on AI trust:

  1. Comfort levels depend on the decision’s stakes

  2. AI fits better in low-risk than high-risk areas

  3. Human oversight remains non-negotiable

 

Skills That Outlast Buzzwords

When asked what advice he’d give to students, Mathur didn’t hesitate: don’t chase buzzwords. Prompt engineering may dominate headlines today, but what matters is the ability to frame problems, understand processes, and design sustainable systems.

The future, he said, belongs to those who combine technical skill with domain insight — whether in healthcare, finance, or public policy.

 

Advice for future professionals:

  1. Focus on process understanding over prompt hacks

  2. Build expertise across technology and domain knowledge

  3. Long-term value comes from solving uncharted problems

 

End Thoughts

The fireside chat with Abhinav Mathur highlighted the role Masters’ Union plays in exposing students to practitioners who have built and deployed technology at scale.

For students, the session was less about AI hype and more about its fragility, bias, and potential. The message was clear: future leaders must combine innovation with responsibility.

 

The student takeaways:

  1. AI must be framed as problem-first, not trend-first

  2. Explainability is the foundation of trust

  3. Regulation and credibility are as important as innovation

At Masters’ Union, conversations like these move beyond textbooks — showing how practitioners tackle the challenges shaping the future of work, technology, and society.

 
