AI Must Be Made Reliable And Understandable, Say Experts

A panel discussion on India’s AI future saw researchers call for stronger academic research, better support for doctoral scholars and closer university-industry collaboration to build trustworthy AI systems

Hyderabad: Artificial intelligence systems can no longer be treated as black boxes; as they are increasingly used in real-world settings, they must be able to reason, explain their decisions and operate safely, researchers said at the Indian Symposium on Machine Learning 2025, held at the BITS Pilani Hyderabad campus.
Several speakers said the next phase of AI research is less about making models bigger and more about making them understandable and reliable. Saurabh Prasad, professor at the University of Houston, said work in GeoAI shows how combining data across sensors, space and time improves decision-making only when systems can explain how conclusions are reached.
Abhilasha Ravichander, research scientist at the Max Planck Institute for Software Systems, said studying how large language models store and organise knowledge is essential to reduce opaque behaviour and unexpected outputs.
The need for explanation became sharper in discussions on multimodal AI. Khyathi Chandu, research scientist at Mistral AI, said aligning audio and language helps systems respond more naturally and consistently.
Hisham Cholakkal, assistant professor at Mohamed bin Zayed University of Artificial Intelligence, said multimodal language models increase capability but also raise questions around accountability if decisions cannot be traced.
Trust and evaluation featured across sessions. Sunayana Sitaram, principal applied scientist at Microsoft, said AI systems still perform unevenly across languages and cultures, making clear evaluation standards critical.
Soujanya Poria, associate professor at Nanyang Technological University, said current language and multimodal models are reaching limits and need fresh research approaches rather than incremental scaling.
Applied sessions reflected the same concern. Mercy Ranjit, senior researcher at Microsoft Research, showed how healthcare AI agents are being designed with safety checks built into workflows. Mrinmaya Sachan, assistant professor at ETH Zürich, spoke about AI tools in education that prioritise transparency in how outcomes are generated.
In a panel discussion on India’s AI future, researchers called for stronger academic research, better support for doctoral scholars and closer university-industry collaboration to build trustworthy AI systems. The programme also included a datathon and early-career talks focused on safety, privacy and interpretability.
(Source: Deccan Chronicle)