IIIT-H researchers striving to make AI more practical and responsible
As AI’s capabilities skyrocket, researchers warn that blindly racing ahead without addressing its risks could backfire

Hyderabad: Researchers at IIIT Hyderabad are working to make AI more practical, explainable, and accountable, developing real-world applications that can benefit society.
In collaboration with Salesforce, they are improving AI’s ability to handle time-series data, which is crucial for weather forecasting, stock market prediction and healthcare analytics. They have also launched the Data Foundation (india-data.org), a platform hosting datasets such as the Indian Brain Atlas and the Indian Driving Dataset, aimed at making AI more relevant to Indian applications.
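For readers unfamiliar with the term, time-series forecasting means predicting a quantity’s future values from its own past. The following is a minimal, hypothetical sketch in Python (plain NumPy, toy data, not the IIIT-H or Salesforce systems) of the simplest such model, an autoregression:

```python
# Minimal sketch of time-series forecasting with a simple autoregressive
# model. Illustrative only: toy data, not the IIIT-H/Salesforce work.
import numpy as np

def fit_ar(series: np.ndarray, lags: int) -> np.ndarray:
    """Fit AR(lags) coefficients by least squares."""
    # Each row of X holds the `lags` values preceding the target value.
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series: np.ndarray, coeffs: np.ndarray, steps: int) -> list:
    """Roll the model forward, feeding each prediction back in."""
    history = list(series)
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(coeffs, history[-len(coeffs):]))
        preds.append(nxt)
        history.append(nxt)
    return preds

# Example: a noisy upward trend, forecast three steps ahead.
rng = np.random.default_rng(0)
data = np.arange(100, dtype=float) + rng.normal(0, 2, 100)
w = fit_ar(data, lags=5)
print(forecast(data, w, steps=3))
```

Production forecasting systems replace this linear model with far richer ones, but the shape of the task, learning from history to extrapolate forward, is the same.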
But as large language models (LLMs) and large vision models (LVMs) continue to grow in power, researchers warn that AI’s biases, resource consumption, and existential risks must be addressed.
“AI models today can generate better content than most humans in many areas. But we have to ask—what happens when AI surpasses human intelligence entirely? The more effort we put into building smarter machines over understanding their consequences, the larger this question becomes,” said Prof. Vikram Pudi of IIIT-H.
At their core, LLMs (like GPT-4 and BERT) and LVMs (such as DALL·E and Vision Transformers) don’t “think” like humans. They recognise patterns at massive scales, learning from enormous datasets of text and images.
“There’s no magic here. These models optimise for patterns and minimise errors through deep networks, but their sheer scale makes them vastly more powerful than anything before,” adds Prof. Pudi.
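To make that concrete, here is a toy sketch of what “minimising errors” means in code: gradient descent on a synthetic linear problem. All data and parameters are invented for illustration; large models apply the same loop over billions of parameters and web-scale datasets:

```python
# Minimal sketch of error minimisation by gradient descent: nudge a
# model's weights step by step so its predictions get less wrong.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))              # toy "data"
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, 200)   # noisy targets

w = np.zeros(3)                            # start with an untrained model
lr = 0.1                                   # learning rate
for step in range(500):
    pred = X @ w
    error = pred - y
    loss = np.mean(error ** 2)             # how wrong the model is
    grad = 2 * X.T @ error / len(y)        # direction that reduces the loss
    w -= lr * grad                         # nudge weights toward fewer errors

print(w)  # converges close to true_w
```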
Models generally improve as their training datasets grow, but those datasets carry human biases along with useful patterns, which is why bias is a serious issue. AI can unknowingly reinforce stereotypes found in its training data, raising concerns in areas like hiring, law enforcement, and healthcare.
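A toy sketch of how that reinforcement happens: a model fit on entirely synthetic, hypothetical hiring records in which one group was historically penalised will learn that penalty as if it were a real pattern:

```python
# Toy illustration of bias amplification: a model trained on skewed
# hiring data learns the skew. All data here is synthetic and invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)          # 0 or 1, a protected attribute
skill = rng.normal(0, 1, n)            # actual qualification
# Historical decisions were biased: group 1 was penalised regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

# Fit a linear score on (skill, group, intercept) by least squares.
X = np.column_stack([skill, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)
print(f"learned weight on group: {w[1]:.2f}")  # negative: bias reproduced
```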
Another major challenge is AI’s sheer energy consumption. Training these models demands immense computational power on energy-hungry GPUs and TPUs, making AI accessibility and sustainability key global concerns. Meanwhile, the existential risks of AI—machines eventually outthinking humans—are often dismissed as a distant worry.
“Governments and industries are charging ahead with automation, assuming the existential threat is far off,” Prof. Pudi says, adding, “But the rate of AI progress suggests we may need to address these concerns sooner than expected.”
The bigger question, however, is who controls AI.
“A handful of companies own the most powerful models. We must push for open-source AI, developed by the public, for the public—not locked away by corporations,” he points out.
Regulations, he adds, must ensure AI’s safe development without stifling innovation.
As AI’s capabilities skyrocket, the researchers’ warning bears repeating: blindly racing ahead without addressing these risks could backfire. The challenge now is not just to make AI smarter, but to ensure that it serves humanity rather than replacing it.

