Nvidia CEO touts India’s progress with sovereign AI and over 100K AI developers trained

by CryptoExpert



Nvidia CEO Jensen Huang noted India’s progress in its AI journey in a conversation at the Nvidia AI Summit in India. India now has more than 2,000 Nvidia Inception AI companies and more than 100,000 developers trained in AI.

That compares with a global count of about 650,000 developers trained in Nvidia AI technologies. India’s strategic move into AI is a good example of what Huang calls “sovereign AI,” where countries choose to build their own AI infrastructure to maintain control of their own data.

Nvidia said that India is becoming a key producer of AI for virtually every industry — powered by thousands of startups that are serving the country’s multilingual, multicultural population and scaling out to global users.


The country is one of the top six global economies leading generative AI adoption and has seen rapid growth in its startup and investor ecosystem, rocketing to more than 100,000 startups this year from under 500 in 2016.

More than 2,000 of India’s AI startups are part of Nvidia Inception, a free program for startups designed to accelerate innovation and growth through technical training and tools, go-to-market support and opportunities to connect with venture capitalists through the Inception VC Alliance.

At the Nvidia AI Summit, taking place in Mumbai through Oct. 25, around 50 India-based startups are sharing AI innovations delivering impact in fields such as customer service, sports media, healthcare and robotics.

Conversational AI for Indian Railway customers

Nvidia’s AI services are enabling much more efficient call centers in India.

Bengaluru-based startup CoRover.ai already has over a billion users of its LLM-based conversational AI platform, which includes text, audio and video-based agents.

“The support of NVIDIA Inception is helping us advance our work to automate conversational AI use cases with domain-specific large language models,” said Ankush Sabharwal, CEO of CoRover, in a statement. “NVIDIA AI technology enables us to deliver enterprise-grade virtual assistants that support 1.3 billion users in over 100 languages.”

CoRover’s AI platform powers chatbots and customer service applications for major private and public sector customers, such as the Indian Railway Catering and Tourism Corporation (IRCTC), the official provider of online tickets, drinking water and food for India’s railway stations and trains.

Dubbed AskDISHA, after the Sanskrit word for direction, the IRCTC’s multimodal chatbot handles more than 150,000 user queries daily, and has facilitated over 10 billion interactions for more than 175 million passengers to date. It assists customers with tasks such as booking or canceling train tickets, changing boarding stations, requesting refunds, and checking the status of their booking in languages including English, Hindi, Gujarati and Hinglish — a mix of Hindi and English.

The deployment of AskDISHA has resulted in a 70% improvement in IRCTC’s customer satisfaction rate and a 70% reduction in queries through other channels like social media, phone calls and emails.

CoRover’s modular AI tools were developed using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI. They run on Nvidia GPUs in the cloud, enabling CoRover to automatically scale up compute resources during peak usage — such as the moment train tickets are released.
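That auto-scaling behavior can be pictured as a simple threshold rule: watch the incoming request rate and add GPU-backed replicas when it spikes, such as the moment tickets go on sale. The sketch below is a generic illustration of that idea only; it is not CoRover’s implementation, and the capacity numbers are made up for the example.

```python
# Generic illustration of threshold-based autoscaling for a GPU-backed
# inference service. Not CoRover's system or any specific cloud API;
# capacity_per_replica and the traffic figures are illustrative.
import math

def desired_replicas(requests_per_second: float,
                     capacity_per_replica: float = 50.0,
                     min_replicas: int = 2,
                     max_replicas: int = 100) -> int:
    """Scale so that each replica handles at most capacity_per_replica req/s."""
    needed = math.ceil(requests_per_second / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# Example: a ticket release pushes traffic from 80 to 2,400 requests per second.
print(desired_replicas(80))     # 2  (baseline)
print(desired_replicas(2400))   # 48 (scaled up for the spike)
```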

Nvidia also noted that VideoVerse, founded in Mumbai, has built a family of AI models using Nvidia technology to support AI-assisted content creation in sports media. Its customers, including the Indian Premier League for cricket, the Vietnam Basketball Association and the Mountain West Conference for American college football, can generate game highlights up to 15 times faster and boost viewership. VideoVerse’s Magnifi platform uses computer vision to detect players and key moments for short-form video.

Nvidia also highlighted Mumbai-based startup Fluid AI, which offers generative AI chatbots, voice calling bots and a range of application programming interfaces to boost enterprise efficiency. Its AI tools let workers perform tasks like creating slide decks in under 15 seconds.

Karya, based in Bengaluru, is a smartphone-based digital work platform that enables members of low-income and marginalized communities across India to earn supplemental income by completing language-based tasks that support the development of multilingual AI models. Nearly 100,000 Karya workers are recording voice samples, transcribing audio or checking the accuracy of AI-generated sentences in their native languages, earning nearly 20 times India’s minimum wage for their work. Karya also provides royalties to all contributors each time its datasets are sold to AI developers.

Karya is also employing over 30,000 low-income women participants across six language groups in India to help create language datasets that will support diverse AI applications across agriculture, healthcare and banking.

Serving over a billion local language speakers with LLMs

India is investing in sovereign AI in an alliance with Nvidia.

Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its government and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These public and private sector projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable government services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, Nvidia has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an Nvidia NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any Nvidia GPU-accelerated system for optimized performance.
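Once deployed, a NIM microservice is typically queried over an OpenAI-compatible HTTP API, so a Hindi request can be sent with a few lines of Python. The sketch below assumes a local deployment; the endpoint URL and model identifier are placeholders to illustrate the pattern, not values confirmed in this announcement.

```python
# Minimal sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible endpoint. base_url and the model name are placeholders;
# check your own deployment for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-needed-for-local",        # local deployments often ignore the key
)

response = client.chat.completions.create(
    model="nemotron-4-mini-hindi-4b",      # placeholder model identifier
    messages=[{"role": "user", "content": "भारत की राजधानी क्या है?"}],  # "What is the capital of India?"
    max_tokens=128,
    temperature=0.2,
)

print(response.choices[0].message.content)
```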

Nvidia’s accelerated AI infrastructure platform.

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects.

Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion-parameter multilingual language model developed by Nvidia. Using NeMo, the model was pruned, distilled and trained on a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data.

The dataset was created with Nvidia NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses Nvidia RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership.

It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.
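To make those curation stages concrete, the toy example below mimics two of them, quality filtering and exact deduplication, in plain Python. It illustrates the concepts only; it is not NeMo Curator’s actual API, which runs these stages at scale on GPU clusters via RAPIDS.

```python
# Illustrative only: a toy version of two curation stages (quality filtering
# and exact deduplication) that frameworks like NeMo Curator run at scale.
import hashlib

documents = [
    "यह एक उपयोगी हिंदी वाक्य है।",            # a useful Hindi sentence
    "यह एक उपयोगी हिंदी वाक्य है।",            # exact duplicate, should be dropped
    "!!",                                     # too short, should be filtered out
    "निम्न गुणवत्ता वाले दस्तावेज़ हटाए जाते हैं।",   # "low-quality documents are removed"
]

def passes_quality_filter(text: str, min_chars: int = 10) -> bool:
    # Toy heuristic: require a minimum length and at least some letters.
    return len(text) >= min_chars and any(ch.isalpha() for ch in text)

def dedupe_exact(texts: list[str]) -> list[str]:
    # Drop exact duplicates by hashing each document.
    seen, unique = set(), []
    for t in texts:
        digest = hashlib.sha256(t.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(t)
    return unique

curated = dedupe_exact([d for d in documents if passes_quality_filter(d)])
print(f"kept {len(curated)} of {len(documents)} documents")  # kept 2 of 4 documents
```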

After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.

It’s available as part of the Nvidia AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments. A number of Indian companies are using the services.

India’s AI factories can transform the economy

Nvidia’s tech is being used to build AI factories in India.

India’s leading cloud infrastructure providers and server manufacturers are ramping up accelerated data center capacity in what Nvidia calls AI factories. By year’s end, they’ll have boosted Nvidia GPU deployment in the country by nearly 10 times compared to 18 months ago.

Tens of thousands of Nvidia Hopper GPUs will be added to build AI factories — large-scale data centers for producing AI — that support India’s large businesses, startups and research centers running AI workloads in the cloud and on premises. This will cumulatively provide nearly 180 exaflops of compute to power innovation in healthcare, financial services and digital content creation.
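As a rough sanity check, and not an Nvidia disclosure, the figures above are internally consistent: if each Hopper-class GPU delivers on the order of 4 petaflops of sparse FP8 throughput (an assumed round number), reaching roughly 180 exaflops implies somewhere around 45,000 GPUs, which matches the “tens of thousands” described.

```python
# Back-of-envelope check, not an official figure. Assumes ~4 petaflops of
# sparse FP8 per Hopper-class GPU, a rounded planning number.
TARGET_EXAFLOPS = 180
PFLOPS_PER_GPU = 4
PFLOPS_PER_EXAFLOP = 1_000

gpus_implied = TARGET_EXAFLOPS * PFLOPS_PER_EXAFLOP / PFLOPS_PER_GPU
print(f"~{gpus_implied:,.0f} Hopper-class GPUs")  # ~45,000
```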

Announced today at the Nvidia AI Summit, this buildout of accelerated computing technology is led by data center provider Yotta Data Services, global digital ecosystem enabler Tata Communications, cloud service provider E2E Networks and original equipment manufacturer Netweb.

Their systems will enable developers to harness domestic data center resources powerful enough to fuel a new wave of large language models, complex scientific visualizations and industrial digital twins that could propel India to the forefront of AI-accelerated innovation.

Yotta Data Services is providing Indian businesses, government departments and researchers access to managed cloud services through its Shakti Cloud platform to boost generative AI adoption and AI education.

Powered by thousands of Nvidia Hopper GPUs, these computing resources are complemented by Nvidia AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade copilots and other generative AI applications.


With Nvidia AI Enterprise, Yotta customers can access Nvidia NIM, a collection of microservices for optimized AI inference, and Nvidia NIM Agent Blueprints, a set of customizable reference architectures for generative AI applications. This will allow them to rapidly adopt optimized, state-of-the-art AI for applications including biomolecular generation, virtual avatar creation and language generation.

“The future of AI is about speed, flexibility and scalability, which is why Yotta’s Shakti Cloud platform is designed to eliminate the common barriers that organizations across industries face in AI adoption,” said Sunil Gupta, CEO of Yotta, in a statement. “Shakti Cloud brings together high-performance GPUs, optimized storage and a services layer that simplifies AI development from model training to deployment, so organizations can quickly scale their AI efforts, streamline operations and push the boundaries of what AI can accomplish.”


