What is India’s greatest asset in the global AI ecosystem? 𝐓𝐚𝐥𝐞𝐧𝐭
𝐈𝐧𝐝𝐢𝐚 𝐫𝐚𝐧𝐤𝐬 #2 𝐢𝐧 𝐭𝐞𝐫𝐦𝐬 𝐨𝐟 𝐀𝐈 𝐓𝐚𝐥𝐞𝐧𝐭, 𝐨𝐧𝐥𝐲 𝐛𝐞𝐡𝐢𝐧𝐝 𝐭𝐡𝐞 𝐔𝐒𝐀, while being ranked #10 overall (The Global AI Index, 2024). Let's dive deeper -

1️⃣ Global optimism in India's Talent
"𝘐𝘯𝘥𝘪𝘢 𝘩𝘢𝘴 𝘢𝘭𝘭 𝘵𝘩𝘦 𝘪𝘯𝘨𝘳𝘦𝘥𝘪𝘦𝘯𝘵𝘴 𝘵𝘰 𝘭𝘦𝘢𝘥 𝘵𝘩𝘦 𝘈𝘐 𝘳𝘦𝘷𝘰𝘭𝘶𝘵𝘪𝘰𝘯" - Jensen Huang, NVIDIA
"𝘐𝘯𝘥𝘪𝘢 𝘤𝘢𝘯 𝘭𝘦𝘢𝘥 𝘵𝘩𝘦 𝘈𝘐 𝘧𝘳𝘰𝘯𝘵𝘪𝘦𝘳" - Sundar Pichai, Google
"𝘐𝘯𝘥𝘪𝘢 𝘩𝘢𝘴 𝘴𝘰 𝘮𝘢𝘯𝘺 𝘵𝘢𝘭𝘦𝘯𝘵𝘦𝘥 𝘱𝘦𝘰𝘱𝘭𝘦, 𝘴𝘰 𝘮𝘢𝘯𝘺 𝘨𝘳𝘦𝘢𝘵 𝘤𝘰𝘮𝘱𝘢𝘯𝘪𝘦𝘴—𝘪𝘵 𝘩𝘢𝘴 𝘵𝘩𝘦 𝘳𝘦𝘴𝘰𝘶𝘳𝘤𝘦𝘴 𝘵𝘰 𝘣𝘰𝘵𝘩 𝘵𝘳𝘢𝘪𝘯 𝘧𝘰𝘶𝘯𝘥𝘢𝘵𝘪𝘰𝘯 𝘮𝘰𝘥𝘦𝘭𝘴 𝘢𝘯𝘥 𝘣𝘶𝘪𝘭𝘥 𝘢𝘱𝘱𝘭𝘪𝘤𝘢𝘵𝘪𝘰𝘯𝘴" - Andrew Ng, DeepLearning.ai
India's young, capable and energetic workforce gives us an edge that stems partly from our sheer demographic weight, but also from our strong network of higher-education STEM institutions and our global position as an IT outsourcing powerhouse.

2️⃣ AI Developers vs. Scientists
We are particularly strong in AI developer talent proficient in building generative AI and LLM-powered applications. However, in terms of highly specialised AI research scientists, India ranks only #24 (The Global AI Index, 2024).

3️⃣ AI Research Talent Churn
Our AI research talent in particular is prone to churn. Due to the lack of supporting infrastructure, R&D culture, a commercial ecosystem, mentorship etc., a significant proportion of our talent opts out of AI research by:
- Moving to industry to work on AI applications
- Migrating to the USA and other countries for better AI research opportunities

4️⃣ Growing and Retaining India's AI Talent
To maintain our competitive edge in AI talent, we need to continue investing in skill development. We not only need AI-native talent who can conduct research and build AI applications, but also a non-technical workforce adept in the AI skills and tools that are critical for driving efficiency and productivity at work. This will not only result in economic gains for the country but also pave the way for future success -
"𝘕𝘦𝘦𝘥 𝘵𝘰 𝘴𝘬𝘪𝘭𝘭, 𝘳𝘦-𝘴𝘬𝘪𝘭𝘭 𝘱𝘦𝘰𝘱𝘭𝘦 𝘧𝘰𝘳 𝘈𝘐-𝘥𝘳𝘪𝘷𝘦𝘯 𝘧𝘶𝘵𝘶𝘳𝘦" - 𝐏𝐌 𝐌𝐨𝐝𝐢 at the AI Action Summit, Paris 2025

5️⃣ Conclusions
I am personally optimistic about India's AI potential chiefly because of her talent. My belief is substantiated by studies showing that India ranks 1st globally in AI skill penetration (Stanford AI Index 2024). India also leads in AI skill penetration for women, with a penetration rate of 1.7.
If we take the right steps to support and nurture our talent - providing the necessary resources, infrastructure, ecosystem and mentorship, and fostering a culture of meritocracy and research - we will be regarded not only as leaders in AI talent but also as global leaders in AI implementation, innovation and R&D.
What is India’s strength in AI? 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀
India may be lagging behind other countries in terms of fundamental AI research, but it punches above its weight when it comes to building AI applications -

1️⃣ Greater adoption of Application models vs. Foundational LLMs
Downloads on Hugging Face over the last month (from today) of models focused on Indic use cases show up to a staggering ~90x greater adoption of smaller application models (largely developed by AI4Bhārat) vs. foundational LLMs (based on Sarvam's Sarvam-1 and Krutrim's Krutrim-2-instruct). These are the use cases for each of the application models:
- indictrans2-indic-en-1B: translation from 22 Indian languages to English
- indic-bert: language model and embeddings for 12 Indian languages
- IndicBERTv2-MLM-only: multilingual language model for 23 languages
- indictrans2-en-indic-1B: translation from English to 22 Indian languages
- indic-sentence-bert-nli: sentence similarity across 10 Indian languages
👉 The application models are typically "small" models ranging from ~300M to ~1B parameters, vs. the foundational LLMs at 2 to 12B parameters. This also indicates that for solving India-specific use cases we do not necessarily need "large" models; developing small, fine-tuned models on top of leading open-source LLMs from global companies is a good strategy for solving niche domestic use cases (a minimal fine-tuning sketch is included at the end of this post).

2️⃣ India publishes ~2x more at Application vs. Theoretical AI Conferences
Of the top 10 AI conferences, India publishes ~2 times more papers at application-focused conferences like AAAI and EMNLP than at theory-focused conferences like NeurIPS, ICML and ICLR (source: Mahajan, Bhasin & Aggarwal, 2024).

3️⃣ AI4Bhārat's significant contribution to India's R&D capabilities
The team at AI4Bhārat, in collaboration with Microsoft India, the Indian Institute of Technology, Madras, EkStep Foundation and others, has done a stellar job of collecting, curating and processing local-language datasets to unlock significant value for both public and private sector organisations. By using these datasets to fine-tune Transformer-based models like BERT and ALBERT, they have created models that often outperform models from global companies on niche NLP use cases. This work has also led to the formation of Sarvam, a venture-backed startup focused on commercialising this research.

4️⃣ Growth of India's AI Startups
The rise of generative AI startups from India that build on top of global foundational LLMs further highlights our strength in building AI applications. These startups are not only solving domestic use cases but also catering to global markets.

5️⃣ Conclusions
India's prowess in building AI applications is highly commendable. One way to make our mark on the global AI ecosystem is to stand on the shoulders of giants and build impactful products.
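To make the small-model strategy above concrete, here is a minimal, hedged sketch of fine-tuning one of the compact Indic checkpoints listed in section 1️⃣ for a niche downstream task (illustrative sentiment labels). It assumes the ai4bharat/indic-bert checkpoint (an ALBERT-style model) loads through the standard transformers Auto classes and that the transformers, datasets and sentencepiece packages are installed; this is not AI4Bhārat's own training code.

```python
# Hedged sketch: fine-tuning a small Indic checkpoint for a niche task.
# Assumes "ai4bharat/indic-bert" loads via the standard transformers Auto classes.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_ID = "ai4bharat/indic-bert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Tiny illustrative dataset (hypothetical sentiment labels: 1 = positive, 0 = negative).
train_data = Dataset.from_dict({
    "text": ["यह फिल्म शानदार थी", "सेवा बहुत खराब थी"],
    "label": [1, 0],
})

def tokenize(batch):
    # Pad/truncate to a fixed length so the default collator can batch tensors.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="indic-sentiment-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=2,
                           report_to="none"),  # keep the demo free of experiment trackers
    train_dataset=train_data,
)
trainer.train()
```

The same pattern - a small pretrained Indic backbone, a task-specific head and a modest labelled dataset - is essentially what the niche domestic use cases above call for.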
Can India build its own foundational LLMs? Yes
But who is using them? How widely are they being adopted? To answer these questions, I've gathered publicly available data from the sources below.

1️⃣ Number of Downloads on Hugging Face
Hugging Face is the de-facto platform for developers to download AI models and datasets. I've considered the number of downloads (as a proxy for usage and adoption) of leading open-source LLMs from the USA (Meta), China (DeepSeek AI and Alibaba Cloud), and India (Sarvam and Krutrim, the two most well-capitalized generative AI startups). Over the same period (the last one month from today):
- US: Llama 3.2-1B and 3.1-8B-instruct were downloaded ~11M and ~6M times
- China: DeepSeek-R1 and Qwen2-VL-7B-instruct were downloaded ~4M and ~1.5M times
- India: Sarvam-1 and Krutrim-2-instruct (built on top of Mistral-NeMo 12B) were downloaded ~5k and ~1k times
👉 These numbers show that adoption of our leading LLMs is 3 to 4 orders of magnitude lower than that of the most popular LLMs from China and the USA respectively. The absolute numbers may differ somewhat, since these LLMs are also served via APIs, cloud platforms etc., but the overall trend is unlikely to change. (A short sketch of how these download counts can be pulled from the Hub appears at the end of this post.)

2️⃣ Number of forks of GitHub repositories
Forking of GitHub repos represents a stronger sign of adoption by the developer community, and here too the picture is similar:
- meta-llama has been forked ~9700 times
- DeepSeek-V3 has been forked ~13800 times
- DeepSeek-R1 has been forked ~10000 times
- Qwen-VL has been forked 400 times
- Krutrim-2-12B has been forked 6 times
- Sarvam doesn't have a dedicated repo for Sarvam-1

3️⃣ Listing in LLM Marketplaces
Customer-centric LLM marketplaces like AWS Bedrock also provide an indication of customer usage and adoption. While Meta's Llama and DeepSeek-R1 models are supported, none of India's LLMs are available.

4️⃣ Support from LLM inference engines
LLM inference engines like vLLM also provide signals about LLM adoption for production use cases. vLLM currently supports Llama and Qwen models, but again no Indian LLMs yet.

5️⃣ Conclusions
Overall, the analysis indicates that Indian LLMs do not currently receive significant user interest, and their impact is therefore far smaller than that of the top global LLMs. Our LLMs likely have a competitive advantage for domestic use cases focused on speech and language, e.g. translation, document analysis, speech recognition etc. The market size of these domestic use cases may not be big enough to justify investment by global companies, but it clearly represents an area where indigenous LLM builders can distinguish themselves. Following my previous post on the poor trajectory of India's AI research record at top AI conferences, these data further show that we are far from the cutting edge of AI research, and a lot of work needs to be done to raise the bar in terms of global adoption and impact.
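For readers who want to reproduce the download comparison in section 1️⃣, here is a hedged sketch using the huggingface_hub client. The repository IDs are my assumptions based on the model names in this post and may not match the exact checkpoints compared, and counts will have moved since the snapshot above.

```python
# Hedged sketch: pulling recent download counts from the Hugging Face Hub.
# Repository IDs are assumptions based on the model names in this post;
# gated repos (e.g. meta-llama) may additionally require an access token.
from huggingface_hub import HfApi

api = HfApi()

repos = [
    "meta-llama/Llama-3.2-1B",
    "meta-llama/Llama-3.1-8B-Instruct",
    "deepseek-ai/DeepSeek-R1",
    "Qwen/Qwen2-VL-7B-Instruct",
    "sarvamai/sarvam-1",
    "krutrim-ai-labs/Krutrim-2-instruct",
]

for repo_id in repos:
    info = api.model_info(repo_id)
    # `downloads` is the Hub's rolling ~30-day download count for the repo.
    print(f"{repo_id:45s} {info.downloads:>12,}")
```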
Unfortunately No.
While India's contribution to AI papers at top AI conferences (including NeurIPS, ICLR, ICML, CVPR, EMNLP etc.) has remained flat over the last 10 years, China's contribution to the field has dramatically increased and caught up with the USA over the same period (Mahajan, Bhasin & Aggarwal, 2024). This period in AI was marked by numerous innovations - deep learning for images, text and audio, transfer learning, synthetic data and Transformers, to name a few. We witnessed the emergence of groundbreaking models such as BERT, GPT-1/2/3 and Stable Diffusion, which eventually led to the development of ChatGPT and the advent of the current era of LLMs and generative AI.
India missed the boat during this period, failing both to proactively increase investment in R&D, infrastructure and capacity building for AI (our R&D budget is only ~0.65% of GDP vs. ~2.4% for China and ~3.5% for the USA) and to retain home-grown talent.
There is no straightforward solution to India's AI R&D challenges. While there are early signs of progress (e.g. AI4Bhārat, IndiaAI, BHASHINI), in order to truly turn the page and compete at the top of the global AI hierarchy, we need to execute robust AI investment, innovation and implementation strategies. (More to come on this topic.)
Introduction
The AI revolution is no longer a distant future—it's reshaping industries today. By 2025, the global AI market is projected to reach $190 billion (Statista, 2023), and generative AI tools like ChatGPT and Midjourney are estimated to have the potential to add $4.4 trillion annually to the global economy (McKinsey, 2023). For tech professionals and organizations, this rapid evolution presents unparalleled opportunities but also demands strategic navigation. As an AI expert with a decade of experience working at Big Tech companies and scaling AI-first startups, I've witnessed firsthand the transformative power of well-executed AI strategies. This blog post distills actionable insights for:
- Early-career professionals looking to break into AI
- Mid/senior professionals moving from individual contributor roles into AI leadership
- Startups building AI-first organizations
Let's explore how to turn AI's potential into measurable results.

Breaking into AI – A Blueprint for Early-Career Professionals
The Skills That Matter in 2024
The AI job market is evolving beyond traditional coding expertise. While proficiency in Python and TensorFlow remains valuable, employers now prioritize three critical competencies:
1. Prompt Engineering: With generative AI tools like GPT-4/4o/o1/o3, DeepSeek-R1, Claude 3.5 Sonnet etc., the ability to craft precise prompts is becoming a baseline skill. For example, a marketing analyst might use prompts like, "Generate 10 customer personas for a fintech app targeting Gen Z, including pain points and preferred channels."
2. AI Literacy: 85% of hiring managers now require familiarity with responsible AI frameworks ([Deloitte, 2023](https://www2.deloitte.com)). This includes understanding bias mitigation and compliance with regulations like the EU AI Act.
3. Cross-Functional Collaboration: AI projects fail when technical teams operate in silos. Professionals who can translate business goals into technical requirements—and vice versa—are indispensable.

Actionable Steps to Launch Your AI Career
1. Develop a "T-shaped" Skill Profile: Deepen expertise in machine learning (the vertical bar of the "T") while broadening knowledge of business applications. For instance, learn how recommendation systems impact e-commerce conversion rates.
2. Build an AI Portfolio: Document projects that solve real-world problems. A compelling example: fine-tuning Meta's Llama 2 model to summarize legal contracts, then deploying it via Hugging Face's Inference API.
3. Leverage Micro-Credentials: Google's [Generative AI Learning Path](https://cloud.google.com/blog/topics/training-certifications/new-generative-ai-training) and DeepLearning.AI's short courses provide industry-recognized certifications that demonstrate proactive learning.

From Individual Contributor to AI Leader – Strategies for Mid/Senior Professionals
The Four Pillars of Effective AI Leadership
Transitioning from technical execution to strategic leadership requires mastering these core areas:
1. Strategic Vision Alignment: Successful AI initiatives directly tie to organizational objectives. For example, a retail company might set the OKR: "Reduce supply chain forecasting errors by 40% using time-series AI models by Q3 2024."
2. Risk Mitigation Frameworks: Generative AI models like GPT-4 can hallucinate inaccurate outputs. Leaders implement guardrails such as IBM's [AI Ethics Toolkit](https://www.ibm.com), which includes bias detection algorithms and human-in-the-loop validation processes.
3. Stakeholder Buy-In: Use RACI matrices (Responsible, Accountable, Consulted, Informed) to clarify roles. For instance, when deploying a customer service chatbot, legal teams must be "Consulted" on compliance, while CX leads are "Accountable" for user satisfaction metrics.
4. ROI Measurement: Track metrics like inference latency (time to generate predictions) and model drift (performance degradation over time). One fintech client achieved a 41% improvement in fraud detection accuracy by combining XGBoost with transformer models, while reducing false positives by 22%. (A minimal latency-tracking sketch follows this list.)
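As a concrete illustration of the ROI Measurement pillar, here is a minimal sketch of tracking inference latency percentiles. The call_model function is a hypothetical stand-in for whatever model endpoint you actually serve; model-drift tracking would compare these and other metrics across time windows in the same spirit.

```python
# Minimal sketch: tracking inference latency for the "ROI Measurement" pillar.
# `call_model` is a hypothetical stand-in for a real inference endpoint.
import random
import statistics
import time

def call_model(prompt: str) -> str:
    # Placeholder: simulate an inference call with variable latency.
    time.sleep(random.uniform(0.05, 0.25))
    return "ok"

def measure_latency(prompts, percentile=0.95):
    """Return (median, p95) inference latency in milliseconds."""
    latencies_ms = []
    for prompt in prompts:
        start = time.perf_counter()
        call_model(prompt)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()
    p_high = latencies_ms[int(percentile * (len(latencies_ms) - 1))]
    return statistics.median(latencies_ms), p_high

median_ms, p95_ms = measure_latency(["What is our refund policy?"] * 50)
print(f"median={median_ms:.0f} ms  p95={p95_ms:.0f} ms")
```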
Building an AI-First Organization – A Playbook for Startups
The AI Strategy Canvas
1. Problem Identification: Focus on high-impact "hair-on-fire" pain points. A logistics startup automated customs documentation—a manual 6-hour process—into a 2-minute task using GPT-4 and OCR.
2. Tool Selection Matrix: Compare open-source options (e.g., Hugging Face's LLMs) vs. enterprise solutions (Azure OpenAI). Key factors: data privacy requirements, scalability, and in-house technical maturity.
3. Implementation Phases:
- Pilot (1-3 Months): Test viability with an 80/20 prototype. Example: A SaaS company used a low-code platform to build a churn prediction model with 82% accuracy using historical CRM data.
- Scale (6-12 Months): Integrate models into CI/CD pipelines. One e-commerce client reduced deployment time from 14 days to 4 hours using AWS SageMaker.
- Optimize (Ongoing): Conduct A/B tests between model versions. A/B testing showed that a hybrid CNN/Transformer model improved image recognition accuracy by 19% over pure CNN architectures.

Generative AI in Action – Enterprise Case Studies
Use Case 1: HR Transformation at a Fortune 500 Company
Challenge: 45-day hiring cycles caused top candidates to accept competing offers.
Solution:
- GPT-4 drafted job descriptions optimized for DEI compliance
- LangChain automated interview scoring using rubric-based grading
- Custom embeddings matched candidates to team culture profiles
Result: 33% faster hiring, 28% improvement in 12-month employee retention.

Use Case 2: Supply Chain Optimization for E-Commerce
Challenge: $2.3M annual loss from overstocked perishable goods.
Solution:
- Prophet time-series models forecasted regional demand
- Fine-tuned LLMs analyzed social media trends for real-time demand sensing
Result: 27% reduction in waste, 15% increase in fulfillment speed.

Avoiding Common AI Adoption Pitfalls
Mistake 1: Chasing Trends Without Alignment
Example: A startup invested $500K in a metaverse AI chatbot despite having no metaverse strategy.
Solution: Use a weighted decision matrix to evaluate tools against KPIs. Weight factors like ROI potential (30%), technical feasibility (25%), and strategic alignment (45%).
Mistake 2: Ignoring Data Readiness
Example: A bank's customer churn model failed due to incomplete historical data.
Solution: Conduct a data audit using frameworks like [O'Reilly's Data Readiness Assessment](https://www.oreilly.com). Prioritize data labeling and governance.
Mistake 3: Overlooking Change Management
Example: A manufacturer's warehouse staff rejected inventory robots.
Solution: Apply the ADKAR framework (Awareness, Desire, Knowledge, Ability, Reinforcement). Training "AI ambassadors" from frontline teams increased adoption by 63%.

Conclusion
The AI revolution rewards those who blend technical mastery with strategic execution. For professionals, this means evolving from coders to translators of business value. For organizations, success lies in treating AI as a core competency—not a buzzword.
Three Principles for Sustained Success:
1. Learn Systematically: Dedicate 5 hours/week to AI upskilling through curated resources.
2. Experiment Fearlessly: Use sandbox environments to test tools like Anthropic's Claude or Stability AI's SDXL (a minimal sandbox sketch follows below).
3. Collaborate Across Silos: Bridge the gap between technical teams ("What's possible?") and executives ("What's profitable?").
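To illustrate principle 2, here is a hedged sketch of a quick sandbox experiment with Anthropic's Claude, reusing the persona prompt from the prompt-engineering section earlier. The model id is an assumption (substitute whichever current model you have access to), and an ANTHROPIC_API_KEY must be set in the environment.

```python
# Hedged sketch: a quick sandbox experiment with Anthropic's Claude.
# The model id is an assumption; an ANTHROPIC_API_KEY must be set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model id; substitute a current one
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Generate 10 customer personas for a fintech app targeting "
                   "Gen Z, including pain points and preferred channels.",
    }],
)
print(message.content[0].text)
```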
Copyright © 2025, Sundeep Teki
All rights reserved. No part of these articles may be reproduced, distributed, or transmitted in any form or by any means, including electronic or mechanical methods, without the prior written permission of the author.

Disclaimer
This is a personal blog. Any views or opinions represented in this blog are personal and belong solely to the blog owner and do not represent those of people, institutions or organizations that the owner may or may not be associated with in professional or personal capacity, unless explicitly stated.