Major opportunities for AI in jobs and governance, says MeitY Secretary S. Krishnan

Ahead of the India AI Impact Summit 2026 in New Delhi, S. Krishnan, Secretary, Ministry of Electronics and Information Technology (MeitY), discussed artificial intelligence (AI), India’s semiconductor ambitions, and MeitY’s role in digital governance in a wide-ranging conversation at The Hindu MIND event moderated by Aroon Deep.


We are less than a week away from the India AI Impact Summit, which will witness participation of representatives from dozens of countries. Could you give us a quick rundown on where we are on AI from an Indian perspective?


We’ve taken an approach where we will try to provide the three aspects of infrastructure that AI needs: compute, datasets, and models. With government support, access to these is made a little easier. Then the focus is on seeing what we can do with the applications and solutions that people are able to develop using these resources.

Ultimately, there are two things that are important. One, that firms’ revenues will depend on how they deploy AI. Deployment is important and that’s what also delivers impact. In the Indian context, there are many areas where you can use AI to enhance productivity, efficiency, and effectiveness. Our start-ups can do well and these are things that we can offer also as products to the rest of the world.

We necessarily have to do it a little frugally given the kind of resources we have, which is again a reason why the model that we have adopted appeals to many countries in the poorer parts [of the world]. On a number of indicators relating to AI, which institutions like Stanford University and others measure, we seem to be doing relatively well. On the Vibrancy Index, we ranked third; on skill penetration and use of AI for enterprise solutions, we ranked second overall.

So, if you look at this kind of penetration and this kind of skilling, clearly we have some advantage there that we need to build on. NITI Aayog has done a study which shows that yes, some jobs on the routine coding and programming side of IT/ITeS (Information Technology-enabled Services) will undoubtedly go away, but we can create many more jobs in terms of what else can happen.

What I see really as the big other opportunity is that there are many areas, including governance, where all of us would like to see substantial enhancement in quality and that probably is something that AI can offer. At the same time, we are aware of risks, dangers, and possible harms, which is why I think that when there is a need to regulate, we stand ready to regulate.


What does regulation look like practically?


If you have seen the report of the committee chaired by the Principal Scientific Adviser on AI Governance Guidelines, what it states is: try and use existing laws as much as possible. If you take, for example, what we can do with the existing Information Technology Act, that’s one aspect of it. The other part is what we need to do in the copyright space. So, that is being dealt with in a particular way. Another part is how other data, including personal data, get used. So, the Digital Personal Data Protection Act, 2023 fits in there.

Some of this regulation is already in place. Some of it requires tweaking, tightening, and that is what we keep attempting to do, including the new set of rules we put out [amending the IT Rules, 2021 to require labelling of synthetically generated content].


Those rules introduce labelling for AI-generated content and reduce takedown timelines for all content from 24-36 hours to two to three hours.


Labelling is in terms of a right to know. We all have a right to know if what we’re seeing is artificially generated. It’s a very minor requirement and technologically fairly easy to solve. There were certain issues that I think in the course of consultation [from October 2025 onwards, when the draft of these rules was published] they [stakeholders] did raise with us and we have addressed them. For instance, we exempted smartphone camera auto-enhancements. Likewise for special effects in films.

The change in time limits is fundamentally based on our understanding that there are two factors involved. When initially these time frames were imposed, they were much longer because the nature and kind of intermediaries we were dealing with those days were different and they had more time to respond.

The possible virality of a lot of these things is very quick. All the damage is done within a matter of 24 or 36 hours. Practically, our own experience has been that whenever any such takedowns have been required, most companies did not need more than an hour or two to comply.


On electronics manufacturing, how prepared are we in an era of weaponised supply chains?


Some of the story lies in the past, some of it in the future. We did produce electronics even up to the late 1990s. A lot of it went out after the Information Technology Agreement (ITA-1) of 1997 [which allowed IT hardware to be imported at minimum duties]. I am not for a moment saying that that was necessarily bad.

I think the IT revolution may not have taken place if you did not have access to computers and laptops and various other tools on the scale that we did, thanks to opening up. Now, you have reached a stage where I think it is important to also have that capacity at home domestically. We recognise that it is a global value chain, so it is not as if every part of it will be in India, but you have to have a reasonably substantial part of it to make sure that the value chain deepens.

So, we start in a sense at the end of the finished product [such as smartphones] because that gives you scale and employment. Value addition in the country is just about 18-20% because companies mostly import components. However, this is changing with schemes like the Electronics Component Manufacturing Scheme encouraging technology transfer, similar to how China learned from the Apple ecosystem. This scheme is expected to significantly increase value addition to 35-40%, which is comparable to China’s 40-50%.

Semiconductors are more strategic and less about value; it is about what we are capable of doing. There is a Tamil saying, ‘Veralukketha veekam [Don’t bite off more than you can chew]’. So, the question is, ‘How do you chew what you can bite off and manage?’ The India Semiconductor Mission is designed on the basis of what we can actually chew. We are not at the leading edge, but we are in those segments where there is still considerable volume of consumption and will be for the foreseeable future.

The support has to be extended over at least a decade or so, which is why the India Semiconductor Mission 2.0 was also announced in the Union Budget. So, we should move forward organically, then grow into the more leading edges.


There are reports of the compliance timeline for the Digital Personal Data Protection Act, 2023 reducing from 18 to 12 months. Why?


We have not shortened it. We have initiated a consultation with the industry. We received feedback that the 18-month period is a little too long and that there are various elements that firms are already ready to comply with. So, can we actually talk to the industry and see if we can reduce that time frame? So, that is a context in which we are speaking to the industry.


Varghese K. George: An international commentator likened the situation on AI now to what our awareness of COVID-19 was in February 2020. So, everybody was seeing some distant virus in China and then three weeks later everything in the world turned on its head. So, the comment being that that moment in terms of AI has already arrived. So, what is our understanding of where global AI research stands?


While much is said about agentic AI taking over, our view is that its practical utility remains uncertain. We believe that focusing on smaller, specialised AI tools – like sector-specific, vision, quantitative models, and smaller language models – offers more immediate, practical relevance and greater benefit to society and humanity. The agentic vision may transpire, but it is still far off.

Jacob Koshy: How are IT firms discussing the AI wave? Their business model is built on a labour arbitrage that is now being threatened by this technology.


We have had conversations with many of the people in the IT industry. They say many of the coding and programming jobs are difficult to sustain because those can be done by an AI bot. But when you have to create an application, or create a solution, then you need to have better domain expertise, like in agriculture or manufacturing. The deployment of the application takes human resources. You have to understand which are the data sets you have to bring in, how you tailor those to suit a particular situation, how you adjust the way that the orchestration levels work, and multiple deployment-related tasks that need to be done. Their understanding is that they would still have multiple job opportunities. But that would require many of their present employees to get retrained and understand this differently. We have this programme called Future Skills Prime, which is primarily designed around reskilling and retraining people. In colleges, the emphasis has been to teach this as a horizontal technology; we need to teach it across every course.

Suhasini Haidar: Two questions – are we looking to create an international body for AI ethics and safety? And on MeitY’s cyber law division: it is meant to stop unlawful speech and yet we see again and again people who are in government putting out AI videos inciting violence. Where do you think MeitY’s responsibility really lies?


This is the first time a country in the Global South is hosting the AI summit. So, in a sense, yes, India could possibly be a natural leader in some of the aspects of AI, not necessarily in AI governance or regulation – that is one part of it – but more in terms of even offering more affordable technologies and more affordable deployments. Hopefully, in the final declaration, something will come out. Now, whether there will be another international body like the Solar Alliance, I don’t really know. We may not do it as a regular body – we are also part of the Global Digital Compact of the UN and so on. So, we will work with the international community to see how this progresses.

The number of cases where the government blocks information online is actually a fraction – it’s less than 0.1% of the total number of cases where social media entities themselves take down content as part of their community guidelines and so on. So, it is very small, but we have to act when things come up through this channel, and we act on what material is brought before us.

G. Sampath: AI is a power-intensive sector with water and electricity needs. How are we looking at this from our climate commitments?


India has one of the largest grids in the world with high levels of renewable energy and load capacity. One of the issues with renewable power is that often there is no consumption at the time when it gets generated because the loads are inadequate, and a lot of it just gets backed down. So, there is an understanding that there could be surplus power that could be used for this purpose. There are both air-cooled servers and water-cooled servers, and there are ways in which this can also be economised.

But we are fairly clear that there is nothing in terms of a relaxation that is given from any of the environmental norms or any of the other norms for a data centre. The only set of norms that have been relaxed are building norms; data centres don’t need much parking, etc. To that limited extent, it is a relaxation.

But in terms of water and electricity consumption, they will have to meet all the relevant norms, subject to availability, subject to what needs to be done. Many of these decisions ultimately are taken at the State government level. There has not been very open encouragement of data centres in all locations.
