
Summary: NITI Aayog’s proposal for an AI oversight body

Central government think-tank NITI Aayog has proposed an oversight body to manage artificial intelligence (AI) policy, lay down guidelines for responsible behaviour, and coordinate with sectoral regulators. Stressing that “existing regulatory mechanisms are best placed to enforce rules, standards, and guidelines”, the think-tank recommended, among other things, that an advisory body be set up to interface with existing sectoral regulators.

Prepared “based on expert consultations over the past year”, the paper acknowledges a few experts: Professor Mayank Vatsa (IIT Jodhpur), Arghya Sengupta and Ameen Jauhar from the Vidhi Centre for Legal Policy, the Google team, and John C. Havens from IEEE. NITI Aayog had invited comments on the proposal until December 15, 2020, it said in a draft discussion document released in November. This deadline was extended to January 15, 2021 a week ago.

Since the use cases and contexts of AI deployment evolve over time, a one-size-fits-all approach is not sustainable, the paper said, citing different “risks” of AI use across sectors, such as discrimination in credit lending or safety risks in autonomous vehicles. It also highlighted that the enforcement of AI regulation depends on sectoral regulators, such as the National e-Health Authority (NeHA) for healthcare. Instead, “a flexible risk-based approach must be adapted”, according to the discussion paper.

Functions of proposed oversight body

The body should clarify what responsible behaviour is, since the lack of such clarity has “inhibited the growth” of AI in India. Giving its own recommendations for standards and guidelines, the paper says the body “may identify design standards, guidelines and acceptable benchmarks for priority use cases with sectoral regulators and experts” and that “these may be made mandatory for public sector procurement.” Areas where the government can offer clarity include doctor-patient confidentiality, informed consent processes, and procurement mechanisms.

  • Create guidelines for ‘Model AI Procurement’ Requests for Proposal for various priority use cases, to guide responsible AI procurement in the public sector. Such documents may include risk assessment, best practices through the lifecycle, and clarity on responsibility, liability and IP considerations.
  • Ratify international standards in consultation with relevant ministries and sectoral regulators, or with the Bureau of Indian Standards.
  • Develop design guidelines, and “frameworks for responsible AI through policy sandbox and controlled pilots”.

Access to responsible AI tools

The body should promote the development of, and access to, data and technology tools for responsible AI:

  • It can support open technology projects via hackathons and workshops to identify solutions for “adherence to Principles.” “Linguistic and NLP tools in local Indian languages may be promoted to facilitate access to benefits of AI across the country,” the paper said.
  • Identify issues with data availability and sharing mechanisms, and promote:
    • research into data generation and the identification of proxies,
    • creation and adoption of safe data sharing protocols (e.g. through model protocols and data sharing agreements)

Coordinate with sectoral regulators 

Since multiple regulators oversee data and AI, coordination is needed to prevent inconsistent policies and ambiguity, especially for cross-sectoral use cases. The body can “coordinate approaches” across regulations to avoid duplication of effort and inconsistent policies, it said.

It can “assist” regulators in identifying risks and work with “various civil societies, research institutions, industry bodies and other relevant agencies to monitor existing policies and regulations gaps, inconsistencies, and other issues and provide recommendations.”

It also laid down the following functions:  

  1. Monitor and update responsible AI principles continuously and “interface” with various bodies to design “specific mechanisms to translate principles into practice.”
  2. Research technical, legal, policy, and societal issues of AI: the government should support research into AI’s impact in the Indian context, prioritise funding and fellowship programs, leverage international alliances, organise conferences, and study and publish policy papers on AI deployments.
  3. Awareness programs should broadly aim to reduce trust issues, information asymmetry and others. “Such programs may be entity specific (Public sector, Private sector, Academia, General Public, etc) and may be customized to the local context,” the paper said. Training can also be deployed to decision makers, implementing agencies, users, and stakeholders on responsible AI. The private sector may be encouraged to create open knowledge resources on risks, case studies, and best practices on responsible AI in collaboration with academic institutions. 
  4. Represent India in international dialogue on AI: provide the perspective of India and other emerging economies on responsible AI, and assist ministries in the development of cross-border data sharing protocols to facilitate collaborative research.

Structure of oversight body

The body, proposed to be called the Council for Ethics and Technology, should consist of computer science and AI experts, legal experts, relevant sectoral experts, civil society, humanities and social science experts, industry representatives, representatives from standard-setting bodies, and government support for interfacing with ministries and departments. Additional experts may be brought in as required, the paper said.

Further, an “Ethical Committee” will be constituted for the procurement, development, and operations phases of AI systems, and will be accountable for adherence to Responsible AI principles. The Committee itself should have a “multi-disciplinary composition without Conflict of Interest,” including a chairperson and member secretary. While its composition will depend on the use case, its role should be to:

  1. Assess the potential harms and benefits, evaluate mitigation plan, recommend whether an AI solution should be approved 
  2. Ensure an AI system is developed, deployed, operated and maintained in accordance with principles 
  3. Determine the extent of review needed for an AI system depending on inherent risks and benefits “including but not limited to external audit”
  4. Ensure accessible and affordable grievance redressal mechanisms for decisions made by the AI system
  5. Ensure structure for whistleblower protection 

For the private sector, the oversight body may encourage self-regulation via ethics-by-design structures.

For high risk use cases, “adherence mechanisms may be mandated,” the paper said. Such use cases, guidelines and adherence mechanisms may be defined by the Council in consultation with sectoral regulators and experts. Adherence may be through self-declaration or through an independent third party audit, depending on the level of risk. “We invite comments as a part of public consultation on a framework to identify high risk applications and practical means to ensure adherence,” the paper says. 

International standards may not always be relevant, exhaustive, or available for the Indian context. Hence, it is critical that the government play a role in ensuring that the definition of ‘acceptable behaviour’ is clear, the paper said.


© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ