Towards Responsible #AIforAll in India
Here's how India is striving to balance regulation and innovation in the domain of artificial intelligence technology.
- NITI Aayog is developing an approach towards ensuring responsible usage of AI in India with the support of the Centre for the Fourth Industrial Revolution, World Economic Forum.
- The approach defines broad principles that AI solutions should adhere to, while also exploring implementation and enforcement mechanisms.
- A draft document has been released for public consultation. NITI Aayog is currently exploring a roadmap for enforcement of principles across the public sector, private sector and academic research.
Building further on the National Strategy on AI (NSAI) released in 2018, NITI Aayog is now outlining an approach to realising the economic benefits of AI in a manner that is “responsible” to its users and broader society. The approach seeks to establish broad principles for the design, development and deployment of AI in India, drawing on similar global initiatives but grounded in the Indian legal and regulatory context. The paper also explores how these principles can be operationalized across the public sector, private sector, research and academia.
NITI Aayog, the think tank of the Government of India, is developing the approach to “Responsible #AIforAll” on the basis of a large-scale stakeholder consultation facilitated by the Centre for the Fourth Industrial Revolution (C4IR) India. A Responsible AI working document, developed from a consultation workshop held in December 2019 and organised by C4IR India, World Economic Forum, was presented at a global consultation with AI ethics experts on 21 July 2020 and subsequently released by NITI Aayog for wider public consultation.
What should AI ethics principles look like?
A number of ethical AI principles have been developed globally, with the protection of basic human rights as their underlying theme. For an approach more tailored to the Indian context, the principles in the working document have been developed on the basis of the “fundamental rights” afforded to Indian citizens under the Constitution of India. These principles are designed to be “long lasting”, pre-empting the risks of AI regardless of the use case in question.
At the same time, nimbleness is required so that these principles do not stifle innovation and can benefit from the latest developments in this rapidly evolving field. The principles will need to evolve, and an institutional mechanism to update them periodically is likewise necessary.
It is also important to evaluate the kind of regulatory framework that might work for India, bearing in mind that a number of sectors (such as health and finance) already have regulators who could be the natural choice to discharge this function by extending their regulatory ambit. Policy-makers therefore face the question of whether to leave enforcement to sectoral regulators and legislation or to create a new statutory authority. Another consideration is the “degree” of enforcement required for adherence to the defined principles: do we need only high-level principles, or should specific applications and use cases be regulated?
How do you put these principles into practice?
Enforcement mechanisms are evolving at various levels, namely standards, guidelines and legislation. Where to draw the line between self-certification and regulation is one of the key questions being discussed by NITI Aayog in its expert consultations. While self-certification is useful and should be explored, with basic elements notified by the Government for adherence by enterprises, it cannot substitute for a more direct approach.
In these expert consultations, a risk-based stratification of AI use cases, with different degrees of enforcement of the principles for different tiers, has emerged as the most favoured line of action.
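As an illustration only, such a stratification could map each use case's assessed risk tier to a corresponding degree of oversight. The tiers, criteria and enforcement measures in the sketch below are hypothetical assumptions, not a classification proposed by NITI Aayog.

```python
# Illustrative sketch of a risk-based stratification of AI use cases.
# The tiers, criteria and enforcement measures are hypothetical examples,
# not the classification proposed in the NITI Aayog working document.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g. spam filtering, product recommendations
    MEDIUM = "medium"  # e.g. decision-support tools with human review
    HIGH = "high"      # e.g. welfare eligibility, medical diagnosis

# Degree of enforcement increases with the assessed risk tier.
ENFORCEMENT = {
    RiskTier.LOW: ["self-certification against notified principles"],
    RiskTier.MEDIUM: ["self-certification", "periodic internal ethics review"],
    RiskTier.HIGH: ["independent external audit",
                    "sectoral regulator oversight",
                    "continuous post-deployment monitoring"],
}


@dataclass
class UseCase:
    name: str
    affects_rights_or_benefits: bool  # touches legal rights or public benefits
    population_scale: int             # rough number of people affected


def stratify(use_case: UseCase) -> RiskTier:
    """Assign a hypothetical risk tier based on impact on rights and scale."""
    if use_case.affects_rights_or_benefits:
        return RiskTier.HIGH
    if use_case.population_scale > 1_000_000:
        return RiskTier.MEDIUM
    return RiskTier.LOW


if __name__ == "__main__":
    case = UseCase("welfare eligibility screening", True, 50_000_000)
    tier = stratify(case)
    print(f"{case.name}: {tier.value} risk -> {ENFORCEMENT[tier]}")
```

The point of such a tiering is simply that low-risk applications can rely on lighter-touch mechanisms such as self-certification, while high-risk applications attract the more direct enforcement tools discussed below.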
Towards enforcement of the principles, AI ethics standards are necessary but not sufficient, and need to be supplemented by guidelines and legislation. Further, notifying standards alone would be a very passive approach; more effort needs to be directed towards ensuring they are adequately integrated into an organization’s culture and workflow. Guidelines and legislation, though more powerful tools for enforcing the principles, are also less agile and less flexible to innovation.
AI ethics in the private sector
Internal ethics boards, self-assessment guides and external audits have been recommended by several experts as key mechanisms for private sector enforcement. In certain scenarios, providers could be required to allow independent audits of their solutions, which can help prevent or mitigate unanticipated outcomes.
The other major issue is distinguishing between sector-agnostic and sector-specific guidelines. Evaluating AI use cases at the industry level can help identify sector-specific variations. Depending on the impact, benefits and risks attached to the portfolio of use cases, governance frameworks can then be operationalized accordingly. This will also allow the responsible AI policy framework to be tested in real-world applications.
AI ethics in research
To ensure that research in the field of AI adheres to the principles, experts have recommended drawing from practices in healthcare. Specific practices include the creation of “Institutional Review Boards (IRBs)” or dedicated ethics review committees.
Incentivizing ethical AI research is another key focus area. NITI Aayog, while expounding the concepts of “India as the AI Garage” and a “Global AI Alliance” in the NSAI, has advocated, inter alia, international collaboration on ethical AI and government funding directed towards such projects. The NITI Aayog paper will aim to identify broad sectors in which such research can be considered.
AI ethics in public sector deployment
As public sector deployment of AI systems is expected to impact the population at large, it is important to ensure responsible public procurement of AI. To do so, mechanisms will need to be put in place to ensure that the choices of public sector procurement professionals are guided by a multidisciplinary group of experts in the field, which may take the form of “ethics review committees” that review procurement documents and evaluate proposals.
Other operationalization recommendations
A number of other techniques are being considered, including:
- Creating environments conducive to innovation, such as regulatory sandboxes that facilitate the development and adoption of AI.
- An “ethics by design” approach, whereby ethical principles are implemented from the beginning of the design process.
- Continuous post-deployment monitoring of AI systems (similar to post-market surveillance for drugs and medical devices).
- Ensuring accessible and affordable grievance redressal mechanisms for decisions made by the AI system.
- In the case of AI-based decision-making, contestability (the ability to appeal) also becomes important. If an algorithm will make decisions that affect people’s rights and public benefits, it is important to describe how the administrative process preserves due process by enabling automated decisions to be contested (a minimal sketch of such monitoring and appeal logging follows this list).
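A minimal sketch, assuming a hypothetical decision-logging setup (the class names, fields and thresholds below are illustrative and not drawn from the NITI Aayog document), of how post-deployment monitoring and contestability might fit together: each automated decision is recorded so it can be audited and appealed, and a simple drift check compares live outcomes against a baseline.

```python
# Hypothetical sketch of post-deployment monitoring and contestability logging.
# Field names, thresholds and the appeal workflow are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One automated decision, kept so it can later be audited or appealed."""
    subject_id: str
    decision: str                 # e.g. "approved" / "denied"
    model_version: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appealed: bool = False


class DecisionLog:
    def __init__(self, baseline_approval_rate: float, drift_tolerance: float = 0.10):
        self.records: list[DecisionRecord] = []
        self.baseline = baseline_approval_rate
        self.tolerance = drift_tolerance

    def record(self, rec: DecisionRecord) -> None:
        self.records.append(rec)

    def file_appeal(self, subject_id: str) -> bool:
        """Mark the latest decision for a subject as contested (appealed)."""
        for rec in reversed(self.records):
            if rec.subject_id == subject_id:
                rec.appealed = True
                return True
        return False

    def drift_alert(self) -> bool:
        """Flag if the live approval rate drifts too far from the baseline."""
        if not self.records:
            return False
        approvals = sum(r.decision == "approved" for r in self.records)
        live_rate = approvals / len(self.records)
        return abs(live_rate - self.baseline) > self.tolerance


if __name__ == "__main__":
    log = DecisionLog(baseline_approval_rate=0.70)
    log.record(DecisionRecord("applicant-001", "denied", "model-v1.2"))
    log.record(DecisionRecord("applicant-002", "approved", "model-v1.2"))
    log.file_appeal("applicant-001")          # contestability: decision can be appealed
    print("drift alert:", log.drift_alert())  # monitoring: live rate vs. baseline
```

In practice, a deploying agency would pair such logging with the accessible grievance redressal channels described above, so that a contested decision can be traced back to the model version and context in which it was made.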
A key aspect of India’s ambition of #AIforAll is responsible AI: balancing ethical considerations with the need for innovation. When released, India’s Responsible AI for All paper will lay down recommendations for addressing some of the AI ethics challenges in India’s future roadmap for AI.