
In a first, the UK’s NHS takes up algorithmic impact assessments to uncover potential risks

The process looks to address and eradicate algorithm bias before granting AI systems access to troves of medical records.

The UK’s National Health Service (NHS) will be the first healthcare system in the world to undertake algorithmic impact assessments (AIAs) in order to maximise the benefits and mitigate the harms of artificial intelligence (AI) technologies in healthcare, the Ada Lovelace Institute said in a press release. The institute has designed the impact assessment for the NHS.

The NHS will undertake this assessment on a trial basis at the NHS AI Lab. “The framework will be used in a pilot to support researchers and developers in assessing the possible risks of an algorithmic system before they are granted access to NHS patient data,” said the release. The institute will use two databases for the assessment, namely: the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).

In the release, the institute described NCCID as a central database of medical images from hospital patients across the country that supports researchers to better understand COVID-19 and develop technology enabling the best care. “The proposed NMIP will expand on the NCCID and enable the training and testing of a wider range of AI systems using medical imaging for screening and diagnostics,” it added.

Data-driven technologies (including AI) are increasingly being used in healthcare to help with detection, diagnosis, and prognosis, the release said. However, there are legitimate concerns that AI could exacerbate health inequalities and entrench social biases (for example, training data biases have resulted in AI systems for diagnosing skin cancer risk being less accurate for people of colour).

A closer look at the algorithmic assessment protocol

The Ada Lovelace Institute detailed the assessment in another document; the steps it describes include:

  1. AIA reflexive exercise: Firstly, an impact-identification exercise will be completed by the applicant team(s) and submitted to the NMIP Data Access Committee (DAC) as part of the NMIP filtering. “This templated exercise prompts teams to detail the purpose, scope and intended use of the proposed system, model or research, and who will be affected. It also provokes reflexive thinking about common ethical concerns, consideration of intended and unintended consequences and possible measures to help mitigate any harms,” the institute said.
  2. Application filtering: After this, an initial process of application filtering is completed by the NMIP DAC to determine which applicants proceed to the next stage of the AIA.
  3. AIA participatory workshop: Then, an interactive workshop is held wherein participants can pose questions and pass judgement on the harm and benefit scenarios identified in the previous exercise.
  4. AIA synthesis: “The applicant team integrates the workshop findings into a template,” the institute said.
  5. Data-access decision: Finally, the NMIP DAC makes a decision about whether to grant data access. “This decision is based on criteria relating to the potential risks posed by this system and whether the product team has offered satisfactory mitigations to potentially harmful outcomes,” it added.
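The staged process above can be sketched as a simple gated pipeline. This is an illustrative model only, not the NHS's or NMIP's actual implementation; the stage names, the per-stage pass/fail flags, and the decision logic are all assumptions made for the sketch.

```python
# Illustrative sketch of the AIA stages as a gated pipeline.
# Each stage must be cleared before the next; the final gate is
# the NMIP Data Access Committee's data-access decision.

AIA_STAGES = [
    "reflexive_exercise",      # applicant details purpose, scope, affected groups
    "application_filtering",   # NMIP DAC decides which applicants proceed
    "participatory_workshop",  # participants interrogate harm/benefit scenarios
    "synthesis",               # applicant integrates workshop findings
    "data_access_decision",    # DAC grants or refuses access to patient data
]

def run_aia(application: dict) -> bool:
    """Walk an application through each stage; a failed gate ends the process."""
    for stage in AIA_STAGES:
        # Hypothetical boolean flag recording whether this stage was cleared.
        if not application.get(stage, False):
            print(f"Stopped at stage: {stage}")
            return False
    return True

# An application that clears every stage is granted data access.
cleared = {stage: True for stage in AIA_STAGES}
print(run_aia(cleared))  # True
```

The point of the gating is that data access is never the first question: an application that fails an earlier stage, such as the participatory workshop, never reaches the DAC's final decision.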

India’s draft Data Protection Bill also requires impact assessments

The Joint Parliamentary Committee (JPC) said in its report that data protection impact assessments will be necessary for data fiduciaries carrying out data processing activities. The report said that fiduciaries could be liable to pay a fine ‘as may be prescribed’, up to a maximum penalty, for violations of the Data Protection Act.

Specifically, its provisions lay down —

A prescribed fine, up to Rs 5 crore or 2% of the total global turnover for the preceding year, whichever is higher. This will apply in case of violations of the following provisions:

  1. Prompt action against a data breach
  2. Registering with the Data Protection Authority
  3. Undertaking data protection impact assessment and data audit
  4. Appointing a data protection officer
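The penalty cap described above is straightforward arithmetic: the higher of a fixed Rs 5 crore floor and 2% of the preceding year's global turnover. The sketch below is for illustration only; the Bill's final text and prescribed rules would govern any actual penalty.

```python
# Illustrative calculation of the draft Bill's maximum penalty:
# the higher of Rs 5 crore and 2% of preceding-year global turnover.

CRORE = 10_000_000  # 1 crore = 10 million rupees

def max_penalty(global_turnover_inr: float) -> float:
    """Return the penalty cap for a given global annual turnover (in INR)."""
    return max(5 * CRORE, 0.02 * global_turnover_inr)

# A firm with Rs 1,000 crore turnover: 2% is Rs 20 crore, above the Rs 5 crore floor.
print(max_penalty(1000 * CRORE) / CRORE)  # 20.0

# A firm with Rs 100 crore turnover: 2% is only Rs 2 crore, so the floor applies.
print(max_penalty(100 * CRORE) / CRORE)  # 5.0
```

In effect, the Rs 5 crore figure acts as a floor on the cap: it binds only for firms whose global turnover is below Rs 250 crore, beyond which the 2% term dominates.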

Potential of religion-based discrimination in Delhi Police’s use of facial recognition

Bias in algorithms is a reality, as substantiated in this study by Jai Vipra, Senior Resident Fellow at the Vidhi Centre for Legal Policy, who mapped out police station jurisdictions and found that in Delhi, Muslims are more likely to be targeted by the police if facial recognition technology is used.

“Given the fact that Muslims are represented more than the city average in the over-policed areas, and recognising historical systemic biases in policing Muslim communities in India in general and in Delhi in particular, we can reasonably state that any technological intervention that intensifies policing in Delhi will also aggravate this bias. The use of FRT in policing in Delhi will almost inevitably disproportionately affect Muslims, particularly those living in over-policed areas like Old Delhi or Nizamuddin.” — Empirical Study






MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
