The UK’s National Health Service (NHS) will be the first healthcare system in the world to undertake algorithmic impact assessments (AIAs) to maximise the benefits and mitigate the harms of artificial intelligence (AI) technologies in healthcare, the Ada Lovelace Institute said in a press release. The institute has designed the impact assessment for the NHS.
The NHS will undertake this assessment on a trial basis at the NHS AI Lab. “The framework will be used in a pilot to support researchers and developers in assessing the possible risks of an algorithmic system before they are granted access to NHS patient data,” said the release. The institute will use two databases for the assessment, namely: the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
In the release, the institute described NCCID as a central database of medical images from hospital patients across the country that supports researchers to better understand COVID-19 and develop technology enabling the best care. “The proposed NMIP will expand on the NCCID and enable the training and testing of a wider range of AI systems using medical imaging for screening and diagnostics,” it added.
Data-driven technologies (including AI) are increasingly being used in healthcare to help with detection, diagnosis, and prognosis, the release said. However, there are legitimate concerns that AI could exacerbate health inequalities and entrench social biases (for example, training data biases have resulted in AI systems for diagnosing skin cancer risk being less accurate for people of colour).
A closer look at the algorithmic assessment protocol
The Ada Lovelace Institute detailed the assessment in a separate document, describing it as a seven-step process; the key steps are —
- AIA reflexive exercise: Firstly, an impact-identification exercise will be completed by the applicant team(s) and submitted to the NMIP Data Access Committee (DAC) as part of the NMIP filtering. “This templated exercise prompts teams to detail the purpose, scope and intended use of the proposed system, model or research, and who will be affected. It also provokes reflexive thinking about common ethical concerns, consideration of intended and unintended consequences and possible measures to help mitigate any harms,” the institute said.
- Application filtering: After this, an initial process of application filtering is completed by the NMIP DAC to determine which applicants proceed to the next stage of the AIA.
- AIA participatory workshop: Then, an interactive workshop is held wherein participants can pose questions and pass judgement on the harm and benefit scenarios identified in the previous exercise.
- AIA synthesis: “The applicant team integrates the workshop findings into a template,” the institute said.
- Data-access decision: Finally, the NMIP DAC makes a decision about whether to grant data access. “This decision is based on criteria relating to the potential risks posed by this system and whether the product team has offered satisfactory mitigations to potentially harmful outcomes,” it added.
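The steps above amount to a staged gating process: an application advances only if it clears each stage in order, and the DAC's final decision depends on everything before it. A minimal sketch of that flow follows; all stage and function names here are illustrative assumptions, not part of any actual NMIP tooling (the real process is a human-led review, not software).

```python
# Hypothetical sketch of the AIA pipeline as a staged gate.
# Stage names mirror the steps described in the article.
STAGES = [
    "reflexive_exercise",      # applicant documents purpose, scope, affected groups
    "application_filtering",   # NMIP DAC decides who proceeds
    "participatory_workshop",  # participants interrogate harm/benefit scenarios
    "synthesis",               # applicant integrates workshop findings
    "data_access_decision",    # DAC grants or refuses data access
]

def run_aia(application: dict) -> bool:
    """Walk an application through each stage in order.

    Any stage not marked complete halts the process, so a team
    cannot reach the data-access decision without, e.g., having
    held the participatory workshop.
    """
    for stage in STAGES:
        if not application.get(stage, False):
            return False
    return True

# An application that has cleared every stage is eligible for access.
complete = {stage: True for stage in STAGES}
print(run_aia(complete))  # → True

# One that skipped the workshop is not.
partial = dict(complete, participatory_workshop=False)
print(run_aia(partial))  # → False
```

The point of modelling it this way is that the stages are sequential and each is a hard gate: failing any one of them means no data access.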
India’s draft Data Protection Bill also requires impact assessments
The Joint Parliamentary Committee (JPC), in its report, said that data protection impact assessments will be necessary for data fiduciaries carrying out data processing activities. The report said that fiduciaries that violate the Data Protection Act could be liable to pay a fine ‘as may be prescribed’, up to a maximum penalty.
Specifically, its provisions lay down —
A prescribed fine of up to Rs 5 crore or 2% of total global turnover for the preceding year, whichever is higher. This applies to violations of the following provisions:
- Prompt action against a data breach
- Registering with the Data Protection Authority
- Undertaking data protection impact assessment and data audit
- Appointing a data protection officer
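The “whichever is higher” cap above is simple arithmetic: compare a flat Rs 5 crore against 2% of the preceding year’s global turnover and take the larger. A small illustration of that formula (figures in rupees; this shows the calculation only, not legal guidance):

```python
def max_penalty(global_turnover_inr: float) -> float:
    """Higher of Rs 5 crore or 2% of total global turnover
    for the preceding financial year."""
    FIVE_CRORE = 5 * 10**7  # Rs 5 crore = 50,000,000
    return max(FIVE_CRORE, 0.02 * global_turnover_inr)

# A firm with Rs 1,000 crore turnover: 2% is Rs 20 crore,
# which exceeds the Rs 5 crore floor.
print(max_penalty(1000 * 10**7))  # → 200000000.0

# A firm with Rs 10 crore turnover: 2% is only Rs 20 lakh,
# so the Rs 5 crore floor applies.
print(max_penalty(10 * 10**7))  # → 50000000
```

In other words, the flat amount acts as a floor for smaller fiduciaries, while the 2% figure scales the maximum penalty for large ones.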
Potential for religion-based discrimination in Delhi Police’s use of facial recognition
Bias in algorithms is a reality, as substantiated by a study from Jai Vipra, Senior Resident Fellow at the Vidhi Centre for Legal Policy, which mapped out police station jurisdictions in Delhi and found that Muslims are more likely to be targeted by the police if facial recognition technology is used.
“Given the fact that Muslims are represented more than the city average in the over-policed areas, and recognising historical systemic biases in policing Muslim communities in India in general and in Delhi in particular, we can reasonably state that any technological intervention that intensifies policing in Delhi will also aggravate this bias. The use of FRT in policing in Delhi will almost inevitably disproportionately affect Muslims, particularly those living in over-policed areas like Old Delhi or Nizamuddin.” — Empirical Study
- The use of facial recognition technology for policing in Delhi: An empirical study of potential religion-based discrimination
- Lucknow Safe City Project: Uttar Pradesh To Deploy Facial Recognition, ‘Label’ Faces Of Suspects
- Legal Notice To Hyderabad Police Commissioner Highlights Lack Of Lawfulness Of Facial Recognition Measures
- RTI: Kolkata, Delhi Police Refuse To Give Information On Facial Recognition Systems