This is a working resource page for the Ethics activities of the Bridge2AI program.
The Bridge2AI program uses this wiki to bring together resources relevant to the Ethics Pillar within the Bridge2AI Consortium, incorporating activities from the Data Generation Projects and the BRIDGE Center.
Current Ethics Activities of the Consortium
- (2/7/24) refining a manuscript on ethical dilemmas in the lifecycle of AI/ML development and application. This is close to going out the door for review.
- (2/7/24) working on a framework for ethical sourcing of data to be used in AI/ML
- (2/7/24) collecting and reviewing best practices for consent for data in cohort studies, particularly with a focus on programs that are hypothesis-free (e.g., All of Us)
- (2/7/24) developing ELSI materials for the Open House and data sharing more broadly
- Review and guidance on licenses
- Review of various codes of conduct to derive one for the program
- Review of ELSI training programs. We have considered CITI, and the OHRP program (https://www.hhs.gov/ohrp/education-and-outreach/human-research-protection-training/index.html) is another option under review.
- (2/7/24) In collaboration with the Salutogenesis Grand Challenge, the Ethics WG has been supporting a seminar series on ethics in AI. Seminars held so far include:
- Berk Ustun, 6/6/23, “Towards Personalization Without Harm”
- Brad Malin, 10/17/23, “One Size Does not Fit All: How to Build Respectful Cohorts for Biomedical Data Science”
- Babak Salimi, 11/28/23, “Certifying Fair Predictive Models in the Face of Selection Bias”
- Xiaoqian Jiang, 12/5/23, “Sensitive Data Detection with High-Throughput Machine Learning Models in Electronic Health Records”
- (2/7/24) looking into specific approaches to bias mitigation and fairness enhancement (a minimal sketch of one such technique appears below). An example of a technical manuscript that we have developed as an artifact: https://www.biorxiv.org/content/10.1101/2023.08.04.551906v1
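One widely cited pre-processing technique for bias mitigation is reweighing (Kamiran & Calders, 2012). The Python sketch below is a minimal, hypothetical illustration of that technique; the column names and toy data are invented for the example and are not drawn from the manuscript linked above.

```python
# Minimal sketch of the "reweighing" pre-processing technique
# (Kamiran & Calders, 2012). Each row gets weight
#   w = P(group) * P(label) / P(group, label),
# which up-weights (group, label) combinations that are rarer than
# they would be if group membership and label were independent.
# Column names ("group", "label") are illustrative only.
import pandas as pd

def reweighing_weights(df, group_col="group", label_col="label"):
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size().div(n)
    weights = [
        p_group[g] * p_label[y] / p_joint[(g, y)]
        for g, y in zip(df[group_col], df[label_col])
    ]
    return pd.Series(weights, index=df.index, name="sample_weight")

# Toy cohort: group B rarely receives a positive label.
toy = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 4,
    "label": [1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
})
w = reweighing_weights(toy)
# Positive examples from group B get weight ~2.33, so a downstream
# model (e.g., scikit-learn's fit(X, y, sample_weight=w)) sees a
# label distribution that is independent of group membership.
```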
Publications:
2/15/24, NIST Researchers Suggest Historical Precedent for Ethical AI Research
2/24/22, Scientific American, The Culture of Engineering Overlooks the People It’s Supposed to Serve
Book:
Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/dp/03164514…
OTHER RESOURCES:
- VPH Institute - 12 GUIDING PRINCIPLES FOR ETHICAL AND LEGAL CONDUCT IN IN SILICO TRIALS
- NIST AI Risk Management Framework - PLAYBOOK
- Pervasive Data Ethics for Computational Research
- The Consortium for the Science of Sociotechnical Systems Researchers (CSST)
- WHO outlines principles for ethics in health AI (June 30, 2021)
- Understanding Artificial Intelligence Ethics and Safety, The Alan Turing Institute
- Government's Role in AI, Brookings Institute, https://www.youtube.com/watch?v=PO08ECx8ru4
- A Closer Look: The Department of Defense AI Ethical Principles, The JAIC 24 FEB 2020
- The Institute for Ethical AI & Machine Learning
- NIH BRAIN Initiative, Neuroethics - "Instilling a culture of ethical inquiry, not compliance"
- A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle
The Ethics of Data Collection -- Is your data ethically sourced?
Ethics: Fair Representation and Transparency
The core ethical principle is “do no harm.” Biased outcomes from these tools stem from human factors that can be mitigated through ethical data curation and AI design principles. Biased human judgments can enter AI systems both through the data the systems learn from and through the way algorithms are designed. Algorithms must therefore be explainable, auditable, and transparent in order to mitigate potential biases resulting from historical patterns of discrimination.
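To make “auditable” concrete, here is a minimal sketch of a group-fairness audit for a binary classifier. It assumes a single protected attribute; the metric names follow common usage (demographic parity, equal opportunity), and the synthetic inputs are illustrative, not a Bridge2AI standard.

```python
# Minimal group-fairness audit for a binary classifier. Reports each
# group's selection rate (P(pred=1 | group)) and true-positive rate,
# then the worst-case gaps across groups.
import numpy as np

def fairness_audit(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    per_group = {}
    for g in np.unique(group):
        mask = group == g
        pos = mask & (y_true == 1)
        per_group[g] = {
            "selection_rate": float(y_pred[mask].mean()),
            "tpr": float(y_pred[pos].mean()) if pos.any() else float("nan"),
        }
    sel = [v["selection_rate"] for v in per_group.values()]
    tpr = [v["tpr"] for v in per_group.values()]
    return {
        "per_group": per_group,
        "demographic_parity_gap": float(np.nanmax(sel) - np.nanmin(sel)),
        "equal_opportunity_gap": float(np.nanmax(tpr) - np.nanmin(tpr)),
    }

# Example audit on synthetic predictions.
audit = fairness_audit(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    group=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```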
In addition, risks of systemic “isms” (racism, sexism, and ageism) in medicine are often overlooked. Without appropriate precautions, AI systems may replicate patterns of racial, gender, and age bias in ways that deepen and justify historical inequality. It is ethically imperative that data be representative of the actual population and account for the multitude of factors that contribute to, and significantly impact, health outcomes for all populations, especially the vulnerable and marginalized. Data sets must include minority and marginalized populations, with attention to historical biases. The field risks replicating and perpetuating racial, gender, and age biases and historical power imbalances if historical context is not considered in data curation, in the construction and testing of algorithms, in machine training, and in application.
Ethically, there needs to be greater transparency and richer demographic data on the racial, ethnic, gender, and age profiles of new data, and secondary data sets should be used only when they contain adequate data on marginalized populations. Vulnerable research participants deserve special attention: they may face stigmatization, limited power, lower educational levels, poverty, limited resources, inadequate physical strength, or other constraints on their ability to protect and defend their own interests.
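The two paragraphs above call for representative cohorts and transparent demographic reporting. As a sketch of what such a check might look like in code, the snippet below compares a cohort’s demographic mix against reference population shares; the group labels and reference figures are placeholders, not real census data.

```python
# Sketch of a representativeness report: compare a cohort's demographic
# mix against reference population shares (e.g., census estimates).
# Group labels and reference shares are placeholders for illustration.
import pandas as pd

def representation_report(cohort_groups, reference_shares):
    observed = pd.Series(cohort_groups).value_counts(normalize=True)
    rows = []
    for group, ref in reference_shares.items():
        obs = float(observed.get(group, 0.0))
        rows.append({"group": group, "cohort_share": obs,
                     "reference_share": ref, "gap": obs - ref})
    # Most under-represented groups sort first.
    return pd.DataFrame(rows).sort_values("gap")

report = representation_report(
    cohort_groups=["White"] * 70 + ["Black"] * 8 + ["Asian"] * 12 + ["Hispanic"] * 10,
    reference_shares={"White": 0.58, "Black": 0.13, "Asian": 0.06, "Hispanic": 0.19},
)
# Negative "gap" rows flag groups whose cohort share falls short of the
# reference population, a signal to revisit recruitment or weighting.
```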
AI needs to address bias and fairness in ways that go beyond technical debiasing, extending to application biases through a wider social analysis of how AI is used in context. This necessitates multidisciplinary expertise, including underrepresented non-technical users and community stakeholders. Cultural presuppositions must be considered throughout design, encoding, training, and application.
Standard ethical operating procedures (SOPs) should be developed to assure ethical conduct in data curation, algorithm development, machine training, and application, in order to mitigate biases and ensure no harm.