AI is poised to have an enormous impact on society. What should that impact be, and who should get to decide? The goal of this course is to critically examine AI research and deployment pipelines, with in-depth examination of why understanding social structures is essential to understanding impact. In application domains, we will examine questions like “who are the key stakeholders?”, “who is affected by this technology?”, and “who benefits from this technology?”. We will also examine the converse: how can AI help us learn about these domains, and can we build on this knowledge to design AI for “social good”? As a graduate-level course, topics will focus on current research, including the development and deployment of technologies like large language models and decision support tools, and students will conduct a final research project.

Prerequisites: At least one graduate-level computer science course in Artificial Intelligence or Machine Learning (e.g., NLP, Computer Vision), two preferred, or permission of the instructor. Students must be comfortable reading recent research papers and discussing key concepts and ideas.

Acknowledgements Thank you to Dan Jurafsky for sharing course materials and to Daniel Khashabi for sharing the course website template!

Schedule

The current class schedule is below and is subject to change, particularly the specific readings:

Date Topic Readings Work Due
Tue Aug 27 Introduction, Origins and Data [slides]
  1. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving From Artificial Intelligence Principles to Practice
Thu Aug 29 No class
Tue Sept 3 Data: Ownership [slides]
  1. Lundberg, Ian, et al. "Privacy, ethics, and data access: A case study of the Fragile Families Challenge." Socius 5 (2019)
  2. Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." FAccT. PMLR, 2018
  3. "Facial recognition's 'dirty little secret': Millions of online photos scraped without consent", NBC article
Reading Responses by 8pm Monday; course goals survey by class 9/10
Thu Sept 5 Data: Privacy
  1. Carlini, Nicolas, et al. "Extracting training data from diffusion models." 32nd USENIX Security Symposium (USENIX Security 23). 2023.
  2. Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr. "What Does it Mean for a Language Model to Preserve Privacy?" FAccT 2022.
Reading Responses by 8pm Wednesday
Tue Sept 10 Data: Crowdsourcing
  1. Boaz Shmueli, Jan Fell, Soumya Ray, and Lun-Wei Ku. 2021. Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3758–3769, Online. Association for Computational Linguistics.
  2. "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic", Time
Reading Responses by 8pm Monday
Thu Sept 12 Fairness, Bias and Stereotypes: Fairness metrics
  1. Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 2012.
  2. Chouldechova, Alexandra. "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", Big Data, Special issue on Social and Technical Trade-Offs. 2017.
  3. (optional) Corbett-Davies, Sam, et al. "Algorithmic Decision Making and the Cost of Fairness", KDD. 2017.
  4. (optional) Mehrabi, Ninareh et al. "A Survey on Bias and Fairness in Machine Learning", ACM Computing Surveys. 2021.
Reading Responses by 8pm Wednesday
Tue Sept 17 Fairness, Bias and Stereotypes: Bias in Classification [slides]
  1. Zhao et al. "Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints", EMNLP. 2017.
  2. Obermeyer et al. "Dissecting racial bias in an algorithm used to manage the health of populations," Science, 2019.
  3. (optional) Sap et al. "The Risk of Racial Bias in Hate Speech Detection", ACL. 2019.
Reading Responses by 8pm Monday
Thu Sept 19 Fairness, Bias and Stereotypes: Stereotypes in Generation [slides]
  1. Feng et al. "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models", ACL 2023.
  2. Bianchi, Federico, et al. "Easily accessible text-to-image generation amplifies demographic stereotypes at large scale." FAccT 2023.
  3. (optional) Myra Cheng, Esin Durmus, and Dan Jurafsky. "Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models", ACL 2023.
Reading Responses by 8pm Wednesday
Tue Sept 24 Values and Design: Value sensitive design
  1. Friedman, Batya, et al. "Value sensitive design and information systems." Early engagement and new technologies: Opening up the laboratory (2013): 55-95.
  2. Umbrello, Steven, and Ibo Van de Poel. "Mapping value sensitive design onto AI for social good principles." AI and Ethics 1.3 (2021).
Reading Responses by 8pm Monday
Thu Sept 26 Values and Design: Participatory design
  1. Brown et al. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services. CHI 2019.
  2. Sloane et al. Participation is not a Design Fix for Machine Learning. EAAMO 2022.
Reading Responses by 8pm Wednesday
Tue Oct 1 Values and Design: Surveyed Values
  1. Birhane et al. The Values Encoded in Machine Learning Research. FAccT 2022.
  2. Widder et al. It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them? FAccT 2023.
Reading Responses by 8pm Monday
Thu Oct 3 Societal Impact: Human-in-the-loop Decision Making
  1. Dasha Pruss. Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument. FAccT 2023.
  2. Alex Albright. If You Give a Judge a Risk Score: Evidence from Kentucky Bail Decisions. 2019 [Only need to read the Abstract and Introduction]
Reading Responses by 8pm Wednesday
Tue Oct 8 Societal Impact: Hallucinations and Misinformation
  1. Kreps, Sarah, R. Miles McCain, and Miles Brundage. "All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation." Journal of Experimental Political Science 9.1 (2022): 104-117.
  2. Koenecke, Allison, et al. "Careless Whisper: Speech-to-Text Hallucination Harms." FAccT 2024.
Reading Responses by 8pm Monday
Thu Oct 10 Societal Impact: Overview
  1. Sayash Kapoor*, Rishi Bommasani*, et al. "On the Societal Impact of Open Foundation Models." ICML 2024.
  2. Harry H. Jiang et al. "AI Art and its Impact on Artists." AIES 2023.
Reading Responses by 8pm Wednesday
Tue Oct 15 Proposal Presentations [Prepare for proposal presentations]
Thu Oct 17 Fall Break

Policies

Attendance policy This is a graduate-level course revolving around in-person discussion. Students are expected to attend class and should notify the instructors if there are extenuating circumstances.

Course Conduct This is a discussion class focused on controversial topics. All students are expected to respect everyone's perspectives and input and to contribute to a welcoming and inclusive climate. We, the instructors, will strive to make this classroom an inclusive space for all students, and we welcome feedback on ways to improve.

Academic Integrity This course has a zero-tolerance policy regarding plagiarism and other forms of cheating, and incidents of academic dishonesty will be reported. A student who has doubts about how the Honor Code applies to this course should obtain specific guidance from the course instructor before submitting the assignment in question.

Discrimination and Harassment The Johns Hopkins University is committed to equal opportunity for its faculty, staff, and students. To that end, the university does not discriminate on the basis of sex, gender, marital status, pregnancy, race, color, ethnicity, national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, military status, immigration status, or other legally protected characteristics. The University's Discrimination and Harassment Policy and Procedures describes how to report or file a complaint of discrimination or harassment based on any of the protected statuses listed above, as well as the University's prompt and equitable response to such complaints.

Personal Well-being Take care of yourself! Being a student can be challenging and your physical and mental health is important. If you need support, please seek it out. Here are several of the many helpful resources on campus: