AI is poised to have an enormous impact on society. What should that impact be, and who should get to decide it? The goal of this course is to critically examine AI research and deployment pipelines, examining in depth how understanding social structures is necessary to understand that impact. In application domains, we will examine questions like "who are the key stakeholders?", "who is affected by this technology?", and "who benefits from this technology?". We will also conversely examine: how can AI help us learn about these domains, and can we build from this knowledge to design AI for "social good"? As a graduate-level course, topics will focus on current research, including the development and deployment of technologies like large language models and decision support tools, and students will conduct a final research project.

Prerequisites: At least one graduate-level computer science course in Artificial Intelligence or Machine Learning (including NLP, Computer Vision, etc.), two preferred, or permission of the instructor. Students must be comfortable reading recent research papers and discussing key concepts and ideas.

Acknowledgements Thank you to Dan Jurafsky for sharing course materials and to Daniel Khashabi for sharing the course website template!

Schedule

The current class schedule is below. The schedule is subject to change, particularly the specific readings:

Date Topic Readings Work Due
Tue Aug 29 Course overview, plan and expectations [relevant slides]
Thu Aug 31 Origins and Data: Research ethics and data privacy
  1. The Belmont Report
  2. Lundberg, Ian, et al. "Privacy, ethics, and data access: A case study of the Fragile Families Challenge." Socius 5 (2019)
Reading Responses by 5pm Wednesday
Tue Sept 5 Origins and Data: Ownership [relevant slides]
  1. Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." FAccT. PMLR, 2018
  2. "Facial recognition's 'dirty little secret': Millions of online photos scraped without consent", NBC article
  3. "Training the Next Generation of Indigenous Data Scientists", NYT article [available via JHU login here]
Reading Responses by 9:30am Monday
Thu Sept 7 Origins and Data: Crowdsourcing [relevant slides]
  1. Boaz Shmueli, Jan Fell, Soumya Ray, and Lun-Wei Ku. 2021. Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3758–3769, Online. Association for Computational Linguistics.
  2. "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic", Time
  3. (optional) "AI interview: Krystal Kauffman, lead organiser, Turkopticon", Computer Weekly
Course Goals Sheet (email to instructors) by class Thursday, Reading Responses by 5pm Wednesday
Tue Sept 12 Fairness, Bias and Stereotypes: Fairness metrics [relevant slides]
  1. Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. 2012.
  2. Chouldechova, Alexandra. "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", Big Data, Special issue on Social and Technical Trade-Offs. 2017.
  3. (optional) Corbett-Davies, Sam, et al. "Algorithmic Decision Making and the Cost of Fairness", KDD. 2017.
  4. (optional) Mehrabi, Ninareh et al. "A Survey on Bias and Fairness in Machine Learning", ACM Computing Surveys. 2021.
Reading Responses by 9am
Thu Sept 14 Fairness, Bias and Stereotypes: Classification/Prediction [relevant slides]
  1. Zhao et al. "Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints", EMNLP. 2017.
  2. Sap et al. "The Risk of Racial Bias in Hate Speech Detection", ACL. 2019.
  3. (optional) Field et al. "Examining risks of racial biases in NLP tools for child protective services" FAccT. 2023.
Reading Responses by 9am
Tue Sept 19 Fairness, Bias and Stereotypes: Generation
  1. Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models. ACL 2023.
  2. Bianchi, Federico, et al. "Easily accessible text-to-image generation amplifies demographic stereotypes at large scale." FAccT 2023.
Reading Responses by 9am
Thu Sept 21 Research Codes of Ethics
  1. The code of ethics from at least 2 publication venues: NeurIPS, ACM (also adopted by ACL). You may choose a code of ethics from a different conference, in which case please link it in your Piazza post
  2. Kobi Leins, Jey Han Lau, and Timothy Baldwin. "Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis?". ACL. 2020.
Reading Responses by 9am
Tue Sept 26 Privacy in Generative AI
  1. Carlini, Nicolas, et al. "Extracting training data from diffusion models." 32nd USENIX Security Symposium (USENIX Security 23). 2023.
  2. Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr. "What Does it Mean for a Language Model to Preserve Privacy?" FAccT 2022.
Reading Responses by 9am
Thu Sept 28 Environmental Impact [No required readings] Project Literature Review (11:59pm on Friday 9/29)
Tue Oct 3 Values and design: Value sensitive design
  1. Friedman, Batya, et al. "Value sensitive design and information systems." Early engagement and new technologies: Opening up the laboratory (2013): 55-95.
  2. Umbrello, Steven, and Ibo Van de Poel. "Mapping value sensitive design onto AI for social good principles." AI and Ethics 1.3 2021.
Reading Responses by 9am
Thu Oct 5 Values and design: Participatory design
  1. Katell et al. Toward situated interventions for algorithmic equity: lessons from the field. FAccT. 2020.
  2. Sloane et al. Participation is not a Design Fix for Machine Learning. EAAMO 2022.
Reading Responses by 9am
Tue Oct 10 Values and design: Surveyed Values
  1. Birhane et al. The Values Encoded in Machine Learning Research. FAccT 2022.
  2. Widder et al. It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them? FAccT 2023.
Reading Responses by 9am
Thu Oct 12 Policy and Regulation
  1. Hutson, Matthew "Rules to keep AI in check: nations carve different paths for tech regulation", Nature news feature 2023.
  2. "Blueprint for an AI Bill of Rights", OSTP, 2022
  3. Sarah Zheng and Jane Zhang "China Wants to Regulate Its Artificial Intelligence Sector Without Crushing It", Time Magazine, 2023
Tue Oct 17 Project Presentations [Prepare for proposal presentations]
Thu Oct 19 Fall Break
Tue Oct 24 Social Impact: Defining AI for Good
  1. Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson, "Roles for computing in social change" FAccT 2020.
  2. Ben Green, "'Good' isn’t good enough", AI for Social Good workshop at NeurIPS, 2019.
Reading Responses by 9am
Thu Oct 26 Trustworthy AI Project proposals due at 11:59pm
Tue Oct 31 Social Impact: Child Welfare
  1. Brown, et al. 2019. Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services. CHI 2019.
  2. Devansh Saxena, Shion Guha. Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-Making. ACM Journal on Responsible Computing 2023. [Abstract; Sections 1, 4-7; can skip or skim other sections] [link without the "Just Accepted" watermark]
Reading Responses by 9am
Thu Nov 2 AI and Power, with guest Ria Kalluri
  1. Langdon Winner, "Do Artifacts Have Politics?", Daedalus, 1980.
Reading Responses by 9am (1 paragraph is ok since it's 1 reading)
Tue Nov 7 Social Impact: Criminal Justice
  1. Dasha Pruss. Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument. FAccT 2023.
  2. Alex Albright. If You Give a Judge a Risk Score: Evidence from Kentucky Bail Decisions. 2019 [Only need to read the Abstract and Introduction]
Responses by 9am
Thu Nov 9 AI Policy, with guest Peter Henderson
  1. White House Executive Order, 2023. A formatted version is available here. Please read Sections 1 and 2, and choose at least 2 other sections to read in depth. Skimming the rest is ok.
Responses by 9am (1 paragraph is ok)
Tue Nov 14 Social Impact: Policing
  1. Rob Voigt et al. Language from police body camera footage shows racial disparities in officer respect. PNAS 2017
  2. Matt Franchi et al. Detecting disparities in police deployments using dashcam data. FAccT 2023
Responses by 9am
Thu Nov 16 Social Impact: Healthcare
  1. Emma Pierson et al. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nature Medicine 2021
  2. Wolf et al. The SEE Study: Safety, Efficacy, and Equity of Implementing Autonomous Artificial Intelligence for Diagnosing Diabetic Retinopathy in Youth. Diabetes Care. 2021. [Read just the first few paragraphs before the "RESEARCH DESIGN AND METHODS" section. This is to provide some additional context for #3]
  3. Wolf et al. Clinical Implementation of Autonomous Artificial Intelligence Systems for Diabetic Eye Exams: Considerations for Success. Practical Pointers. 2023
Responses by 9am
Tue Nov 21 Fall Recess
Thu Nov 23 Fall Recess
Tue Nov 28 Social Impact: ChatGPT and Disinformation
  1. Alejo José G. Sison et al. ChatGPT: More Than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective. International Journal of Human–Computer Interaction. 2023
Thu Nov 30 No class
Tue Dec 5 Social Impact: System Automation [slides]
Thu Dec 7 Project Presentations

Policies

Attendance policy This is a graduate-level course revolving around in-person discussion. Students are expected to attend class and should notify the instructors if extenuating circumstances prevent attendance.

Course Conduct This is a discussion class focused on controversial topics. All students are expected to respect everyone's perspective and input and to contribute towards creating a welcoming and inclusive climate. We the instructors will strive to make this classroom an inclusive space for all students, and we welcome feedback on ways to improve.

Academic Integrity This course will have a zero-tolerance philosophy regarding plagiarism or other forms of cheating, and incidents of academic dishonesty will be reported. A student who has doubts about how the Honor Code applies to this course should obtain specific guidance from the course instructor before submitting the respective assignment.

Discrimination and Harassment The Johns Hopkins University is committed to equal opportunity for its faculty, staff, and students. To that end, the university does not discriminate on the basis of sex, gender, marital status, pregnancy, race, color, ethnicity, national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, military status, immigration status, or other legally protected characteristic. The University's Discrimination and Harassment Policy and Procedures provides information on how to report or file a complaint of discrimination or harassment based on any of the protected statuses listed above, and describes the University's prompt and equitable response to such complaints.

Personal Well-being Take care of yourself! Being a student can be challenging and your physical and mental health is important. If you need support, please seek it out. Here are several of the many helpful resources on campus: