Date | Session | Material |
29.10. (Markert) | Organisation, Topic Introduction, Start of Talk Assignment | Literature List, Organisation Slides, Intro Slides |
12.11. (Markert) | Types of Bias, Fairness Definitions, Legal Background | Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations (Obermeyer et al. 2019); Slides |
19.11. | Measuring Bias in Word Embeddings (PS Logvinenko, Rev: Reuter) | Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes (Garg et al. 2018); Slides Logvinenko |
26.11. | Mitigating Bias in Word Embeddings (Reading Group) | Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings (Bolukbasi et al. 2016) |
3.12. | Mitigating Bias in Word Embeddings (HS Lyu, Revs: Liang, Kaiser) | It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution (Maudslay et al. 2019); Lipstick on a Pig: Debiasing Methods Cover Up Systematic Gender Biases in Word Embeddings But Do Not Remove Them (Gonen and Goldberg 2019); Slides Lyu |
10.12. (starts s.t., ends 16:00) | Selection Bias and Unbiased Corpora (PS Kaiser, Rev: Logvinenko; HS Reinig, Revs: Lyu, Born) | It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia (Wagner et al. 2015), Slides Kaiser; Gender Bias in Coreference Resolution (Rudinger et al. 2018); Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns (Webster et al. 2018), Slides Reinig |
7.1. | Case Studies I: Bias in MT (HS Wiesenbach, Revs: Liang, Reuter) | Equalizing Gender Biases in Neural Machine Translation (Escudé Font and Costa-jussà 2019); Learning Gender-Neutral Word Embeddings (Zhao et al. 2018); Slides Wiesenbach |
14.1. | Case Studies II: Bias in Dialect Processing (HS Born, Revs: Zimmermann, Reinig) | Demographic Dialectal Variation in Social Media: A Case Study of African-American English (Blodgett et al. 2016); Twitter Universal Dependency Parsing for African-American and Mainstream American English (Blodgett et al. 2018); Slides Born |
21.1. (starts s.t., ends 16:00) | Bias as Disparate Impact in ML Classification (HS Zimmermann, Liang, Revs: Wiesenbach, Reinig) | Equality of Opportunity in Supervised Learning (Hardt et al. 2016); Classifying Without Discriminating (Kamiran and Calders 2009) |
23.1. (starts 11:15) | Case Studies III: Bias in Text Classification (HS Reuter, Revs: Lyu, Bacher) | Reducing Gender Bias in Abusive Language Detection (Park et al. 2018); Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems (Kiritchenko and Mohammad 2018) |
28.1. | Case Studies IV: Visual Semantic Role Labeling (HS Wiesenbach, Revs: Born, Zimmermann) | Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints (Zhao et al. 2017) |
4.2. | Final Discussion, Impact of Fair Machine Learning | |
TBD (mid/end of March?) | Project Progress Discussion and Feedback | Opportunity to discuss intermediate project results and problems in a group (optional for students) |
The literature for each presentation is listed in the literature list (see above). Below are some overview papers on fairness in ML and NLP, as well as links to the legal background.