Cx Talk: Abhijnan Chakraborty | Fairness in Algorithmic Decision Making

Where: Bloomberg Center 201

When: 2:30pm, Monday May 20, 2019


Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking applicants), the judiciary (offender profiling), and journalism (recommending news stories). Consequently, many recent research works have attempted to identify (measure) bias or unfairness in algorithmic decisions and proposed mechanisms to control (mitigate) such biases. However, the emphasis has been on fairness in classification or regression tasks, and fairness issues in other scenarios remain relatively unexplored. In this talk, I’ll cover our recent work on fairness in recommendation and matching systems. I’ll introduce notions of fairness in these contexts and propose techniques to achieve them. Additionally, I’ll briefly touch upon the possibility of utilizing the user interfaces of platforms (choice architecture) to achieve fair outcomes in certain scenarios. I will conclude the talk with a list of open questions and directions for future work.

Speaker Bio:

Abhijnan Chakraborty is a post-doctoral researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany. He obtained his PhD from the Indian Institute of Technology (IIT) Kharagpur under the supervision of Prof. Niloy Ganguly and Prof. Krishna Gummadi. During his PhD, he was awarded the Google India PhD Fellowship and the Prime Minister's Fellowship for Doctoral Research. Prior to starting his PhD, he spent two years at Microsoft Research India. His research interests span social computing and fairness in algorithmic decision making. He has authored several papers in top-tier computer science conferences, including WWW, KDD, CSCW, ICWSM, and MobiCom. His research has won the best paper award at ASONAM'16 and the best poster award at ECIR’19. He is one of the recipients of a highly competitive research grant from the Data Transparency Lab to advance his research on fairness and transparency in algorithmic systems.



© 2019 The Connected Experiences Lab. 
