diff --git "a/6tFKT4oBgHgl3EQfTy2d/content/tmp_files/load_file.txt" "b/6tFKT4oBgHgl3EQfTy2d/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/6tFKT4oBgHgl3EQfTy2d/content/tmp_files/load_file.txt" @@ -0,0 +1,1294 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf,len=1293 +page_content='Aleatoric and Epistemic Discrimination in Classification Hao Wang 1 Luxi He 2 Rui Gao 3 Flavio P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' Calmon 4 Abstract Machine learning (ML) models can underperform on certain population groups due to choices made during model development and bias inherent in the data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' We categorize sources of discrimina- tion in the ML pipeline into two classes: aleatoric discrimination, which is inherent in the data distri- bution, and epistemic discrimination, which is due to decisions during model development.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' We quan- tify aleatoric discrimination by determining the performance limits of a model under fairness con- straints, assuming perfect knowledge of the data distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' We demonstrate how to characterize aleatoric discrimination by applying Blackwell’s results on comparing statistical experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' We then quantify epistemic discrimination as the gap between a model’s accuracy given fairness con- straints and the limit posed by aleatoric discrimi- nation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' We apply this approach to benchmark ex- isting interventions and investigate fairness risks in data with missing values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' Our results indi- cate that state-of-the-art fairness interventions are effective at removing epistemic discrimination.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' However, when data has missing values, there is still significant room for improvement in handling aleatoric discrimination.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf'} +page_content=' Introduction Algorithmic discrimination may occur in different stages of the machine learning (ML) pipeline.' 
For example, historical biases in the data-generating process can propagate to downstream tasks; human biases can influence an ML model through inductive bias; and optimizing solely for accuracy can lead to disparate model performance across groups in the data (Suresh & Guttag, 2019; Mayson, 2019). The past years have seen a rapid increase in algorithmic interventions that aim to mitigate biases in ML models (see, e.g., Zemel et al., 2013).
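As a minimal sketch of the decomposition described in the abstract (the notation $F^*$, $\mathrm{acc}(\cdot)$, $\mathrm{disc}(\cdot)$, $\epsilon$, and $\hat{h}$ is ours, introduced purely for illustration and not taken from the paper):

\[
F^*(\epsilon) \;=\; \sup_{h} \bigl\{\, \mathrm{acc}(h) \;:\; \mathrm{disc}(h) \le \epsilon \,\bigr\},
\qquad
\text{epistemic discrimination of } \hat{h} \;=\; F^*(\epsilon) - \mathrm{acc}(\hat{h}),
\]

where the supremum ranges over all classifiers $h$ constructible with perfect knowledge of the data distribution, $\mathrm{disc}(h)$ measures the violation of a chosen group-fairness criterion at tolerance $\epsilon$, and $\hat{h}$ is the model actually produced by the training pipeline under the same constraint. The ceiling $F^*(\epsilon)$ captures aleatoric discrimination, which no intervention can overcome; the gap between it and $\mathrm{acc}(\hat{h})$ is attributable to modeling choices and is, in principle, removable.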