---
dataset_info:
  features:
    - name: year
      dtype: string
    - name: area
      dtype: string
    - name: question_num
      dtype: int64
    - name: question
      dtype: string
    - name: options
      sequence: string
    - name: labels
      sequence: string
    - name: correct_answer
      dtype: string
    - name: comments
      dtype: string
    - name: valid
      dtype: bool
    - name: exam_number
      dtype: int64
    - name: exam_released_date
      dtype: string
  splits:
    - name: train
      num_bytes: 733091
      num_examples: 480
  download_size: 377259
  dataset_size: 733091
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - question-answering
language:
  - pt
tags:
  - legal
  - law
  - benchmark
  - portuguese
  - brazilian
pretty_name: OAB Exams Bench
size_categories:
  - n<1K
---

# OABench: Brazilian Bar Exams Benchmark Dataset


## Overview

OABench is a benchmark dataset designed to evaluate the performance of Large Language Models (LLMs) on Brazilian legal exams. It is based on the Unified Bar Exam of the Brazilian Bar Association (OAB), a comprehensive and challenging exam required for law graduates to practice law in Brazil. This dataset provides a rigorous and realistic testbed for LLMs in the legal domain, covering a wide range of legal topics and reasoning skills.

## Dataset Details

### Dataset Description

The dataset consists of 80 multiple-choice questions and 5 simulated discursive (open-ended) questions.

- **Multiple-Choice Questions:** These are taken directly from the 42nd OAB exam. They cover a broad spectrum of Brazilian law, including:

  - Constitutional Law
  - Civil Law
  - Criminal Law
  - Procedural Law (Civil and Criminal)
  - Administrative Law
  - Tax Law
  - Labor Law
  - Business Law
  - Environmental Law
  - Consumer Law
  - Human Rights
  - International Law
  - Statute of the Child and Adolescent
  - Philosophy of Law
  - Financial Law
  - Election Law
  - Social Security Law
  - Statute of Advocacy and the OAB, its General Regulations, and the OAB Code of Ethics and Discipline

  Each question has four options (A, B, C, and D), with only one correct answer (see the prompt-formatting sketch after this list).

- **Simulated Discursive Questions:** These are not from the original exam. They are new questions designed to mimic the style and complexity of real-world legal problems. They are based on scenarios presented within the multiple-choice questions and require the LLM to generate longer, more argumentative responses in the form of legal documents (e.g., legal opinions, initial petitions, defenses). These questions are intended to be evaluated by human legal experts.
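
For concreteness, here is a minimal sketch of how a multiple-choice record might be rendered into an exam-style prompt. The record below is hypothetical, and the assumption that `labels` and `options` align one-to-one follows from the field descriptions in the Dataset Structure section.

```python
# Hypothetical record; field names follow the dataset schema.
record = {
    "question": "Assinale a alternativa correta sobre o controle de constitucionalidade.",
    "labels": ["A", "B", "C", "D"],
    "options": ["texto da alternativa A", "texto da alternativa B",
                "texto da alternativa C", "texto da alternativa D"],
}

def format_prompt(rec: dict) -> str:
    """Render one multiple-choice record as an exam-style prompt."""
    lines = [rec["question"], ""]
    # Assumes labels[i] is the letter for options[i].
    for label, text in zip(rec["labels"], rec["options"]):
        lines.append(f"{label}) {text}")
    lines.append("")
    lines.append("Resposta:")
    return "\n".join(lines)

print(format_prompt(record))
```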


## Uses

### Direct Use

This dataset is intended to be used for:

- **Benchmarking LLMs:** Evaluating the performance of LLMs on a challenging, real-world legal task (the Brazilian Bar Exam); a minimal loading-and-scoring sketch follows this list.
- **Research on Legal AI:** Facilitating research on legal reasoning, text comprehension, and knowledge representation in LLMs.
- **Comparison of LLMs:** Providing a standardized way to compare different LLMs on their legal capabilities.
- **Identifying Strengths and Weaknesses:** Analyzing LLM performance on different areas of law and question types.
- **Training and Fine-Tuning LLMs:** The dataset can be used as a training or fine-tuning corpus for legal tasks in Portuguese.
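
As a minimal sketch of the benchmarking use case, the snippet below loads the dataset with the 🤗 `datasets` library and scores letter predictions against `correct_answer`. The repository id `felipeoes/oab_bench` is assumed from this repo's location, and `predict` is a placeholder, not part of this card.

```python
from datasets import load_dataset

# Assumed repository id; adjust to the actual location of this dataset.
ds = load_dataset("felipeoes/oab_bench", split="train")

# Score only questions that were not annulled and have an answer key.
mc = ds.filter(lambda ex: ex["valid"] and ex["correct_answer"])

def predict(question: str, labels: list, options: list) -> str:
    """Placeholder for an LLM call; should return one option letter."""
    return "A"  # replace with a real model's prediction

hits = sum(
    predict(ex["question"], ex["labels"], ex["options"]) == ex["correct_answer"]
    for ex in mc
)
print(f"Accuracy: {hits / len(mc):.2%} over {len(mc)} questions")
```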

### Out-of-Scope Use

- **Providing Legal Advice:** This dataset, and LLMs trained on it, should never be used to provide legal advice to individuals or organizations. Legal decisions require the expertise of qualified human lawyers.
- **Replacing Human Lawyers:** This dataset is not intended to replace human lawyers, but rather to assist them or to evaluate tools that might assist them.
- **Making High-Stakes Decisions:** The benchmark results should not be the sole basis for important decisions (e.g., hiring, investment) without human review.
- **Commercial Use Without Scrutiny:** Use in commercial applications should be carefully analyzed, and a human expert should review the results.

## Dataset Structure

**Key Fields** (a short inspection sketch follows this list):

- `year`: The year of the exam.
- `area`: The area of law (e.g., "Constitutional Law", "Civil Law").
- `question_num`: The question number (1-80 for multiple-choice, D1-D5 for discursive).
- `question`: The full text of the question.
- `options`: A list of strings with the text of each answer choice (for multiple-choice questions).
- `correct_answer`: The correct answer (A, B, C, or D) for multiple-choice questions. (Not applicable for discursive questions.)
- `labels`: A list of strings with the labels of the answer choices (for multiple-choice questions).
- `comments`: Official comments about the question, generally populated when the question is annulled.
- `valid`: Flag indicating whether the question is valid (e.g., False if the question was annulled).
- `exam_number`: Number indicating the edition of the exam (e.g., 42 refers to the 42º EXAME DE ORDEM UNIFICADO).
- `exam_released_date`: The date on which the exam was released and administered.
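
The sketch below inspects these fields and tallies valid questions per area of law; the repository id is assumed, as above.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("felipeoes/oab_bench", split="train")  # assumed repo id

print(ds.features)  # column names and dtypes, as listed above
first = ds[0]
print(first["year"], first["area"], first["exam_number"], first["valid"])

# Number of valid (non-annulled) questions per area of law.
per_area = Counter(ex["area"] for ex in ds if ex["valid"])
for area, n in per_area.most_common():
    print(f"{area}: {n}")
```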

## Bias, Risks, and Limitations

- **Scope Limitation:** The OAB exam, while broad, does not cover every area of Brazilian law. Highly specialized or niche legal topics may not be well represented.
- **Exam-Specific Bias:** The dataset reflects the specific style and format of the OAB exam. LLMs trained solely on this dataset might perform well on similar exams but not necessarily on all real-world legal tasks.
- **Language Bias:** The dataset is entirely in Brazilian Portuguese. It does not evaluate LLMs on other languages.
- **Temporal Bias:** The dataset reflects the state of Brazilian law up to 2024. Changes in legislation or jurisprudence after this date are not captured.