Office-Home-LDS Dataset

Paper: “Geometric Knowledge-Guided Localized Global Distribution Alignment for Federated Learning”
GitHub: 2025CVPR_GGEUR

The Office-Home-LDS dataset is constructed by introducing label skew on top of the domain skew already present in the Office-Home dataset.
The result is a more challenging and realistic benchmark that exhibits label skew and domain skew simultaneously.

🔗 Citation

The paper has been accepted at CVPR 2025. If you use this work, please cite:

Yanbiao Ma, Wei Dai, Wenke Huang, Jiayi Chen. "Geometric Knowledge-Guided Localized Global Distribution Alignment for Federated Learning". CVPR 2025.

📂 Hugging Face Dataset Structure

The dataset is organized as follows:

Office-Home-LDS/
├── data/
│   └── Office-Home.zip        # Original raw dataset (compressed)
├── new_dataset/               # Processed datasets based on different settings
│   ├── Office-Home-0.1.zip    # Split with Dirichlet α = 0.1 (compressed)
│   ├── Office-Home-0.5.zip    # Split with Dirichlet α = 0.5 (compressed)
│   └── Office-Home-0.05.zip   # Split with Dirichlet α = 0.05 (compressed)
├── Dataset-Office-Home-LDS.py # Python script for processing and splitting the original raw dataset
└── README.md                  # Project documentation

📥 Extract Files

After downloading the dataset, you can extract it using the following commands:

🔹 Linux/macOS:

unzip data/Office-Home.zip -d ./data/Office-Home
unzip new_dataset/Office-Home-0.1.zip -d ./new_dataset/Office-Home-0.1
unzip new_dataset/Office-Home-0.5.zip -d ./new_dataset/Office-Home-0.5
unzip new_dataset/Office-Home-0.05.zip -d ./new_dataset/Office-Home-0.05

🔹 Windows:

🟦 PowerShell Method:

  # Extract the original dataset
  Expand-Archive -Path data/Office-Home.zip -DestinationPath ./data/Office-Home

  # Extract processed datasets
  Expand-Archive -Path new_dataset/Office-Home-0.1.zip -DestinationPath ./new_dataset/Office-Home-0.1
  Expand-Archive -Path new_dataset/Office-Home-0.5.zip -DestinationPath ./new_dataset/Office-Home-0.5
  Expand-Archive -Path new_dataset/Office-Home-0.05.zip -DestinationPath ./new_dataset/Office-Home-0.05

🟦 Python Method:

  import zipfile
  import os
  
  # Create target directories if they don't exist
  os.makedirs('./data/Office-Home', exist_ok=True)
  os.makedirs('./new_dataset/Office-Home-0.1', exist_ok=True)
  os.makedirs('./new_dataset/Office-Home-0.5', exist_ok=True)
  os.makedirs('./new_dataset/Office-Home-0.05', exist_ok=True)
  
  # Extract zip files
  with zipfile.ZipFile('data/Office-Home.zip', 'r') as zip_ref:
      zip_ref.extractall('./data/Office-Home')
  
  with zipfile.ZipFile('new_dataset/Office-Home-0.1.zip', 'r') as zip_ref:
      zip_ref.extractall('./new_dataset/Office-Home-0.1')
  
  with zipfile.ZipFile('new_dataset/Office-Home-0.5.zip', 'r') as zip_ref:
      zip_ref.extractall('./new_dataset/Office-Home-0.5')
  
  with zipfile.ZipFile('new_dataset/Office-Home-0.05.zip', 'r') as zip_ref:
      zip_ref.extractall('./new_dataset/Office-Home-0.05')
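
If you prefer a loop, the same four extractions can be written more compactly (a sketch using the same paths as above):

  import zipfile
  import os

  # Map each archive to its extraction directory (same paths as above)
  archives = {
      'data/Office-Home.zip': './data/Office-Home',
      'new_dataset/Office-Home-0.1.zip': './new_dataset/Office-Home-0.1',
      'new_dataset/Office-Home-0.5.zip': './new_dataset/Office-Home-0.5',
      'new_dataset/Office-Home-0.05.zip': './new_dataset/Office-Home-0.05',
  }

  for zip_path, target_dir in archives.items():
      os.makedirs(target_dir, exist_ok=True)
      with zipfile.ZipFile(zip_path, 'r') as zip_ref:
          zip_ref.extractall(target_dir)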

📂 Extracted File Structure

After extraction, the files are organized as follows:

Office-Home-LDS/
├── data/
│   └── Office-Home/                  # Original dataset
│       ├── Art/
│       ├── Clipart/
│       ├── Product/
│       └── Real World/
├── new_dataset/
│   ├── Office-Home-0.1/              # Split with α = 0.1
│   │   ├── Art/                      # Domain: Art
│   │   │   ├── client/               # Client-level split images
│   │   │   ├── train/                # Train set images
│   │   │   └── test/                 # Test set images
│   │   ├── Clipart/                  # Domain: Clipart
│   │   │   ├── client/               # Client-level split
│   │   │   ├── train/                # Train set images
│   │   │   └── test/                 # Test set images
│   │   ├── Product/                  # Domain: Product
│   │   │   ├── client/               # Client-level split
│   │   │   ├── train/                # Train set images
│   │   │   └── test/                 # Test set images
│   │   ├── Real World/               # Domain: Real_World
│   │   │   ├── client/               # Client-level split
│   │   │   ├── train/                # Train set images
│   │   │   └── test/                 # Test set images
│   │   └── output_indices/           # Split information and indices
│   │       ├── Art/                  # Indices for Art domain
│   │       │   ├── class_indices.npy              # Class-level indices
│   │       │   ├── client_client_indices.npy      # Client split indices
│   │       │   ├── test_test_indices.npy          # Test set indices
│   │       │   └── train_train_indices.npy        # Train set indices
│   │       ├── Clipart/              # Indices for Clipart domain
│   │       │   ├── class_indices.npy
│   │       │   ├── client_client_indices.npy
│   │       │   ├── test_test_indices.npy
│   │       │   └── train_train_indices.npy
│   │       ├── Product/              # Indices for Product domain
│   │       │   ├── class_indices.npy
│   │       │   ├── client_client_indices.npy
│   │       │   ├── test_test_indices.npy
│   │       │   └── train_train_indices.npy
│   │       ├── Real World/           # Indices for Real_World domain
│   │       │   ├── class_indices.npy
│   │       │   ├── client_client_indices.npy
│   │       │   ├── test_test_indices.npy
│   │       │   └── train_train_indices.npy
│   │       └── combined_class_allocation.txt      # Global class allocation info
│   ├── Office-Home-0.5/              # Split with α = 0.5 (similar structure)
│   └── Office-Home-0.05/             # Split with α = 0.05 (similar structure)
├── Dataset-Office-Home-LDS.py        # Python script for processing and splitting the original raw dataset
└── README.md                         # Project documentation
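
Note that the train/, test/, and client/ folders are flat: copy_images (shown below) writes files named class_{label}_index_{idx}.jpg, so labels are recovered from filenames rather than from per-class subfolders. A minimal loading sketch under that assumption (load_flat_split is a hypothetical helper, not part of the script):

import os
from PIL import Image

# Hypothetical helper: yield (image, label) pairs from a flat split folder
# whose files follow the class_{label}_index_{idx}.jpg naming scheme
def load_flat_split(split_dir):
    for fname in sorted(os.listdir(split_dir)):
        if not fname.startswith('class_'):
            continue
        label = int(fname.split('_')[1])  # 'class_12_index_345.jpg' -> 12
        yield Image.open(os.path.join(split_dir, fname)), label

# Example: iterate over the Art client split of the alpha = 0.1 partition
for image, label in load_flat_split('./new_dataset/Office-Home-0.1/Art/client'):
    pass  # feed (image, label) into a training pipeline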

🐍 Dataset-Office-Home-LDS.py
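
The code blocks below are excerpts from this script. To keep them self-contained, the following imports are assumed throughout (reconstructed from the calls the excerpts make):

import os
import shutil

import numpy as np
from torchvision.datasets import ImageFolder
from tqdm import tqdm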


1️⃣ ImageFolder_Custom Class

  • Loads the original dataset from ./data/Office-Home
  • Indexes the samples of each domain; every domain is processed separately
  • Generates training and testing splits for each domain
# ImageFolder_Custom loads the dataset of a single domain
class ImageFolder_Custom(ImageFolder):
    def __init__(self, data_name, root, transform=None, target_transform=None, subset_train_ratio=0.7):
        super().__init__(os.path.join(root, 'Office-Home', data_name), transform=transform,
                         target_transform=target_transform)

        self.train_index_list = []
        self.test_index_list = []

        # Compute the number of training samples from the train ratio
        total_samples = len(self.samples)
        train_samples = int(subset_train_ratio * total_samples)

        # Randomly shuffle the indices
        shuffled_indices = np.random.permutation(total_samples)

        # The first train_samples shuffled indices form the training set; the rest form the test set
        self.train_index_list = shuffled_indices[:train_samples].tolist()
        self.test_index_list = shuffled_indices[train_samples:].tolist()
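
A quick usage sketch, assuming the original dataset has been extracted to ./data/Office-Home:

# Load the Art domain and inspect its 70/30 train/test split
art = ImageFolder_Custom('Art', './data')
print(len(art.train_index_list), len(art.test_index_list))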

2️⃣ generate_dirichlet_matrix Function

  • Generates a 4 × 65 Dirichlet matrix based on the specified concentration parameter (alpha)
  • Each row represents one of the four domains
  • Each column represents one of the 65 classes
  • The sum of each column equals 1, representing the class distribution across the four domains
from numpy.random import dirichlet

# Generate a 4x65 Dirichlet matrix: each column splits one of the 65 classes among the 4 clients
def generate_dirichlet_matrix(alpha):
    return dirichlet([alpha] * 4, 65).T  # 65 draws from a 4-dim Dirichlet, transposed to 4x65
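
A small sanity check of the shape and the column-sum property:

import numpy as np

matrix = generate_dirichlet_matrix(0.1)
print(matrix.shape)                          # (4, 65)
print(np.allclose(matrix.sum(axis=0), 1.0))  # True: each column sums to 1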

3️⃣ split_samples_for_domain Function

  • Applies the generated Dirichlet proportions to distribute each class's samples across the four domains
  • Ensures label skew within each domain
# Partition samples for one client; dirichlet_column holds this domain's proportion for each of the 65 classes
def split_samples_for_domain(class_train_indices, dirichlet_column):
    client_indices = []
    class_proportions = {}

    for class_label, indices in class_train_indices.items():
        num_samples = len(indices)
        if num_samples == 0:
            continue

        # Look up this domain's Dirichlet proportion for the current class
        proportion = dirichlet_column[class_label]
        class_proportions[class_label] = proportion

        # Calculate the number of allocated samples
        num_to_allocate = int(proportion * num_samples)

        # Allocate samples
        allocated_indices = indices[:num_to_allocate]
        client_indices.extend(allocated_indices)

    return client_indices, class_proportions
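
For illustration, a toy call with two classes and made-up sample indices (matrix[0] is the proportion vector of the first domain):

# Toy input: class label -> list of training-sample indices (hypothetical)
class_train_indices = {0: [3, 7, 11, 15], 1: [2, 9]}
matrix = generate_dirichlet_matrix(0.5)  # shape (4, 65)

client_indices, proportions = split_samples_for_domain(class_train_indices, matrix[0])
print(client_indices, proportions)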

4️⃣ construct_new_dataset, copy_images Functions

  • Creates a new dataset structure based on computed indices
  • Copies images from the original dataset to the new partitioned folders
  • Renames files to prevent overwriting
# Copy the image according to the index and rename it
def copy_images(dataset, indices, target_dir):
    os.makedirs(target_dir, exist_ok=True)
    for idx in tqdm(indices, desc=f"Copy to {target_dir}"):
        source_path, label = dataset.samples[idx]

        # Generate a unique file name (based on class label and index)
        new_filename = f"class_{label}_index_{idx}.jpg"
        target_path = os.path.join(target_dir, new_filename)

        # Copy the image file
        shutil.copy(source_path, target_path)


# Build the new dataset for one domain
def construct_new_dataset(dataset, train_indices, test_indices, client_indices, domain, alpha):
    base_path = f'./new_dataset/Office-Home-{alpha}/{domain}'
    os.makedirs(base_path, exist_ok=True)

    # Copy training and testing sets
    copy_images(dataset, train_indices, os.path.join(base_path, 'train'))
    copy_images(dataset, test_indices, os.path.join(base_path, 'test'))

    # Copy client dataset
    client_path = os.path.join(base_path, 'client')
    copy_images(dataset, client_indices, client_path)
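
Putting the pieces together for a single domain could look like the following sketch (variable names are illustrative; get_class_indices is defined in section 5, and matrix[0] assumes this domain is the first row of the matrix):

alpha = 0.1
dataset = ImageFolder_Custom('Art', './data')
matrix = generate_dirichlet_matrix(alpha)

# Restrict each class's indices to the training split
train_set = set(dataset.train_index_list)
class_train_indices = {c: [i for i in idxs if i in train_set]
                       for c, idxs in get_class_indices(dataset).items()}

client_indices, _ = split_samples_for_domain(class_train_indices, matrix[0])
construct_new_dataset(dataset, dataset.train_index_list, dataset.test_index_list,
                      client_indices, 'Art', alpha)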

5️⃣ get_class_indices, save_class_indices, save_indices, save_class_allocation_combined Functions

  • get_class_indices → Retrieves the indices for each class
  • save_class_indices → Saves the class indices in .npy format
  • save_indices → Saves train/test/client indices for each domain
  • save_class_allocation_combined → Saves the complete label allocation for all domains
# Obtain the class index of the entire dataset
def get_class_indices(dataset):
    class_indices = {i: [] for i in range(65)}  # The Office-Home dataset has 65 classes
    for idx in range(len(dataset)):
        label = dataset.targets[idx]  # Obtain labels for each sample
        class_indices[label].append(idx)  # Save the index of the entire dataset to the corresponding class
    return class_indices

# Save class index (class index of the entire dataset)
def save_class_indices(class_indices, domain_name, alpha):
    output_dir = os.path.join(f'./new_dataset/Office-Home-{alpha}/output_indices', domain_name)
    os.makedirs(output_dir, exist_ok=True)

    txt_filename = os.path.join(output_dir, 'class_indices.txt')
    npy_filename = os.path.join(output_dir, 'class_indices.npy')

    with open(txt_filename, 'w') as f:
        for class_label, indices in class_indices.items():
            f.write(f"Class {class_label} indices: {list(indices)}\n")

    np.save(npy_filename, class_indices)

# Save an index dictionary (train/test/client) for one domain
def save_indices(indices_dict, domain_name, file_type, alpha):
    output_dir = os.path.join(f'./new_dataset/Office-Home-{alpha}/output_indices', domain_name)
    os.makedirs(output_dir, exist_ok=True)  # If the output folder does not exist, create it

    for key, indices in tqdm(indices_dict.items(), desc=f"Save {file_type} Index"):
        txt_filename = os.path.join(output_dir, f"{file_type}_{key}_indices.txt")
        npy_filename = os.path.join(output_dir, f"{file_type}_{key}_indices.npy")

        # Save as .txt file
        with open(txt_filename, 'w') as f:
            f.write(f"{file_type.capitalize()} {key} indices: {list(indices)}\n")

        # Save as .npy file
        np.save(npy_filename, np.array(indices))

# Save the class allocation quantities of all domains to one file
def save_class_allocation_combined(domains, alpha):

    output_dir = f'./new_dataset/Office-Home-{alpha}/output_indices'
    combined_allocation = []

    # Traverse each domain
    for domain_name in domains:
        domain_output_dir = os.path.join(output_dir, domain_name)
        class_indices_path = os.path.join(domain_output_dir, 'class_indices.npy')
        client_indices_path = os.path.join(domain_output_dir, 'client_client_indices.npy')

        # Skip this domain if either index file is missing
        if not os.path.exists(class_indices_path) or not os.path.exists(client_indices_path):
            print(f"Missing file: {class_indices_path} or {client_indices_path}")
            continue

        # Load NPY file
        class_indices = np.load(class_indices_path, allow_pickle=True).item()
        client_indices = np.load(client_indices_path)

        # Initialize class allocation for the current domain
        domain_class_allocation = {class_label: 0 for class_label in class_indices.keys()}

        # Count how many of the client's samples belong to each class
        for idx in client_indices:
            for class_label, indices in class_indices.items():
                if idx in indices:
                    domain_class_allocation[class_label] += 1
                    break

        # Format the class allocation information for the current domain
        allocation_str = f"{domain_name}[" + ",".join(f"{class_label}:{count}" for class_label, count in domain_class_allocation.items()) + "]"
        combined_allocation.append(allocation_str)

    # Save all domain class assignment information to a txt file
    combined_txt_filename = os.path.join(output_dir, 'combined_class_allocation.txt')
    with open(combined_txt_filename, 'w') as f:
        for allocation in combined_allocation:
            f.write(f"{allocation}\n")
    print(f"Saved all domain class assignments to {combined_txt_filename}")
