BRONet: Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss
This repository contains the official PyTorch implementation for BRONet, a novel approach presented in the paper Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss.
[Project Page] | [GitHub Repository]
Abstract
Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points, enabling Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available in this repository.
Overview
Official PyTorch implementation for our ICML 2025 spotlight paper. We introduce:
- Block Reflector Orthogonal Layer (BRO): A low-rank, approximation-free orthogonal convolutional layer designed to efficiently construct Lipschitz neural networks, improving both stability and expressiveness (see the sketch after this list).
- Logit Annealing Loss (LA): An adaptive loss function that dynamically balances classification margins across samples, leading to enhanced certified robustness.
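For intuition, a block reflector maps a low-rank parameter V of shape (n, k) to W = I - 2V(V^T V)^{-1} V^T, which is exactly orthogonal by construction, so no iterative approximation is needed. Below is a minimal dense sketch of that construction; the function name `bro_orthogonal` is our placeholder, and the repository's actual convolutional implementation lives in lipconvnet/models/layers/bro.py.

```python
import torch

def bro_orthogonal(V: torch.Tensor) -> torch.Tensor:
    """Block reflector W = I - 2 V (V^T V)^{-1} V^T from a low-rank V (n x k).

    P = V (V^T V)^{-1} V^T is an orthogonal projector (P^2 = P, P^T = P),
    so W = I - 2P satisfies W^T W = I exactly; the cost scales with the
    rank k rather than with n.
    """
    n, _ = V.shape
    gram = V.T @ V                          # (k, k) Gram matrix V^T V
    X = torch.linalg.solve(gram, V.T)       # (k, n); avoids an explicit inverse
    return torch.eye(n, dtype=V.dtype, device=V.device) - 2.0 * V @ X

# Sanity check: the result is orthogonal up to floating-point error.
V = torch.randn(64, 8, dtype=torch.float64)
W = bro_orthogonal(V)
print((W.T @ W - torch.eye(64, dtype=torch.float64)).abs().max())
```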
Repository Structure
- bronet/: implementation for the BRONet experiments.
- lipconvnet/: implementation for the LipConvNet experiments.
Key modules:
- BRO layer: lipconvnet/models/layers/bro.py
- LA loss: bronet/models/margin_layer.py
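As background for why the LA loss shapes per-sample margins: for an L-Lipschitz classifier, the standard certificate guarantees the prediction is unchanged within an l2 ball of radius (top-1 logit - top-2 logit) / (sqrt(2) * L). The snippet below illustrates that generic margin-to-radius computation; it is not the repository's certification code, and `certified_radius` is our placeholder name.

```python
import torch

def certified_radius(logits: torch.Tensor, lipschitz_const: float = 1.0) -> torch.Tensor:
    """Per-sample certified l2 radius for an L-Lipschitz classifier.

    The difference between any two logits is sqrt(2)*L-Lipschitz, so the
    prediction cannot flip within radius (top1 - top2) / (sqrt(2) * L).
    """
    top2 = logits.topk(2, dim=-1).values      # (batch, 2), descending order
    margin = top2[:, 0] - top2[:, 1]          # top-1 minus runner-up
    return margin / (2.0 ** 0.5 * lipschitz_const)

logits = torch.tensor([[3.2, 1.1, -0.5], [0.9, 0.8, 0.1]])
print(certified_radius(logits))  # larger margin -> larger certified radius
```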
Getting Started
To set up the environment and run our code:
1. Requirements
- Python 3.11
- PyTorch ≥ 2.0 with CUDA support
- A recent NVIDIA GPU (e.g., Ampere or newer) is recommended for training and certification
2. Reproduce the paper results
To reproduce the main results in the paper, run the following command:
```bash
cd bronet
bash run.sh
```
Pre-trained Models
| Datasets | Models | Checkpoint |
| --- | --- | --- |
| ImageNet (Table 1) | BRONet (+LA) | Link |
| ImageNet w/ EDM2 2M (Table 2) | BRONet (+LA) | Link |
To test the provided models, download the checkpoint and config file, then run:
```bash
cd bronet
OMP_NUM_THREADS=1 torchrun --nproc_per_node=1 \
    --master_port=$((12000 + $RANDOM % 20000)) \
    test.py --launcher=pytorch \
    --config='path_to_config' \
    --resume_from='path_to_downloaded_checkpoint'
```
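Before launching the distributed test, it can help to verify that the downloaded checkpoint deserializes correctly. A minimal hypothetical check follows; the path is a placeholder for your download, and the key layout depends on the released checkpoints.

```python
import torch

# Hypothetical sanity check: confirm the file loads and inspect its contents.
ckpt = torch.load("path_to_downloaded_checkpoint", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])   # e.g. model / optimizer / epoch entries
else:
    print(type(ckpt))
```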
See bronet/README.md for instructions on reproducing the results.
Acknowledgements
This work builds on and benefits from several open-source efforts. We sincerely thank the authors of these projects for making their work publicly available.
Citation
If you find our work useful, please cite us:
```bibtex
@inproceedings{lai2025enhancing,
  title={Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss},
  author={Bo-Han Lai and Pin-Han Huang and Bo-Han Kung and Shang-Tse Chen},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025},
  note={Spotlight}
}
```