FoNE: Precise Single-Token Number Embeddings via Fourier Features
Abstract
Large Language Models (LLMs) typically represent numbers using multiple tokens, which requires the model to aggregate these tokens to interpret numerical values. This fragmentation makes both training and inference less efficient and adversely affects the model's performance on number-related tasks. Inspired by the observation that pre-trained LLMs internally learn Fourier-like features for number tokens, we propose Fourier Number Embedding (FoNE), a novel method that directly maps numbers into the embedding space via their Fourier features. FoNE encodes each number as a single token with only two embedding dimensions per digit, effectively capturing numerical values without fragmentation. This compact representation accelerates both training and inference. Compared to traditional subword and digit-wise embeddings, FoNE not only reduces computational overhead but also achieves higher accuracy across various numerical tasks, including addition, subtraction, and multiplication. On 6-digit decimal addition, FoNE requires 64× less data than subword and digit-wise embeddings to reach 99% accuracy, while using 3× and 6× fewer tokens per number, respectively. Furthermore, FoNE is the only method that yields 100% accuracy on over 100,000 test examples for addition, subtraction, and multiplication. The code and visualizations are available at https://fouriernumber.github.io/.
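As a rough illustration of the idea described in the abstract (not the authors' exact implementation), the sketch below maps a number to a Fourier feature vector with two dimensions per digit: for each digit position with period T = 10^k, the pair (cos(2πx/T), sin(2πx/T)) encodes x mod T on the unit circle. The function name `fone_features`, the `num_int_digits` parameter, and the treatment of only integer digits are assumptions made for this example; decimal digits and the projection into the model's embedding dimension would be handled analogously.

```python
import numpy as np

def fone_features(x: float, num_int_digits: int = 6) -> np.ndarray:
    """Illustrative Fourier number features: two dimensions per digit.

    For each digit position k (period T = 10**k), the pair
    (cos(2*pi*x/T), sin(2*pi*x/T)) encodes x mod T on the unit circle,
    so the k-th digit can be recovered from the phase.
    (Hypothetical sketch, not the paper's reference code.)
    """
    feats = []
    for k in range(1, num_int_digits + 1):
        T = 10.0 ** k
        feats.append(np.cos(2 * np.pi * x / T))
        feats.append(np.sin(2 * np.pi * x / T))
    return np.array(feats)

# Example: the number 123456 becomes a single 12-dimensional vector
# (2 dims per digit x 6 digits), which would then be placed into the
# model's embedding space as one token.
print(fone_features(123456).shape)  # (12,)
```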
Community
We propose Fourier Number Embedding (FoNE), which encodes any number into a single token. FoNE enables LLMs to represent numbers precisely and efficiently, achieving 99% accuracy on addition tasks with up to 50 digits. It also yields better length generalization (from 10-digit to 50-digit addition) than Abacus embeddings.