arxiv:2502.09741

FoNE: Precise Single-Token Number Embeddings via Fourier Features

Published on Feb 13 · Submitted by deqing on Feb 17
Abstract

Large Language Models (LLMs) typically represent numbers using multiple tokens, which requires the model to aggregate these tokens to interpret numerical values. This fragmentation makes both training and inference less efficient and adversely affects the model's performance on number-related tasks. Inspired by the observation that pre-trained LLMs internally learn Fourier-like features for number tokens, we propose Fourier Number Embedding (FoNE), a novel method that directly maps numbers into the embedding space with their Fourier features. FoNE encodes each number as a single token with only two embedding dimensions per digit, effectively capturing numerical values without fragmentation. This compact representation accelerates both training and inference. Compared to traditional subword and digit-wise embeddings, FoNE not only reduces computational overhead but also achieves higher accuracy across various numerical tasks, including addition, subtraction, and multiplication. On 6-digit decimal addition, FoNE requires 64× less data to achieve 99% accuracy than subword and digit-wise embeddings, while using 3× and 6× fewer tokens per number, respectively. Furthermore, FoNE is the only method that yields 100% accuracy on over 100,000 test examples for addition, subtraction, and multiplication. The code and visualizations are available at https://fouriernumber.github.io/.
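For intuition, here is a minimal NumPy sketch of the core idea as stated in the abstract: each digit position i gets one (cos, sin) pair with period 10^i, so a number occupies a single token with two embedding dimensions per digit. The function names below are illustrative, not the authors' API; details such as scaling to the model's hidden size and the training-time decoding rule are in the paper and the released code at https://fouriernumber.github.io/.

```python
import numpy as np

def fone_embedding(x: float, num_digits: int = 6) -> np.ndarray:
    """Map a number to one embedding with 2 dims (cos, sin) per digit.

    The pair at position i has period 10**i, so in exact arithmetic it
    encodes x mod 10**i; together the pairs determine every digit of x.
    Assumes x is a non-negative integer value below 10**num_digits.
    """
    feats = []
    for i in range(1, num_digits + 1):
        angle = 2.0 * np.pi * x / 10.0**i
        feats.extend([np.cos(angle), np.sin(angle)])
    return np.asarray(feats)

def fone_decode(emb: np.ndarray, num_digits: int = 6) -> int:
    """Recover the integer from its Fourier features, digit by digit."""
    digits = []
    for i in range(1, num_digits + 1):
        c, s = emb[2 * (i - 1)], emb[2 * (i - 1) + 1]
        angle = np.arctan2(s, c) % (2.0 * np.pi)   # phase in [0, 2*pi)
        x_mod = angle / (2.0 * np.pi) * 10.0**i    # approximately x mod 10**i
        digits.append(int(round(x_mod)) // 10**(i - 1) % 10)
    return sum(d * 10**i for i, d in enumerate(digits))

# Round trip on a 6-digit number: one token, 12 embedding dimensions.
x = 123456
assert fone_decode(fone_embedding(x)) == x
```

In exact arithmetic the round trip is lossless, which is what lets a single token carry the full numerical value without fragmenting it across subword tokens.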

Community

Paper author and submitter:

We propose Fourier Number Embedding (FoNE), which encodes any number into a single token. FoNE enables LLMs to represent numbers precisely and efficiently, reaching 99% accuracy on addition tasks with up to 50 digits, and it generalizes better across lengths (from 10-digit to 50-digit addition) than Abacus embeddings.

Paper author and submitter:

You can also find our website at https://fouriernumber.github.io.
