Building an Open Ecosystem for Time Series Forecasting: Introducing TimesFM in Hugging Face
By Jinan Zhou (Nutanix) and Kashif Rasul (Hugging Face)
Time series forecasting plays a pivotal role in decision-making across industries like finance, healthcare, energy, and beyond. As demand grows for forecasting models that are highly adaptable, precise, and scalable, traditional statistical models like ARIMA and classical machine learning methods have increasingly revealed their limitations. Typically, these traditional approaches require extensive manual intervention and often fail to capture complex, dynamic, and non-linear temporal patterns.
In response to these challenges, researchers are embracing Large Language Model (LLM)-based solutions, with TimesFM emerging as a notable recent advancement. Leveraging modern transformer architectures, TimesFM achieves state-of-the-art performance across diverse datasets—even in zero-shot forecasting scenarios—delivering exceptional accuracy and adaptability right out of the box.
Figure 1: Average performance across three groups of datasets. In all plots, lower metric values are better, and error bars represent one standard error.
Recognizing the importance of new time series models like TimesFM, we are calling for a comprehensive initiative to build a complete, unified ecosystem specifically tailored to time series LLMs. Our objective is to establish a standard framework that unifies benchmarking, fine-tuning, and integration within the existing Hugging Face ecosystem, making both research and practical applications easier and more effective.
As the first step, we integrated TimesFM into Hugging Face's Transformers library by translating the original TimesFM implementation into Hugging Face-compatible code. Unlike the original research implementation, which was written in PAX and later partially ported to PyTorch, this contribution, now merged into Transformers, offers a standardized implementation that fully adheres to the design patterns of the library. This ensures compatibility with the broader Hugging Face ecosystem (including tokenizers, training utilities, and the Hugging Face Hub) and significantly lowers the barrier to entry for both researchers and engineers. Users can now fine-tune, benchmark, and deploy TimesFM using familiar APIs and workflows, without adapting to custom codebases or non-standard interfaces. This makes experimentation faster, collaboration easier, and real-world adoption far more scalable.
Below is a code example demonstrating how to start forecasting with TimesFM through Hugging Face Transformers:
Python
import numpy as np
import torch
from transformers import TimesFmModelForPrediction

# Load the pre-trained checkpoint from the Hugging Face Hub
model = TimesFmModelForPrediction.from_pretrained(
    "google/timesfm-2.0-500m-pytorch",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    device_map="cuda" if torch.cuda.is_available() else None,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Create dummy inputs: three sine waves with different context lengths
forecast_input = [
    np.sin(np.linspace(0, 20, 100)),
    np.sin(np.linspace(0, 20, 200)),
    np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]  # frequency category per series: 0 = high, 1 = medium, 2 = low

# Convert the inputs to a sequence of tensors on the target device
forecast_input_tensor = [
    torch.tensor(ts, dtype=torch.bfloat16).to(device)
    for ts in forecast_input
]
frequency_input_tensor = torch.tensor(frequency_input, dtype=torch.long).to(device)

# Get predictions from the pre-trained model
with torch.no_grad():
    outputs = model(past_values=forecast_input_tensor, freq=frequency_input_tensor, return_dict=True)
    point_forecast_conv = outputs.mean_predictions.float().cpu().numpy()
    quantile_forecast_conv = outputs.full_predictions.float().cpu().numpy()
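As a quick follow-up (not part of the original example), the snippet below visualizes each input series together with its point forecast. It assumes the session above is still live, that point_forecast_conv has shape (num_series, horizon), and that matplotlib is installed.
Python
import matplotlib.pyplot as plt

# Plot each context window followed by its point forecast
for i, context in enumerate(forecast_input):
    horizon = point_forecast_conv[i].shape[-1]
    t_context = np.arange(len(context))
    t_forecast = np.arange(len(context), len(context) + horizon)
    plt.figure(figsize=(8, 3))
    plt.plot(t_context, context, label="context")
    plt.plot(t_forecast, point_forecast_conv[i], label="TimesFM point forecast")
    plt.title(f"Series {i}")
    plt.legend()
plt.show()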
To further illustrate the tangible benefits of our community-driven approach, we developed a Microsoft Excel plugin that leverages TimesFM. In the demo below, TimesFM delivers noticeably more accurate and reliable forecasts than Excel's built-in AutoFill extrapolation. This simple yet powerful demo underscores the impact a collaborative ecosystem could have on the industry.
Video: MS Excel plugin powered by TimesFM
Looking ahead, we plan to expand our ecosystem through the following initiatives:
- Developing a unified pipeline and interface applicable to all time series LLMs (see the illustrative sketch after this list).
- Integrating with more LLM toolchains, including vLLM for more efficient deployment and Transformers.js for web applications.
- Establishing a community-driven benchmarking leaderboard to drive continuous improvement.
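To make the first item more concrete, the sketch below illustrates one possible shape such a unified interface could take. TimeSeriesForecaster, ForecastResult, and their methods are hypothetical names invented for illustration; they are not part of any existing Hugging Face API.
Python
from dataclasses import dataclass
from typing import Optional, Sequence

import numpy as np

@dataclass
class ForecastResult:
    # Hypothetical result container: point forecasts plus optional quantiles
    mean: np.ndarray  # shape (num_series, horizon)
    quantiles: Optional[np.ndarray] = None  # shape (num_series, horizon, num_quantiles)

class TimeSeriesForecaster:
    """Hypothetical unified front end for time series LLM backends."""

    def __init__(self, model_id: str):
        # A real implementation would inspect the model's config and
        # dispatch to the matching backend (e.g., TimesFM via Transformers)
        self.model_id = model_id

    def predict(self, series: Sequence[np.ndarray], horizon: int) -> ForecastResult:
        # Backend-specific forecasting would be implemented here
        raise NotImplementedError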
We warmly invite researchers, developers, and industry experts to collaborate, contribute, and share their insights as we collectively shape the future of time series forecasting. Your participation is essential in advancing this exciting and evolving field.