Model Card for FCD: Fourier Convolutional Decoder for Solar Data
FCD (Fourier Convolutional Decoder) is an autoencoder model designed to reconstruct solar images from compressed or transformed data, in particular data collected by space instruments such as STIX (Spectrometer/Telescope for Imaging X-rays) on board Solar Orbiter.
FCD is lightweight, fast, and uses less energy than traditional methods, making it suitable for use onboard satellites or in limited computing environments.
It has been trained to handle solar X-ray data and produces clearer images with fewer artifacts compared to previous approaches.
Model Details
Model Description
- Developed and shared by: Merve Selçuk-Şimşek
- Funded by: Project RODEM by SNSF through FHNW
- Model type: Autoencoder
- License: MIT
Model Sources
- Repository: GitHub
- Paper: Fourier convolutional decoder: reconstructing solar flare images via deep learning
- Demo: HuggingFace
Uses
The model takes 48 real numbers as input at the code level, the 24 real and 24 imaginary parts of the Fourier components, and reconstructs the corresponding 128x128 image.
How to Get Started with the Model
Use the code below to get started with the model.
# import load_model from TF-Keras
from tensorflow.keras.saving import load_model
# import the custom GaussianFilter layer (the filters module ships with the repository)
from filters import GaussianFilter

model = load_model('fcd.keras',
                   custom_objects={'GaussianFilter': GaussianFilter},
                   compile=False)
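After loading, prediction follows the usual Keras pattern. The minimal sketch below only illustrates the expected input and output shapes with a random placeholder vector; the variable names are illustrative and not part of the released API.

```python
import numpy as np

# Placeholder input: one sample of 48 real numbers
# (24 real + 24 imaginary Fourier components), FP32 as used in training.
dummy_visibilities = np.random.rand(1, 48).astype(np.float32)

# The decoder returns the reconstructed solar flare image.
reconstruction = model.predict(dummy_visibilities)
print(reconstruction.shape)  # a 128x128 image; exact batch/channel dims depend on the model definition
```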
Please see fcd_demo for the full demonstration of how to use the model, including
- uploading input data,
- loading the model,
- prediction with the model, and
- visualizing the output.
Training Details
Training Data
Training data can be generated via the code at GitHub.
Preprocessing
Both the input and the output data are multiplied by an alpha coefficient. For the inputs (the Fourier components) this serves as normalization; for the outputs (the reconstructed solar flare images) it keeps the flux intensity within a steady range.
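A minimal sketch of this scaling step is shown below; the alpha values are placeholders, not the coefficients used for the released model (those are set in the training code on GitHub).

```python
import numpy as np

# Hypothetical alpha coefficients, for illustration only.
ALPHA_IN = 1.0e-3   # placeholder: normalizes the Fourier components
ALPHA_OUT = 1.0e2   # placeholder: keeps the image flux in a steady intensity range

def preprocess(visibilities: np.ndarray, images: np.ndarray):
    """Scale inputs (Fourier components) and targets (flare images) by alpha."""
    return visibilities * ALPHA_IN, images * ALPHA_OUT
```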
Training Hyperparameters
- Training regime: FP32 precision only, for both the data and the network.
Evaluation
Testing Data, Factors & Metrics
Testing Data
Testing data can be generated via the code at GitHub.
Metrics & Results
- Image Quality
- MS-SSIM: Up to 0.97 ± 0.02 (FCD; 0.97 ± 0.03 for VIS_FWDFIT)
- LPIPS: As low as 0.04 ± 0.03 (FCD)
- PSNR: Up to 35.70 ± 3.97 (FCD)
- Segmentation Accuracy
- Dice Coefficient: Up to 0.83 ± 0.08 (FCD; 0.83 ± 0.10 for VIS_FWDFIT)
- Hausdorff Distance: As low as 5.08 ± 6.26 (FCD)
Metrics | FCD | VIS_FWDFIT | MEM_GE | CLEAN |
---|---|---|---|---|
MS-SSIM | 0.97 ± 0.02 | 0.97 ± 0.03 | 0.89 ± 0.07 | 0.95 ± 0.03 |
LPIPS | 0.04 ± 0.03 | 0.05 ± 0.04 | 0.07 ± 0.04 | 0.11 ± 0.04 |
PSNR | 35.70 ± 3.97 | 35.50 ± 4.78 | 32.22 ± 3.93 | 31.77 ± 3.26 |
Dice Coefficient | 0.83 ± 0.08 | 0.83 ± 0.10 | 0.76 ± 0.10 | 0.76 ± 0.11 |
Hausdorff Distance | 5.08 ± 6.26 | 5.40 ± 7.33 | 34.46 ± 33.44 | 7.16 ± 6.56 |
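For reference, the image-quality metrics above can be computed with the TF-Keras image ops. A minimal sketch follows, assuming the reconstruction and ground truth are 128x128 arrays scaled to a common dynamic range; the paper's exact MS-SSIM settings may differ.

```python
import tensorflow as tf

def image_quality(reconstruction, ground_truth, max_val=1.0):
    """MS-SSIM and PSNR between a reconstructed and a ground-truth flare image."""
    # Add the batch and channel dimensions expected by tf.image ops.
    rec = tf.convert_to_tensor(reconstruction, tf.float32)[tf.newaxis, ..., tf.newaxis]
    ref = tf.convert_to_tensor(ground_truth, tf.float32)[tf.newaxis, ..., tf.newaxis]
    # filter_size=7 keeps the 5-scale MS-SSIM valid for 128x128 inputs
    # (the default filter_size=11 exceeds the coarsest 8x8 scale).
    ms_ssim = tf.image.ssim_multiscale(rec, ref, max_val=max_val, filter_size=7)
    psnr = tf.image.psnr(rec, ref, max_val=max_val)
    return float(ms_ssim[0]), float(psnr[0])
```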
- Signal Reconstruction
- MAE: As low as 0.42 ± 2.11 (MEM_GE)
- Cosine Similarity: Up to 0.97 ± 0.03 (MEM_GE)
- Spectral Convergence: As low as 0.27 ± 0.10 (MEM_GE)
- χ²: As low as 2.14 ± 1.64 (MEM_GE)
Metrics | FCD | VIS_FWDFIT | MEM_GE | CLEAN |
---|---|---|---|---|
MAE | 0.55 ± 2.78 | 0.59 ± 2.95 | 0.42 ± 2.11 | 0.46 ± 2.04 |
Cosine Similarity | 0.94 ± 0.05 | 0.94 ± 0.06 | 0.97 ± 0.03 | 0.95 ± 0.05 |
Spectral Convergence | 0.31 ± 0.13 | 0.32 ± 0.14 | 0.27 ± 0.10 | 0.32 ± 0.16 |
χ² | 3.54 ± 5.70 | 3.63 ± 4.22 | 2.14 ± 1.64 | 2.98 ± 3.01 |
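The signal-domain comparison is done between the 48 predicted and observed Fourier components. The sketch below uses common definitions of these metrics; the paper's exact formulations, in particular the error weighting in the chi-square, may differ.

```python
import numpy as np

def visibility_metrics(pred_vis, true_vis):
    """MAE, cosine similarity, and spectral convergence between two 48-component vectors."""
    pred = np.asarray(pred_vis, dtype=np.float64)
    true = np.asarray(true_vis, dtype=np.float64)
    mae = np.mean(np.abs(pred - true))
    cosine = np.dot(pred, true) / (np.linalg.norm(pred) * np.linalg.norm(true))
    spectral_convergence = np.linalg.norm(pred - true) / np.linalg.norm(true)
    return mae, cosine, spectral_convergence
```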
- Efficiency
- Time-to-Solution: As low as 0.032 ± 0.005 s (FCD), on par with CLEAN and more than two orders of magnitude faster than VIS_FWDFIT and MEM_GE
Imaging Algorithm | Time-to-Solution (s) |
---|---|
VIS_FWDFIT | 5.522 ± 3.655 |
MEM_GE | 9.048 ± 6.844 |
CLEAN | 0.032 ± 0.092 |
FCD | 0.032 ± 0.005 |
- FCD Runtime Scaling:
- 1–10000 iterations: 0.035–2.069 s
FCD Runtime | Time-to-Solution (s) |
---|---|
Runtime-1 | 0.035 ± 0.006 |
Runtime-10 | 0.036 ± 0.010 |
Runtime-100 | 0.051 ± 0.008 |
Runtime-1000 | 0.240 ± 0.010 |
Runtime-10000 | 2.069 ± 0.039 |
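The runtime-scaling figures can be reproduced with a simple wall-clock measurement on the loaded model. The sketch below interprets Runtime-N as reconstructing a batch of N inputs in one call; this is an assumption, and the exact protocol used in the paper may differ.

```python
import time
import numpy as np

def time_to_solution(model, n_samples, repeats=10):
    """Average wall-clock time for FCD to reconstruct a batch of n_samples inputs."""
    batch = np.random.rand(n_samples, 48).astype(np.float32)
    model.predict(batch, verbose=0)              # warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        model.predict(batch, verbose=0)
    return (time.perf_counter() - start) / repeats

for n in (1, 10, 100, 1000, 10000):
    print(n, time_to_solution(model, n))
```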
Environmental Impact
Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Carbon Emitted: 0.15 kg CO2eq.
- Hardware Type: 1 NVIDIA RTX A4500 GPU
- Hours used: 2.5 hours
- Training Type: Standard training
Technical Specifications
Model Architecture and Objective
Overcomplete Autoencoder trained in a supervised manner to reconstruct solar flare images from the Fourier components.
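The actual layer stack, including the custom GaussianFilter layer, is defined in the GitHub repository. Purely as an illustrative sketch of the idea of an overcomplete decoder that expands 48 Fourier components into a 128x128 image, and not the released architecture, such a model could look like:

```python
import tensorflow as tf
from tensorflow.keras import layers

def toy_fourier_decoder():
    """Illustrative decoder only; not the released FCD architecture."""
    inputs = tf.keras.Input(shape=(48,))                  # 24 real + 24 imaginary parts
    x = layers.Dense(16 * 16 * 32, activation="relu")(inputs)
    x = layers.Reshape((16, 16, 32))(x)
    for filters in (64, 32, 16):                          # upsample 16 -> 32 -> 64 -> 128
        x = layers.Conv2DTranspose(filters, 3, strides=2,
                                   padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="relu")(x)  # 128x128 image
    return tf.keras.Model(inputs, outputs)
```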
Compute Infrastructure
Private cluster
Hardware
NVIDIA RTX A4500 GPU
Software
TF-Keras (v2.15.0) alone is sufficient to run the model.
Citation
Selcuk-Simsek, M., Massa, P., Xiao, H. et al. Fourier convolutional decoder: reconstructing solar flare images via deep learning. Neural Comput & Applic (2025). https://doi.org/10.1007/s00521-025-11283-6
Model Card Contact