This model is a sparse autoencoder (SAE) trained on the difference between base-model and chat-model activations (`base_activations - chat_activations`) from gemma-2-2b.

This model has been pushed to the Hub using the `PyTorchModelHubMixin` integration from `huggingface_hub`, so it can be loaded with the standard `from_pretrained` API.
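As a minimal sketch of how such a model might be defined and loaded, the code below shows a hypothetical top-k SAE class using `PyTorchModelHubMixin`. The class name `DiffSAE`, the forward logic, and the hyperparameters (expansion factor 32 and k=100, inferred from the `x32-k100` tokens in the repo name) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class DiffSAE(nn.Module, PyTorchModelHubMixin):
    """Hypothetical top-k sparse autoencoder over (base - chat) activation differences."""

    def __init__(self, d_model: int = 2304, expansion: int = 32, k: int = 100):
        super().__init__()
        self.k = k
        d_sae = d_model * expansion  # x32 expansion -> 73728 latents for gemma-2-2b
        self.encoder = nn.Linear(d_model, d_sae)
        self.decoder = nn.Linear(d_sae, d_model)

    def forward(self, diff: torch.Tensor) -> torch.Tensor:
        # Encode the activation difference, keep only the top-k latents,
        # then reconstruct the difference from the sparse code.
        latents = torch.relu(self.encoder(diff))
        topk = torch.topk(latents, self.k, dim=-1)
        sparse = torch.zeros_like(latents).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse)
```

Because the mixin serializes the `__init__` arguments to `config.json` and the weights to `model.safetensors`, loading would then follow the usual pattern, e.g. `DiffSAE.from_pretrained("science-of-finetuning/SAE-difference_bc-gemma-2-2b-L13-x32-k100-lr1e-04-local-shuffling")` (assuming the checkpoint matches this class definition).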


This model is part of a science-of-finetuning collection under the repo id `science-of-finetuning/SAE-difference_bc-gemma-2-2b-L13-x32-k100-lr1e-04-local-shuffling`.