# Model Card for Abliterated Cohere Labs Command A

## Model Summary
This is an abliterated version of Cohere Labs Command A, a 111 billion parameter model originally optimized for enterprise use. The model has been modified with the abliteration technique, which alters behavior by removing specific directions from the model's weight space.
## Original Model Information
- Developed by: Cohere and Cohere Labs (CohereLabs/c4ai-command-a-03-2025)
- Original License: CC-BY-NC, with adherence to Cohere Labs' Acceptable Use Policy
- Base Model: c4ai-command-a-03-2025
- Model Size: 111 billion parameters
- Context Length: 256K (configured for 128K in Hugging Face)
## Abliteration Details
- Abliteration Method: Applied using the script from [Abliteration-by-Transformers](https://github.com/JanRoslein/Abliteration-by-Transformers.git)
- Abliteration Parameters: `--proportional-scaling` with `--max-scale-factor` set to 2.25
- Significantly Modified Layers:

| Layer         | Change magnitude | Relative change |
|---------------|------------------|-----------------|
| layer_32_attn | 2.171875         | 1.4103%         |
| layer_32_mlp  | 3.546875         | 1.3748%         |
| layer_62_attn | 3.828125         | 3.7531%         |
| layer_62_mlp  | 6.656250         | 2.6206%         |
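As a rough sketch (an assumption about how such figures are derived, not the script's verbatim code), a per-layer "change magnitude" can be computed as the norm of the weight delta, with the percentage expressing that delta relative to the original weight norm:

```python
# Sketch (assumption): per-layer change magnitude as the Frobenius norm
# of the weight delta, and the percentage as that norm relative to the
# original weight's norm. Function name is illustrative.
import numpy as np

def change_magnitude(w_before: np.ndarray, w_after: np.ndarray):
    delta = np.linalg.norm(w_after - w_before)          # absolute change
    relative = 100.0 * delta / np.linalg.norm(w_before)  # percent of original
    return delta, relative

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_mod = w + 0.01 * rng.standard_normal((64, 64))
mag, pct = change_magnitude(w, w_mod)
print(f"Change magnitude = {mag:.6f} ({pct:.4f}%)")
```

Under this reading, the table above says layer_62_mlp moved furthest in absolute terms, while layer_62_attn changed the most relative to its original weights.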
## Known Limitations
- The model may produce artifacts in some languages due to the significant modifications to specific layers.
- Performance characteristics differ from the original model, particularly in areas targeted by the abliteration process.
## Usage Notes
- The abliteration process may have altered the model's behavior in ways that could affect its reliability and output quality.
- Users should be aware that the original model's licensing terms still apply to this derivative work.
## Technical Details
The abliteration technique works by identifying and modifying specific directions in the model's weight space that are associated with certain behaviors. The proportional scaling approach used in this case applies different modification intensities to different layers based on their contribution to the targeted behavior.
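The core operation can be sketched as projecting a behavior-associated direction out of each targeted weight matrix, with a per-layer scale factor standing in for the proportional-scaling step. This is a minimal illustration of the idea, not the repository's actual implementation; the function name and `scale` parameter are assumptions.

```python
# Minimal sketch of directional ablation (assumed form, not the script's
# exact code): subtract each row's component along a unit direction v,
# scaled by a per-layer factor. With --proportional-scaling, the actual
# script chooses this factor per layer, capped by --max-scale-factor.
import numpy as np

def ablate_direction(W: np.ndarray, v: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Remove (scale x) the component of each row of W along direction v."""
    v_hat = v / np.linalg.norm(v)            # normalize the direction
    return W - scale * np.outer(W @ v_hat, v_hat)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
v = rng.standard_normal(8)
W_ablated = ablate_direction(W, v, scale=1.0)
# With scale = 1, every row of the result is orthogonal to v:
print(np.allclose(W_ablated @ (v / np.linalg.norm(v)), 0.0))
```

A scale of 1 removes the direction entirely from that layer; intermediate scales remove only part of the component, which is what lets the modification intensity vary per layer.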