---
base_model:
- mlabonne/gemma-3-4b-it-abliterated
- mshojaei77/gemma-3-4b-persian-v0
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method, which interpolates the two models' weights along the shortest arc of a hypersphere rather than along a straight line, preserving the geometry of the weight vectors better than plain linear averaging. An illustrative implementation is sketched under "SLERP, Illustrated" below.

### Models Merged

The following models were included in the merge:

* [mlabonne/gemma-3-4b-it-abliterated](https://huggingface.co/mlabonne/gemma-3-4b-it-abliterated)
* [mshojaei77/gemma-3-4b-persian-v0](https://huggingface.co/mshojaei77/gemma-3-4b-persian-v0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/gemma-3-4b-it-abliterated
  - model: mshojaei77/gemma-3-4b-persian-v0
base_model: mlabonne/gemma-3-4b-it-abliterated
merge_method: slerp
dtype: bfloat16  # better numerical stability for precision-sensitive merges
parameters:
  density: 0.5
  weight:
    - filter: "self_attn"
      value: [0.75, 0.4, 0.25, 0.4, 0.75]  # U-shaped attention weighting
    - filter: "mlp"
      value: [0.25, 0.6, 0.9, 0.6, 0.25]  # Λ-shaped MLP weighting
  t: [0.15, 0.35, 0.65, 0.35, 0.15]  # interpolation schedule: strongest pull toward the Persian model in the middle layers
```

### Recommended Generation Settings

The following sampling settings are recommended (a Python dict, not JSON; note the `True` literal):

```python
generation_config = {
    "temperature": 1.1,
    "top_k": 50,
    "top_p": 0.9,
    "repetition_penalty": 1.15,
    "do_sample": True,
}
```
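### Usage

A minimal, untested usage sketch with `transformers`, applying the recommended generation settings above. The repo id `your-username/merged-model` is a placeholder, and the sketch assumes the merged checkpoint loads through the standard text-generation path (`AutoModelForCausalLM`); multimodal Gemma 3 variants may require a different model class.

```python
# Minimal usage sketch (placeholder repo id; assumes a text-generation load path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-model"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires `accelerate`
)

# Persian prompt: "Hi! Introduce yourself."
messages = [{"role": "user", "content": "سلام! خودت را معرفی کن."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The recommended generation settings from above.
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.1,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```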
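### Reproducing the Merge

The merge can be reproduced by saving the YAML above as `config.yaml` and running mergekit's CLI (`mergekit-yaml config.yaml ./merged`), or via its Python API. The sketch below assumes a recent mergekit release that exposes `MergeConfiguration` and `run_merge` as shown:

```python
# Sketch of reproducing the merge via mergekit's Python API (assumes a recent
# mergekit release; the CLI equivalent is `mergekit-yaml config.yaml ./merged`).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:  # the YAML configuration above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",        # output directory for the merged weights
    options=MergeOptions(
        cuda=False,             # set True to run the merge on GPU
        copy_tokenizer=True,    # copy the base model's tokenizer into the output
        lazy_unpickle=True,     # lower peak memory while loading shards
    ),
)
```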
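### SLERP, Illustrated

For intuition about what the `t` schedule in the configuration controls, here is an illustrative (not mergekit's actual) implementation of spherical linear interpolation between two weight tensors. At `t = 0` the result is the base model's tensor; at `t = 1`, the other model's.

```python
# Illustrative SLERP between two weight tensors. Real merge code also handles
# per-parameter t schedules and edge cases more carefully.
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors w0 (t=0) and w1 (t=1)."""
    a, b = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    theta = torch.arccos(cos_theta)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * w0 + t * w1
    sin_theta = torch.sin(theta)
    out = (torch.sin((1 - t) * theta) / sin_theta) * a + (torch.sin(t * theta) / sin_theta) * b
    return out.reshape(w0.shape).to(w0.dtype)

# Example: interpolate two random "layers" a third of the way toward w1.
w0, w1 = torch.randn(4, 4), torch.randn(4, 4)
merged = slerp(0.35, w0, w1)
```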