Original model link: Pclanglais/ScikitLLM-Model.

For imatrix data generation, kalomaze's groups_merged.txt was used; you can find it here.

Original model README below.

ScikitLLM is an LLM finetuned on writing references and code for the Scikit-Learn documentation.

Features of ScikitLLM include:

  • Support for RAG (three source chunks); see the prompt sketch after this list
  • Sources and quotations using a modified version of the wiki syntax ("")
  • Code samples and examples based on the code quoted in the chunks
  • Expanded knowledge of and familiarity with Scikit-Learn concepts and documentation
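
As an illustration of the RAG feature above, here is a minimal sketch of a three-chunk prompt in ChatML format (the syntax OpenHermes-based models use). The exact chunk and quotation markup ScikitLLM expects is an assumption based on the feature list, not a documented spec.

```python
# Minimal sketch: building a three-chunk RAG prompt in ChatML format.
# The "Source N:" labeling is an illustrative assumption.
chunks = [
    "sklearn.linear_model.LogisticRegression implements regularized logistic regression...",
    "The fit method trains the estimator on the given training data...",
    "Use predict_proba to obtain class probability estimates...",
]

def build_prompt(question: str, chunks: list) -> str:
    sources = "\n\n".join(f"Source {i + 1}:\n{c}" for i, c in enumerate(chunks))
    return (
        "<|im_start|>system\n"
        "You answer questions about Scikit-Learn, quoting the sources provided.<|im_end|>\n"
        f"<|im_start|>user\n{sources}\n\nQuestion: {question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("How do I get class probabilities?", chunks))
```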

Training

ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tune of Mistral 7B. OpenHermes already includes many capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.

As a fine-tune of a fine-tune, ScikitLLM has been trained with a lower learning rate than is commonly used in fine-tuning projects.
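
As a rough illustration of what that looks like in practice, a second-stage fine-tune might dial the learning rate down along these lines. The values below are illustrative assumptions, not the actual training configuration.

```python
# Minimal sketch of second-stage fine-tune hyperparameters.
# All values are illustrative assumptions, not ScikitLLM's real config.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="scikitllm-finetune",
    learning_rate=1e-5,  # lower than rates commonly used for a first fine-tune
    num_train_epochs=1,
    per_device_train_batch_size=4,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    bf16=True,
)
```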

Model details

Format: GGUF
Model size: 7.24B params
Architecture: llama
Quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
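
To try one of these quantizations locally, a minimal llama-cpp-python sketch could look like the following. The GGUF file name and generation settings are assumptions, not part of this repo.

```python
# Minimal sketch: loading a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="ScikitLLM-Model.Q4_K_M.gguf",  # hypothetical 4-bit quant file name
    n_ctx=4096,
    chat_format="chatml",  # OpenHermes-based models use ChatML
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You answer Scikit-Learn questions."},
        {"role": "user", "content": "What does StandardScaler do?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```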
