News
- [2025-05] Paper TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks has been accepted by the top NLP conference ACL. Model Download.
- [2025-03] Paper GeoFactory: an LLM Performance Enhancement Framework for Geoscience Factual and Inferential Tasks has been accepted by the journal Big Earth Data. Data Download.
- [2025-03] Paper ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries has been accepted by the International Conference on Learning Representations (ICLR). Model Download.
- [2024-12] Paper JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience has been accepted by the International Journal of Digital Earth. Model Introduction. Project Repository.
- [2024-09] Released chat model ClimateChat.
- [2024-08] Paper PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models has been accepted by the journal Big Earth Data. WeChat article: PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models. Model Download.
- [2024-08] Released chat model Chinese-Mistral-7B-Instruct-v0.2, featuring significantly improved language understanding and multi-turn conversation capabilities.
- [2024-06] Released chat model JiuZhou-Instruct-v0.2, with significantly enhanced language understanding and multi-turn conversation capabilities.
- [2024-05] WeChat Article: Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released.
- [2024-03] Released base model Chinese-Mistral-7B-v0.1 and chat model Chinese-Mistral-7B-Instruct-v0.1. Model Introduction. Project Repository.
- [2024-03] Released JiuZhou's base version JiuZhou-base, instruct version JiuZhou-instruct-v0.1, and intermediate checkpoints. Model Introduction. Project Repository.
- [2024-01] Completed training of Chinese-Mistral and JiuZhou, and commenced model evaluation.
Download
| Model Series | Model | Download Link | Description |
|---|---|---|---|
| JiuZhou | JiuZhou-base | HuggingFace | Base model (rich in geoscience knowledge) |
| JiuZhou | JiuZhou-Instruct-v0.1 | HuggingFace | Instruct model (instruction alignment caused some loss of geoscience knowledge but added instruction-following ability). LoRA fine-tuned on Chinese and English Alpaca_GPT4 and GeoSignal |
| JiuZhou | JiuZhou-Instruct-v0.2 | HuggingFace, Wisemodel | Instruct model (instruction alignment caused some loss of geoscience knowledge but added instruction-following ability). Fine-tuned with high-quality general instruction data |
| ClimateChat | ClimateChat | HuggingFace, Wisemodel | Instruct model, fine-tuned from JiuZhou-base for instruction following |
| Chinese-Mistral | Chinese-Mistral-7B | HuggingFace, Wisemodel, ModelScope | Base model |
| Chinese-Mistral | Chinese-Mistral-7B-Instruct-v0.1 | HuggingFace, Wisemodel, ModelScope | Instruct model, LoRA fine-tuned on Chinese and English Alpaca_GPT4 |
| Chinese-Mistral | Chinese-Mistral-7B-Instruct-v0.2 | HuggingFace, Wisemodel | Instruct model, LoRA fine-tuned with one million high-quality instructions |
| PreparedLLM | Prepared-Llama | HuggingFace, Wisemodel | Base model, continually pretrained on a small amount of geoscience data; JiuZhou is recommended instead |
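The models above can be loaded with the Hugging Face `transformers` library. A minimal sketch follows; the repo id shown is an assumption for illustration, so substitute the exact id from the download links in the table:

```python
def load_chat_model(repo_id: str = "itpossible/JiuZhou-Instruct-v0.2"):
    """Fetch the tokenizer and model weights from the Hugging Face Hub.

    Note: the default repo_id is a placeholder assumption; use the id
    from the download links above. Downloading pulls several GB of weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # device_map="auto" places the model on GPU if one is available
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model
```

After loading, text can be generated in the usual way, e.g. `model.generate(**tokenizer(prompt, return_tensors="pt").to(model.device), max_new_tokens=256)`.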