AstroMLab/AstroSage-70B

#1045
by Tijmen2 - opened

You already did AstroSage-8B. Many thanks for that! Would love to see your quants for our latest 70B model as well!
https://huggingface.co/AstroMLab/AstroSage-70B

It's queued! :D
Thank you and your team so much for creating this amazing model. I'm impressed by how much you managed to improve its domain-specific knowledge. Thanks a lot for the recommendation, and sorry that we somehow missed it. The effort you put into training this is crazy — 176000 GPU-hours is so much. I find it quite interesting that in the end you merged 3 other Llama-3.1-based models into it to further improve it. @mradermacher I recommend you give it a try as well if you are interested in astronomy/cosmology.

You can check for progress at http://hf.tst.eu/status.html, or regularly check the model summary page at https://hf.tst.eu/model#AstroSage-70B-GGUF for quants to appear.

Static quants will appear under: https://huggingface.co/mradermacher/AstroSage-70B-GGUF
Weighted/imatrix quants will appear under: https://huggingface.co/mradermacher/AstroSage-70B-i1-GGUF