Improve model card for openbmb/MiniCPM-o-2_6-gguf: Add metadata, links, and usage examples

#8
opened by nielsr (HF Staff)

This PR significantly enhances the model card for openbmb/MiniCPM-o-2_6-gguf by:

  • Adding license: apache-2.0 and library_name: transformers to the metadata. The library_name is justified by the presence of transformers-based code examples in the official GitHub repository, enabling automated "How to use" snippets on the Hub (see the metadata sketch after this list).
  • Updating the model description with key highlights and a clear introduction to MiniCPM-o 2.6, linking it to its foundational paper MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe.
  • Including direct links to the main GitHub repository (https://github.com/OpenBMB/MiniCPM-V) and the project homepage (https://minicpm-omni-webdemo-us.modelbest.cn/).
  • Adding a "Quickstart" section with a Python code snippet demonstrating how to use the base MiniCPM-o 2.6 model with the Hugging Face transformers library, as found in the original project's GitHub README.
  • Retaining and clearly separating the existing, valuable instructions for converting to and using the GGUF format with llama.cpp.
  • Adding a citation section for proper attribution.
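
For reference, the metadata change amounts to the following YAML front matter at the top of the model card's README.md. This is a minimal sketch showing only the two added fields; any fields the card already declares are left untouched:

```yaml
---
# Added by this PR: license identifier and the library backing the code examples
license: apache-2.0
library_name: transformers
---
```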
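
The Quickstart snippet could look roughly like the sketch below. It assumes the base openbmb/MiniCPM-o-2_6 checkpoint (not the GGUF files in this repo), the chat-style interface exposed by the model's remote code (AutoModel with trust_remote_code=True plus a model.chat(...) call over a msgs list, as documented across the MiniCPM-V/o repositories), and a placeholder image path; the authoritative version lives in the official GitHub README:

```python
# Minimal vision-language sketch for the base (non-GGUF) checkpoint.
# The chat interface is provided by the model's remote code; argument names
# may differ between releases, so defer to the official MiniCPM-V README.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-o-2_6"  # base model; this repo hosts the GGUF conversions

model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,          # custom modeling code ships with the checkpoint
    attn_implementation="sdpa",
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
msgs = [{"role": "user", "content": [image, "What is in this image?"]}]

# model.chat is defined by the remote code; check the README for exact arguments.
answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
print(answer)
```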

These changes aim to make the model card more informative, discoverable, and user-friendly for a wider audience, serving both transformers users and those working with the GGUF format.

tc-mb changed pull request status to closed
