robgreenberg3 committed
Commit 4f43b70 (verified) · Parent(s): 72352d8

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -21,7 +21,7 @@ We believe the future of AI is open. That’s why we’re sharing our latest mod
  - **Use or build optimized foundation models**, including Llama, Mistral, Qwen, Gemma, DeepSeek, and others, tailored for performance and accuracy in real-world deployments.
  - **Customize and fine-tune models for your workflows**, from experimentation to production, with tools and frameworks built to support reproducible research and enterprise AI pipelines.
  - **Maximize inference efficiency across hardware** using production-grade compression and optimization techniques like quantization (FP8, INT8, INT4), structured/unstructured sparsity, distillation, and more, ready for cost-efficient deployments with vLLM.
- - **Confidently deploy [validated models](http://www.redhat.com/en/products/ai/validated-models)** on Red Hat AI products.
+ - **[Validated models](http://www.redhat.com/en/products/ai/validated-models) by Red Hat AI offer confidence, predictability, and flexibility when deploying third-party generative AI models across the Red Hat AI platform.** Red Hat AI validates models by running a series of capacity planning scenarios with [GuideLLM](https://github.com/neuralmagic/guidellm) for benchmarking, [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) for accuracy evaluations, and [vLLM](https://github.com/vllm-project/vllm) for inference serving across a wide variety of AI accelerators.
 
  🔗 **Explore relevant open-source tools**:
  - [**vLLM**](https://github.com/vllm-project/vllm) – Serve large language models efficiently across GPUs and environments.
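
To make the "cost-efficient deployments with vLLM" bullet above concrete, here is a minimal sketch using vLLM's offline inference API. The quantized model ID is a placeholder chosen for illustration, not a reference to a specific validated model.

```python
# Minimal sketch: load a quantized checkpoint with vLLM and generate text.
# The model ID below is a placeholder; any FP8/INT8/INT4 checkpoint that
# vLLM supports could be substituted.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Llama-3.1-8B-Instruct-quantized.w8a8")
params = SamplingParams(temperature=0.0, max_tokens=64)

for output in llm.generate(["What does Red Hat AI validate?"], params):
    print(output.outputs[0].text)
```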
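The new bullet describes a validation flow that pairs vLLM serving with GuideLLM capacity-planning benchmarks and accuracy evaluations from the Language Model Evaluation Harness. Below is a minimal, hedged sketch of an accuracy check through the harness's Python API on a vLLM backend; the model ID and task are illustrative placeholders, not the suite Red Hat AI actually runs.

```python
# Hedged sketch of an lm-evaluation-harness accuracy run using its vLLM backend.
# The model ID and task are placeholders for illustration only.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=RedHatAI/Llama-3.1-8B-Instruct-quantized.w8a8,dtype=auto",
    tasks=["gsm8k"],
)
print(results["results"]["gsm8k"])
```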