---
title: README
emoji: πŸ“š
colorFrom: indigo
colorTo: purple
sdk: static
pinned: false
---
![Vectara logo](Vectara-logo.png)
Vectara is an end-to-end platform to embed powerful generative AI features with extraordinary results.
We provide simple APIs for indexing documents and generating summaries using retrieval augmented generation (RAG),
all in a managed service that dramatically simplifies the task of building scalable, secure, and reliable GenAI applications.
To learn more, here are some resources:
* [Sign up](https://console.vectara.com/signup/?utm_source=huggingface&utm_medium=space&utm_term=i[…]=console&utm_campaign=huggingface-space-integration-console) for a Vectara account.
* Check out our API [documentation](https://docs.vectara.com/docs/).
* We have created [vectara-ingest](https://github.com/vectara/vectara-ingest) to help you with data ingestion and [vectara-answer](https://github.com/vectara/vectara-answer) as a quick start with building the UI.
* Join us on [Discord](https://discord.gg/GFb8gMz6UH) or ask questions in our [Forums](https://discuss.vectara.com/).
* Here are a few demo applications built with vectara-ingest and vectara-answer:
* [AskNews](https://asknews.demo.vectara.com/)
* [AskGSB](https://askgsb.demo.vectara.com/)
* [Legal Aid](https://legalaid.demo.vectara.com/)
* Our [Hughes Hallucination Evaluation Model](https://huggingface.co/vectara/hallucination_evaluation_model) (HHEM) is an open model for detecting hallucinations in LLM output.
* [HHEM leaderboard](https://huggingface.co/spaces/vectara/leaderboard)
* Our platform provides a production-grade [factual consistency score](https://vectara.com/blog/automating-hallucination-detection-introducing-vectara-factual-consistency-score/) (aka HHEM v2) which supports a longer sequence length, is calibrated, and is integrated into our Query APIs.
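As a rough illustration of the RAG flow described above, the sketch below builds a query payload that asks the platform to retrieve top-matching results from a corpus and generate a summary over them. The field names, corpus key, and payload shape here are assumptions for illustration only; consult the [API documentation](https://docs.vectara.com/docs/) for the authoritative request format.

```python
import json

def build_query_payload(query: str, corpus_key: str, limit: int = 5) -> dict:
    """Sketch of a RAG query payload: retrieve up to `limit` matching
    results from one corpus, then summarize them with generation.

    NOTE: field names and structure are illustrative assumptions, not the
    verified Vectara request schema -- check docs.vectara.com before use.
    """
    return {
        "query": query,
        "search": {
            "corpora": [{"corpus_key": corpus_key}],  # corpus to search
            "limit": limit,                            # max results to retrieve
        },
        "generation": {
            # feed the retrieved results into the summarizer
            "max_used_search_results": limit,
        },
    }

# "my-corpus" is a placeholder corpus key for this sketch.
payload = build_query_payload("What does Vectara's platform do?", "my-corpus")
print(json.dumps(payload, indent=2))
```

An actual request would send this payload to Vectara's query endpoint with your API key in the request headers; because the managed service handles indexing, embedding, and retrieval, the client side stays this small.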