Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 5 missing columns ({'uploader', 'description', 'upload_date', 'title', 'duration'})

This happened while the json dataset builder was generating data using

hf://datasets/dwb2023/yt-transcripts-v2/data/transcriptions-5760b78f-110c-4cbe-ba2a-f03efaa19339.json (at revision 02d31af25431b402f012e436b77982fa6f928ef8)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              url: string
              transcription: string
              datetime: string
              to
              {'url': Value(dtype='string', id=None), 'transcription': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'duration': Value(dtype='int64', id=None), 'uploader': Value(dtype='string', id=None), 'upload_date': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'datetime': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1323, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 5 missing columns ({'uploader', 'description', 'upload_date', 'title', 'duration'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/dwb2023/yt-transcripts-v2/data/transcriptions-5760b78f-110c-4cbe-ba2a-f03efaa19339.json (at revision 02d31af25431b402f012e436b77982fa6f928ef8)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

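For anyone loading this dataset directly with the `datasets` library, the cast error above only appears when files with different column sets are forced into one schema. A minimal consumer-side sketch, assuming you only want the JSON file named in the error (the printed column list is taken from that error message, not verified against the repo):

```python
from datasets import load_dataset

# Load only the JSON file named in the error above, so the builder never has to
# cast files with mismatched columns into a single schema.
ds = load_dataset(
    "dwb2023/yt-transcripts-v2",
    data_files="data/transcriptions-5760b78f-110c-4cbe-ba2a-f03efaa19339.json",
    split="train",
)

print(ds.column_names)  # expected per the error: ['url', 'transcription', 'datetime']
```

On the maintainer side, the other fix suggested in the error message is to declare separate configurations in the dataset's README so that each group of files keeps its own schema.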

Columns (name: type):
url: string
transcription: string
title: string
duration: int64
uploader: string
upload_date: string
description: string
datetime: string
url: https://www.youtube.com/live/Anr1br0lLz8?si=qz792SKvBHbY-n4N
transcription:
Hey, Wiz, is there a way to know whether what comes out of any RAG application we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. So there are no absolutes, but is there a way to know whether changes that we make to the system, to our RAG application, make the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah, I think a RAG assessment is the kind of thing we can make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg, and we are here to talk RAG eval today. We're AI Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout-out in the chat and where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever. We're also going to outline how you can actually assess your RAG systems using the RAG Assessment, or RAGAS, framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers, Jithin and Shahul, joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLM Wizard and CTO at AI Makerspace, back for demos real soon. So let's get into it, everybody. Today we're talking RAG evaluation, this black art that everybody is really, really focused on as they start to build, prototype, and deploy these systems to production in 2024. As we align ourselves to this session, here's what we want to get out of it: what's up with this Langchain v0.1 that just came out? We want to understand how we can build a RAG system with the latest syntax and then also evaluate it. There's a lot of changes happening on the Ragas side, just as on the Langchain side. Finally, we want to see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into Langchain, then we'll go into a high-level view of RAG and see exactly where the different Langchain components fit in. Finally, we're going to see what you all came here for today: the RAGAS metrics and how to implement the RAGAS framework. So we'll be building, we'll be evaluating, we'll be improving today, and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context, our so-called context-aware applications, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. And we're doing that in the context of v0.1.0.
The blog post they announced this with said, "The journey of a thousand miles always starts with a single step." And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes, and they'll continue to support v0.1 for a time every time that gets incremented. Of course, as bug fixes and new features come out, they're also going to be incrementing the third slot, v0.1.x. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. We're going to primarily focus on retrieval today, and also on the Langchain core that leverages LCEL, the Langchain Expression Language. In terms of retrieval, there's a lot that you can check out and add after today's event, and that you can then go assess to see if it actually helps your pipelines. So I definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to focus on this Langchain core. This is the Langchain Expression Language, and this is really a very easy, elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, and into operating in production environments and monitoring and visibility tooling with LangSmith. So really it all starts from here and allows you to do some industry-leading, best-practice stuff with these tools. Now, today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations, from Langchain Community. Chains, of course, are the fundamental abstraction in Langchain, and we will use those aspects to build our RAG system today. When we go and we assess, we're then going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So, a quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications, to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact-checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, an embedding representation, and then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, for similar things, similar to the vector question we asked. We can find those similar things. And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query.
You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
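As a rough sketch of the package split Wiz is describing (pip names as of the langchain 0.1.x line; this is an illustration, not the notebook's literal install cell):

```python
# pip install -qU langchain langchain-core langchain-community langchain-openai langchainhub
#
# Core abstractions live in langchain-core, integrations in langchain-community,
# provider-specific code in packages like langchain-openai, and prompts pulled
# from LangChain Hub come via the langchainhub client.
import langchain

print(langchain.__version__)  # the walkthrough was recorded against 0.1.5
```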
So one of the things that people are oftentimes worried about with Langchain is that there's a ton of extra, kind of unnecessary stuff in there. Well, this goes a long way toward solving that problem, and it's awesome. So let's see first which version we're working with, so if you're watching this in the future you can be sure: we're on version 0.1.5, so we're already at dot five. Langchain, you know, they're hard at work over there. We're going to need to add our OpenAI API key, since we are going to be leveraging OpenAI. Basically, this is a way that we can use our LLM for evaluation, but also for generation, and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much the case that we have the same stuff that we always have. We need to create an index, and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started, as we always do, with creating the index. Now, we can and will still use LCEL. LCEL is important. One of the things that we're going to show in this notebook is that you don't have to use LCEL; they've implemented some abstractions in order to convert the base chains that you're used to importing into LCEL format, so you get all the advantages. But we're still going to look at LCEL today, because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package, where we're going to grab our document loader to get our web-based loader. Importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it always has. Our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating the index. We have a document in this case. It's just one document, but we have it, and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core, so this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here, and then we can split those documents. This is less focused on a specific Langchain RAG and more on the evaluation, so we're just choosing these values to showcase what we're trying to showcase. You see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text-embedding-ada-002. We don't need to use this embeddings model, and it looks like very soon we'll be able to use OpenAI's latest model once the tiktoken library updates; there's a PR that's ready, just waiting to be merged, which is going to let us do that. But for now, until that change is implemented, we're going to stick with text-embedding-ada-002, and this is the classic embedding model, right? Nothing too fancy. Just what we need.
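A sketch of those indexing steps under the 0.1.x package layout. The blog URL and the chunking values are stand-ins (the talk only says the chunk size and overlap were chosen arbitrarily), so treat them as placeholders rather than the notebook's exact settings; an OPENAI_API_KEY is assumed to be set in the environment.

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

# load the LangChain v0.1.0 announcement post (assumed URL)
loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")
docs = loader.load()

# arbitrary chunking values, as in the walkthrough; tune for your own data
splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=50)
splits = splitter.split_documents(docs)

# the "classic" embedding model referenced above
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
```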
When it comes to our FAISS vector store, what we need is to get that from Langchain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the VectorStore; it's just coming from the community package. We'll pass in our split documents as well as our embedding model, and away we go. Next, we're going to create a retriever. This is the same as we've always done: dot as_retriever on our VectorStore. Now we can interact with it through that retrieval API. We can test it to see it working: why did they change to version 0.1.0? And we get some documents relevant to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating the AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull one from the hub. There are lots of different community-created or Langchain-created prompts on the hub. The idea is that you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own; you don't have to use one from the hub. And so we're just going to create a simple one: answer the question based only on the following context. If you cannot answer the question with the context, please respond with "I don't know." That's a classic. We pass in our context, we pass in our question, and away we go. And you'll notice that this is exactly the same as it used to be. Let's go, Langchain. Now we'll set up our basic QA chain. I've left a lot of comments here in the implementation of this LCEL chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at: we can create this chain using LCEL, and we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with Ragas. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase how we can do this a little more easily while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out, and we can see: what are the major changes in v0.1.0? The major changes are... it goes on, and it gives a correct answer. That's great. And we have: what is LangGraph? And basically, the response from the LLM is "I don't know," which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step, now that we have this base chain, would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was Langchain v0.1.0 RAG. Now let's talk RAG assessment. The RAGAS framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, and we can assess the different pieces that helped generate that answer within the RAG system.
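Before going further, here is the base pipeline from Wiz's walkthrough gathered into one hedged sketch. Variable names carry over from the indexing sketch above; the prompt wording approximates the one read out in the talk, and the chain structure is one way to return the retrieved context alongside the answer, not necessarily the notebook's exact LCEL.

```python
from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_community.vectorstores import FAISS  # requires the faiss-cpu package
from langchain_openai import ChatOpenAI

# prompt in Langchain's system/human message convention, approximating the talk's wording
rag_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Answer the question based only on the following context. "
     "If you cannot answer the question with the context, please respond with 'I don't know'."),
    ("human", "Context:\n{context}\n\nQuestion:\n{question}"),
])

vectorstore = FAISS.from_documents(splits, embeddings)
retriever = vectorstore.as_retriever()

llm = ChatOpenAI(model="gpt-3.5-turbo")

rag_chain = (
    # fetch documents for the incoming question and keep the question itself
    {"context": itemgetter("question") | retriever,
     "question": itemgetter("question")}
    # add the generated response while passing the retrieved context through,
    # so the chain's output contains both (needed later for Ragas)
    | RunnablePassthrough.assign(response=rag_prompt | llm)
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)  # the generated answer
print(len(result["context"]))      # the documents that were retrieved for it
```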
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
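Sketching the test-set generation and response collection just described, using the ragas 0.1.x API. The 1000/200 chunking follows the talk; the test size, the distribution ratios, and the column names are assumptions to check against your ragas version. `docs` and `rag_chain` carry over from the earlier sketches.

```python
from datasets import Dataset
from langchain.text_splitter import RecursiveCharacterTextSplitter
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

# a separate chunking pass for the eval documents, as described above (size 1000, overlap 200)
eval_splits = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# gpt-3.5-turbo-16k generator with a gpt-4 critic are the defaults mentioned in the talk
generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    eval_splits,
    test_size=20,  # placeholder size; mind the API costs
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},  # placeholder ratios
)
test_df = testset.to_pandas()  # question, contexts, ground_truth, evolution_type, ...

# run every synthetic question through the RAG pipeline, collecting answers and contexts
answers, contexts = [], []
for question in test_df["question"]:
    result = rag_chain.invoke({"question": question})
    answers.append(result["response"].content)
    contexts.append([doc.page_content for doc in result["context"]])

response_dataset = Dataset.from_dict({
    "question": test_df["question"].tolist(),
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_df["ground_truth"].tolist(),
})
```

The five metrics listed just above are then imported and handed to ragas's evaluate call, which the next part of the walkthrough (and the sketch after it) picks up.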
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
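And a sketch of that evaluation call plus the multi-query upgrade described above, with the same hedges as before: ragas 0.1.x and langchain 0.1.x APIs, names carried over from the earlier sketches, and comments that paraphrase the metric definitions given earlier in the talk.

```python
from ragas import evaluate
from ragas.metrics import (
    faithfulness,        # claims in the answer supported by the context / all claims in the answer
    answer_relevancy,    # does the answer address the question completely, without padding
    context_precision,   # are the relevant chunks ranked near the top of what was retrieved
    context_recall,      # ground-truth statements attributable to the context / all ground-truth statements
    answer_correctness,  # end-to-end: factual plus semantic agreement with the ground truth
)
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

metrics = [faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness]

baseline_scores = evaluate(response_dataset, metrics=metrics)
print(baseline_scores)                      # aggregate scores for the base chain
per_question = baseline_scores.to_pandas()  # per-question scores, joinable back to evolution_type

# the "improvement": wrap the same retriever and LLM in a multi-query retriever
mq_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

# the abstracted (non-hand-written-LCEL) chain construction mentioned above, reusing the
# same prompt; create_retrieval_chain expects the query under "input", so we also pass
# "question" through for the prompt template
doc_chain = create_stuff_documents_chain(llm, rag_prompt)
mq_rag_chain = create_retrieval_chain(mq_retriever, doc_chain)

question = "What is LangGraph?"
upgraded = mq_rag_chain.invoke({"input": question, "question": question})
print(upgraded["answer"])  # output keys are "input", "context", and "answer"

# to compare, re-run the response-collection loop above against mq_rag_chain
# (reading "answer" and "context" from its output) and call evaluate() again
```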
We're going to do the same process we did before by cycling through each of the questions in our test set and then getting responses and context for them and then we're going to evaluate across the same metrics you'll notice that our metrics uh have definitely changed so let's look at a little bit more closely how they've changed so it looks like we've gotten better at our faithfulness metric we've gotten significantly better at answer relevancy which is nice we've gotten a little bit better at context recall. We've taken some small hits, a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how would we improve this or answer correctness doesn't get impacted by this change, but at least we know in what ways, how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGIS. And each of these metrics correspond to specific parts of our RAGIS application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our rag systems not just sort of by thinking about i think that's better uh land graph question got answered better but actually we can go and we can show our bosses our investors anybody that might be out there listening hey look we have a more faithful system check it out went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take just things off the shelf that we're trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGIS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics even different end-to-end metrics and big shout out to everybody today that supported our event shout out to langchain shout out to ragas and shout out to everybody joining us live on youtube with that it's time for q a and i'd like to welcome Wiz back to the stage as well as Jithin and Shaul from Vragus, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jitin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich. Yeah. You want to go for it? I know I didn't actually understand what you actually mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination rack systems? That's it. 
The line between memorization and hallucination, I don't know where to draw that particular line. It's something seems like, seems like what it meant is the usage of internal knowledge versus you know there are situations in drag when knowledge is a continually evolving thing right so maybe the llm thing that a person is you know is still alive but the person died yesterday or something now the now if if that particular thing is uh is read using wikipedia or something there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia sees. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard to crack problem and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require is when using LLMs to build RAC, we can align LLMs so that LLMs answer only from the given grounded text data and not from the internal knowledge. So, or there must be high preference to the grounded text data compared to what is there in the LLMs internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided uh basically really i mean yeah yeah we when it comes to the memorization versus hallucination i think the the most important thing is uh you know memorization is that you could maybe frame it as a slightly less negative form of hallucination because it's likely to be closer to whatever the training data was. But in terms of RAG application, both bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context aware chunking. Can we expect significant improvements of rag results using smart chunking? What do you think, Jitin? Is this something that we can expect improvements in? Yeah, so how you, so one thing that we see when we're building rag systems is that how you're formatting the data is where most of the problems are. Like if you take some time to clean up the data and to format the data is like where most of the problems are like if you if you take some time to clean up the data and like to format data that actually makes it easier for your act the performance difference like like really great because like models right now if you're using a very stable model if you provide with the correct context the model will be able to use the information in the context to get it so all these tips and tricks to optimize about even like um chris was using the multi uh context method right it's also another trick to get make sure that you get different context from different perspectives into the final answer so all these different types of tricks can be used and this is actually why we started this also we wanted to like evaluate all the different different tricks that are there out there and try to see which works best because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more because what goes in comes out. So that's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul. 
Can you compare TrueLens and RAGAS? This is the first I've heard of TrueLens. Maybe if other people have, and maybe you can tell us a little bit about what they're doing and what you're doing and the overlap you see. Sure. Yeah, TrueLens has been around for a while for evaluating ML applications, and they are also doing a lot of applications. So RAGAS currently is mostly focused on racks as in we wanted to crack the application that most people care about that is racks. And so we are mostly, you know, doing things that can help people to evaluate and improve their racks. We are not building any UI. We are largely providing for the integrations part. We are largely interested in providing integrations to players like Langsmith so that people can trace and see their UI rather than building a UI on top of Raga. So Raga mainly offers metrics and features like as you have seen, synthetic test data generation to help you evaluate your racks. I don't think TrueLens has a synthetic data generation feature, which is something that most of our developers really liked because it has saved a ton of their time because nobody really wants to go and label hundreds of documents of documents it's a boring job right so we are trying to double down on these points that we have seen that developers really like and we are trying to stay true to the open source community as well nice okay very cool very cool rad asks I'll send this one over to you, Wiz. Can you combine multiple query retriever with conversational retrieval chain? Sure. Yeah. Basically, Langchain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process or combine many different retrievers in an ensemble, you can do that with basically any chain. Basically, that conversational retrieval chain is looking for a retriever. And so as long as it can be accessed through the retrieval API, it's going to work fine. retriever. And so as long as it can be accessed through the retrieval API, it's gonna work fine. I would I would add though, conversational retrieval chain, you'll want to use the 0.1.0 version, which is, you know, been implemented with LCL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of sort of smart, chunking, smart hierarchy of data. Is there sort of like, we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin, I know the chunk size matters. Are there like guidelines for chunking that you guys are aware of or that you recommend when people are building rag systems? Yeah, so I don't have like a very good guideline. Maybe Shahul can take back it up. But one thing that I've like seen like personally from experience is like, so A, do the evaluations, but then B, like also making sure that you get, you combine like multiple, like, so you basically, you create a hierarchy system where you have like different chunks. Then you summarize the different like concepts, like define the, uh, summarize the different channels so that, uh, even like all the beer, like core ideas are there in the hierarchy that actually has been like very like helpful. So, yeah. 
like core ideas are there in the hierarchy that actually has been like very like helpful so yeah so exactly like chunking size i haven't seen it in the uh like matrices as such um but all the like all the recursive like summarization that has helped and i think uh lament x has like uh a few retrievers right there what shall what do you think? VARSHAAL KUMAR- Yeah, just adding some more points into it. I think there is no one size fits chunk size that fits all type of documents and all type of text data. So it's a relative thing that should either you get. So there are two ways to handle this problem. Either you can, the general rule of thumb is to ensure that enough context the context makes sense even without any you know as as an individual you know as an individual chunk it it should make con if it should make some sense if you read it if a person writes it so how to how to achieve this you can achieve this either using writing a set of heuristics or let's say you know it can be something like okay determine the document you know type or something and change it using that and i think the from moving from heuristics to where we are going i think we might even see smaller models smaller very smaller models that are capable of chunking determining the chunk boundaries smartly so that you don't really have to rely on the heuristics it's more a generalizable way of doing it so I think that's where we are going in in the future um of chunking and uh hopefully the problem gets solved like that yeah yeah yeah I really like this idea of making sure each individual chunk makes sense before sort of moving up a level and thinking about, okay, what's the exact, you know, hierarchical parent document, multi-equal, like whatever it is that you're doing, each chunk should make sense. And that's going to be dependent on data. Yeah. I really liked that. And okay. So let's, let's go ahead and sort of related to that, I wanna go to this embedding model question in the Slido from Ron. It's similar in sort of relation to this chunking idea. I mean, people always want the answer, you know? So what chunk size? Here, Ron asks, which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shaul, if you want to continue here. Sure. Again, there is no one fit size for this answer. You know, the thing is that, again, it depends on a lot of factors. So if you don't want to really you know use again first first you know question will be open source or closed source you have like a lot of open source players even revealing open a with their open source models like i think recently uh by uh alibaba group uh released their m3 embedding which is like awesome it's like most powerful open source embeddings which we we have ever seen uh even revealing open is at our buildings right so it's it's a set of questions that you have to answer if you want to go for easy way of building a baseline rag of course open is embeddings you know good place to start you don't have to worry about anything else then you you can iteratively improve it that's where also ragas comes in let's say you have now you have an abundance of embeddings to choose from right so now you have you want a way to compare it so you don't use ragas you know you can just compare all these different embeddings choose the one that fits you and you're done there it it is. There it is. Just closing up this topic on chunks and embedding models. 
Wiz, I wonder, why did you choose Ada? And why did you choose, what is it, 750 overlap? Any particular reason? Zero thought put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or more mindful approach, and so to do that, we just kind of selected them. I think the thing I really want to echo that we've heard so far is that when we're thinking about our index, or we're thinking about our vector store, we really want to be able to represent individual quanta of information. And so the closer we can get to that, the better it will be. And then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah, yeah. And I think, again, we go back to this data-centric idea. It's easy to get the RAG system set up and to get it instrumented with Ragas, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, the data engineering and data science on the front end, that really you just can't outsource to AI and you just have to deal with yourself. Okay, one more sort of what's-the-answer question. I want to maybe send this one to Jithin. If somebody is picking up Ragas and they build a RAG system and they're like, okay, well, which Ragas metric should I use? Which one should I look at? What would you say? Is there a starting point? Is there a sequence that you'd look at? Or is the jury still out on this? So first of all, just try it out with all of the metrics, because figuring out which components work, and what the state of all these components is, gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs; or if your retriever is bad, then figure out, okay, in the retriever part, what is actually happening: is it context relevancy, is it the recall that's bad? That is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
And like after you understand like what the metrics are, you will get an idea of how you could like what other stuff you can actually try out to improve it and if it's like try out the easiest part like cross out the low-hanging fruits first and that is how you would like over time like progressively like uh improve it like but like i said it's not the absolute values that matter it's like the trends that matter right so you guys did a good job in explaining that so make sure like you go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's like, yeah, it's so simple when we can break it down like this. And you know, just a shout out to everybody out in Manny, just shouting out to Manny. That was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn, but I think this idea of like getting your system instrumented so you can start to look at and chunk up different pieces of it and try to improve them. There's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGUS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGUS in 2024? Maybe if either of you wanna go ahead and take take this go ahead and take it let us know what to expect from you guys heading forward this year yeah shall we we want to take this yeah yeah that's a tricky question so you want to go where the community takes us so yeah doubling down on um things like synthetic data generation there are there are a lot of interests there there are a lot of interest in expanding ragas to other llm tasks as well so yeah there are all these interesting directions to take hopefully uh you know we'll get more signals from the community on which path so to take i mean we do have a lot of directions a lot of feature requests coming in so we have to just you know take that decision and move on but uh but yeah as of now um the the synthetic test generation is something that gets a lot of interest we want to you know make it very stable very useful make sure that that we push the limits of you know uh the the closed source models and plus frameworks analogy uh to build a great uh you know test data point that's that's very easy and uh easy to use yeah yeah anything to add yet then yeah like honestly like so right now we have a good base right now we're like very curious what like what we can do like evaluation driven development what are the extremes of that so like curious to see like what like uh what the community comes up with what like like you guys can like we come up with so yeah excited really excited for that yeah yeah let's see what everybody builds ships and shares out there and uh and contributes well thanks so much jiten thanks shaul thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us. We're talking alignment with reinforcement learning with AI feedback. If you haven't yet, please like and subscribe on YouTube. 
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well. We will get back to your question and we will share all the questions today with the RAGUS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time and as always keep building, shipping and sharing and we and the RAGUS guys will definitely keep doing the same. Thanks everybody. See you next time.
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively! Special thanks to LangChain and RAGAS for partnering with us on this event! Event page: https://lu.ma/theartofrag Have a question for a speaker? Drop them here: https://app.sli.do/event/2rLa8RML994YsMQt1KLrJi Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ryzhbvxZtbvQ4BCv5
2024-06-09T21:36:12.312444
https://www.youtube.com/live/Anr1br0lLz8?si=qz792SKvBHbY-n4N
Hey, Wiz, is there a way to know that what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. So there are no absolutes, but is there a way to know that changes that we make to the system, to our RAG application, make the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah, I think there's a RAG assessment we can make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg and we are here to talk RAG eval today. We're AI Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout-out in the chat, where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from LangChain, their most recent stable release and most stable version ever. We're also going to outline how you can actually assess your RAG systems using the RAG assessment, or RAGAS, framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into LangChain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers, Jithin and Shahul, joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLM Wizard and CTO at AI Makerspace, back for demos real soon. So let's get into it, everybody. Today we're talking RAG evaluation, this black art that everybody is really, really focused on as they start to build, prototype, and deploy these systems to production in 2024. As we align ourselves to this session, here's what we want to get out of it: what's up with this LangChain v0.1 that just came out? We want to understand how we can build a RAG system with the latest syntax and then also evaluate it. There's a lot of changes happening on the Ragas side, just as on the LangChain side. Finally, we want to see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into LangChain, then we'll go into a high-level view of RAG and see exactly where the different LangChain components fit in. Finally, we're going to see what you all came here for today, the RAGAS metrics and how to implement the RAGAS framework. So we'll be building, we'll be evaluating, we'll be improving today, and the Q&A should be pretty dope. So, LangChain v0.1.0. What's LangChain all about again? Well, it's all about enabling us to build LLM applications that leverage context. They're so-called context-aware, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. And we're doing that in the context of v0.1.0.
The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. 
You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
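A minimal sketch of that prompt setup in LangChain's system / human message style follows; the wording paraphrases the talk rather than quoting the event's notebook, and the {context} and {question} variables are filled in at generation time.

```python
# A RAG prompt template in the system / human style described above.
# The instruction text paraphrases the talk; it is not the notebook's exact prompt.
from langchain_core.prompts import ChatPromptTemplate

rag_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Use the provided context to answer the user's query. "
     "If you cannot answer from the context, respond with 'I don't know'."),
    ("human", "Context:\n{context}\n\nQuery:\n{question}"),
])
```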
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
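A minimal sketch of the loading and splitting steps just described, assuming the langchain, langchain-openai, langchain-community, and langchainhub packages are installed; the blog URL and the chunk settings here are illustrative assumptions rather than the notebook's exact values.

```python
# Load the LangChain v0.1.0 blog post and split it into chunks.
# The URL and the chunk_size / chunk_overlap values are illustrative.
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")
docs = loader.load()  # one Document for the whole page

splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=100)
splits = splitter.split_documents(docs)
print(len(splits))  # the walkthrough ends up with 29 chunks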
Just what we need. When it comes to our FAISS vector store, what we need is to get that from langchain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the VectorStore. It's just coming from the community package. We'll pass in our split documents as well as our embedding model and away we go. Next, we're gonna create a retriever. This is the same as we've always done, .as_retriever on our VectorStore. Now we can interact with it through that retrieval API. We can test it to see it working. Why did they change to version 0.1.0? And we get some relevant documents to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating that AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull it from the hub. So there are lots of different community-created or LangChain-created prompts on the hub. The idea is that you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own. You don't have to use one from the hub. And so we're just going to create the simple one: answer the question based only on the following context. If you cannot answer the question with the context, please respond with "I don't know." That's a classic. We pass in our context, we pass in our question, away we go. And you'll notice that this is exactly the same as it used to be. Let's go, LangChain. Now we'll set up our basic QA chain. I've left a lot of comments here in the implementation of this LCEL chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at: we can create this chain using LCEL. And we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with RAGAS. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase a little bit more exactly what we can do to do this a little bit easier while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out and we can see, you know, what are the major changes in v0.1.0? The major changes are... and it goes on, it gives a correct answer. That's great. And we have, what is LangGraph? And basically the response from the LLM is, I don't know, which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step now that we have this base chain would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was LangChain v0.1.0 RAG. Now let's talk RAG assessment. The RAGAS framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, we can assess different pieces that helped generate that answer within the RAG system.
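Pulling the pieces of that walkthrough together, here is one way the index and the LCEL chain could look. This is a hedged sketch rather than the notebook's exact code; it reuses the `splits` and `rag_prompt` names from the sketches above.

```python
# Sketch of the FAISS index plus an LCEL chain that returns both the response
# and the retrieved context (so RAGAS can score it later). Not the exact notebook code.
from operator import itemgetter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.runnables import RunnablePassthrough

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
vectorstore = FAISS.from_documents(splits, embeddings)   # `splits` from the earlier sketch
retriever = vectorstore.as_retriever()

llm = ChatOpenAI(model="gpt-3.5-turbo")

rag_chain = (
    {"context": itemgetter("question") | retriever,
     "question": itemgetter("question")}
    | RunnablePassthrough.assign(response=rag_prompt | llm)  # `rag_prompt` from above
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)  # generated answer
print(len(result["context"]))      # retrieved Documents, passed through for evaluation
```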
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
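Written as simple ratios, the two grounding checks illustrated with the Einstein and France examples come out roughly as follows; the RAGAS implementation computes them with LLM prompts, but the intuition is the same.

$$\text{faithfulness} = \frac{\left|\{\text{claims in the answer supported by the retrieved context}\}\right|}{\left|\{\text{claims in the answer}\}\right|}$$

$$\text{context recall} = \frac{\left|\{\text{ground-truth sentences attributable to the retrieved context}\}\right|}{\left|\{\text{sentences in the ground truth}\}\right|}$$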
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
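A hedged sketch of that test set generation and response collection, written against the ragas 0.1-era API; the variable names (`eval_documents`, `rag_chain`), the test size, and the exact column names are assumptions rather than the notebook's literal code.

```python
# Synthetic test set generation with RAGAS (0.1-era API), then collecting the
# RAG pipeline's answers and contexts into a Hugging Face Dataset for evaluation.
from datasets import Dataset
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

generator = TestsetGenerator.with_openai()  # gpt-3.5-turbo-16k generator, GPT-4 critic by default
testset = generator.generate_with_langchain_docs(
    eval_documents,                       # the separate 1000/200 chunk pile described above
    test_size=20,                         # illustrative size -- stay mindful of API costs
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()             # question, contexts, ground_truth, evolution_type

records = {"question": [], "answer": [], "contexts": [], "ground_truth": []}
for question, ground_truth in zip(test_df["question"], test_df["ground_truth"]):
    out = rag_chain.invoke({"question": question})        # `rag_chain` from the earlier sketch
    records["question"].append(question)
    records["answer"].append(out["response"].content)
    records["contexts"].append([doc.page_content for doc in out["context"]])
    records["ground_truth"].append(ground_truth)

response_dataset = Dataset.from_dict(records)
```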
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
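And a sketch of the two steps just walked through: scoring the collected responses with RAGAS, then swapping in the multi-query retriever using the abstracted chain constructors. Again a hedged sketch; the `retrieval_prompt` below restates the same instructions with the {input} variable those constructors expect, and is not the notebook's literal prompt.

```python
# Score the collected responses with RAGAS, then build the multi-query variant.
from ragas import evaluate
from ragas.metrics import (
    faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness,
)
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate

results = evaluate(
    response_dataset,   # built in the previous sketch
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness],
)
print(results)                      # aggregate scores
per_question = results.to_pandas()  # per-question scores, as shown in the walkthrough

# Same base retriever and LLM as before, wrapped in a multi-query retriever.
multiquery_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

# create_retrieval_chain expects {input} alongside {context}, so the prompt is restated here.
retrieval_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer from the context, say 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{input}"
)
document_chain = create_stuff_documents_chain(llm, retrieval_prompt)
retrieval_chain = create_retrieval_chain(multiquery_retriever, document_chain)

out = retrieval_chain.invoke({"input": "What is LangGraph?"})
print(out["answer"])     # out["context"] again holds the retrieved documents for RAGAS
```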
We're going to do the same process we did before by cycling through each of the questions in our test set and then getting responses and context for them and then we're going to evaluate across the same metrics you'll notice that our metrics uh have definitely changed so let's look at a little bit more closely how they've changed so it looks like we've gotten better at our faithfulness metric we've gotten significantly better at answer relevancy which is nice we've gotten a little bit better at context recall. We've taken some small hits, a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how would we improve this or answer correctness doesn't get impacted by this change, but at least we know in what ways, how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGIS. And each of these metrics correspond to specific parts of our RAGIS application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our rag systems not just sort of by thinking about i think that's better uh land graph question got answered better but actually we can go and we can show our bosses our investors anybody that might be out there listening hey look we have a more faithful system check it out went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take just things off the shelf that we're trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGIS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics even different end-to-end metrics and big shout out to everybody today that supported our event shout out to langchain shout out to ragas and shout out to everybody joining us live on youtube with that it's time for q a and i'd like to welcome Wiz back to the stage as well as Jithin and Shaul from Vragus, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jitin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich. Yeah. You want to go for it? I know I didn't actually understand what you actually mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination rack systems? That's it. 
The line between memorization and hallucination, I don't know where to draw that particular line. It's something seems like, seems like what it meant is the usage of internal knowledge versus you know there are situations in drag when knowledge is a continually evolving thing right so maybe the llm thing that a person is you know is still alive but the person died yesterday or something now the now if if that particular thing is uh is read using wikipedia or something there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia sees. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard to crack problem and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require is when using LLMs to build RAC, we can align LLMs so that LLMs answer only from the given grounded text data and not from the internal knowledge. So, or there must be high preference to the grounded text data compared to what is there in the LLMs internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided uh basically really i mean yeah yeah we when it comes to the memorization versus hallucination i think the the most important thing is uh you know memorization is that you could maybe frame it as a slightly less negative form of hallucination because it's likely to be closer to whatever the training data was. But in terms of RAG application, both bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context aware chunking. Can we expect significant improvements of rag results using smart chunking? What do you think, Jitin? Is this something that we can expect improvements in? Yeah, so how you, so one thing that we see when we're building rag systems is that how you're formatting the data is where most of the problems are. Like if you take some time to clean up the data and to format the data is like where most of the problems are like if you if you take some time to clean up the data and like to format data that actually makes it easier for your act the performance difference like like really great because like models right now if you're using a very stable model if you provide with the correct context the model will be able to use the information in the context to get it so all these tips and tricks to optimize about even like um chris was using the multi uh context method right it's also another trick to get make sure that you get different context from different perspectives into the final answer so all these different types of tricks can be used and this is actually why we started this also we wanted to like evaluate all the different different tricks that are there out there and try to see which works best because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more because what goes in comes out. So that's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul. 
Can you compare TrueLens and RAGAS? This is the first I've heard of TrueLens. Maybe if other people have, and maybe you can tell us a little bit about what they're doing and what you're doing and the overlap you see. Sure. Yeah, TrueLens has been around for a while for evaluating ML applications, and they are also doing a lot of applications. So RAGAS currently is mostly focused on racks as in we wanted to crack the application that most people care about that is racks. And so we are mostly, you know, doing things that can help people to evaluate and improve their racks. We are not building any UI. We are largely providing for the integrations part. We are largely interested in providing integrations to players like Langsmith so that people can trace and see their UI rather than building a UI on top of Raga. So Raga mainly offers metrics and features like as you have seen, synthetic test data generation to help you evaluate your racks. I don't think TrueLens has a synthetic data generation feature, which is something that most of our developers really liked because it has saved a ton of their time because nobody really wants to go and label hundreds of documents of documents it's a boring job right so we are trying to double down on these points that we have seen that developers really like and we are trying to stay true to the open source community as well nice okay very cool very cool rad asks I'll send this one over to you, Wiz. Can you combine multiple query retriever with conversational retrieval chain? Sure. Yeah. Basically, Langchain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process or combine many different retrievers in an ensemble, you can do that with basically any chain. Basically, that conversational retrieval chain is looking for a retriever. And so as long as it can be accessed through the retrieval API, it's going to work fine. retriever. And so as long as it can be accessed through the retrieval API, it's gonna work fine. I would I would add though, conversational retrieval chain, you'll want to use the 0.1.0 version, which is, you know, been implemented with LCL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of sort of smart, chunking, smart hierarchy of data. Is there sort of like, we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin, I know the chunk size matters. Are there like guidelines for chunking that you guys are aware of or that you recommend when people are building rag systems? Yeah, so I don't have like a very good guideline. Maybe Shahul can take back it up. But one thing that I've like seen like personally from experience is like, so A, do the evaluations, but then B, like also making sure that you get, you combine like multiple, like, so you basically, you create a hierarchy system where you have like different chunks. Then you summarize the different like concepts, like define the, uh, summarize the different channels so that, uh, even like all the beer, like core ideas are there in the hierarchy that actually has been like very like helpful. So, yeah. 
So exactly on chunking size, I haven't seen it in the metrics as such, but all the recursive summarization has helped, and I think LlamaIndex has a few retrievers for that right there. Shahul, what do you think? Yeah, just adding some more points onto it. I think there is no one chunk size that fits all types of documents and all types of text data. It's a relative thing. So there are two ways to handle this problem. The general rule of thumb is to ensure that there is enough context, that the chunk makes sense even on its own. As an individual chunk, it should make some sense if a person reads it. So how do you achieve this? You can achieve this either by writing a set of heuristics, let's say something like, okay, determine the document type and change the chunking based on that. And moving on from heuristics to where we are going, I think we might even see smaller models, very small models, that are capable of determining the chunk boundaries smartly, so that you don't really have to rely on the heuristics. It's a more generalizable way of doing it. So I think that's where we are going in the future of chunking, and hopefully the problem gets solved like that. Yeah, yeah. I really like this idea of making sure each individual chunk makes sense before sort of moving up a level and thinking about, okay, what's the exact hierarchical, parent-document, multi-query, whatever it is that you're doing. Each chunk should make sense. And that's going to be dependent on data. Yeah, I really liked that. Okay, so let's go ahead and, sort of related to that, I wanna go to this embedding model question in the Slido from Ron. It's similar in relation to this chunking idea. I mean, people always want the answer, you know? So what chunk size? Here, Ron asks, which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shahul, if you want to continue here. Sure. Again, there is no one-size-fits-all answer. The thing is that, again, it depends on a lot of factors. The first question will be open source or closed source. You have a lot of open source players even rivaling OpenAI with their models. I think recently the Alibaba group released their M3 embedding, which is awesome. It's one of the most powerful open source embeddings we have ever seen, even rivaling OpenAI's embeddings, right? So it's a set of questions that you have to answer. If you want to go for the easy way of building a baseline RAG, of course OpenAI's embeddings are a good place to start. You don't have to worry about anything else. Then you can iteratively improve it, and that's where Ragas also comes in. Let's say now you have an abundance of embeddings to choose from, right? Now you want a way to compare them, so you can use Ragas, you know, you can just compare all these different embeddings, choose the one that fits you, and you're done. There it is. Just closing up this topic on chunks and embedding models.
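Circling back to the earlier audience question about combining the multi-query retriever with a conversational retrieval chain: one hedged way to do it in the v0.1 style is to wrap the retriever so it first condenses the chat history into a standalone query. The prompt wording below is an assumption, not from the event, and it reuses `llm`, `multiquery_retriever`, and `document_chain` from the earlier sketches.

```python
# Hedged sketch: make the multi-query retriever history-aware, then reuse the retrieval chain.
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

condense_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    ("human", "Rewrite the question above as a standalone search query."),
])

history_aware_retriever = create_history_aware_retriever(llm, multiquery_retriever, condense_prompt)
conversational_chain = create_retrieval_chain(history_aware_retriever, document_chain)

out = conversational_chain.invoke({"input": "And how would I evaluate that?", "chat_history": []})
print(out["answer"])
```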
Wiz, I wonder, why did you choose Ada? And why did you choose, what is it, 750 overlap? Any particular reason? Zero thought put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or a more mindful approach, and so to do that, we just kind of selected them. I think the thing I really want to echo that we've heard so far is, when we're thinking about our index or our vector store, we really want to be able to represent individual quanta of information. The closer we can get to that, the better it will be, and then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah, and I think again we go back to this data-centric idea. It's easy to get the RAG system set up and to get it instrumented with RAGAS, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, the data engineering and data science on the front end, that you really just can't outsource to AI and have to deal with yourself. Okay, one more sort of what's-the-answer question. I want to maybe send this one to Jithin. If somebody is picking up Ragas and they build a RAG system and they're like, okay, well, which Ragas metric should I use? Which one should I look at? What would you say? Is there a starting point? Is there a sequence that you'd look at? Or is the jury still out on this? So first of all, just try out all of the metrics. Basically, once you figure out how each component is doing, what the state of all these components is, that gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs. Or if your retriever is bad, then figure out, okay, in the retriever part, what is actually happening? Is it context relevancy? Is it the recall that's bad? That is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
And after you understand what the metrics are, you will get an idea of what other stuff you can actually try out to improve it. Try the easiest part, cross off the low-hanging fruit first, and that is how you progressively improve it over time. But like I said, it's not the absolute values that matter, it's the trends that matter, right? You guys did a good job of explaining that. So make sure you go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's so simple when we can break it down like this. And you know, just a shout out to everybody out there, and to Manny in particular. That was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn, but I think this idea of getting your system instrumented so you can start to look at and chunk up different pieces of it and try to improve them, there's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGAS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGAS in 2024? Maybe either of you want to go ahead and take this. Let us know what to expect from you guys heading forward this year. Shall we take this? Yeah, that's a tricky question. We want to go where the community takes us. So yeah, doubling down on things like synthetic data generation, there's a lot of interest there. There's also a lot of interest in expanding Ragas to other LLM tasks, so there are all these interesting directions to take. Hopefully we'll get more signals from the community on which path to take. I mean, we do have a lot of directions, a lot of feature requests coming in, so we have to just take that decision and move on. But yeah, as of now, the synthetic test generation is something that gets a lot of interest. We want to make it very stable, very useful, and make sure that we push the limits of the closed-source models and the frameworks to build great test data that's very easy to use. Anything to add, Jithin? Yeah, honestly, right now we have a good base, and we're very curious what we can do with evaluation-driven development, what the extremes of that are. So I'm curious to see what the community comes up with, what you guys and we come up with. Really excited for that. Yeah, let's see what everybody builds, ships, and shares out there, and contributes. Well, thanks so much, Jithin, thanks Shahul, thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us. We're talking alignment with reinforcement learning with AI feedback. If you haven't yet, please like and subscribe on YouTube.
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, for any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well. We will get back to your question, and we will share all the questions today with the RAGAS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time, and as always, keep building, shipping and sharing, and we and the RAGAS guys will definitely keep doing the same. Thanks everybody. See you next time.
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively! Special thanks to LangChain and RAGAS for partnering with us on this event! Event page: https://lu.ma/theartofrag Have a question for a speaker? Drop them here: https://app.sli.do/event/2rLa8RML994YsMQt1KLrJi Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ryzhbvxZtbvQ4BCv5
2024-06-09T21:43:27.950396
https://www.youtube.com/watch?v=KuAn6Fy9UX4
Welcome back to session eight. This is on embedding fine-tuning, and we're going to go ahead and see how we can do this in a tool like Llama Index. Now, this is bringing a couple of things together. We want to align ourselves to everything that we're doing here. We're going to do a quick review, review Llama Index, and then we're going to build a veterinary camelid index. We're going to build a llama index with Llama Index. All right, you guys ready for this? Then we're going to fine-tune it, because vets say some crazy words. Remember, why RAG? Well, because LLMs lie to us, because we need to fact check them, because we need references, and we need to sprinkle some references into our prompts. But also remember that we talked about how important it is, when you have very specialized domains with very specialized language, that as soon as you build up a RAG system, or in the process of building up your first RAG system, you consider fine-tuning embedding models. That's what we're going to do here today. Because the language is so specialized, it's just not anything anybody would ever say to anyone and that you would randomly find on the internet. So let's take a look here. This RAG system, which of course looks like this, combines dense vector retrieval and in-context learning. We're going to leverage Llama Index to build this thing. And we're also going to use Llama Index to fine-tune our embedding model. So recall, Llama Index is a data framework. It's all about that data. Llama Index uses nodes as first-class citizens; NLP documents and PDF documents, aka documents, are the source documents, and those first-class citizens, the nodes, are just chunks of source docs. The parsers allow us to take the docs, chunk the docs, and create those nodes. Okay. Query engines is what it's all about. This is the big idea with Llama Index. So what is a camelid, you might say? Well, camelids include camels, of course, llamas, alpacas, vicunas, guanacos. Hey, look at that. If you were wondering where any of those names came from, they came from the camelid family. And if you're wondering why we don't have any more camelids, well, there's none left in the picture. So maybe that has something to do with it. We've moved on to winds like Mistral and Zephyr. But for this, we're going to look and dig super deep on camelids. Shout out to Ohio State for having really in-depth vet info in their research library on camelids. Apparently, this is where you'll find the International Camelid Institute. And if you're doing work in a place like this, this is the kind of place where you might consider fine-tuning your LLM embeddings, because that's probably definitely going to help improve some of the retrieval and some of the generation. Because otherwise, if you just don't understand the words, you're just going to have a tough time. So building this camelid index, this llama index with Llama Index, looks similar to other indexes that we've built with Llama Index. And if you consider the ways that you might go about improving retrieval, Llama Index is constantly building out these capabilities, and they're often talking about a lot of different ways that you might start to do more interesting and complicated things. And one of those ways is fine-tuning of embeddings.
In this particular case, because we have such specialized language, we're going to fine-tune those embeddings. The way that we fine-tune embeddings is, we're going to, and if we're going to have another joker in here, we're going to have to kick them out again. So bring it. Please mute if you are messing around. The ingredients for fine-tuning embeddings are these question-retrieved context pairs, right? So what we're going to do is actually create these question-retrieved context pairs, and then we're going to take an existing embedding model and train it, so to speak, on camelid research paper context. That's it. And we use sort of a very simple approach here, using a built-in Hugging Face sentence transformers loss function. And it's really not that complicated. What we see when we do this is that our hit rate, our ability to find the appropriate context given any particular query, actually improves. And so if you have very specialized language, you might consider fine-tuning embeddings. And the way you do that, we're going to show you right now. Chris, camelid embeddings. Oh yeah, let's get rocking with some camelids. Okay, hopefully you can see my screen. The basic idea here is we are going to fine-tune our embeddings. So why would we want to fine-tune our embeddings? Well, as Greg said, especially in these kinds of veterinary papers, there's just so much language that we have no idea about, or we don't know how it's related to other tokens in our corpus, or it might have one meaning in kind of common parlance but have a totally different meaning in the case of this specific application. So the thing we need to do is, I'll link the Colab, sure, one second, sorry, here we go, the thing we need to do is fine-tune those embeddings. Right, so first of all, get a bunch of dependencies. Second of all, we're going to grab our OpenAI key. Then we're going to get the camel data from our data repository. We're going to go to high-performance RAG, and we're going to download camel papers test and camel papers train. You can see there's a bunch of crazy papers about camelids, and that's great. What is the intuition behind the question-retrieved answer pair idea to fine-tune the embeddings? Is the loss a binary type of loss? So the way that they do it, actually, is they make the assumption that every other context in the QA pair data set is a counter-example to the found context, or to the selected context. And I can't remember the specific loss function, but I'll bring it up and I'll show you guys. Now that we've got just a lot of papers about camels, or camelids to be more precise, we're going to go ahead and load those. We're going to load those using our simple directory reader, which reads directories, our simple node parser, and our metadata mode. Our simple node parser is going to parse out all of our documents into nodes for us. Yeah, I'll bring up the loss function for sure. Once we have these two corpuses, we're good to go. Now we're going to generate QA embedding pairs, which we're going to do with everyone's favorite, of course, AI. So we're going to use OpenAI's GPT-3.5 Turbo to generate QA embedding pairs. Then we're going to save those as a data set. And then we're going to do the same thing for our validation set. So now we have our validation and we have our train. Everything's good so far.
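As a rough sketch of what the steps described here look like in code, assuming the LlamaIndex finetuning utilities from around llama_index 0.9 (the directory names and import paths are illustrative and may differ in your installed version):

from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
from llama_index.finetuning import generate_qa_embedding_pairs

# Load the camelid papers and parse them into nodes (chunks).
train_docs = SimpleDirectoryReader("camel_papers_train").load_data()
test_docs = SimpleDirectoryReader("camel_papers_test").load_data()
parser = SimpleNodeParser.from_defaults()
train_nodes = parser.get_nodes_from_documents(train_docs)
test_nodes = parser.get_nodes_from_documents(test_docs)

# Use GPT-3.5 Turbo to write a question for each chunk, giving us
# question / retrieved-context pairs for training and validation.
llm = OpenAI(model="gpt-3.5-turbo")
train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
val_dataset = generate_qa_embedding_pairs(test_nodes, llm=llm)
train_dataset.save_json("train_dataset.json")
val_dataset.save_json("val_dataset.json")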
Next, we're going to use the sentence transformers implementation to get BGE small 1.5. It's just a good embeddings model. It's trained on a big corpus, and it performs very well on our task, which is the retrieval task. So that's why we're picking it. The embeddings leaderboards update very frequently, so you can use whichever one you want that performs the best at whatever tasks you need to do. Now we're going to use the sentence transformers fine-tune engine from Llama Index. Thanks, Llama Index. We pass in our training data set, the model we wish to fine-tune, the output path that we wish to have our model saved in, our validation data set, and then the number of epochs we're going to train for. Of course, we could train for more or less time. It's totally up to you. But the idea here is that we have the same kind of training process that we would for a normal model, but this is for a sentence transformers model. And the idea is to kind of drag, right? We have these embeddings. If we just imagine them in 3D space, we know they're kind of in this cloud, and their proximity to each other, or their direction from the origin, is in a particular spot, and we're just kind of dragging them around, or moving them around, re-clustering them in that space in order to align with our actual corpus of documents better. So that's a way you could visualize this, if you were a person who liked to visualize things. Once we do all that preparation, we do everyone's favorite step. We call .finetune, and we see it go. And then we can grab our fine-tuned model out of our fine-tune engine. Now we can set it up as its own embedding model, and what we're going to do now is evaluate that embedding model. So we've created it, and that's good, but we need to evaluate it, and we're going to evaluate it with this. So there are a lot of metrics that you're going to get from this evaluation. We're only going to really care about the MAP at K, the mean average precision at K. I believe it's MAP at five that it reports back to us. The idea here is we just want to make sure we're retrieving the right kinds of documents in the top five documents retrieved, right? So how often are we doing that is kind of what we're caring about here. So we want to, for the most part, always retrieve the correct document in the top five. Now, obviously, we're not going to get to perfect with two epochs of training on a very strange corpus, but we can see with this evaluation, which is all done through wonderful abstractions, thanks to sentence transformers in this case, not Llama Index, that our base unfine-tuned embedding model receives a MAP at 5 of 0.76, and our fine-tuned embedding model receives a MAP at 5 of 0.79. So we do see that there is a real increase between the two. Again, this is two epochs on a very, very strange data set. Ideally, we train for longer in order to get this result even better. But it just goes to show you that even with the smallest amount of effort, we can improve these systems to be better at the tasks we need them to perform. One thing I do want to point out or mention: when you're looking at your MAP at K scores, it is important that you set your retrieval K to be the same value as you see in the metrics. If we have a very high MAP at K, or MAP at five, but we only retrieve three documents, we're potentially shooting ourselves in the foot.
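Continuing that sketch, the fine-tune and evaluation steps might look roughly like this; the output path and epoch count are placeholders, and the sentence-transformers evaluator wiring follows the pattern in the LlamaIndex finetuning docs rather than anything shown verbatim on screen:

from llama_index.finetuning import SentenceTransformersFinetuneEngine
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Wrap BGE-small v1.5 in the fine-tune engine and train for two epochs.
finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en-v1.5",
    model_output_path="camelid_bge_small",
    val_dataset=val_dataset,
    epochs=2,
)
finetune_engine.finetune()
embed_model = finetune_engine.get_finetuned_model()

# Compare base vs fine-tuned retrieval quality (hit rate, MAP@K) on the
# validation set using sentence-transformers' information retrieval evaluator.
def evaluate_st(dataset, model_path, name):
    evaluator = InformationRetrievalEvaluator(
        dataset.queries, dataset.corpus, dataset.relevant_docs, name=name
    )
    return evaluator(SentenceTransformer(model_path))

base_map = evaluate_st(val_dataset, "BAAI/bge-small-en-v1.5", "base")
tuned_map = evaluate_st(val_dataset, "camelid_bge_small", "finetuned")

# As noted above, keep the retriever's top-k aligned with the K you evaluate at,
# e.g. index.as_retriever(similarity_top_k=5) if you are reporting MAP@5.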
So you want to make sure that you align the desired behavior of your retrieval pipeline with the metric that you're looking at. Just to point out the fact that, you know, you might not see, let's say we did RAGUS on this, but we kept it at only three retrieved documents, we might not see an improvement. And that's because we weren't actually looking at the right metric in order to make a decision about which is quote unquote better at that task. But with that, I will kick it on back to Greg. All right. So we saw those numbies going up. That was pretty cool. We didn't even train that long, but it did help. And that's pretty legit. You know, that's kind of the big idea. There it is. In a nutshell, fine-tuning embeddings. Lots of other things we could do for any given RAG system. All sorts of fun retrieval. All sorts of fun different node parser stuff in Lama Index to play with. All sorts of different evaluation things we could potentially do to instrument this thing and measure different numbers, see if they go up too. But that's a wrap for this little sesh. There are many ways to enhance retrieval and thus generation. Fine-tuning is one that you might want to pick up for very specialized domain language. And, you know, an example of that is the VET LLAMA index with LLAMA index. So as we wrap up session eight, the final session, session nine, that's not directly related to you guys presenting what you've got today, is going to be on deployment of your app. Once you've got all the logic, all the brains, all the everything in the RAG system, it's time to serve that thing up to users. And so we're going to see how to wrap everything in a chainlet front end, deploy it to Hugging Face, and make sure that you've got that killer demo that you can show live by the end of the day, night, or morning, depending on where you are in the world, as we start to make it into the final hours of the first annual Chativersary Hackathon.
Session 8: Fine-Tuning Embedding Models for RAG Systems
946
AI Makerspace
20231204
What you'll learn this session: - How to tune open-source embedding models to align with specialized language, like that used for research Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Apply for one of our AI Engineering Courses today! https://www.aimakerspace.io/cohorts
2024-06-09T21:10:45.244781
https://www.youtube.com/live/Anr1br0lLz8?si=qz792SKvBHbY-n4N
Hey, Wiz, is there a way to know whether what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. So there are no absolutes, but is there a way to know that changes that we make to the system, to our RAG application, make the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah, I think like a RAG assessment we can kind of make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg, and we are here to talk RAG eval today. We're AI Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout-out in the chat, where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever. We're also going to outline how you can actually assess your RAG systems using the RAG assessment, or RAGAS, framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers, Jithin and Shaul, joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLM Wizard and CTO at AI Makerspace, back for demos real soon. So let's get into it, everybody. Today we're talking RAG evaluation, this black art that everybody is really, really focused on as they start to build, prototype, and deploy these systems to production in 2024. As we align ourselves to this session, what we want to get out of this is: what's up with this Langchain v0.1 that just came out? We want to understand how we can build a RAG system with the latest syntax and then also evaluate it. There are a lot of changes happening on the Ragas side just as on the Langchain side. Finally, we want to see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into Langchain, then we'll go into a high-level view of RAG and see exactly where the different Langchain components fit in. Finally, we're going to see what you all came here for today, the RAGAS metrics and how to implement the RAGAS framework. So we'll be building, we'll be evaluating, we'll be improving today, and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context, that are so-called context-aware, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. And we're doing that in the context of v0.1.0.
The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. 
You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
Just what we need. When it comes to our FAISS vector store, what we need is to get that from Langchain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the VectorStore. It's just coming from the community channel. We'll pass in our split documents as well as our embedding model, and away we go. Next, we're gonna create a retriever. This is the same as we've always done, .as_retriever() on our VectorStore. Now we can interact with it through that retrieval API. We can test it to see it working. Why did they change to version 0.1.0? And we get some relevant documents to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating that AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull it from the hub. So there are lots of different community-created or Langchain-created hubs. The idea is that you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own. You don't have to use one from the hub. And so we're just going to create the simple one: answer the question based only on the following context. If you cannot answer the question with the context, please respond with I don't know. That's a classic. We pass in our context, we pass in our question, away we go. And you'll notice that this is exactly the same as it used to be. Let's go, Langchain. Now we'll set up our basic QA chain. I've left a lot of comments here in the implementation of this LCEL chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at: we can create this chain using LCEL, and we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with RAGAS. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase a little bit more of what we can do to do this a little bit easier while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out, and we can see: what are the major changes in v0.1.0? The major changes are... and it gives the information. It goes on, it gives a correct answer. That's great. And we have: what is LangGraph? And basically, the response from the LLM is I don't know, which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step, now that we have this base chain, would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was Langchain v0.1.0 RAG. Now let's talk RAG assessment. The RAGAS framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, and we can assess the different pieces that helped generate that answer within the RAG system.
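Before moving on to evaluation, here is a compact sketch of the pipeline Wiz just walked through, written against the Langchain v0.1 split packages. The blog URL, chunk sizes, and prompt wording are illustrative stand-ins rather than the exact notebook values:

# pip install -qU langchain langchain-core langchain-community langchain-openai langchainhub faiss-cpu
from operator import itemgetter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Load the v0.1.0 blog post and split it into chunks.
docs = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/").load()
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks into a FAISS index and wrap it as a retriever.
vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings(model="text-embedding-ada-002"))
retriever = vectorstore.as_retriever()

# Prompt that only allows answers grounded in the retrieved context.
prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo")

# LCEL chain that returns the retrieved context alongside the response,
# which RAGAS will need for evaluation later.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | RunnablePassthrough.assign(response=prompt | llm)
)
result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)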
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
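Written out, the two ratio-style metrics described above come down to the following (paraphrasing the RAGAS documentation; context precision and answer relevancy use ranking- and embedding-based formulations rather than simple ratios):

\text{faithfulness} = \frac{\text{number of claims in the generated answer that can be inferred from the retrieved context}}{\text{total number of claims in the generated answer}}

\text{context recall} = \frac{\text{number of ground truth sentences attributable to the retrieved context}}{\text{total number of sentences in the ground truth answer}}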
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
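A minimal sketch of the test set generation and response collection just described, following the ragas 0.1 API (the test size and distribution ratios are illustrative, the eval_documents and rag_chain names carry over assumptions from the earlier sketch, and column names can differ slightly between ragas versions):

from datasets import Dataset
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

# Generate question / context / ground-truth rows from a separately chunked
# copy of the blog; GPT-3.5 Turbo 16k generates and GPT-4 acts as the critic.
generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    eval_documents,
    test_size=20,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()

# Run every generated question through the RAG chain, collecting answers and contexts.
answers, contexts = [], []
for question in test_df["question"]:
    result = rag_chain.invoke({"question": question})
    answers.append(result["response"].content)
    contexts.append([doc.page_content for doc in result["context"]])

response_dataset = Dataset.from_dict({
    "question": test_df["question"].tolist(),
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_df["ground_truth"].tolist(),
})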
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
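And a sketch of the evaluation call plus the multi-query upgrade being compared here, continuing from the pieces above (the ragas metric imports follow the 0.1 docs; create_stuff_documents_chain and create_retrieval_chain are the v0.1 helpers being described, and retrieval_prompt is an assumed prompt with {context} and {input} slots):

from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
    answer_correctness,
)
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

metrics = [faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness]

# Aggregate scores for the baseline chain, plus a per-question breakdown
# we can slice by evolution_type (simple / reasoning / multi_context).
baseline_results = evaluate(response_dataset, metrics)
baseline_df = baseline_results.to_pandas()

# Wrap the same base retriever in a multi-query retriever and rebuild the
# chain with the higher-level helpers instead of hand-written LCEL.
multiquery_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)
document_chain = create_stuff_documents_chain(llm, retrieval_prompt)
multiquery_chain = create_retrieval_chain(multiquery_retriever, document_chain)

result = multiquery_chain.invoke({"input": "What is LangGraph?"})
print(result["answer"])  # result["context"] holds the retrieved documents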
We're going to do the same process we did before, cycling through each of the questions in our test set and getting responses and context for them, and then we're going to evaluate across the same metrics. You'll notice that our metrics have definitely changed, so let's look a little more closely at how they've changed. It looks like we've gotten better at our faithfulness metric, we've gotten significantly better at answer relevancy, which is nice, and we've gotten a little bit better at context recall. We've taken some small hits: a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how we would improve this so answer correctness doesn't get impacted by this change. But at least we know in what ways and how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGAS. And each of these metrics corresponds to specific parts of our RAG application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our RAG systems not just sort of by thinking, I think that's better, the LangGraph question got answered better, but actually we can go and show our bosses, our investors, anybody that might be out there listening: hey, look, we have a more faithful system, check it out, we went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take things off the shelf that we were trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGAS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics, even different end-to-end metrics. And a big shout-out to everybody today that supported our event: shout out to Langchain, shout out to Ragas, and shout out to everybody joining us live on YouTube. With that, it's time for Q&A, and I'd like to welcome Wiz back to the stage, as well as Jithin and Shaul from Ragas, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jithin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich? Yeah. You want to go for it? I don't know, I didn't actually understand what you mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination in RAG systems? That's it.
The line between memorization and hallucination, I don't know where to draw that particular line. It seems like what it means is the usage of internal knowledge versus, you know, there are situations in RAG where knowledge is a continually evolving thing, right? So maybe the LLM thinks that a person is still alive, but the person died yesterday or something. Now, if that particular thing is read using Wikipedia or something, there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia says. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard problem to crack, and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require when using LLMs to build RAG is that we can align the LLMs so that they answer only from the given grounded text data and not from the internal knowledge, or there must be a high preference for the grounded text data compared to what is there in the LLM's internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided, basically. I mean, when it comes to memorization versus hallucination, I think the most important thing is, you know, you could maybe frame memorization as a slightly less negative form of hallucination, because it's likely to be closer to whatever the training data was. But in terms of a RAG application, both are bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context-aware chunking. Can we expect significant improvements of RAG results using smart chunking? What do you think, Jithin? Is this something that we can expect improvements in? Yeah, so one thing that we see when we're building RAG systems is that how you're formatting the data is where most of the problems are. If you take some time to clean up the data and format the data in a way that actually makes it easier for your RAG, the performance difference is really great, because with models right now, if you're using a very stable model and you provide it with the correct context, the model will be able to use the information in the context to get it. So all these tips and tricks to optimize that, even like Chris was using the multi-query method, right? That's also another trick to make sure that you get different context from different perspectives into the final answer. So all these different types of tricks can be used, and this is actually why we started this also: we wanted to evaluate all the different tricks that are out there and try to see which works best, because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems, and just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more, because what goes in comes out. So it's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul.
So Matt Parker asks, and maybe I'll send this one over to Shaul: can you compare TruLens and RAGAS? This is the first I've heard of TruLens; maybe you can tell us a little about what they're doing, what you're doing, and the overlap you see. Sure. TruLens has been around for a while for evaluating ML applications, and they cover a lot of application types. RAGAS is currently mostly focused on RAG, because we wanted to crack the application most people care about, so we're mostly doing things that help people evaluate and improve their RAG systems. We're not building a UI. We're largely focused on the integrations side, providing integrations to players like LangSmith so people can trace and inspect their runs there, rather than building a UI on top of RAGAS. So RAGAS mainly offers metrics, plus features like the synthetic test data generation you've seen, to help you evaluate your RAG pipelines. I don't think TruLens has a synthetic test data generation feature, and that's something our developers have really liked, because it saves a ton of time; nobody really wants to go and hand-label hundreds of documents. It's a boring job. So we're trying to double down on the things we've seen developers really like, and we're trying to stay true to the open-source community as well.
Nice, very cool. Rad asks, and I'll send this one over to you, Wiz: can you combine the multi-query retriever with a conversational retrieval chain? Sure. Basically, LangChain works in a way where you can combine any retriever inside of any chain. A retriever is a slot that we need to fill with something, so if you want to use a more complex retrieval process, or combine many different retrievers in an ensemble, you can do that with basically any chain. The conversational retrieval chain is just looking for a retriever, so as long as yours can be accessed through the retriever API, it's going to work fine. I would add, though, that for the conversational retrieval chain you'll want to use the 0.1.0 version, which has been implemented with LCEL. But other than that, you're good to go.
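To make Wiz's "any retriever slots into any chain" point concrete, here is a hedged, LCEL-style sketch of wrapping a base retriever in a multi-query retriever and dropping it into a simple retrieval chain. It assumes a vectorstore already exists and reuses a prompt like the grounded one above; it leaves out chat-history handling, so it illustrates the simple rather than the fully conversational version.

    from langchain.retrievers.multi_query import MultiQueryRetriever
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

    # Wrap the base retriever; the LLM generates alternative phrasings of the query.
    multiquery_retriever = MultiQueryRetriever.from_llm(
        retriever=vectorstore.as_retriever(),  # assumed existing vector store
        llm=llm,
    )

    def format_docs(docs):
        # Join retrieved documents into a single context string for the prompt.
        return "\n\n".join(d.page_content for d in docs)

    rag_chain = (
        {"context": multiquery_retriever | format_docs, "question": RunnablePassthrough()}
        | grounded_prompt   # any prompt with {context} and {question} slots
        | llm
        | StrOutputParser()
    )

    print(rag_chain.invoke("What changed in LangChain v0.1.0?"))

An ensemble retriever or a history-aware retriever could be swapped into the same context slot without touching the rest of the chain.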
OK. And back to this idea of smart chunking and a smart hierarchy of data. We often talk in our classes about the black art of chunking; everybody asks, well, what chunk size should I use? So Sujit asks, and maybe I'll send this one over to you, Jithin: I know the chunk size matters. Are there guidelines for chunking that you're aware of, or that you recommend when people are building RAG systems? Yeah, so I don't have a very precise guideline; maybe Shaul can back me up here. But from personal experience: A, do the evaluations, and B, combine multiple levels. You basically create a hierarchy where you have the different chunks, and then you summarize the different concepts at each level so that the core ideas are represented in the hierarchy. That has been very helpful. As for exact chunk size, I haven't seen it show up clearly in the metrics as such, but all of the recursive summarization has helped, and I think LlamaIndex has a few retrievers along those lines. Shaul, what do you think?
Yeah, just adding some points to that. I think there is no one chunk size that fits all types of documents and all types of text data; it's relative. The general rule of thumb is to ensure there's enough context that the chunk makes sense on its own: if a person reads an individual chunk in isolation, it should make sense. How do you achieve that? You can write a set of heuristics, for example determining the document type and adjusting the chunking accordingly. And moving on from heuristics, I think we might even see small models, very small models, that are capable of determining chunk boundaries smartly, so you don't have to rely on heuristics at all. That's a more generalizable way of doing it, and I think that's where chunking is going in the future; hopefully the problem gets solved like that.
Yeah, I really like this idea of making sure each individual chunk makes sense before moving up a level and thinking about, OK, what exactly is the hierarchical, parent-document, multi-query, whatever-it-is setup you're doing. Each chunk should make sense, and that's going to be dependent on your data. I really liked that.
OK, so related to that, I want to go to this embedding model question in the Slido from Ron. It's similar to the chunking question in that people always want the answer: so, what chunk size? Here, Ron asks: which embedding models should I be using when I develop a system? Any emergent models or techniques that show significant improvements? Maybe Shaul, if you want to continue here. Sure. Again, there's no one-size-fits-all answer; it depends on a lot of factors. The first question will be open source or closed source. There are a lot of open-source players now, even rivaling OpenAI's embeddings; I think the recently released M3 embedding is one of the most powerful open-source embedding models we've seen. So it's a set of questions you have to answer. If you want the easy way to build a baseline RAG system, OpenAI embeddings are a good place to start; you don't have to worry about anything else, and then you can iteratively improve from there. That's also where RAGAS comes in: you now have an abundance of embeddings to choose from and you want a way to compare them, so you can use RAGAS to compare the different embeddings and choose the one that fits you, and you're done. There it is. Just closing up this topic on chunks and embedding models.
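Shaul's "use RAGAS to compare embeddings" suggestion might look roughly like the sketch below: rebuild the index with each candidate embedding model and re-run the same evaluation loop. The specific model identifiers and the chunks variable (the already-split documents) are illustrative assumptions, not recommendations from the speakers.

    from langchain_community.vectorstores import FAISS
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_openai import OpenAIEmbeddings

    # Candidate embedding models: one closed-source baseline, one open-source example.
    candidates = {
        "openai-ada-002": OpenAIEmbeddings(model="text-embedding-ada-002"),
        "bge-m3": HuggingFaceEmbeddings(model_name="BAAI/bge-m3"),  # example open-source model
    }

    for name, embeddings in candidates.items():
        # Rebuild the index with this embedding model, keeping everything else fixed.
        vectorstore = FAISS.from_documents(chunks, embeddings)
        retriever = vectorstore.as_retriever()
        # ...rebuild the chain around this retriever and re-run the RAGAS evaluate() loop...
        print(f"evaluated embeddings: {name}")

Because only the embedding model changes between runs, differences in the retrieval metrics can be attributed to that choice.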
Wiz, I wonder, why did you choose Ada? And why did you choose, what was it, 750 with that overlap? Any particular reason? Zero thought went into those decisions. We used Ada because it's the best OpenAI embedding model we currently have implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considered, more mindful approach, and so we just picked them. The thing I really want to echo from what we've heard so far is that when we're thinking about our index, our vector store, we really want each entry to represent an individual quantum of information. The closer we can get to that, the better it will be, and then we can add that hierarchy on top. And I think what was said about using models to determine chunk boundaries at some point is definitely a future we can imagine living in soon.
Yeah, and again we come back to this data-centric idea. It's easy to get the RAG system set up and instrumented with RAGAS, but you're going to get the improvements, you're going to get the thing really doing what your users need, by doing the hard, kind of boring data work, the data engineering and data science on the front end, that you just can't outsource to AI and have to deal with yourself.
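For reference, the "naive baseline" Wiz describes is roughly a fixed-size splitter plus off-the-shelf embeddings, something like the sketch below. The chunk size comes from the discussion; the overlap value and the docs variable are assumptions for illustration.

    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_openai import OpenAIEmbeddings

    # Fixed-size splitting with no document-aware logic: the "zero thought" baseline.
    splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=100)
    chunks = splitter.split_documents(docs)  # docs = the loaded blog-post documents

    embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
    # Build the index from chunks + embeddings, measure with RAGAS, and only then
    # start tuning chunk size, overlap, or the splitting strategy itself.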
OK, one more "what's the answer" kind of question, and I'll send this one to Jithin. If somebody picks up RAGAS and builds a RAG system and asks, OK, which RAGAS metric should I use, which one should I look at first, what would you say? Is there a starting point or a sequence you'd recommend, or is the jury still out on this? So first of all, just try it out with all of the metrics. Figuring out the state of each of these components gives you an idea of where you can make an improvement as fast as possible. If your generator is bad, maybe try out a few other LLMs. If your retriever is bad, figure out what's actually happening in the retriever part: is it context relevancy, or is it the recall that's bad? So start by trying all the metrics you have, and then focus on the ones that are worst. Once you understand what the metrics are telling you, you'll get an idea of what else you can try in order to improve. Cross out the low-hanging fruit first, patch up the easiest things fast, and that's how you progressively improve over time. But like I said, it's not the absolute values that matter, it's the trends; you guys did a good job explaining that. So go for the easiest fixes first and keep that trend moving in the upward direction.
Yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to the retriever; if you're getting low generation metrics, maybe try a different model. It's so simple when we can break it down like this. And just a shout-out to Manny out there: that was an attempt to answer one of your many questions today, and we'll see if we can get to some more on LinkedIn. This idea of getting your system instrumented so you can look at different pieces of it and try to improve them, there's a lot of content that still needs to be made on that. These guys are open-source first, open-source forward, and we'd love to see folks in the community start to put guides together for how to break down and use RAGAS in sophisticated ways.
So, last question, guys, since we're at time: what's next for RAGAS in 2024? Maybe one of you wants to take this and let us know what to expect from you heading forward this year. Shaul, do you want to take it? Yeah, that's a tricky question. We want to go where the community takes us. We're doubling down on things like synthetic test data generation, and there's a lot of interest in expanding RAGAS to other LLM tasks as well, so there are all these interesting directions to take. Hopefully we'll get more signals from the community on which path to take. We do have a lot of directions and a lot of feature requests coming in, so we have to make those decisions and move on. But as of now, synthetic test data generation is getting a lot of interest; we want to make it very stable and very useful, and push the limits of the closed-source models and the framework together so that generating a great test data set is very easy to do and easy to use. Anything to add, Jithin? Yeah, honestly, right now we have a good base, and we're very curious what we can do with evaluation-driven development and what the extremes of that are. So I'm curious to see what the community comes up with, and what we come up with together. Really excited for that.
Yeah, let's see what everybody builds, ships, and shares out there, and contributes. Well, thanks so much, Jithin, thanks, Shaul, thanks, Wiz. We'll go ahead and close it out for today, and thanks, everybody, for joining us. Next week you can continue learning with us; we're talking alignment with reinforcement learning from AI feedback. If you haven't yet, please like and subscribe on YouTube.
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand-new AI Engineering Bootcamp that covers everything you need to prompt-engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, plus many more. You can check out the syllabus and download the detailed schedule for more information. For any feedback from today's event, we'll drop a feedback form in the chat. I also want to shout out Jonathan Hodges: we will get back to your question, and we'll share all of today's questions with the RAGAS team to see if we can get follow-ups for everybody who joined us and asked great questions. So until next time, as always: keep building, shipping, and sharing, and we and the RAGAS team will definitely keep doing the same. Thanks, everybody. See you next time.
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively! Special thanks to LangChain and RAGAS for partnering with us on this event! Event page: https://lu.ma/theartofrag Have a question for a speaker? Drop them here: https://app.sli.do/event/2rLa8RML994YsMQt1KLrJi Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ryzhbvxZtbvQ4BCv5
2024-06-09T21:25:23.164053
https://www.youtube.com/watch?v=EeZIKQmWSXg
" Hey, Wiz. So if I'm a super beginner trying to get into fine tuning, should I use Hugging Face and(...TRUNCATED)
null
null
null
null
null
2024-06-09T19:20:46.574940
https://www.youtube.com/watch?v=Anr1br0lLz8
" Hey, Wiz, is there a way to know what comes out of any RAG application that we build is right or c(...TRUNCATED)
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
"GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evalu(...TRUNCATED)
2024-06-09T21:02:21.809985
https://www.youtube.com/live/XOb-djcw6hs
" Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? (...TRUNCATED)
null
null
null
null
null
2024-06-09T20:04:44.501496
https://www.youtube.com/live/XOb-djcw6hs
" Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? (...TRUNCATED)
Fine-tuning with QLoRA (Quantized Low-Rank Adaptation)
3,710
AI Makerspace
20240111
"​GPT-4 Summary: Discover how to supercharge your LLM application development by mastering quantiz(...TRUNCATED)
2024-06-09T20:46:24.159829
https://www.youtube.com/live/XOb-djcw6hs
" Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? (...TRUNCATED)
null
null
null
null
null
2024-06-09T19:37:57.768795
https://www.youtube.com/watch?v=EeZIKQmWSXg
" Hey, whiz. Hey Wiz, so if I'm a super beginner trying to get into fine-tuning, should I use Huggin(...TRUNCATED)
null
null
null
null
null
2024-06-09T20:20:27.378549