Task 1: Defining your Problem and Audience
You are an AI Solutions Engineer.
What problem do you want to solve? Who is it a problem for?
✅ Deliverables
- Write a succinct 1-sentence description of the problem
- Write 1-2 paragraphs on why this is a problem for your specific user
Task 2: Propose a Solution
Now that you’ve defined a problem and a user, there are many possible solutions.
Choose one, and articulate it.
✅ Deliverables
- Write 1-2 paragraphs on your proposed solution. How will it look and feel to the user?
- Describe the tools you plan to use in each part of your stack. Write one sentence on why you made each tooling choice.
- LLM
- Embedding Model
- Orchestration
- Vector Database
- Monitoring
- Evaluation
- User Interface
- (Optional) Serving & Inference
- Where will you use an agent or agents? What will you use “agentic reasoning” for in your app?
Task 3: Dealing with the Data
You are an AI Systems Engineer. The AI Solutions Engineer has handed off the plan to you. Now you must identify some source data that you can use for your application.
Assume that you'll be doing at least RAG over a document source (e.g., a PDF), combined with general agentic search (e.g., a search API like Tavily or SERP).
Do you also plan to do fine-tuning or alignment? Should you collect data, use Synthetic Data Generation, or use an off-the-shelf dataset from HF Datasets or Kaggle?
✅ Deliverables
- Describe all of your data sources and external APIs, and describe what you’ll use them for.
- Describe the default chunking strategy that you will use. Why did you make this decision?
- [Optional] Will you need specific data for any other part of your application? If so, explain.
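As a baseline for the chunking deliverable, a fixed-size character chunking strategy with overlap can be sketched in plain Python. The chunk size and overlap values below are illustrative assumptions, not values prescribed by this assignment:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap preserves context that would otherwise be cut at chunk
    boundaries, which helps retrieval match queries spanning two chunks.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

document = "word " * 300  # stand-in for extracted PDF text
chunks = chunk_text(document.strip(), chunk_size=500, overlap=50)
print(len(chunks), len(chunks[0]))  # → 4 500
```

Frameworks like LangChain offer more sophisticated splitters (e.g., recursive splitting on paragraph and sentence boundaries); the point of a default strategy is to have a measurable baseline you can later justify changing.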
Task 4: Building a Quick End-to-End Prototype
✅ Deliverables
- Build an end-to-end prototype and deploy it to a Hugging Face Space (or other endpoint)
Task 5: Creating a Golden Test Data Set
You are an AI Evaluation & Performance Engineer. The AI Systems Engineer who built the initial RAG system has asked for your help and expertise in creating a "Golden Data Set" for evaluation.
✅ Deliverables
- Assess your pipeline using the RAGAS framework, including the key metrics: faithfulness, response relevancy, context precision, and context recall. Provide a table of your output results.
- What conclusions can you draw about the performance and effectiveness of your pipeline with this information?
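RAGAS computes these metrics with LLM judges, but the intuition behind context recall can be illustrated with a simplified, non-LLM stand-in (this is NOT the actual RAGAS implementation): the fraction of ground-truth sentences that are supported by the retrieved contexts.

```python
def simple_context_recall(ground_truth: str, contexts: list[str]) -> float:
    """Approximate context recall: fraction of ground-truth sentences
    whose content words all appear somewhere in the retrieved contexts.

    RAGAS itself uses an LLM judge for sentence attribution; this
    lexical check only illustrates the shape of the metric.
    """
    context_words = set(" ".join(contexts).lower().split())
    sentences = [s.strip() for s in ground_truth.split(".") if s.strip()]
    if not sentences:
        return 0.0
    supported = 0
    for sentence in sentences:
        words = set(sentence.lower().split())
        if words and words <= context_words:
            supported += 1
    return supported / len(sentences)

contexts = ["the eiffel tower is in paris", "it was completed in 1889"]
score = simple_context_recall(
    "the eiffel tower is in paris. it opened in 1900.", contexts
)
print(score)  # → 0.5 (second sentence is not supported by the contexts)
```

A low context recall points at retrieval (the right chunks never reached the LLM), while low faithfulness points at generation (the LLM said things the contexts don't support) — separating the two is what makes the metric table actionable.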
Task 6: Fine-Tuning Open-Source Embeddings
You are a Machine Learning Engineer. The AI Evaluation and Performance Engineer has asked for your help to fine-tune the embedding model.
✅ Deliverables
- Swap out your existing embedding model for the new fine-tuned version. Provide a link to your fine-tuned embedding model on the Hugging Face Hub.
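Once the fine-tuned model is on the Hub, swapping it in is typically just a change of model name in your embedding configuration; retrieval still ranks chunks by cosine similarity. The ranking step itself can be sketched with the standard library (the vectors below are toy values, and the model-loading call is deliberately omitted):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_chunks(query_vec: list[float], chunk_vecs: list[list[float]]) -> list[int]:
    """Return chunk indices sorted by similarity to the query, highest
    first -- the core of dense retrieval, regardless of which embedding
    model produced the vectors."""
    scores = [cosine_similarity(query_vec, v) for v in chunk_vecs]
    return sorted(range(len(chunk_vecs)), key=lambda i: scores[i], reverse=True)

# Toy 3-d "embeddings": fine-tuning should move relevant chunks
# closer to their queries in this space.
query = [1.0, 0.0, 0.5]
chunks = [[0.9, 0.1, 0.4], [0.0, 1.0, 0.0], [0.5, 0.5, 0.5]]
print(rank_chunks(query, chunks))  # → [0, 2, 1]
```

Remember to re-embed and re-index your corpus with the new model — vectors from the old and new models live in different spaces and cannot be mixed in one index.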
Task 7: Assessing Performance
You are the AI Evaluation & Performance Engineer. It's time to assess all options for this product.
✅ Deliverables
- How does the performance compare to your original RAG application? Test the fine-tuned embedding model using the RAGAS framework to quantify any improvements. Provide results in a table.
- Articulate the changes that you expect to make to your app in the second half of the course. How will you improve your application?
Your Final Submission
Please include the following in your final submission:
- A public (or otherwise shared) link to a GitHub repo that contains:
- A 5-minute (OR LESS) Loom video of a live demo of your application that also describes the use case.
- A written document addressing each deliverable and answering each question
- All relevant code
- A public (or otherwise shared) link to the final version of your public application on Hugging Face (or other)
- A public link to your fine-tuned embedding model on Hugging Face