LLMGameHub: How We Won the Gradio Agents & MCP Hackathon 2025

📝 Note: At the time of winning the Gradio Agents & MCP Hackathon 2025, our project was known as LLMGameHub. It has since evolved into Immersia, reflecting our broader vision for generative gaming experiences.
Create. Play. Imagine.
In June 2025, my teammate and I participated in the international Gradio Agents & MCP Hackathon — a week of experimentation with multi-agent systems and MCP. In the Agentic Demo Showcase track, participants had one goal: to demonstrate the real capabilities of multi-agent LLM applications. That's how LLMGameHub was born — a platform where anyone can create an interactive game in minutes based on their own ideas and texts.
How the Idea Was Born
In the beginning was the word, and the word was creativity. We pondered deeply how to showcase the full power of a multi-agent system while ensuring the product appealed to as many people as possible. What do people enjoy? Music, movies, games… Games! That’s what people genuinely love, blending the best aspects of all creative forms. Thus, the idea was born: to unite generative models for text, images, and music into a cohesive ecosystem, enabling users to feel like both directors and writers, crafting their own stories. The hackathon was the perfect opportunity to turn this ambitious concept into a fully operational prototype.
What is LLMGameHub?
LLMGameHub is a playground for generative adventures. You describe a world, choose a hero and a genre—and immediately dive into the game. The story unfolds on-screen: scenes are generated by a large language model, first-person images are dynamically created, and adaptive music accompanies each plot twist. All of this is powered by a Gradio interface and a backend pool of agents.
Each session lasts around five minutes, featuring several choices, interactive forks, and an ending.
🚀 How Does It Work?
At the core of LLMGameHub is a set of specialized agents, each responsible for a particular aspect of the game:
- Story Agent generates plot scenes and player action options. We use LangGraph and LangChain to build a dialogue graph that directs the narrative flow.
- Image Agent dynamically creates or modifies images using Google Gemini, analyzing scenes and generating prompts to visualize them from a first-person perspective.
- Music Agent produces atmospheric soundtracks via Google Lyria, dynamically altering the composition's mood according to the narrative.
- State Manager stores game progress in Redis, ensuring logical story continuity and accounting for previous player actions.
All agents operate asynchronously, with communication managed through Gradio Blocks, allowing users to see text, select actions, receive images, and hear music with minimal latency.
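The concurrency pattern behind this can be sketched in a few lines. The agent functions below are illustrative stand-ins (the names and bodies are ours, not the project's code): the story must be generated first, after which the image and music agents can run in parallel.

```python
import asyncio

# Hypothetical stand-ins for the real agents; sleeps simulate model latency.
async def story_agent(choice: str) -> str:
    await asyncio.sleep(0.05)          # simulate an LLM call
    return f"Scene after '{choice}'"

async def image_agent(scene: str) -> str:
    await asyncio.sleep(0.05)          # simulate image generation
    return f"image for: {scene}"

async def music_agent(scene: str) -> str:
    await asyncio.sleep(0.05)          # simulate music generation
    return f"music for: {scene}"

async def run_turn(choice: str) -> dict:
    # The scene is needed by both downstream agents, so it runs first;
    # image and music then proceed concurrently.
    scene = await story_agent(choice)
    image, music = await asyncio.gather(image_agent(scene), music_agent(scene))
    return {"scene": scene, "image": image, "music": music}

result = asyncio.run(run_turn("open the door"))
```

With this shape, a turn costs roughly the story latency plus the slower of the two media generations, rather than the sum of all three.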
Example Node in llm_graph.py
import asyncio

async def player_step(state: GraphState) -> GraphState:
    # Save the player's choice
    await update_state_with_choice(state.user_hash, state.choice_text)

    # Check whether the game has reached an ending
    ending = await check_ending(state.user_hash)
    if ending["ending_reached"]:
        state.ending = ending
        return state

    # Generate the next scene, then launch image and music generation in parallel
    next_scene = await generate_scene(state.user_hash, state.choice_text)
    await asyncio.gather(
        generate_scene_image(state.user_hash, next_scene),
        generate_music_prompt(state.user_hash, next_scene),
    )
    state.scene = next_scene
    return state
This fragment shows how the graph advances after a player's choice: once the next scene is ready, its image and music are generated concurrently.
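For context, here is one way the state helpers used above might look. This is a minimal sketch, assuming a per-session record keyed by `user_hash` and a simple turn limit as the ending condition; an in-memory dict stands in for Redis, and the bodies are our illustrative guesses, not the project's actual implementation.

```python
import asyncio

# A dict stands in for Redis in this sketch; the real helpers would use a
# Redis client. Function names mirror the fragment above; bodies are guesses.
_store: dict[str, dict] = {}

async def update_state_with_choice(user_hash: str, choice_text: str) -> None:
    # Append the choice to the session history and advance the turn counter.
    session = _store.setdefault(user_hash, {"choices": [], "turn": 0})
    session["choices"].append(choice_text)
    session["turn"] += 1

async def check_ending(user_hash: str, max_turns: int = 5) -> dict:
    # Hypothetical ending rule: the story ends after max_turns choices.
    session = _store.get(user_hash, {"turn": 0})
    return {"ending_reached": session["turn"] >= max_turns,
            "turn": session["turn"]}

async def demo() -> dict:
    for choice in ["north", "east", "fight", "run", "hide"]:
        await update_state_with_choice("user-1", choice)
    return await check_ending("user-1")

ending = asyncio.run(demo())
```

Keeping all session state behind two small async helpers like these is what lets the graph node stay short and side-effect free apart from the awaited calls.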
User Interface
Our goal was to make playing as enjoyable as story creation. The Gradio-based builder lets you specify the setting, main character, and genre of your future story. After clicking "Start Game," the user instantly immerses themselves in the narrative: text, background illustrations, interactive action choices, and dynamic music.
Secret of Success
The main challenge was integrating multiple generative services while ensuring acceptable response times. We optimized prompts, leveraged asynchronous programming, executed parallel generation requests, and stored intermediate results in Redis. This approach minimized response times and significantly enhanced user experience.
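The caching idea can be illustrated with a short sketch. Here a dict again stands in for Redis, `expensive_generate` is a placeholder for a real model call, and keys are hashes of the prompt, so repeated prompts within a session never trigger a second generation; this is our illustration of the pattern, not the project's code.

```python
import asyncio
import hashlib

_cache: dict[str, str] = {}   # stand-in for Redis
calls = 0                     # counts how often the "model" actually runs

async def expensive_generate(prompt: str) -> str:
    # Placeholder for a real generation request.
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # simulate model latency
    return f"output for {prompt}"

async def cached_generate(prompt: str) -> str:
    # Key the cache on a hash of the prompt; generate only on a miss.
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = await expensive_generate(prompt)
    return _cache[key]

async def demo():
    a = await cached_generate("forest scene")
    b = await cached_generate("forest scene")  # served from cache
    return a, b

a, b = asyncio.run(demo())
```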
What’s Next?
We've already published the project on Hugging Face Spaces and continue its development by adding new genres and improving music generation. A demo of the project is available on YouTube.
From LLMGameHub to Immersia
While LLMGameHub was our starting point and the name under which we achieved victory at the Gradio Agents & MCP Hackathon, our ambitions have expanded. To better reflect our vision—immersive, creative AI-powered gaming—we rebranded the project to Immersia. This new identity emphasizes deeper engagement, creativity, and an expanding set of generative gaming capabilities, which we'll continue to enhance moving forward.
Conclusion
Participating in the hackathon was a true adventure for us. We became acquainted with powerful tools, broadened our technical horizons, and discovered that multi-agent applications can be both useful and incredibly engaging. Winning came as a genuine surprise (though we certainly believed in our abilities!) and motivated us to continue developing both the project and ourselves in this exciting field.
We thank the organizers for the opportunity to showcase our talents and invite everyone to support the continued evolution of Immersia on our website — your feedback will help us make it even better!