WebGen-LM
WebGen-LM is a code language model specifically trained to generate interactive and functional websites from scratch. It is trained on Bolt.diy trajectories generated from a subset of the WebGen-Bench training set (🤗 luzimu/WebGen-Bench), and was introduced in the paper WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch.
The training data and code can be found at WebGen-Bench (GitHub).
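If you want to inspect the underlying benchmark data, the dataset can be pulled from the Hub with the `datasets` library. This is a minimal sketch assuming the default configuration; split and column names vary, so check the dataset card for the exact schema:

```python
from datasets import load_dataset

# Load WebGen-Bench from the Hugging Face Hub (default configuration assumed;
# see the dataset card for the actual splits and fields).
ds = load_dataset("luzimu/WebGen-Bench")

print(ds)  # DatasetDict listing the available splits
first_split = next(iter(ds))
print(ds[first_split][0])  # inspect the first record
```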
The WebGen-LM family of models is as follows:

| Model | HF Link |
|---|---|
| WebGen-LM-7B | 🤗 luzimu/WebGen-LM-7B |
| WebGen-LM-14B | 🤗 luzimu/WebGen-LM-14B |
| WebGen-LM-32B | 🤗 luzimu/WebGen-LM-32B |
You can use WebGen-LM with the `transformers` library to generate website code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "luzimu/WebGen-LM-32B"  # You can also use WebGen-LM-7B or WebGen-LM-14B

# Load the tokenizer and model; device_map="auto" places weights on available devices
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Example for website generation
prompt = """Generate the complete HTML, CSS, and JavaScript code for a responsive website.
The website should be a simple landing page for a coffee shop.
It needs:
1. A navigation bar at the top with "Home", "Menu", "About Us", and "Contact" links.
2. A hero section with a background image, a title "Brewing Perfection", and a call-to-action button "View Our Menu".
3. A menu section displaying at least 3 coffee items with their names and prices.
4. An "About Us" section with a brief description of the coffee shop.
5. A "Contact" section with an address, phone number, and a simple contact form (Name, Email, Message, Submit button).
6. Basic responsive design for mobile views.
"""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,  # Adjust as needed for full website code
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.05,
)
# Decode only the newly generated tokens, slicing off the input prompt and
# skipping special tokens
generated_text = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[1]:],
    skip_special_tokens=True,
)
print(generated_text)

# The model may return HTML, CSS, and JavaScript in a single combined file;
# look for markers such as <html>, <style>, and <script> if you need to
# split them apart.
```
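Alternatively, if the model wraps each language in markdown code fences, a simple regex can split the response into files. This is an illustrative post-processing sketch, not part of the released pipeline; the fence format and file names are assumptions:

```python
import re

# Hypothetical post-processing: extract fenced code blocks from generated_text.
# Assumes the model emits ```html / ```css / ```javascript fences; adapt this
# to the output format you actually observe.
blocks = re.findall(r"```(\w*)\n(.*?)```", generated_text, re.DOTALL)

filenames = {"html": "index.html", "css": "style.css",
             "javascript": "script.js", "js": "script.js"}
for lang, code in blocks:
    target = filenames.get(lang.lower(), "output.txt")
    with open(target, "w", encoding="utf-8") as f:
        f.write(code)
    print(f"Wrote {len(code)} characters to {target}")
```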
If you find our project useful, please cite:
```bibtex
@misc{lu2025webgenbenchevaluatingllmsgenerating,
      title={WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch},
      author={Zimu Lu and Yunqiao Yang and Houxing Ren and Haotian Hou and Han Xiao and Ke Wang and Weikang Shi and Aojun Zhou and Mingjie Zhan and Hongsheng Li},
      year={2025},
      eprint={2505.03733},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03733},
}
```