WEBGEN-4B-Preview-f32-GGUF

WEBGEN-4B-Preview is a compact 4B-parameter model purpose-built for generating modern, responsive web pages with clean, semantic HTML, CSS, and Tailwind markup, optimized for single-file sites and reusable component blocks. Its small size makes it fast for local runs and quick iteration, and it consistently produces production-quality layouts, favoring structured markup, balanced spacing, and contemporary design patterns, without depending on external JavaScript libraries. This makes it well suited to quickly prototyping or deploying landing pages, marketing sites, and web components directly from a natural-language prompt.

Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| WEBGEN-4B-Preview.BF16.gguf | BF16 | 8.05 GB |
| WEBGEN-4B-Preview.F16.gguf | F16 | 8.05 GB |
| WEBGEN-4B-Preview.F32.gguf | F32 | 16.1 GB |
| WEBGEN-4B-Preview.Q2_K.gguf | Q2_K | 1.67 GB |
| WEBGEN-4B-Preview.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| WEBGEN-4B-Preview.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| WEBGEN-4B-Preview.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| WEBGEN-4B-Preview.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| WEBGEN-4B-Preview.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| WEBGEN-4B-Preview.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| WEBGEN-4B-Preview.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| WEBGEN-4B-Preview.Q6_K.gguf | Q6_K | 3.31 GB |
| WEBGEN-4B-Preview.Q8_0.gguf | Q8_0 | 4.28 GB |
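As a rough sanity check on the sizes above, a GGUF file weighs approximately parameters × bits per weight. A minimal sketch (the bits-per-weight figures are approximations; K-quants mix precisions across tensors, so real files deviate slightly):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
PARAMS = 4.02e9  # parameter count from the model card

def est_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in decimal GB (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

# BF16 is exactly 16 bits per weight; Q8_0 is roughly 8.5 (8-bit weights
# plus per-block scales) -- both approximations, not exact GGUF accounting.
print(f"BF16 ~ {est_gb(16):.2f} GB")   # -> BF16 ~ 8.04 GB (table lists 8.05 GB)
print(f"Q8_0 ~ {est_gb(8.5):.2f} GB")  # -> Q8_0 ~ 4.27 GB (table lists 4.28 GB)
```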

Quants Usage

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: quant type comparison graph]

Format: GGUF
Model size: 4.02B params
Architecture: qwen3

Model tree for prithivMLmods/WEBGEN-4B-Preview-f32-GGUF

Quantized (7 models, including this one)