Foundation-Sec-8B-Q4_K_M-GGUF Model Card

This model was quantized from fdtn-ai/Foundation-Sec-8B to a 4-bit (Q4_K_M) GGUF checkpoint using llama.cpp. It retains the cybersecurity specialization of the original 8-billion-parameter model while reducing the memory footprint from approximately 16 GB (BF16) to around 4.92 GB (Q4_K_M) for inference.
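
A rough size check, using the published 8.03B parameter count: 8.03B params × 2 bytes/param (BF16) ≈ 16.1 GB, while 4.92 GB ÷ 8.03B params ≈ 4.9 bits per weight. The effective rate sits slightly above 4 bits because Q4_K_M also stores per-block scales and minimums and keeps select tensors at higher precision.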

Model Description

fdtn-ai/Foundation-Sec-8B-Q4_K_M-GGUF is a 4-bit quantized variant of Foundation-Sec-8B, an 8B-parameter Llama 3.1-based model produced by continued pretraining on a curated corpus of cybersecurity-specific text (e.g., CVEs, threat-intel reports, exploit write-ups, compliance guides). The base model was originally released on April 28, 2025 under Apache 2.0 and excels at tasks such as:

  • Threat intelligence summarization (e.g., summarizing CVE details)
  • Vulnerability classification (mapping CVEs/CWEs to MITRE ATT&CK)
  • Incident triage assistance (extracting IoCs, summarizing log data)
  • Red-team simulation prompts and security-workflow generation

Rather than duplicating the full training details here, please refer to the original model card for the base architecture, training data, evaluation results, and known limitations.

Quantization Details

  • Quantization Scheme: 4-bit “Q4_K_M”, one of llama.cpp’s k-quant formats (block-wise quantization with per-block scales and minimums; select tensors are kept at higher precision)
  • Toolchain: Converted to GGUF format with llama.cpp’s export utilities (v0.1.81 or newer).
  • Resulting File Size: ~ 4.92 GB on disk (raw GGUF blob)
  • Runtime Footprint:
    • Memory: ≈ 4.94 GB of RAM when loaded on CPU with llama.cpp
  • Format:
    • File extension: .gguf
    • Internally contains:
      1. Metadata (architecture, tokenizer vocab, hyperparameters)
      2. Vocabulary list (BPE tokens)
      3. Weight tensors (for each layer and head) stored in 4-bit quantized form
    • Compatible with the llama-cpp-python bindings (llama_cpp) and the llama.cpp C++ CLI inference engines
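
Because this metadata travels inside the .gguf file itself, the checkpoint can be inspected without loading any weights. Below is a minimal sketch using the gguf Python package maintained in the llama.cpp repo (the file name is illustrative):

# pip install gguf
from gguf import GGUFReader

reader = GGUFReader("foundation-sec-8b-q4_k_m.gguf")

# Metadata fields: architecture, tokenizer vocab, hyperparameters
for name in list(reader.fields)[:12]:
    print(name)

# Weight tensors and their quantization types (mostly Q4_K)
print(f"{len(reader.tensors)} tensors")
for t in reader.tensors[:5]:
    print(t.name, t.tensor_type.name, tuple(t.shape))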

How to Use

Install llama.cpp on Mac

Use Homebrew:

brew install llama.cpp

or install from scratch:

# Install dependencies
brew install cmake

# Clone and build llama.cpp with CMake
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Add to PATH (optional); binaries are placed in build/bin/
sudo cp build/bin/llama-cli /usr/local/bin/

Run the Model

llama-cli -m foundation-sec-8b-q4_k_m.gguf -p "CVE-2021-44228 is a remote code execution flaw in Apache Log4j2 via unsafe JNDI lookups (\"Log4Shell\"). The CWE is CWE-502.\n\nCVE-2017-0144 is a remote code execution vulnerability in Microsoft's SMBv1 server (\"EternalBlue\") due to a buffer overflow. The CWE is CWE-119.\n\nCVE-2014-0160 is an information-disclosure bug in OpenSSL's heartbeat extension (\"Heartbleed\") due to out-of-bounds reads. The CWE is CWE-125.\n\nCVE-2017-5638 is a remote code execution issue in Apache Struts 2's Jakarta Multipart parser stemming from improper input validation of the Content-Type header. The CWE is CWE-20.\n\nCVE-2019-0708 is a remote code execution vulnerability in Microsoft's Remote Desktop Services (\"BlueKeep\") triggered by a use-after-free. The CWE is CWE-416.\n\nCVE-2015-10011 is a vulnerability about OpenDNS OpenResolve improper log output neutralization. The CWE is" -n 128
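
The prompt above is a few-shot CVE-to-CWE mapping: five solved examples followed by an unsolved one, which the model completes with the final CWE identifier. The same checkpoint can also be driven from Python through the llama-cpp-python bindings; the sketch below is a minimal example (file path, thread count, and sampling settings are illustrative):

# pip install llama-cpp-python
from llama_cpp import Llama

# Load the quantized GGUF checkpoint on CPU
llm = Llama(
    model_path="foundation-sec-8b-q4_k_m.gguf",
    n_ctx=4096,   # context window
    n_threads=8,  # tune to your machine
)

# Abbreviated version of the few-shot prompt used above
prompt = (
    'CVE-2021-44228 is a remote code execution flaw in Apache Log4j2 '
    'via unsafe JNDI lookups ("Log4Shell"). The CWE is CWE-502.\n\n'
    'CVE-2015-10011 is a vulnerability about OpenDNS OpenResolve '
    'improper log output neutralization. The CWE is'
)

out = llm(prompt, max_tokens=16, temperature=0.0)
print(out["choices"][0]["text"].strip())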

References

  1. Original Model Card:
    fdtn-ai/Foundation-Sec-8B (April 28, 2025) – continued pretraining of Llama 3.1 8B on cybersecurity data.

  2. llama.cpp GGUF Quantization:
    Gerganov, G. (2023). llama.cpp: LLM inference in C/C++. GitHub repository: https://github.com/ggml-org/llama.cpp.

  3. ZeroQuant:
    Yao, Z. et al. (2022). “ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers.” arXiv: 2206.01861.

  4. SmoothQuant:
    Xiao, G. et al. (2022). “SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models.” arXiv: 2211.10438.

License: Apache 2.0 (same as base)
Contact: For questions about usage, quantization details, or license terms, please open an issue on the Hugging Face repo or contact [email protected].
