---
title: LLM Threat Association Analysis
emoji: 🕸️
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 5.32.0
app_file: app.py
pinned: false
license: mit
short_description: Can a security-tuned LLM rival STIX’s expressiveness?
---

# 🕸️ LLM Threat Association Analysis

*Visualizing Campaign-Actor-Technique relationships using language models.*

## Features

- **Campaign-Actor Associations**: probabilistic analysis using softmax normalization
- **Campaign-Technique Associations**: independent binary scoring with length normalization
- **Customizable Prompt Templates**: edit templates for different analysis scenarios
- **Interactive Heatmaps**: Matplotlib/Seaborn visualizations
- **ZeroGPU Support**: optimized for Hugging Face Spaces GPU infrastructure

## ZeroGPU Configuration

This Space is optimized for ZeroGPU deployment with the following configuration:

### Environment Variables Required

Set these in your Space settings:

**Secret variables:**

- `HF_TOKEN`: your Hugging Face access token

**Regular variables:**

- `ZEROGPU_V2=true`: enables ZeroGPU v2
- `ZERO_GPU_PATCH_TORCH_DEVICE=1`: enables device patching for PyTorch
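For local testing outside the Space UI, the same variables can be exported in the shell. This is only a sketch of the equivalent local setup; in the Space itself, set these under the settings page, and never commit the token value:

```shell
# Local equivalents of the Space's variables and secrets.
export HF_TOKEN=<your-token>          # secret: Hugging Face access token
export ZEROGPU_V2=true                # enables ZeroGPU v2
export ZERO_GPU_PATCH_TORCH_DEVICE=1  # device patching for PyTorch
```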

### Technical Specifications

- **GPU type**: NVIDIA H200 slice
- **Available VRAM**: 70 GB per workload
- **PyTorch version**: 2.4.0 (ZeroGPU compatible)
- **Gradio version**: 5.32.0

## Usage

1. **Enter campaigns**: comma-separated list of threat campaigns
2. **Configure prompt templates**: customize the language patterns used for analysis
3. **Select actors/techniques**: enter relevant threat actors and techniques
4. **Generate heatmaps**: click the buttons to create visualizations
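The heatmap step can be sketched with plain Matplotlib. This is a minimal stand-in for the app's Matplotlib/Seaborn plots; `plot_heatmap` and the sample matrix are illustrative, not taken from `app.py`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as on a Space server
import matplotlib.pyplot as plt

def plot_heatmap(matrix, row_labels, col_labels, out_path="heatmap.png"):
    """Render an association matrix (rows: campaigns, cols: actors/techniques)."""
    fig, ax = plt.subplots(figsize=(6, 4))
    im = ax.imshow(matrix, cmap="Reds", aspect="auto")
    ax.set_xticks(range(len(col_labels)), labels=col_labels,
                  rotation=45, ha="right")
    ax.set_yticks(range(len(row_labels)), labels=row_labels)
    # Annotate each cell with its score, like seaborn's annot=True.
    for i, row in enumerate(matrix):
        for j, val in enumerate(row):
            ax.text(j, i, f"{val:.2f}", ha="center", va="center")
    fig.colorbar(im, ax=ax)
    fig.tight_layout()
    fig.savefig(out_path)
    plt.close(fig)

plot_heatmap([[0.7, 0.3], [0.2, 0.8]],
             ["Campaign A", "Campaign B"], ["Actor X", "Actor Y"])
```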

## Installation

For local development:

```bash
pip install -r requirements.txt
python app.py
```

## Architecture

### Campaign-Actor Analysis

- Scores `P(actor | "{campaign} is conducted by")` with softmax normalization
- Produces probability distributions (scores sum to 1.0 per campaign)
- Shows the relative likelihood of each actor attribution
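A minimal sketch of this step. Here `score_fn` is a stand-in for the model's score of the actor string as a continuation of the attribution prompt; the function names and toy scorer are illustrative, not the actual `app.py` code:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def actor_distribution(campaign, actors, score_fn):
    """Score each actor as a continuation of the attribution prompt,
    then softmax so the values sum to 1.0 for this campaign."""
    prompt = f"{campaign} is conducted by"
    raw = [score_fn(prompt, actor) for actor in actors]
    return dict(zip(actors, softmax(raw)))

# Toy scorer: pretend the model strongly prefers one actor.
dist = actor_distribution("Operation X", ["APT28", "Lazarus"],
                          lambda prompt, actor: 2.0 if actor == "APT28" else 0.0)
```

Because of the softmax, the output is a relative ranking: adding or removing an actor from the candidate list redistributes the probability mass.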

### Campaign-Technique Analysis

- Uses binary association scoring with length normalization
- Produces independent scores for each campaign-technique pair
- Accounts for phrase-length bias in language models
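The length correction can be sketched as averaging per-token log-probabilities, so a technique with a longer name is not penalized simply for spanning more tokens. Here `token_logprobs` stands in for a hypothetical list of per-token log-probabilities that would come from the model:

```python
def association_score(token_logprobs):
    """Mean token log-probability of the association phrase.
    Dividing by the token count removes the bias toward short phrases;
    each campaign-technique pair is scored independently (no softmax
    across techniques, unlike the actor analysis)."""
    return sum(token_logprobs) / len(token_logprobs)

# Toy comparison: a short and a long phrase with identical per-token
# quality receive the same score after normalization.
short = association_score([-1.0, -1.0])
long_ = association_score([-1.0, -1.0, -1.0, -1.0])
```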

## Model Support

Currently supports any Hugging Face `transformers` model. The default model is `sshleifer/tiny-gpt2`, for demonstration purposes.

To use a different model, update the `MODEL_NAME` variable in `app.py`.

## References

Based on the ZeroGPU usage guide: https://huggingface.co/spaces/nyasukun/compare-security-models/blob/main/zerogpu.md