Space: TamisAI/inference-api-g1 (status: Sleeping)
Duplicated from TamisAI/inference-lamp-api
Branch: main
1 contributor · History: 79 commits

Latest commit: alexfremont · Update model lookup to use filename instead of ID in get_model function · 9f9c6d5 · 3 days ago
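The latest commit switches get_model from an ID lookup to a filename lookup. A minimal sketch of such a lookup, assuming an in-memory registry; ModelRecord and MODEL_REGISTRY are illustrative names, not the repository's actual code:

```python
# Hypothetical sketch only: the repository's real get_model is not shown
# here. Illustrates a filename-keyed lookup replacing an ID-keyed one.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRecord:
    id: int
    filename: str
    path: str

# Assumed in-memory registry, keyed by filename instead of numeric ID.
MODEL_REGISTRY: dict[str, ModelRecord] = {}

def get_model(filename: str) -> Optional[ModelRecord]:
    """Return the model whose stored filename matches, or None."""
    return MODEL_REGISTRY.get(filename)
```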
Repository contents:

api/ · Improve model unloading with explicit GPU memory cleanup and CUDA cache clearing · 4 days ago (see sketch below)
architecture/ · Refactor API architecture with modular design and database integration · 8 days ago
config/ · Add model management endpoints and database fetch functionality · 6 days ago
db/ · Disable prepared statement cache for pgbouncer compatibility · 4 days ago (see sketch below)
models/ · Update model lookup to use filename instead of ID in get_model function · 3 days ago
schemas/ · Refactor API architecture with modular design and database integration · 8 days ago
steps/ · Refactor API architecture with modular design and database integration · 8 days ago
utils/ · Add timestamps to memory monitoring logs and display outputs · 4 days ago (see sketch below)
.gitattributes · 1.52 kB · initial commit · 7 months ago
.gitignore · 347 Bytes · first commit for API · 7 months ago
Dockerfile · 832 Bytes · Merge Gradio UI into FastAPI app and standardize port to 7860 · 8 days ago (see sketch below)
README.md · 276 Bytes · Update README.md · 16 days ago
docker-compose.yml · 205 Bytes · Merge Gradio UI into FastAPI app and standardize port to 7860 · 8 days ago
main.py · 8.75 kB · Remove periodic memory status updates and related helper function · 4 days ago
requirements.txt · 299 Bytes · Add system monitoring features and memory usage tracking for loaded models · 4 days ago
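The api/ commit describes explicit GPU memory cleanup with CUDA cache clearing on model unload. A minimal sketch of that pattern in PyTorch; the loaded_models dict and the unload_model name are assumptions, not the repository's actual code:

```python
import gc

import torch

# Assumed registry of loaded models; not the repository's actual structure.
loaded_models: dict[str, torch.nn.Module] = {}

def unload_model(name: str) -> None:
    """Drop all references to a model, then reclaim GPU memory."""
    model = loaded_models.pop(name, None)
    if model is None:
        return
    model.to("cpu")  # move weights off the GPU first
    del model        # drop the last Python reference
    gc.collect()     # make the CUDA allocations collectable
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
```

Dropping the last Python reference alone is not enough: torch.cuda.empty_cache() is what hands cached blocks back to the driver so the memory shows up as free again.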
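The db/ commit disables the prepared-statement cache for pgbouncer compatibility. With asyncpg this is the standard workaround when pgbouncer runs in transaction pooling mode, since server-side prepared statements are tied to a physical connection the pooler may swap out. A sketch assuming asyncpg and a placeholder DSN:

```python
import asyncpg

async def connect() -> asyncpg.Connection:
    # DSN is a placeholder. statement_cache_size=0 tells asyncpg never to
    # reuse server-side prepared statements, which would otherwise break
    # behind pgbouncer's transaction pooling.
    return await asyncpg.connect(
        "postgresql://user:pass@pgbouncer-host:6432/app",
        statement_cache_size=0,
    )
```

If the project connects through SQLAlchemy's asyncpg driver instead, the same setting is passed via connect_args={"statement_cache_size": 0}.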
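The Dockerfile and docker-compose.yml commits merge the Gradio UI into the FastAPI app and standardize on port 7860, the port Hugging Face Spaces exposes. A minimal sketch of that layout; the /ui mount path and demo contents are illustrative:

```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

# Placeholder UI; the real Space's interface is not shown here.
with gr.Blocks() as demo:
    gr.Markdown("inference-api-g1 demo UI")

# Serve the Gradio UI from inside the FastAPI app at /ui.
app = gr.mount_gradio_app(app, demo, path="/ui")

if __name__ == "__main__":
    # 7860 is the port Hugging Face Spaces expects the container to bind.
    uvicorn.run(app, host="0.0.0.0", port=7860)
```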
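The utils/ and requirements.txt commits add timestamped memory monitoring for loaded models. A sketch of one plausible snapshot helper, assuming psutil for system memory and torch for GPU counters; log_memory_status is an assumed name:

```python
from datetime import datetime, timezone

import psutil
import torch

def log_memory_status() -> str:
    """Return one timestamped line of system and GPU memory usage."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    ram = psutil.virtual_memory()
    line = f"[{ts}] RAM {ram.used / 2**30:.2f}/{ram.total / 2**30:.2f} GiB"
    if torch.cuda.is_available():
        alloc = torch.cuda.memory_allocated() / 2**30
        reserved = torch.cuda.memory_reserved() / 2**30
        line += f" | GPU {alloc:.2f} GiB allocated, {reserved:.2f} GiB reserved"
    return line
```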