MCP server for LLM quantization. Compress any HuggingFace model to GGUF, GPTQ, or AWQ format. 6 tools: info, check, recommend, quantize, evaluate, push. Self-contained Python server — no external CLI needed.
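Since the server speaks the Model Context Protocol, a client invokes its tools with standard JSON-RPC `tools/call` requests. The sketch below builds such a request for the `quantize` tool; the argument names (`model_id`, `format`) are assumptions for illustration, not the server's documented schema.

```python
import json

# Hypothetical MCP "tools/call" request targeting the `quantize` tool.
# The tool name comes from the description above; the argument names
# (model_id, format) are assumed here and may differ in the real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "quantize",
        "arguments": {
            "model_id": "meta-llama/Llama-3.2-1B",  # any HuggingFace repo id
            "format": "gguf",  # or "gptq" / "awq", per the formats listed above
        },
    },
}

# Serialize for transport (stdio or HTTP, depending on the server config).
payload = json.dumps(request)
print(payload)
```

The other five tools (`info`, `check`, `recommend`, `evaluate`, `push`) would be called the same way, with `params.name` and `params.arguments` changed accordingly.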