Ministral 3 14B Reasoning 2512 is the largest model in the Ministral 3 series, delivering frontier-level performance comparable to the larger Mistral Small 3.2 24B. It pairs a 13.5B-parameter language model with a 0.4B vision encoder, enabling strong multimodal reasoning over both text and images. This version is post-trained specifically for reasoning, making it highly effective for math, coding, STEM workloads, and complex multi-step problem-solving. Despite its scale, the model is engineered for practical deployment: it runs locally in BF16 on 32GB of VRAM, or in under 24GB when quantized. It maintains robust system-prompt adherence, supports dozens of languages, and provides native function calling with clean JSON output for agentic workflows. The model also offers a 256k context window, unlocking large-document analysis and long-form reasoning.
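For local experimentation, a minimal loading sketch with Hugging Face transformers might look like the following. The repository ID is an assumption derived from the model name, not a confirmed path, and the exact model class may differ for the multimodal variant; the BF16 load targets the 32GB figure above, while the commented 4-bit option targets the sub-24GB quantized path.

```python
# Minimal local-inference sketch. The repo ID below is assumed from the
# model name and may not match the published checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Ministral-3-14B-Reasoning-2512"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# BF16 weights: roughly the 32GB-of-VRAM deployment described above.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# For the sub-24GB path, a 4-bit quantized load is one option:
# from transformers import BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     MODEL_ID,
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )

messages = [
    {"role": "system", "content": "You are a careful step-by-step reasoner."},
    {"role": "user", "content": "What is 17 * 23?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```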
Features
- 13.5B language model paired with a 0.4B vision encoder for multimodal reasoning
- Post-trained for advanced reasoning tasks in math, coding, and STEM domains
- Deployable locally on 32GB of VRAM in BF16, or under 24GB when quantized (see the loading sketch above)
- Supports dozens of languages, including English, German, Chinese, and Arabic
- Strong system-prompt adherence for predictable reasoning behavior
- Native agentic abilities with function calling and structured JSON output (see the tool-calling sketch after this list)
- Large 256k context window for extended analysis and multi-document workflows
- Edge-optimized design enabling deployment across diverse hardware environments
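The function-calling support pairs naturally with the OpenAI-style tool schema that most Mistral-family serving stacks expose. The sketch below is illustrative only: `get_weather` is a hypothetical tool, the JSON-schema tool format is an assumption about the serving stack, and the sample model output string is a stand-in for what the model would actually emit.

```python
# Sketch of a tool definition and tool-call handling, assuming the
# common OpenAI-style tool schema. All names here are illustrative.
import json

# This schema would be passed to the serving API or chat template.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

def get_weather(city: str) -> str:
    """Stub standing in for a real weather API."""
    return f"Sunny, 22C in {city}"

available_tools = {"get_weather": get_weather}

# A tool call as the model might emit it: clean JSON naming the function
# and its arguments. The exact wire format depends on the serving stack.
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(model_output)
result = available_tools[call["name"]](**call["arguments"])
print(result)  # -> Sunny, 22C in Paris
```

In a full agent loop, the tool result would be appended to the conversation as a tool message and the model queried again to produce the final answer.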