
Ministral 3 3B

Text Generation

Ministral 3 3B Instruct - Mistral AI's edge-optimized, instruction-tuned LLM with a 256K-token context window

Integration

main.rs
use xybrid_sdk::{Xybrid, Envelope};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the model by its model ID
    let model = Xybrid::model("ministral-3-3b").load()?;

    // Run text generation on a prompt
    let result = model.run(&Envelope::text("Explain quantum computing in simple terms."))?;

    // `text` is optional; unwrapping is acceptable for a quick demo
    println!("{}", result.text.unwrap());
    Ok(())
}

Details

Task: Text Generation
Family: Mistral
Parameters: 3.4B
Format: GGUF
Quantization: q4_k_m
Size: 2.0 GB
Model ID: ministral-3-3b
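As a rough sanity check on the listed file size, a GGUF model's footprint can be estimated as parameter count times average bits per weight. The ~4.85 bits/weight figure used below for q4_k_m is an approximation (the exact average depends on the per-tensor quantization mix), so treat this as a back-of-the-envelope sketch, not an exact spec:

```rust
fn main() {
    // Estimated size = parameters × bits-per-weight / 8 bytes.
    let params: f64 = 3.4e9; // 3.4B parameters, from the details above
    let bits_per_weight: f64 = 4.85; // assumed average for q4_k_m

    let bytes = params * bits_per_weight / 8.0;
    let gb = bytes / 1e9; // decimal gigabytes

    // Lands close to the listed 2.0 GB
    println!("approx size: {:.1} GB", gb);
}
```

Differences of a few percent from the listed size are expected, since GGUF files also carry embeddings, metadata, and some tensors kept at higher precision.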