Mistral 7B
Text Generation
Mistral 7B Instruct v0.3: a high-quality desktop LLM with function-calling support.
Integration
main.rs
use xybrid_sdk::{Xybrid, Envelope};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the LLM by its model ID
    let model = Xybrid::model("mistral-7b").load()?;

    // Run text generation on a plain-text prompt
    let result = model.run(&Envelope::text("Explain quantum computing in simple terms."))?;

    // result.text is optional; unwrap is fine for a quick demo
    println!("{}", result.text.unwrap());
    Ok(())
}
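Mistral's Instruct variants are trained on prompts wrapped in [INST] ... [/INST] control tokens. Whether the SDK applies this chat template automatically is an assumption to verify against its docs; if it does not, a minimal sketch of formatting the prompt yourself before passing it to Envelope::text:

```rust
// Minimal sketch of Mistral's instruct prompt template.
// Assumption: the runtime does NOT already apply the chat template for you.
fn format_instruct(user_prompt: &str) -> String {
    // Mistral Instruct template: <s>[INST] {prompt} [/INST]
    format!("<s>[INST] {} [/INST]", user_prompt.trim())
}

fn main() {
    let prompt = format_instruct("Explain quantum computing in simple terms.");
    println!("{}", prompt);
}
```

If the SDK does apply the template internally, formatting twice would degrade output quality, so check before adopting this.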
Details
- Task: Text Generation
- Family: Mistral
- Parameters: 7B
- Format: gguf
- Quantization: q4_k_m
- Size: 4.0 GB
- Model ID: mistral-7b
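As a rough sanity check on the table above, the effective quantization density can be derived from the file size and parameter count. This is approximate arithmetic only: a real GGUF file also contains embeddings, metadata, and tensors kept at higher precision.

```rust
// Implied average bits per weight from the figures above:
// a 4.0 GB file holding roughly 7e9 parameters.
fn bits_per_weight(file_bytes: f64, params: f64) -> f64 {
    file_bytes * 8.0 / params
}

fn main() {
    // q4_k_m mixes 4-bit and 6-bit blocks, so the average lands above 4.0
    let bpw = bits_per_weight(4.0e9, 7.0e9);
    println!("{:.2} bits per weight", bpw); // prints "4.57 bits per weight"
}
```

The same arithmetic also gives a quick lower bound on the RAM needed to hold the weights: at minimum the full file size, plus working memory for the KV cache during inference.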