Summary

Mistral Medium 3.5 launched publicly as a 128B-parameter model positioned for coding, reasoning, and long-running tasks. The release extends Mistral's effort to compete on practical frontier-style workloads without matching the footprint of the largest models.

What changed

Mistral introduced Medium 3.5, a new 128B-parameter model aimed at coding and long-horizon reasoning work.

Why it matters

Model vendors continue to use efficient model sizing and workload-specific framing as a positioning strategy against larger frontier stacks. Mistral's messaging targets teams that want capable long-task performance without paying the cost profile of the biggest models.

Evidence excerpt

The May 2 Product Hunt digest described Mistral Medium 3.5 as a 128B model built for coding, reasoning, and long tasks.

Sources