STEM (Science, Technology, Engineering, Mathematics)

Thinking-tuned models optimized for technical reasoning, mathematical precision, scientific problem-solving, and code-heavy workflows. These models excel at tasks like deriving equations, debugging complex systems, simulating experiments, and explaining STEM concepts with rigor.

💡 Note: All models listed below are thinking-tuned variants (e.g., Qwen3 4B Thinking), which are specifically fine-tuned for analytical depth rather than general-purpose chat. They outperform standard instruct models on logic-heavy, multi-step STEM problems.

Use the selector below to find the best thinking-tuned model for your hardware:

Recommended model: Qwen3 4B Thinking 2507 (Q8_K_XL)
Parameters: 4B | Quantization: Q8_K_XL | Model Memory Impact: 5.0 GB
Context Overhead: 1.27 GB (based on a 16,384-token context)
Note: The context-overhead figure is a worst-case estimate. Actual usage may be lower, but the higher figure is assumed so that the upper bound on memory can be assessed safely.
         Total      Context   Model     Leftover
RAM      16.00 GB   0.00 GB   0.00 GB   8.00 GB
VRAM      8.00 GB   1.27 GB   5.00 GB   0.23 GB
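The budgeting behind the table can be sketched as a simple subtraction: leftover VRAM is total VRAM minus model weights minus context overhead. Note that the table's leftover figure is consistent with an additional fixed reserve of roughly 1.5 GB (presumably for drivers/OS headroom); that reserve value is an assumption on our part, not something the page states, and `leftover_vram` is a hypothetical helper, not part of any LM Studio API.

```python
def leftover_vram(total_gb: float, model_gb: float,
                  context_gb: float, reserve_gb: float = 1.5) -> float:
    """Estimate VRAM left after loading model weights plus context overhead.

    reserve_gb is an *assumed* fixed headroom (drivers/OS); the table's
    numbers are consistent with roughly 1.5 GB being held back.
    """
    return round(total_gb - model_gb - context_gb - reserve_gb, 2)

# Figures from the table above: 8 GB VRAM, 5.0 GB model, 1.27 GB context.
print(leftover_vram(8.00, 5.00, 1.27))  # → 0.23
```

Because the 1.27 GB context overhead is already a worst-case estimate, a positive leftover here suggests the model will fit within the stated 8 GB of VRAM.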

Released under the MIT License.