
DeepSeek-R1
DeepSeek's first-generation reasoning models. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning without supervised fine-tuning, demonstrated remarkable performance on reasoning tasks.


Mixtral 8x7B
A high-quality sparse mixture-of-experts (SMoE) model with open weights. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference.
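
To make the sparse-routing idea concrete, here is a minimal sketch of top-2 expert gating in the style Mixtral uses (8 experts per MoE layer, 2 evaluated per token). The layer sizes and weights below are toy values for illustration, not Mixtral's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8  # Mixtral routes among 8 experts per MoE layer
TOP_K = 2        # but evaluates only 2 of them per token
D_MODEL = 16     # toy hidden size, not Mixtral's real width

# Each "expert" stands in for a feed-forward block; one matrix keeps it short.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token):
    """Route one token through its top-k experts only."""
    logits = token @ router_w            # one router score per expert
    top = np.argsort(logits)[-TOP_K:]    # pick the k highest-scoring experts
    gates = softmax(logits[top])         # renormalize gates over the chosen k
    # Only TOP_K experts run, so per-token compute scales with k, not with
    # the total expert count -- the source of the SMoE inference speedup.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

out = moe_layer(rng.standard_normal(D_MODEL))
print(out.shape)  # (16,)
```

Because only 2 of the 8 experts run for each token, per-token compute tracks the top-k count rather than the total parameter budget, which is how an SMoE model can match larger dense models while inferring faster.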

DeepSeek-V3
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
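
Taking the figures from the description above, a quick back-of-the-envelope check shows how small the activated slice is relative to the full model:

```python
# Figures from the DeepSeek-V3 description above: 671B total, 37B activated.
total_params = 671e9
activated_params = 37e9

# Per-token compute tracks the activated slice, not the full parameter count.
print(f"activated per token: {activated_params / total_params:.1%}")  # ~5.5%
```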

Qwen3
Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud.

Llama 3
Llama 3 is a large language model developed by Meta AI. It is the successor to Meta's Llama 2.