
DeepSeek-V3
A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
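As a rough illustration of what "total vs. activated parameters" means in an MoE model, here is a minimal top-k routing sketch in Python/NumPy. The expert count, hidden size, and top-k value are made-up toy numbers for illustration only; this is not DeepSeek-V3's actual architecture or code.

import numpy as np

# Toy MoE layer: route each token to its top-k experts, so only a
# fraction of the layer's parameters are used per token.
# All sizes below are illustrative, not DeepSeek-V3's real configuration.
d_model, n_experts, top_k = 16, 8, 2

rng = np.random.default_rng(0)
gate_w = rng.standard_normal((d_model, n_experts))                    # router weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """x: (d_model,) token representation -> (d_model,) output."""
    logits = x @ gate_w                                               # router score per expert
    top = np.argsort(logits)[-top_k:]                                 # indices of the top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()         # normalize the chosen scores
    # Only the selected experts run; every other expert's parameters stay idle for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.standard_normal(d_model)).shape)                # (16,)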


DeepSeek-R1
Added: Jan 2026
Website: github.com
DeepSeek's first-generation reasoning models. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning without supervised fine-tuning, demonstrated remarkable performance on reasoning.
DeepSeek-R1 is an excellent tool in the open-source-llm category, suitable for anyone who needs AI assistance.

Qwen3 is a large language model series developed by the Qwen team at Alibaba Cloud.

Llama3 is a large language model developed by Meta AI. It is the successor to Meta's Llama2 language model.

Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights. It outperforms Llama 2 70B on most benchmarks with 6x faster inference.