DeepSeek-R1

Active

DeepSeek's first-generation reasoning models. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning without supervised fine-tuning, demonstrated remarkable performance on reasoning.

Website: github.com

Tags: open-source-llm, free

About DeepSeek-R1

DeepSeek-R1 is the first generation of DeepSeek's reasoning models. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning without supervised fine-tuning, demonstrated remarkable performance on reasoning tasks.

Key Features

  • Powered by strong AI technology
  • User-friendly interface
  • Efficient workflow integration
  • Continuous updates and improvements

Use Cases

DeepSeek-R1 is a standout tool in the open-source-llm category, suited to anyone who needs AI assistance.
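
For a quick hands-on trial, the open weights can be run locally with standard tooling. The snippet below is a minimal sketch using Hugging Face transformers; the distilled checkpoint name is an assumption here, so check the project's GitHub page for the checkpoints that are actually published.

```python
# Minimal sketch: running an (assumed) DeepSeek-R1 distilled checkpoint with
# Hugging Face transformers. The repo id below is an assumption; verify the
# published model names on the project's GitHub page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Reasoning models are typically given a plain question and emit their chain of
# thought before the final answer, so leave room for many new tokens.
messages = [{"role": "user", "content": "What is 17 * 24? Explain your steps."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```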

Ready to try DeepSeek-R1?

Visit the official website to get started.

Quick Info

Added: 1/21/2026
Updated: 1/23/2026

Related Tools

DeepSeek-V3

A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token (see the MoE sketch at the end of this list).

Tags: open-source-llm, free
Qwen3

Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud.

Tags: open-source-llm, free
Llama 3

Llama 3 is a large language model developed by Meta AI and the successor to Llama 2.

Tags: open-source-llm, free
Mixtral

Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference.

Tags: open-source-llm, free
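
The two Mixture-of-Experts entries above (DeepSeek-V3 and Mixtral) both quote a large total parameter count but a much smaller number of active parameters per token, because a router sends each token to only a few experts. The sketch below illustrates that arithmetic; the layer sizes are made up for illustration and are not the real DeepSeek-V3 or Mixtral configurations.

```python
# Back-of-the-envelope sketch of why sparse MoE inference is cheap per token:
# only the routed experts run, so active parameters << total parameters.
# All sizes below are illustrative, not real model configurations.

def moe_params_per_token(num_experts: int, experts_per_token: int,
                         expert_params: float, shared_params: float):
    """Return (total, active) parameters for one MoE layer, in billions."""
    total = shared_params + num_experts * expert_params
    active = shared_params + experts_per_token * expert_params
    return total, active

# Mixtral-style layer: 8 experts, 2 routed per token (illustrative sizes, in billions).
total, active = moe_params_per_token(num_experts=8, experts_per_token=2,
                                     expert_params=0.22, shared_params=0.06)
print(f"total={total:.2f}B  active={active:.2f}B  "
      f"({active / total:.0%} of the weights touched per token)")
```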