
DeepSeek on Debian

Tunnel numérique

It is pretty simple to install a simplified DeepSeek model on a Debian GNU/Linux laptop without a GPU.

> curl -fsSL https://ollama.com/install.sh | sh
> ollama run deepseek-r1:8b

This installs a simplified DeepSeek R1 LLM chatbot, taking ~5GB in /usr/share/ollama/models. The real, full DeepSeek model has 671 billion parameters and requires multiple servers with high-end GPUs. The simplified DeepSeek has the following characteristics:

  • Distillation. Only 8 billion parameters (~80× smaller), trained to imitate the full model.
  • Quantization. The original weights are stored in 4 bits instead of 32 bits, hence the ~5GB.
  • Static inference. No update of parameters, only probabilistic output given the context.
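The back-of-the-envelope arithmetic behind the download size can be checked directly. A minimal sketch (the gap between the ~4GB of raw 4-bit weights and the ~5GB on disk is assumed here to be metadata and tokenizer overhead):

```python
# Rough size estimate for a 4-bit quantized 8-billion-parameter model.
params = 8e9          # 8 billion parameters (the distilled model)
bits_per_weight = 4   # 4-bit quantization instead of 32-bit floats

size_gb = params * bits_per_weight / 8 / 1e9   # bits -> bytes -> GB
full_precision_gb = params * 32 / 8 / 1e9      # same weights at 32 bits

print(f"quantized weights:   ~{size_gb:.0f} GB")           # ~4 GB
print(f"unquantized weights: ~{full_precision_gb:.0f} GB")  # ~32 GB
```

Quantization alone thus accounts for an 8× reduction in storage, on top of the ~80× reduction in parameter count from distillation.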

Of course, with such reduced software and hardware, we cannot expect miracles: it is slow and limited. Yet it remains relatively impressive, although for research in mathematics it is almost useless. Ollama also lets you play with many other open LLM models, including the famous OpenClaw.
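Beyond the interactive CLI, the local Ollama daemon also exposes an HTTP API, so the model can be queried from a script. A minimal sketch, assuming the default port 11434 and the model installed above:

```python
import json
import urllib.request

def build_request(prompt, model="deepseek-r1:8b"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="deepseek-r1:8b"):
    """Send a one-shot prompt to the local Ollama daemon and return its answer."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue? Answer in one sentence."))
```

Swapping the `model` field for any other tag pulled with `ollama run` queries that model instead.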
