Deploying DeepSeek on UGREEN NAS with Ollama and Integrating into n8n Workflows
To integrate AI capabilities into n8n workflows on a UGREEN NAS, deploy DeepSeek using Ollama via Docker. This setup enables local model execution for cost-effective automation tasks.
Ollama is a lightweight tool for running and managing AI models locally, while n8n is a low-code platform for building automated workflows. Running smaller models on a NAS is feasible, but larger models may strain its CPU and memory; consider cloud-hosted inference for heavy tasks.
Installing Ollama
In the Docker interface, search for and download the ollama/ollama image. Create a container with auto-restart enabled and note the assigned port (Ollama listens on 11434 by default). After launching, open the container terminal and run ollama list to verify the installation.
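Besides the container terminal, the running service can be checked over HTTP, since Ollama exposes a REST API on its mapped port. A minimal stdlib-only sketch; the NAS address 192.168.1.50 and default port 11434 are placeholder assumptions to adapt:

```python
import json
from urllib import request

def ollama_base_url(host: str, port: int = 11434) -> str:
    """Build the base URL for an Ollama instance (11434 is Ollama's default port)."""
    return f"http://{host}:{port}"

def list_models(base_url: str) -> list:
    """HTTP equivalent of `ollama list`: GET /api/tags returns the installed models."""
    with request.urlopen(f"{base_url}/api/tags") as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]

# Example (placeholder NAS address, replace with yours):
#   print(list_models(ollama_base_url("192.168.1.50")))
```

The same base URL is what n8n will need later when creating the Ollama credential.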
Downloading DeepSeek Model
Browse the Ollama website to find available models. This example uses the deepseek-r1:1.5b model. In the container terminal, execute:
ollama run deepseek-r1:1.5b
To download without immediate execution, use pull instead of run. Check available commands with ollama --help.
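Once pulled, the model can also be queried through Ollama's REST API rather than the terminal. A sketch of the request body for the POST /api/generate endpoint, using only the standard library; the prompt text is illustrative:

```python
import json

def generate_payload(model: str, prompt: str, stream: bool = False) -> str:
    """JSON body for Ollama's POST /api/generate endpoint.

    stream=False requests a single JSON response instead of a token stream,
    which is simpler to handle in automation scripts.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

# Example body targeting the model pulled above:
#   generate_payload("deepseek-r1:1.5b", "Summarize this ticket in one line.")
```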
Integrating Ollama with n8n
Access n8n via its NAS-deployed URL. Create a new workflow and add an "AI Agent" node by searching in the node palette. Configure the node by creating a new credential:
- Set "Base URL" to the Ollama instance (e.g., http://NAS_IP:OLLAMA_PORT).
- Select the deepseek-r1:1.5b model from the dropdown.
Once connected, use the "When chat message received" node to test the integration via the chat interface. Monitor workflow execution in the console to confirm functionality.
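Under the hood, a chat message flowing through this workflow becomes a chat-style request against Ollama. A hedged sketch of that request shape, targeting Ollama's POST /api/chat endpoint with stdlib only; the system prompt is an optional illustration:

```python
import json
from typing import Optional

def chat_payload(model: str, user_message: str, system: Optional[str] = None) -> str:
    """JSON body for Ollama's POST /api/chat endpoint: a list of role/content
    messages, optionally led by a system prompt."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user_message})
    return json.dumps({"model": model, "messages": messages, "stream": False})

# Example: chat_payload("deepseek-r1:1.5b", "Hello", system="Answer briefly.")
```

Inspecting this shape is useful when debugging a workflow, since the n8n execution log shows the same model, message, and role fields.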
This example demonstrates basic integration; for advanced automation, explore n8n's node-based system to design complex workflows leveraging local AI models.