Fading Coder

One Final Commit for the Last Sprint


Building a Private AI News Assistant with GPUStack and n8n

Tech · May 12

n8n is an open-source, low-code automation tool that enables users to connect various services and APIs through visual nodes, creating complex workflows. It supports self-hosting, ensuring data privacy and high customizability.

For integrating AI capabilities, n8n's extensive node ecosystem can connect smoothly with local large language models deployed via GPUStack. This approach eliminates recurring API costs and keeps sensitive data within the local network, making it ideal for private AI agents. The following walkthrough demonstrates this integration.

Setup Environment

  1. GPUStack v2.0.3: Follow the official installation guide at https://docs.gpustack.ai.
  2. Latest n8n: Use Docker for quick deployment as per https://docs.n8n.io/hosting/installation/docker.
  3. gpt-oss-120b: Deploy this model on GPUStack. It delivers strong performance under concurrent requests.

Workflow Construction

1. Obtain Model API Credentials

In GPUStack's Deployments list, locate your model. Click the menu and select API Access Info. A dialog appears with connection details. If no API key exists, use the provided link to create one.

Generate an API Key

After creation, the API key is shown only once. Save it securely; it is the token n8n uses to authenticate with GPUStack.

2. Configure Model Connection in n8n

Since GPUStack exposes an OpenAI-compatible API, add an OpenAI API credential type in n8n.

In the configuration window, enter the API key and GPUStack's endpoint URL. Click Save. If correct, a "Connection tested successfully" message appears.

Close the credential configuration. Enable Limit models and specify the local model name to restrict usage.
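If the connection test fails, it helps to probe the same endpoint outside n8n. The sketch below (plain Node.js; the base URL and key are placeholders for your own deployment, and the helper name is illustrative) builds the request that the credential check effectively makes against the OpenAI-compatible `/v1/models` route:

```javascript
// Minimal probe of GPUStack's OpenAI-compatible API, independent of n8n.
// BASE_URL and API_KEY are placeholders -- substitute your own values.
const BASE_URL = "http://gpustack.local"; // hypothetical GPUStack host
const API_KEY = "sk-your-gpustack-key";   // the key generated in step 1

// Build the request the credential test effectively performs.
function buildModelsRequest(baseUrl, apiKey) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/v1/models`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

// Usage (requires Node 18+ for the global fetch):
// const { url, headers } = buildModelsRequest(BASE_URL, API_KEY);
// fetch(url, { headers }).then(r => r.json()).then(console.log);
```

If this request returns a model list but n8n still fails, the problem is in the credential configuration rather than in GPUStack.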

3. Build an Automated Workflow

The goal: trigger daily at 8:30 AM, fetch RSS feeds, extract summaries via AI, and send them to an email.

  1. Create a new blank Workflow.
  2. Set the first node to On a schedule.

Configure the trigger for 8:30 AM daily.

  3. Add an RSS Read node, using https://36kr.com/feed as an example.

Click Test to verify that the node works.

Double-click the RSS Read node to inspect execution logs and data.

  4. Insert a Basic LLM Chain node to generate summaries.

In the node configuration, set Source for Prompt (User Message) to Define below. Drag the contentSnippet field from the left panel into the Prompt (User Message) input.

Set System Prompt to: You are a senior tech editor. Summarize the following article concisely and capture the essence.
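Under the hood, the chain sends an OpenAI-style chat request to GPUStack. A sketch of the payload it assembles (the helper name and structure are illustrative, not n8n internals):

```javascript
// Illustrative: the chat payload the Basic LLM Chain effectively sends
// to GPUStack's OpenAI-compatible /v1/chat/completions endpoint.
const SYSTEM_PROMPT =
  "You are a senior tech editor. Summarize the following article concisely and capture the essence.";

function buildSummaryPayload(model, contentSnippet) {
  return {
    model, // e.g. "gpt-oss-120b", the model deployed on GPUStack
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      // The user message is the contentSnippet field dragged from RSS Read
      { role: "user", content: contentSnippet },
    ],
  };
}
```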

  5. Configure the LLM Model with the previously created credential.

  6. Add a Send Email node.

Create an email credential. Click Create new credential and follow the popup.

The exact SMTP settings (server, port, authorization code) depend on your email provider.

Set the recipient address and email body. For this demo, use the raw output from the LLM.

No manual expression writing needed; drag fields into the input boxes.

Testing Results

Click Execute Workflow to trigger it manually. n8n will fetch the latest RSS items, invoke GPUStack for inference, and send the email.

Be cautious: running this step as-is sends 30 emails at once!

Execution results display as shown.

Email receipt example.

Workflow Refinement

The current workflow sends one email per RSS item (30 total). We want to aggregate summaries into a single, beautifully formatted email.

  1. Modify the Basic LLM Chain node's system prompt to output an HTML list item.
You are a senior tech editor. Summarize the user's article into a concise HTML list item (<li>...</li>). Include the title and key points.

Format example:
<li><b>Title</b>: Key summary</li>

Requirements:
1. Output only <li> tags and content. Do not include <ul> or markdown.
2. Keep summaries under 50 characters.
  2. Insert a Code node between Basic LLM Chain and Send Email to aggregate items into HTML.

Choose Code in JavaScript or Code in Python (Native). This example uses JavaScript.

Paste the following code into the node:

Note: When copying from WeChat, replace any non-breaking spaces with regular spaces.

// Collect every upstream item -- one per summarized article.
const items = $input.all();

// Inline styles keep the email readable in clients that strip <style> blocks.
const style = {
  container: "font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; max-width: 600px; margin: 0 auto; padding: 20px; background-color: #f9f9f9; border-radius: 10px; border: 1px solid #e0e0e0;",
  header: "color: #2c3e50; border-bottom: 2px solid #3498db; padding-bottom: 10px; margin-bottom: 20px; font-size: 24px;",
  list: "list-style-type: none; padding: 0;",
  listItem: "background-color: #ffffff; margin-bottom: 15px; padding: 15px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.05); line-height: 1.6; color: #555;",
  footer: "margin-top: 30px; font-size: 12px; color: #999; text-align: center; border-top: 1px solid #e0e0e0; padding-top: 10px;"
};

// Assemble the digest: header, styled list of summaries, footer.
let htmlContent = `<div style="${style.container}">`;
htmlContent += `<h2 style="${style.header}">📅 Daily Tech News Digest</h2>`;
htmlContent += `<ul style="${style.list}">`;

for (const item of items) {
  if (item.json.text) {
    // The model was prompted to emit a bare <li>; inject the item style inline.
    let styledItem = item.json.text.replace('<li>', `<li style="${style.listItem}">`);
    htmlContent += styledItem + "\n";
  }
}

htmlContent += `</ul>`;
htmlContent += `<div style="${style.footer}">Generated by n8n & GPUStack • ${new Date().toLocaleDateString()}</div>`;
htmlContent += `</div>`;

// Return a single item so Send Email fires only once.
return [{
  json: {
    email_content: htmlContent
  }
}];
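Despite the prompt's requirements, models occasionally wrap their answer in markdown fences or surrounding prose. A small normalizer (a hypothetical addition, not part of the original workflow) could be placed at the top of the Code node to extract the bare `<li>` element before aggregation:

```javascript
// Hypothetical normalizer: pull the first <li>...</li> out of a model reply,
// tolerating stray markdown fences or extra prose around it.
function extractListItem(text) {
  const match = text.match(/<li\b[^>]*>[\s\S]*?<\/li>/i);
  return match ? match[0] : null; // null signals an unusable reply
}
```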
  3. Update the Send Email node. n8n lets you embed JavaScript expressions within {{ }}. Use {{ $now.format('yyyy-MM-dd') }} to include the current date in the email subject.
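The format tokens in `$now.format('yyyy-MM-dd')` come from Luxon, the date library backing n8n expressions. For reference, a plain-JavaScript equivalent of that particular format:

```javascript
// Plain-JS equivalent of Luxon's 'yyyy-MM-dd' format, shown for reference only.
function formatDateYMD(date) {
  const y = date.getFullYear();
  const m = String(date.getMonth() + 1).padStart(2, "0"); // months are 0-based
  const d = String(date.getDate()).padStart(2, "0");
  return `${y}-${m}-${d}`;
}

// e.g. formatDateYMD(new Date(2025, 4, 12)) → "2025-05-12"
```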

  4. Re-execute the workflow to confirm that a single aggregated email is produced.

  5. Save and publish the workflow.

Once active, the workflow runs automatically at 8:30 AM daily as long as n8n is running.

Conclusion

This guide demonstrated building a fully automated, zero-cost AI news assistant using n8n and GPUStack. The entire pipeline—RSS fetching, AI summarization, and email delivery—runs locally, protecting data privacy and eliminating API fees.

Finally, check the GPUStack Dashboard overview page. You can view token consumption (prompt and completion) and total API requests for your model over time, giving you full visibility into AI service usage.
