
Wiring Telegram, n8n, and Claude Code Into a Single Pipeline

Nur Ikhwan Idris

I wanted to talk to my server from my phone. Not SSH — not typing commands into a terminal while on the train. I wanted to send a Telegram message like "check the OKHalal logs" or "write a new blog post about X" and have it actually happen. Read files, edit code, run commands, push commits — all triggered from a chat message.

This post is about how I wired Telegram, a self-hosted n8n instance, and Claude Code into a pipeline where a single chat message can execute real engineering work on my server. I call the system Khadam — Arabic for "servant" — and yes, it helped write this very post.

This blog post was written by Khadam itself, triggered by a Telegram message: "Write a new blog in our portfolio about how we manage to link telegram n8n and you the published it." That message travelled through the exact pipeline described below.

1. The Idea: Chat-Driven Development

I already had Claude Code running on my home server for coding tasks. And I already had n8n self-hosted for workflow automation. Telegram was my daily messaging app. The question was: can I connect these three so that a Telegram message becomes a Claude Code session that does real work?

The answer turned out to be surprisingly straightforward. The pieces:

  • Telegram Bot API — receives messages, sends responses
  • n8n — the orchestration layer, receives webhooks and routes them
  • A local webhook listener — a small Python HTTP server that invokes Claude Code
  • Claude Code CLI — the actual brain, with full filesystem and shell access

2. The Architecture

Here's how a message flows from my phone to executed code and back:

Telegram Message
    ↓
Telegram Bot API (webhook)
    ↓
n8n Workflow "Khadam Telegram Chat"
    ↓
Local Webhook Listener (Python, port 5679)
    ↓
claude -p (Claude Code CLI)
    ↓
Response sent back via Telegram API

Each layer has a single responsibility. Telegram handles the chat interface. n8n handles webhook reception, validation, and routing. The webhook listener translates HTTP requests into CLI invocations. Claude Code does the actual work.


3. Setting Up the Telegram Bot

The Telegram side is the simplest part. Create a bot via @BotFather, get the token, and register a webhook URL pointing to your n8n instance:

curl -X POST "https://api.telegram.org/bot<TOKEN>/setWebhook" \
  -d "url=https://n8n.yourdomain.com/webhook/telegram-bot"

Every message sent to the bot now hits your n8n webhook. The payload includes the chat ID, message text, and sender info — everything you need to authenticate and route.
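For reference, a trimmed update payload looks roughly like this. The field names follow the Telegram Bot API's Update object; the values here are made up:

```python
# Illustrative Telegram update payload (values are invented);
# field names follow the Bot API "Update" object.
update = {
    "update_id": 123456789,
    "message": {
        "message_id": 42,
        "from": {"id": 1111111, "first_name": "Nur", "is_bot": False},
        "chat": {"id": 1111111, "type": "private"},
        "date": 1700000000,
        "text": "check the OKHalal logs",
    },
}

# The three fields this pipeline cares about:
chat_id = update["message"]["chat"]["id"]        # the security gate
text = update["message"]["text"]                 # forwarded to the listener
sender = update["message"]["from"]["first_name"]
```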


4. The n8n Workflow

The n8n workflow "Khadam Telegram Chat" is intentionally minimal. It does three things:

  1. Receive — a Webhook node listens at /webhook/telegram-bot
  2. Validate — check that the chat_id matches my personal Telegram (security gate — only I can trigger it)
  3. Forward — POST the message text to the local webhook listener on port 5679

n8n runs in Docker on the same server, so the webhook listener is reachable at host.docker.internal:5679. The Docker container is configured with --add-host=host.docker.internal:host-gateway to make this work.

One important gotcha: n8n imported workflows need both active=1 AND a published version record to actually activate. Setting active in the database alone isn't enough — you also need an entry in the workflow_published_version table with a matching activeVersionId. This one cost me an hour the first time.
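For the record, the database-side activation looks roughly like the sketch below. The schema varies between n8n versions, and any table or column name not mentioned above (including workflow_entity) is my assumption — check your own schema before touching the database:

```python
# Hedged sketch: activating an imported workflow directly in n8n's
# SQLite database. Names beyond workflow_published_version and
# activeVersionId (e.g. workflow_entity) are assumptions about the
# schema -- verify against your n8n version first.
import sqlite3

def activate_workflow(db_path: str, workflow_id: str, version_id: str) -> None:
    conn = sqlite3.connect(db_path)
    with conn:
        # Step 1: the flag everyone sets first
        conn.execute(
            "UPDATE workflow_entity SET active = 1 WHERE id = ?",
            (workflow_id,),
        )
        # Step 2: the record the UI creates for you -- without it,
        # the workflow stays inert despite active = 1
        conn.execute(
            "INSERT INTO workflow_published_version "
            "(workflowId, activeVersionId) VALUES (?, ?)",
            (workflow_id, version_id),
        )
    conn.close()
```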


5. The Webhook Listener

The webhook listener is a lightweight Python HTTP server running as a systemd service. It receives the forwarded message from n8n and invokes Claude Code via the CLI:

# Simplified version of the core logic
import subprocess, json
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        message = body.get("message", "")

        # Invoke Claude Code with the message as the prompt
        try:
            result = subprocess.run(
                ["claude", "-p", message],
                capture_output=True, text=True,
                stdin=subprocess.DEVNULL,  # prevents the stdin hang
                timeout=300,
            )
            response = result.stdout.strip()
            status = 200
        except subprocess.TimeoutExpired:
            # Without this, a slow session would crash the handler
            response = "Claude Code timed out after 300 seconds."
            status = 504

        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({
            "response": response
        }).encode())

HTTPServer(("0.0.0.0", 5679), Handler).serve_forever()

A few details that matter:

  • stdin=subprocess.DEVNULL — Without this, claude -p hangs waiting for stdin when invoked from a subprocess. This was a painful discovery.
  • Timeout of 300 seconds — Claude Code can take a while on complex tasks. Five minutes is generous but prevents runaway sessions.
  • systemd service — The listener restarts automatically on crashes and starts on boot. No manual babysitting.
  • UFW rules — Port 5679 is only open to Docker's internal network ranges (172.16.0.0/12 and the n8n Docker network). Not exposed to the internet.
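The systemd unit is nothing exotic. A minimal sketch — the paths and user here are placeholders, not the actual service file:

```ini
# /etc/systemd/system/khadam-listener.service -- illustrative paths/user
[Unit]
Description=Khadam webhook listener (Telegram -> Claude Code)
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/nur/khadam/listener.py
Restart=always
RestartSec=5
User=nur

[Install]
WantedBy=multi-user.target
```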

6. Claude Code: The Brain

The claude -p flag runs Claude Code non-interactively: it takes a single prompt, executes it with full tool access (file read/write, bash, git, etc.), and prints the result. Combined with --dangerously-skip-permissions, it operates fully autonomously, without interactive approval.

Claude Code runs with a system prompt (via CLAUDE.md) that gives it context about who it is, what projects exist on the server, and how to behave:

  • It knows the project layout: ~/projects/okhalal, ~/projects/portfolio-v2, etc.
  • It has persistent memory across sessions via the memory system described in a previous post
  • It can read and edit files, run shell commands, manage Docker containers, interact with git
  • It sends Telegram notifications for important events (deploy results, error alerts)

When Claude finishes, its response travels back up the chain: webhook listener returns it to n8n, n8n sends it to the Telegram Bot API, and I get the reply on my phone.


7. The Autonomous Issue Pipeline

The chat interface was the first use case, but the same architecture powers something more ambitious: an autonomous bug-fixing pipeline.

A second n8n workflow, "OKHalal Issue Watcher," polls GitHub every five minutes for new issues on our main project. When it finds one:

  1. Telegram notification: "New issue detected, Claude working on it..."
  2. A checker script pulls the issue details from the GitHub API
  3. A fixer script invokes claude -p with the full issue context
  4. Claude reads the codebase, diagnoses the problem, writes the fix
  5. Creates a branch (fix/issue-N), commits, and pushes to origin
  6. Telegram notification: "Fix pushed to fix/issue-42, please review and deploy"

Deployment remains manual — intentionally. The pipeline handles detection, diagnosis, and fix. A human reviews and deploys. This boundary exists because a bad deploy to production is not something you want automated without guardrails.
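The fixer step is essentially prompt assembly plus the same subprocess pattern as the chat listener. A hedged sketch — function names and prompt wording are mine, not the actual scripts:

```python
# Hedged sketch of the fixer step. Function names and prompt wording
# are illustrative, not the actual scripts on the server.
import subprocess

def build_fix_prompt(number: int, title: str, body: str) -> str:
    branch = f"fix/issue-{number}"
    return (
        f"GitHub issue #{number}: {title}\n\n"
        f"{body}\n\n"
        f"Diagnose the problem in the codebase, write a fix, then "
        f"create branch {branch}, commit, and push to origin. "
        f"Do not deploy."
    )

def run_fixer(number: int, title: str, body: str) -> str:
    result = subprocess.run(
        ["claude", "-p", build_fix_prompt(number, title, body)],
        capture_output=True, text=True,
        stdin=subprocess.DEVNULL,  # same stdin trap as the chat listener
        timeout=1800,  # issue fixes get more headroom than chat replies
    )
    return result.stdout.strip()
```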


8. What I Learned Building This

The stdin trap

claude -p expects a terminal. When invoked from a Python subprocess, it blocks waiting for stdin unless you explicitly pass stdin=subprocess.DEVNULL or pipe in an empty string. This isn't documented anywhere obvious, and it manifests as the process simply hanging with no error output.

n8n workflow activation is two steps

If you import an n8n workflow via the API or database and set active=1, it won't actually run. You also need a published version record. The UI handles this transparently, but direct database manipulation misses it. This is the kind of thing you only learn by staring at the SQLite schema wondering why your cron isn't firing.

Docker networking needs explicit host access

n8n runs in Docker but needs to reach a service on the host (the webhook listener). The --add-host=host.docker.internal:host-gateway flag solves this, but you also need UFW rules that allow Docker's network ranges to reach the listener port. Without both, the connection silently fails.

Telegram has a 4096-character message limit

Claude Code responses can be long. Telegram truncates messages over 4096 characters. The solution is to split long responses or summarize — the system prompt instructs Claude to keep responses concise for Telegram, and to summarize long outputs rather than dumping them raw.
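When a long response genuinely needs to go through rather than be summarized, splitting it is only a few lines. A sketch that breaks on newlines where possible so code blocks stay readable:

```python
# Split a long reply into Telegram-sized chunks. 4096 characters is
# the Bot API's per-message limit.
TELEGRAM_LIMIT = 4096

def split_for_telegram(text: str, limit: int = TELEGRAM_LIMIT) -> list[str]:
    chunks = []
    while len(text) > limit:
        # Prefer breaking on a newline so code blocks stay readable
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```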


9. Security Considerations

Running an autonomous code agent that accepts instructions from a chat app requires thinking about security:

  • Chat ID validation — Only messages from my specific Telegram chat ID are processed. Everyone else is ignored at the n8n layer.
  • Network isolation — The webhook listener only accepts connections from Docker's internal network, not from the internet.
  • No auto-deploy — The pipeline pushes code but never deploys. A human reviews and merges.
  • Cloudflare Tunnel — n8n's webhook endpoint is behind Cloudflare, adding DDoS protection and TLS termination without exposing the server's IP.
  • Limited blast radius — The server is a home machine running personal projects. The worst case is a broken side project, not a production outage at a company.

10. What It Looks Like in Practice

Here's what a typical interaction looks like. I'm on my phone, waiting for coffee:

Me:  check if okhalal tests are passing

Bot: All 460 tests passing. 2 skipped (MyInvois sandbox
     rate-limited). No failures. Last run: 4 seconds.

Me:  write a new blog post about this pipeline

Bot: Blog post created at blog/telegram-n8n-claude-code-pipeline.html
     and added to the blog listing. Ready for review.

That second message — the one asking for this blog post — is real. This post was generated by the pipeline it describes. Claude Code read the existing blog structure, matched the template, wrote the HTML, updated the listing page, and reported back. All from a chat message.


11. The Stack

For anyone who wants to replicate this, here's the full component list:

  • Telegram Bot — created via @BotFather, webhook mode
  • n8n — self-hosted in Docker, webhook trigger workflows
  • Webhook listener — Python HTTP server, systemd-managed, port 5679
  • Claude Code — Anthropic's CLI tool, running in prompt mode (claude -p)
  • Cloudflare Tunnel — exposes n8n's webhook endpoint without opening ports
  • UFW — firewall rules scoped to Docker network ranges only
  • systemd — keeps the webhook listener alive

Total infrastructure cost: $0/month (running on a repurposed desktop). The only paid component is the Claude API usage, which is minimal for the volume of messages I send.


12. What's Next

The pipeline is stable and I use it daily. A few things I'm considering:

  • Image support — Telegram supports sending images. Claude Code is multimodal. Sending a screenshot of a bug and asking "fix this" should be possible.
  • Multi-turn conversations — Currently each message is a standalone prompt. Adding session continuity would let me have back-and-forth debugging conversations from Telegram.
  • Approval workflows — For destructive actions (deleting files, force-pushing), having Claude ask for confirmation before proceeding rather than acting immediately.

The broader point is this: the gap between "I had an idea" and "it's done" has collapsed. I can be on the train, think of a fix, describe it in natural language, and have working code pushed to a branch before I reach my stop. That's the real value of connecting a chat app to an AI coding agent — not replacing engineering judgment, but removing the friction between intention and execution.

Questions or want to set this up yourself? Reach out via the contact section.