Building a Slack Bot without a Server

How to let your AI agents post to Slack using a secure, serverless Dwizi tool.

Dwizi Team

Editorial

Imagine hiring a brilliant new team member. They are incredibly smart, have read every book in the library, and can analyze complex data in seconds. But there’s a catch: they sit in a soundproof room. They can’t talk to the rest of the team. If you want to know what they think, you have to walk into their room, ask a question, and wait for an answer.

This is the current state of most AI agents.

We build these incredible systems in ChatGPT, Claude, or custom dashboards, but they remain siloed. They generate insights—"Server usage is up 15%", "This lead looks promising", "Deploy failed"—but those insights die in the chat interface. They don't reach the place where your team actually works: Slack.

The Friction of "Just" Sending a Message

If you're a developer, you know that "just" sending a Slack message is rarely just sending a message. To give your AI a voice, you typically have to:

  1. Create a Slack App in their complex dashboard.
  2. Spin up a server (is it an Express app? A Python script? A Lambda function?).
  3. Configure webhooks and event listeners.
  4. Manage authentication tokens.
  5. Pay for hosting and keep it running 24/7, even if you only send one message a day.

It’s a massive amount of infrastructure overhead for a simple act of communication. This friction is why most agents remain silent observers rather than active participants.

The Solution: Tools as URLs

At Dwizi, we believe infrastructure should be invisible. You shouldn't have to become a DevOps engineer just to let your agent say "Hello."

We treat tools as ephemeral, secure endpoints. You write the logic—"Take a message and send it to this channel"—and we handle the rest. There is no server to manage, no "cold starts" to worry about, and certainly no monthly bill for an idle VM.

Let's build a tool called post_to_slack. It will serve as the vocal cords for your AI agent.

The Implementation: A Deep Dive

We are going to write a simple TypeScript function. But don't let the simplicity fool you; under the hood, this is a fully isolated, secure micro-application.

/**
 * Posts a message to a Slack channel.
 * 
 * Description for LLM: "Use this tool to post updates, summaries, or alerts to the team's Slack channel."
 */

// 1. Define the Contract (The Input Schema)
// This is where the magic happens. We don't write complex JSON schemas.
// We just write TypeScript. Dwizi reads this and tells the LLM:
// "I need a channel name and a message text. If it's urgent, tell me."
type Input = {
  channel: string;
  message: string;
  severity?: "info" | "alert";
};

export default async function postToSlack(args: Input) {
  // 2. The Vault (Secure Secret Access)
  // Never, ever put your API keys in your code. If you commit a key to GitHub,
  // it gets scraped in seconds. Dwizi injects secrets at runtime into a 
  // secure environment variable. It's safe, encrypted, and invisible to the outside world.
  const token = Deno.env.get("SLACK_BOT_TOKEN");
  if (!token) throw new Error("Missing SLACK_BOT_TOKEN secret");

  const { channel, message, severity = "info" } = args;

  // 3. The Logic (Formatting the Voice)
  // Your agent might be dry, but your Slack bot doesn't have to be.
  // Here we add a visual cue based on severity. The LLM decides if it's an alert,
  // but *we* decide how an alert looks. This gives you control over the brand.
  const icon = severity === "alert" ? "🔴" : "🟢";
  const text = `${icon} *New Agent Report*\n${message}`;

  // 4. The Action (Calling the API)
  // We use the standard Fetch API. No heavy SDKs, no 'npm install slack-client'.
  // This keeps the tool lightweight and incredibly fast.
  const response = await fetch("https://slack.com/api/chat.postMessage", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ channel, text }),
  });

  // Guard against non-200 responses (e.g. rate limiting) before parsing.
  if (!response.ok) {
    return { success: false, error: `HTTP ${response.status}` };
  }

  const result = await response.json();

  // 5. The Feedback Loop (Return Values)
  // An agent is only as good as the feedback it gets. If the message fails,
  // we return the error so the agent knows. It might try again, or it might
  // tell the user "I couldn't reach Slack."
  if (!result.ok) {
    return { success: false, error: result.error };
  }
  return { success: true, ts: result.ts };
}

How the Magic Happens

When you save this code in Dwizi, three things happen instantly:

  1. Inference: Our engine parses your Input type. It understands that severity is optional and can only be "info" or "alert". It creates a rigid contract that the LLM must follow.
  2. Deployment: The code is packaged into a secure, immutable bundle. It's not running on a server yet; it's waiting.
  3. Routing: We generate a unique URL for this tool. You can plug this URL into Claude Desktop, your own agent framework, or any MCP-compliant client.
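For context, here is roughly what a schema inferred from the Input type above could look like. This is an illustrative sketch in standard JSON Schema terms; the exact shape Dwizi's engine emits may differ:

```typescript
// A sketch of the JSON Schema contract an engine could derive from
// the Input type. Illustrative only -- not Dwizi's exact output.
const inferredSchema = {
  type: "object",
  properties: {
    channel: { type: "string" },
    message: { type: "string" },
    severity: { type: "string", enum: ["info", "alert"] },
  },
  // severity is optional, so it is absent from "required"
  required: ["channel", "message"],
} as const;

console.log(JSON.stringify(inferredSchema.required));
```

This is the "rigid contract" in practice: the LLM can omit severity, but if it supplies one, anything other than "info" or "alert" is rejected before your code ever runs.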

The Execution Story

Imagine your agent is monitoring your server logs. It notices a spike in error rates.

Without this tool, the agent would just print: "I see a spike." And you would only see it if you happened to be looking at the dashboard.

With this tool, the story changes:

  1. Agent: "This looks bad. I need to alert the team."
  2. Decision: "I will use post_to_slack with severity: 'alert'."
  3. Action: The agent constructs the JSON payload.
    {
      "channel": "#engineering",
      "message": "Error rate spiked to 5% in the last 10 minutes.",
      "severity": "alert"
    }
    
  4. Result: Your #engineering channel lights up with a "🔴 New Agent Report".

Your team reacts immediately. The agent isn't just a chatbot anymore; it's a vigilant sentry that knows exactly how to reach you.
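If you want to exercise the tool yourself, outside of an agent, you can POST the same payload to the tool's URL. The URL and the isValidPayload helper below are placeholders for illustration; your real URL comes from the Dwizi dashboard:

```typescript
// Hypothetical tool URL -- substitute the one Dwizi generates for you.
const TOOL_URL = "https://example.dwizi.invalid/tools/post_to_slack";

// The same payload the agent constructs in step 3 above.
const payload = {
  channel: "#engineering",
  message: "Error rate spiked to 5% in the last 10 minutes.",
  severity: "alert" as const,
};

// A quick local sanity check mirroring the tool's contract:
// channel and message are required; severity, if present,
// must be "info" or "alert".
function isValidPayload(p: { channel: string; message: string; severity?: string }): boolean {
  return p.channel.length > 0 &&
    p.message.length > 0 &&
    (p.severity === undefined || ["info", "alert"].includes(p.severity));
}

if (isValidPayload(payload)) {
  // In a real run, you would send it:
  // await fetch(TOOL_URL, {
  //   method: "POST",
  //   headers: { "Content-Type": "application/json" },
  //   body: JSON.stringify(payload),
  // });
  console.log("payload ok");
}
```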
