
Chat with your Kubernetes Cluster

Photo by Steve Johnson / Unsplash

Hey there! Ever wished you could just chat with your Kubernetes cluster instead of wrestling with endless commands? Well, guess what? Now you can, thanks to a bit of magic from OpenAI's GPT.

In this post, I'm gonna show you a super simple Python script that lets you do just that. Whether you're a coding newbie or a seasoned pro, you'll see how easy it is to make your Kubernetes cluster understand plain English.

So, grab your favorite snack, and let's get this coding party started. It's gonna be fun, I promise!


Here is a diagram illustrating the workflow of the chatbot system, including interactions between the User, Chatbot, OpenAI API, and the Kubernetes cluster:

  • User to Chatbot: The user inputs a command or query.
  • Chatbot to OpenAI API: The chatbot sends the query to the OpenAI API.
  • Conditional Interaction with Kubernetes:
    • If the OpenAI API requests the execution of a Kubernetes command, the chatbot executes it on the Kubernetes System and returns the output back to the OpenAI API.
    • If no Kubernetes command is needed, the OpenAI API generates a response based on AI.
  • Final Response to User: The OpenAI API sends the final response to the chatbot, which then displays it to the user.

Environment Set Up

Before we dive into the tutorial, ensure you have Python 3 installed and accessible in your terminal. We'll begin by creating a virtual environment and installing two essential libraries: openai and colorama. This setup keeps the project's dependencies isolated and the development experience streamlined.

mkdir k8s-chat
cd k8s-chat

# set up env
python3 -m venv .venv
source .venv/bin/activate

# install libraries
pip install openai
pip install colorama

# create the python file
# for our code
touch chat_with_k8s.py


Let's start by writing a simple Python function that can execute arbitrary kubectl commands:

Executing Kubectl commands:

import subprocess

def execute_kubectl_cmd(cmd):
    # add the kubectl prefix if it's not already there
    if not cmd.startswith("kubectl"):
        cmd = f"kubectl {cmd}"

    # guard against destructive operations
    if cmd.startswith("kubectl delete"):
        return "I'm sorry, deleting resources is disabled."

    try:
        output = subprocess.check_output(cmd, shell=True)
        return output.decode("utf-8")
    except subprocess.CalledProcessError as e:
        return f"Error executing kubectl command: {e}"

This function is a utility for safely executing Kubernetes commands from a Python script. It ensures that commands are correctly formatted, prevents potentially destructive delete operations, and provides feedback on the success or failure of command execution.
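To see the delete guard in action without touching a real cluster, you can call the function with a destructive command: the guard short-circuits before any shell command is ever run. (The snippet below repeats the function so it can be run standalone.)

```python
import subprocess

def execute_kubectl_cmd(cmd):
    # add the kubectl prefix if it's not already there
    if not cmd.startswith("kubectl"):
        cmd = f"kubectl {cmd}"

    # guard against destructive operations
    if cmd.startswith("kubectl delete"):
        return "I'm sorry, deleting resources is disabled."

    try:
        output = subprocess.check_output(cmd, shell=True)
        return output.decode("utf-8")
    except subprocess.CalledProcessError as e:
        return f"Error executing kubectl command: {e}"

# The refusal is returned before subprocess is ever invoked:
print(execute_kubectl_cmd("delete pod my-pod"))
```

Note that the prefix check also means the model can pass either "get pods" or "kubectl get pods" and both will work.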

Tool Definition:

Next, let's define the tool, following the OpenAI function calling documentation:

import json
import subprocess

from colorama import Back, init, Fore
from openai import OpenAI

client = OpenAI()
model_name = "gpt-3.5-turbo-0125"

tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_kubectl_cmd",
            "description": "execute the kubectl command against the current kubernetes cluster",
            "parameters": {
                "type": "object",
                "properties": {
                    "cmd": {
                        "type": "string",
                        "description": "the kubectl command to execute",
                    }
                },
                "required": ["cmd"],
            },
        },
    }
]

The client initializes the OpenAI API, and model_name is simply a variable holding the model we want to use with our system.

A list named tools is defined with a single tool (execute_kubectl_cmd), detailing its purpose and required parameters. This tool is intended to execute Kubernetes commands.

Note that you are not limited to one function when using GPT function calling.
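As an illustration, here is a hypothetical second tool definition (get_pod_logs is an example name, not part of the script above) that could be appended to the same tools list so the model can choose between functions:

```python
# Hypothetical second tool: a dedicated log-fetching function that could
# sit alongside execute_kubectl_cmd in the tools list.
get_pod_logs_tool = {
    "type": "function",
    "function": {
        "name": "get_pod_logs",
        "description": "fetch the logs of a pod in the current kubernetes cluster",
        "parameters": {
            "type": "object",
            "properties": {
                "pod_name": {
                    "type": "string",
                    "description": "the name of the pod to fetch logs from",
                },
                "namespace": {
                    "type": "string",
                    "description": "the namespace the pod lives in",
                },
            },
            "required": ["pod_name"],
        },
    },
}

# Appending it makes it available to the model on the next API call:
# tools.append(get_pod_logs_tool)
print(get_pod_logs_tool["function"]["name"])
```

The model picks whichever tool's description best matches the user's request, so clear descriptions matter more than clever prompts here.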

Processing Chat Input:

This function contains the chat completion logic and the interaction with the OpenAI API:

def chat_completion(user_input):
    messages = [{"role": "user", "content": user_input}]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    if tool_calls:
        available_functions = {
            "execute_kubectl_cmd": execute_kubectl_cmd,
        }
        # append the assistant's tool-call message to the conversation
        messages.append(response_message)
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(**function_args)
            # feed the command output back to the model as a tool message
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
        second_response = client.chat.completions.create(
            model=model_name,
            messages=messages,
        )
        return second_response.choices[0].message.content
    # no tool call needed: return the model's direct answer
    return response_message.content

Below are the steps the function takes.

  • The chat_completion function simulates processing user input through an AI model. It prepares the input message, sends it to the OpenAI client, and retrieves a response.
  • If the response includes a call to the execute_kubectl_cmd tool, it executes the command and appends both the AI's response and the command's output to the conversation.
  • It then sends this extended conversation back to the model for further processing, if necessary, and returns the final message content to be displayed to the user.
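One detail worth highlighting from the steps above: the tool call's arguments arrive from the API as a JSON string, which is why the function runs them through json.loads before passing them on. A quick standalone illustration (the raw_arguments value is a made-up example of what the API might send):

```python
import json

# A tool call's arguments arrive as a JSON string (hypothetical example):
raw_arguments = '{"cmd": "get pods -n default"}'

# json.loads turns it into a dict we can unpack into the Python function
function_args = json.loads(raw_arguments)
print(function_args["cmd"])
```

This is also why `function_to_call(**function_args)` works: the dict's keys match the parameter names declared in the tool's JSON schema.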

Running the Chatbot:

The run_conversation function initiates an interactive chat session. It welcomes the user and enters a loop to accept user input.

def run_conversation():
    print("Welcome to the Kubernetes chatbot!")
    print("You can ask me anything about your Kubernetes cluster.")
    while True:
        print(Back.CYAN + Fore.BLACK + " You: ", end="")
        print(" > ", end="")
        user_input = input()
        if user_input.lower() == "exit" or user_input.lower() == "q":
            break
        resp = chat_completion(user_input)
        print(Back.GREEN + Fore.BLACK + " Assistant: ", end="")
        print(" > ", end="")
        print(resp)

If the user types "exit" or "q", the chatbot ends the session. Otherwise, it processes the user's input through the chat_completion function.

Main Execution

Finally, the script initializes Colorama's auto-reset feature (to prevent color codes from leaking into unrelated terminal output) and starts the conversation loop.

def main():
    # colorama: reset colors after each print so they don't leak
    init(autoreset=True)
    run_conversation()

if __name__ == "__main__":
    main()

To run the code, make sure the OPENAI_API_KEY environment variable is set and that you are running inside the virtual environment created at the beginning of the post.

In order to obtain the OpenAI API key, you need to create an account with OpenAI and add billing credit to it. You can find more on this page.
export OPENAI_API_KEY="xxxxxxxxxx"
python chat_with_k8s.py


We will use Kind to run a local Kubernetes cluster and chat with it.

chat with k8s demo


Conclusion

As this post shows, developing a Python-based chatbot for Kubernetes operations is both feasible and straightforward. Our script leverages essential libraries and concepts such as json for JSON data manipulation, subprocess for running shell commands, and colorama for enhancing terminal output with color.

Utilizing the openai library, we seamlessly interact with the GPT AI model, particularly through function calling, streamlining the process of communicating with functions or APIs. This demonstrates the practicality and efficiency of integrating AI into operational scripts.

For the full code, you can check the Git repository.

And don't forget to sign up to get fresh content delivered to your inbox.