Claude Code agents, native to Kubernetes

A Rust controller built on kube-rs that turns AI agent runs into first-class Kubernetes resources. One custom resource per agent run — the operator handles scheduling, RBAC, secrets, persistence, and lifecycle.

Apache-2.0 · Built with Rust
agentjob.yaml
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: AgentJob
metadata:
  name: pr-fixer
  namespace: agents
spec:
  schedule: "0 */6 * * *"
  timezone: Europe/Paris
  runtimeRef: claude-code
  authRef:    claude-oauth
  roleRef:    default
  repositoryRefs: [thurkube]
  prompt: "Fix any failing PRs on this repository."
  persist: true

Features

Eight CRDs, one binary, the full Kubernetes contract

AgentJob orchestration

One AgentJob resource defines a one-shot or scheduled agent run. The controller emits the matching Job or CronJob and owns its lifecycle end-to-end.

🧩

Composable references

Runtimes, auth, roles, skills, MCP servers, repositories, and RBAC are separate CRDs. Reference them by name and reuse them across many jobs.

🔐

Secrets & RBAC

Auth tokens flow from Secret keys into the right env var. ClusterAccess materializes a ServiceAccount, ClusterRole, and ClusterRoleBinding scoped to the job.
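As an illustration, a read-only ClusterAccess could look like the sketch below. The spec.rules shape is an assumption modeled on standard ClusterRole rules — check the installed CRD schema for the real field names:

```yaml
# Illustrative only — field names under spec are assumed, not from the CRD schema
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: ClusterAccess
metadata: { name: read-only }
spec:
  rules:
    - apiGroups: [""]
      resources: [pods, configmaps]
      verbs: [get, list, watch]
```

An AgentJob would then pick this up via roleRef, and the controller materializes the ServiceAccount, ClusterRole, and ClusterRoleBinding from it.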

💾

Optional persistence

Set persist: true and the controller provisions a PVC with sensible defaults — mounted at the runtime's declared persistPath.

🔁

Drift detection

A configHash in .status tracks the rendered ConfigMap. Spec or reference changes redeploy the underlying Job without manual cleanup.
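For illustration, the status block might look like the fragment below; the layout is inferred from the description above and the printer columns shown later, not copied from the actual schema:

```yaml
# Hypothetical .status shape — field names beyond configHash and phase are assumed
status:
  phase: Running
  configHash: "9f2c…"   # hash of the rendered ConfigMap; a change triggers a redeploy
```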

🔌

MCP servers & skills

Bind McpServer resources (local command or remote url) and reusable AgentSkill repositories. The controller assembles them into the agent's runtime config.
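A sketch of the two McpServer flavors mentioned above. Only command and url come from the text; the args field, names, and endpoint are illustrative assumptions:

```yaml
# Local-command variant — args field is an assumption
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: McpServer
metadata: { name: filesystem }
spec:
  command: npx
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
---
# Remote-url variant — endpoint is a placeholder
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: McpServer
metadata: { name: issues }
spec:
  url: https://mcp.example.com/sse
```

An AgentJob would reference these by name, the same way it references its runtime and auth.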

🛡

Locked-down pods

The controller runs as runAsNonRoot with readOnlyRootFilesystem, dropped capabilities, and the RuntimeDefault seccomp profile.
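For reference, those settings correspond to a container-level securityContext like the one below (standard Kubernetes fields; the controller's exact manifest may differ):

```yaml
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```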

🏗

kube-rs powered

Single Rust binary with a tokio runtime, server-side apply via field manager thurkube, and standard /healthz and /readyz probes.

📦

Helm-installable

Ship the chart from charts/thurkube or pull the OCI artifact from GHCR. CRDs install on chart apply (toggle with crds.install).

How It Works

Four steps from cluster to running agent

1

Install

Deploy the controller and CRDs via Helm.

bash
$ helm install thurkube oci://ghcr.io/thurbeen/charts/thurkube \
    --namespace thurkube-system --create-namespace
2

Define a runtime + auth

An AgentRuntime points at your container image. An AgentAuth wires a Secret key into it.

runtime.yaml
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: AgentRuntime
metadata: { name: claude-code }
spec:
  image: ghcr.io/thurbeen/claude-code-job:latest
  authEnvVar: CLAUDE_CODE_OAUTH_TOKEN
  configPath: /etc/claude-code-job
  persistPath: /var/lib/claude-code-job
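Step 2 mentions an AgentAuth but only shows the runtime. A plausible shape is sketched below; the secretRef field names are assumptions, not taken from the CRD:

```yaml
# Illustrative AgentAuth — secretRef layout is assumed
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: AgentAuth
metadata: { name: claude-oauth }
spec:
  secretRef:
    name: claude-oauth-token   # existing Secret in the same namespace
    key: token                 # value is injected into the runtime's authEnvVar
```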
3

Create an AgentJob

Reference your runtime, auth, role, and any optional repositories or MCP servers.

job.yaml
apiVersion: thurkube.thurbeen.eu/v1alpha1
kind: AgentJob
metadata: { name: summarize }
spec:
  runtimeRef: claude-code
  authRef:    claude-oauth
  roleRef:    read-only
  prompt: "Summarize today's incoming issues."
4

Watch it reconcile

The controller emits a Job or CronJob, wires up the ConfigMap, ServiceAccount, and PVC, and reports phase via printer columns.

bash
$ kubectl get aj -n agents
NAME        SCHEDULE      SUSPENDED  PHASE      LAST RUN
summarize   <none>        false      Succeeded  2m
pr-fixer    0 */6 * * *   false      Running    14s

Installation

Helm is the supported path

bash
$ helm install thurkube oci://ghcr.io/thurbeen/charts/thurkube \
    --namespace thurkube-system --create-namespace

Installs the eight CRDs and the controller Deployment with locked-down pod security. Override anything via --set or --values.

CRDs only

docker run --rm ghcr.io/thurbeen/thurkube:latest --crd | kubectl apply -f -

Pin a version

--set image.tag=v0.1.0

From source

cargo build --release

Prerequisites

  • Kubernetes 1.28+ — cluster to run the controller
  • Helm 3.16+ — chart installation
  • kubectl — for direct CRD installs
  • Rust 1.75+ — only for building from source (optional)
  • k3d or kind — running the E2E suite locally (optional)