schedule-planner/deploy/setup-kubeconfig.sh
Hera Zhao d3f3b4f37b
Refactor to Vue 3 + FastAPI + SQLite architecture
- Backend: FastAPI + SQLite (WAL mode), 22 tables, ~40 API endpoints
- Frontend: Vue 3 + Vite + Pinia + Vue Router, 8 views, 3 stores
- Database: migrate from JSON file to SQLite with proper schema
- Dockerfile: multi-stage build (node + python)
- Deploy: K8s manifests (namespace, deployment, service, ingress, pvc, backup)
- CI/CD: Gitea Actions (test, deploy, PR preview at pr-$id.planner.oci.euphon.net)
- Tests: 20 Cypress E2E test files, 196 test cases, ~85% coverage
- Doc: test-coverage.md with full feature coverage report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 21:18:22 +00:00
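The WAL-mode migration mentioned in the commit message can be applied once at database-creation time, since the journal mode persists in the database file. A minimal sketch with the `sqlite3` CLI (the file name `planner.db` is illustrative, not taken from the repo):

```shell
# Switch the database to write-ahead logging; the setting is stored
# in the file and applies to all future connections.
sqlite3 planner.db 'PRAGMA journal_mode=WAL;' > /dev/null

# Verify the journal mode.
sqlite3 planner.db 'PRAGMA journal_mode;'   # prints: wal
```

WAL mode is the usual choice for a FastAPI app on SQLite because it allows concurrent readers while a single writer holds the lock.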


#!/bin/bash
# Creates a restricted kubeconfig for the planner namespace only.
# Run on the k8s server as a user with cluster-admin access.
set -euo pipefail
NAMESPACE=planner
SA_NAME=planner-deployer
# The namespace must already exist (it is created by the deploy manifests); fail early if it does not.
kubectl get namespace "${NAMESPACE}" >/dev/null
echo "Creating ServiceAccount, Role, and RoleBinding..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SA_NAME}
  namespace: ${NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ${SA_NAME}-role
  namespace: ${NAMESPACE}
rules:
  - apiGroups: ["", "apps", "networking.k8s.io"]
    resources: ["pods", "services", "deployments", "replicasets", "ingresses", "persistentvolumeclaims", "configmaps", "secrets", "pods/log"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${SA_NAME}-binding
  namespace: ${NAMESPACE}
subjects:
  - kind: ServiceAccount
    name: ${SA_NAME}
    namespace: ${NAMESPACE}
roleRef:
  kind: Role
  name: ${SA_NAME}-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: ${SA_NAME}-token
  namespace: ${NAMESPACE}
  annotations:
    kubernetes.io/service-account.name: ${SA_NAME}
type: kubernetes.io/service-account-token
EOF
echo "Waiting for the token controller to populate the secret..."
until TOKEN=$(kubectl get secret "${SA_NAME}-token" -n "${NAMESPACE}" -o jsonpath='{.data.token}' 2>/dev/null) && [ -n "${TOKEN}" ]; do
  sleep 1
done
TOKEN=$(echo "${TOKEN}" | base64 -d)
# Cluster endpoint and CA from the current (admin) kubeconfig
CLUSTER_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CLUSTER_CA=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
cat > kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CLUSTER_CA}
      server: ${CLUSTER_SERVER}
    name: planner
contexts:
  - context:
      cluster: planner
      namespace: ${NAMESPACE}
      user: ${SA_NAME}
    name: planner
current-context: planner
users:
  - name: ${SA_NAME}
    user:
      token: ${TOKEN}
EOF
echo "Kubeconfig written to ./kubeconfig"
echo "Test with: KUBECONFIG=./kubeconfig kubectl get pods"
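The generated kubeconfig is intended to be consumed by the Gitea Actions deploy workflow. A hedged sketch of the consuming step, assuming the file is base64-encoded and stored as a repository secret (the secret name `KUBECONFIG_B64` and the workflow layout are illustrative, not taken from the repo):

```yaml
# .gitea/workflows/deploy.yaml (fragment, illustrative)
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Write restricted kubeconfig
      run: echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > kubeconfig
    - name: Deploy manifests to the planner namespace
      run: KUBECONFIG=./kubeconfig kubectl apply -f deploy/
```

Because the token is bound to a namespaced Role rather than a ClusterRole, the CI runner can manage workloads only inside `planner`; you can confirm the scope with `KUBECONFIG=./kubeconfig kubectl auth can-i list pods -n kube-system`, which should answer `no`.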