
Thursday, April 21, 2022

Kubernetes pod stuck in ContainerCreating

I am trying to deploy this docker-compose app on GCP Kubernetes.

version: "3.5"
x-environment:
&default-back-environment
# Database settings
POSTGRES_DB: taiga
POSTGRES_USER: taiga
POSTGRES_PASSWORD: taiga
POSTGRES_HOST: taiga-db
# Taiga settings
TAIGA_SECRET_KEY: "taiga-back-secret-key"
TAIGA_SITES_SCHEME: "http"
TAIGA_SITES_DOMAIN: "localhost:9000"
TAIGA_SUBPATH: "" # "" or "/subpath"
# Email settings. Uncomment following lines and configure your SMTP server
# EMAIL_BACKEND: "django.core.mail.backends.smtp.EmailBackend"
# DEFAULT_FROM_EMAIL: "no-reply@example.com"
# EMAIL_USE_TLS: "False"
# EMAIL_USE_SSL: "False"
# EMAIL_HOST: "smtp.host.example.com"
# EMAIL_PORT: 587
# EMAIL_HOST_USER: "user"
# EMAIL_HOST_PASSWORD: "password"
# Rabbitmq settings
# Should be the same as in taiga-async-rabbitmq and taiga-events-rabbitmq
RABBITMQ_USER: taiga
RABBITMQ_PASS: taiga
# Telemetry settings
ENABLE_TELEMETRY: "True"
x-volumes:
&default-back-volumes
- taiga-static-data:/taiga-back/static
- taiga-media-data:/taiga-back/media
# -./config.py:/taiga-back/settings/config.py
services:
taiga-db:
image: postgres:12.3
environment:
POSTGRES_DB: taiga
POSTGRES_USER: taiga
POSTGRES_PASSWORD: taiga
volumes:
- taiga-db-data:/var/lib/postgresql/data
networks:
- taiga
taiga-back:
image: taigaio/taiga-back:latest
environment: *default-back-environment
volumes: *default-back-volumes
networks:
- taiga
depends_on:
- taiga-db
- taiga-events-rabbitmq
- taiga-async-rabbitmq
taiga-async:
image: taigaio/taiga-back:latest
entrypoint: ["/taiga-back/docker/async_entrypoint.sh"]
environment: *default-back-environment
volumes: *default-back-volumes
networks:
- taiga
depends_on:
- taiga-db
- taiga-back
- taiga-async-rabbitmq
taiga-async-rabbitmq:
image: rabbitmq:3.8-management-alpine
environment:
RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
RABBITMQ_DEFAULT_USER: taiga
RABBITMQ_DEFAULT_PASS: taiga
RABBITMQ_DEFAULT_VHOST: taiga
volumes:
- taiga-async-rabbitmq-data:/var/lib/rabbitmq
networks:
- taiga
taiga-front:
image: taigaio/taiga-front:latest
environment:
TAIGA_URL: "http://localhost:9000"
TAIGA_WEBSOCKETS_URL: "ws://localhost:9000"
TAIGA_SUBPATH: "" # "" or "/subpath"
networks:
- taiga
# volumes:
# -./conf.json:/usr/share/nginx/html/conf.json
taiga-events:
image: taigaio/taiga-events:latest
environment:
RABBITMQ_USER: taiga
RABBITMQ_PASS: taiga
TAIGA_SECRET_KEY: "taiga-back-secret-key"
networks:
- taiga
depends_on:
- taiga-events-rabbitmq
taiga-events-rabbitmq:
image: rabbitmq:3.8-management-alpine
environment:
RABBITMQ_ERLANG_COOKIE: secret-erlang-cookie
RABBITMQ_DEFAULT_USER: taiga
RABBITMQ_DEFAULT_PASS: taiga
RABBITMQ_DEFAULT_VHOST: taiga
volumes:
- taiga-events-rabbitmq-data:/var/lib/rabbitmq
networks:
- taiga
taiga-protected:
image: taigaio/taiga-protected:latest
environment:
MAX_AGE: 360
SECRET_KEY: "taiga-back-secret-key"
networks:
- taiga
taiga-gateway:
image: nginx:1.19-alpine
ports:
- "9000:80"
volumes:
-./taiga-gateway/taiga.conf:/etc/nginx/conf.d/default.conf
- taiga-static-data:/taiga/static
- taiga-media-data:/taiga/media
networks:
- taiga
depends_on:
- taiga-front
- taiga-back
- taiga-events
volumes:
taiga-static-data:
taiga-media-data:
taiga-db-data:
taiga-async-rabbitmq-data:
taiga-events-rabbitmq-data:
networks:
taiga:

I used Kompose to generate my Kubernetes deployment files. All pods are running except two. However, they show no error other than this one.

Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data taiga-media-data]: timed out waiting for the condition
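
For completeness, the manifests were generated and applied roughly as follows (the convert command is the one recorded in the kompose annotations; the apply step is only a sketch and may differ from what was actually run):

# Convert the compose file into Kubernetes manifests
# (command taken from the kompose.cmd annotation in the generated files)
kompose convert -f docker-compose.yml

# Apply the generated Deployments/Services/PVCs to the cluster
# (sketch only; the exact file list and namespace may differ)
kubectl apply -f .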

Pod status:

NAME                                     READY   STATUS              RESTARTS   AGE
taiga-async-6c7d9dbd7b-btv79             1/1     Running             19         16h
taiga-async-rabbitmq-86979cf759-lvj2m    1/1     Running             0          16h
taiga-back-7bc574768d-hst2v              0/1     ContainerCreating   0          6m34s
taiga-db-59b554854-qdb65                 1/1     Running             0          16h
taiga-events-74f494df97-8rpjd            1/1     Running             0          16h
taiga-events-rabbitmq-7f558ddf88-wc2js   1/1     Running             0          16h
taiga-front-6f66c475df-8cmf6             1/1     Running             0          16h
taiga-gateway-77976dc77-w5hp4            0/1     ContainerCreating   0          3m6s
taiga-protected-7794949d49-crgbt         1/1     Running             0          16h

It is a problem with mounting the volumes, I am sure of that, because an earlier error showed that taiga-back and taiga-db share a volume.

This is the Kompose-generated file I have.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.26.1 (a9d05d509)
  creationTimestamp: null
  labels:
    io.kompose.service: taiga-gateway
  name: taiga-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: taiga-gateway
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml
        kompose.version: 1.26.1 (a9d05d509)
      creationTimestamp: null
      labels:
        io.kompose.network/taiga: "true"
        io.kompose.service: taiga-gateway
    spec:
      containers:
        - image: nginx:1.19-alpine
          name: taiga-gateway
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: taiga-gateway-claim0
            - mountPath: /taiga/static
              name: taiga-static-data
            - mountPath: /taiga/media
              name: taiga-media-data
      restartPolicy: Always
      volumes:
        - name: taiga-gateway-claim0
          persistentVolumeClaim:
            claimName: taiga-gateway-claim0
        - name: taiga-static-data
          persistentVolumeClaim:
            claimName: taiga-static-data
        - name: taiga-media-data
          persistentVolumeClaim:
            claimName: taiga-media-data
status: {}

If I can fix one of them, maybe I can also figure out the other pod. This is the application:
https://github.com/kaleidos-ventures/taiga-docker. Any hint is welcome. kubectl describe pod output:

Name:         taiga-gateway-77976dc77-w5hp4
Namespace:    default
Priority:     0
Node:         gke-taiga-cluster-default-pool-9e5ed1f4-0hln/10.128.0.18
Start Time:   Wed, 13 Apr 2022 05:32:10 +0000
Labels:       io.kompose.network/taiga=true
              io.kompose.service=taiga-gateway
              pod-template-hash=77976dc77
Annotations:  kompose.cmd: kompose convert -f docker-compose.yml
              kompose.version: 1.26.1 (a9d05d509)
Status:       Pending
IP:
IPs:          <none>
Controlled By:  ReplicaSet/taiga-gateway-77976dc77
Containers:
  taiga-gateway:
    Container ID:
    Image:          nginx:1.19-alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d/default.conf from taiga-gateway-claim0 (rw)
      /taiga/media from taiga-media-data (rw)
      /taiga/static from taiga-static-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c74v (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  taiga-gateway-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-gateway-claim0
    ReadOnly:   false
  taiga-static-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-static-data
    ReadOnly:   false
  taiga-media-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  taiga-media-data
    ReadOnly:   false
  kube-api-access-9c74v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    16m                  default-scheduler  Successfully assigned default/taiga-gateway-77976dc77-w5hp4 to gke-taiga-cluster-default-pool-9e5ed1f4-0hln
  Warning  FailedMount  5m49s (x4 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[taiga-gateway-claim0 taiga-static-data taiga-media-data kube-api-access-9c74v]: timed out waiting for the condition
  Warning  FailedMount  81s (x3 over 10m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[taiga-static-data taiga-media-data], unattached volumes=[kube-api-access-9c74v taiga-gateway-claim0 taiga-static-data taiga-media-data]: timed out waiting for the condition


Solution to the problem

Most likely your PVCs are not configured correctly: the container is trying to mount the claim, but the claim is not bound to a PV.
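
A quick way to confirm this is to look at the claims the gateway pod uses; a minimal check, assuming the default namespace and that GKE's default StorageClass is expected to provision the volumes:

# Check whether the claims are bound: STATUS should be "Bound".
# A claim stuck in "Pending" has no PV backing it.
kubectl get pvc taiga-gateway-claim0 taiga-static-data taiga-media-data

# The events on a Pending claim usually explain why it cannot bind
# (no default StorageClass, no matching PV, unsupported access mode, ...)
kubectl describe pvc taiga-static-data

# Verify the cluster has a (default) StorageClass that can provision volumes
kubectl get storageclass

Once every claim shows Bound, the kubelet should be able to retry the mount; deleting the stuck pods forces the Deployment to recreate them and retry sooner.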

