2022-01-26 09:13:07.537507 I | rookcmd: starting Rook v1.8.3 with arguments '/usr/local/bin/rook ceph operator'
2022-01-26 09:13:07.537691 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO, --operator-image=, --service-account=
2022-01-26 09:13:07.537697 I | cephcmd: starting Rook-Ceph operator
2022-01-26 09:13:07.896495 I | cephcmd: base ceph version inside the rook operator image is "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)"
2022-01-26 09:13:07.902068 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2022-01-26 09:13:07.902094 I | operator: watching all namespaces for Ceph CRs
2022-01-26 09:13:07.902196 I | operator: setting up schemes
2022-01-26 09:13:07.904338 I | operator: setting up the controller-runtime manager
I0126 09:13:08.954709 1 request.go:665] Waited for 1.047164087s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/wgpolicyk8s.io/v1alpha2?timeout=32s
2022-01-26 09:13:09.108142 I | operator: looking for admission webhook secret "rook-ceph-admission-controller"
2022-01-26 09:13:09.110491 I | operator: admission webhook secret "rook-ceph-admission-controller" not found. proceeding without the admission controller
2022-01-26 09:13:09.110569 I | ceph-cluster-controller: successfully started
2022-01-26 09:13:09.110676 I | ceph-cluster-controller: enabling hotplug orchestration
2022-01-26 09:13:09.110708 I | ceph-crashcollector-controller: successfully started
2022-01-26 09:13:09.110744 I | ceph-block-pool-controller: successfully started
2022-01-26 09:13:09.110792 I | ceph-object-store-user-controller: successfully started
2022-01-26 09:13:09.110825 I | ceph-object-realm-controller: successfully started
2022-01-26 09:13:09.110850 I | ceph-object-zonegroup-controller: successfully started
2022-01-26 09:13:09.110872 I | ceph-object-zone-controller: successfully started
2022-01-26 09:13:09.111031 I | ceph-object-controller: successfully started
2022-01-26 09:13:09.111082 I | ceph-file-controller: successfully started
2022-01-26 09:13:09.111126 I | ceph-nfs-controller: successfully started
2022-01-26 09:13:09.111172 I | ceph-rbd-mirror-controller: successfully started
2022-01-26 09:13:09.111205 I | ceph-client-controller: successfully started
2022-01-26 09:13:09.111243 I | ceph-filesystem-mirror-controller: successfully started
2022-01-26 09:13:09.111276 I | operator: rook-ceph-operator-config-controller successfully started
2022-01-26 09:13:09.111304 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2022-01-26 09:13:09.111336 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2022-01-26 09:13:09.111368 I | ceph-bucket-topic: successfully started
2022-01-26 09:13:09.111389 I | ceph-bucket-notification: successfully started
2022-01-26 09:13:09.111411 I | ceph-bucket-notification: successfully started
2022-01-26 09:13:09.111428 I | ceph-fs-subvolumegroup-controller: successfully started
2022-01-26 09:13:09.112441 I | operator: starting the controller-runtime manager
2022-01-26 09:13:09.483626 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2022-01-26 09:13:09.483659 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2022-01-26 09:13:09.483672 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (env var)
2022-01-26 09:13:09.487479 I | operator: rook-ceph-operator-config-controller done reconciling
2022-01-26 09:13:13.665993 I | ceph-spec: adding finalizer "cephblockpool.ceph.rook.io" on "ceph-blockpool"
2022-01-26 09:13:13.666017 E | clusterdisruption-controller: cephcluster "rook-ceph/" seems to be deleted, not requeuing until triggered again
2022-01-26 09:13:13.673082 E | clusterdisruption-controller: cephcluster "rook-ceph/" seems to be deleted, not requeuing until triggered again
2022-01-26 09:13:13.675371 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "rook-ceph"
2022-01-26 09:13:13.685856 I | ceph-spec: adding finalizer "cephfilesystem.ceph.rook.io" on "ceph-filesystem"
2022-01-26 09:13:13.689287 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2022-01-26 09:13:13.689567 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2022-01-26 09:13:13.693324 I | ceph-cluster-controller: clusterInfo not yet found, must be a new cluster.
2022-01-26 09:13:13.702174 W | ceph-file-controller: failed to set filesystem "ceph-filesystem" status to "". failed to update object "rook-ceph/ceph-filesystem" status: Operation cannot be fulfilled on cephfilesystems.ceph.rook.io "ceph-filesystem": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:13:13.702622 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2022-01-26 09:13:13.789253 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap)
2022-01-26 09:13:13.789301 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap)
2022-01-26 09:13:13.789312 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (default)
2022-01-26 09:13:13.789320 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2022-01-26 09:13:13.789328 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2022-01-26 09:13:13.789346 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="<...>/cephcsi/cephcsi:v3.5.1" (configmap)
2022-01-26 09:13:13.789355 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="<...>/sig-storage/csi-node-driver-registrar:v2.4.0" (configmap)
2022-01-26 09:13:13.789364 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="<...>/sig-storage/csi-provisioner:v3.1.0" (configmap)
2022-01-26 09:13:13.789374 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="<...>/sig-storage/csi-attacher:v3.4.0" (configmap)
2022-01-26 09:13:13.789380 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="<...>/sig-storage/csi-snapshotter:v4.2.0" (configmap)
2022-01-26 09:13:13.789385 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2022-01-26 09:13:13.789394 I | op-k8sutil: CSI_VOLUME_REPLICATION_IMAGE="quay.io/csiaddons/volumereplication-operator:v0.1.0" (default)
2022-01-26 09:13:13.789405 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.2.1" (default)
2022-01-26 09:13:13.789410 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2022-01-26 09:13:13.789417 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2022-01-26 09:13:13.789428 I | ceph-csi: detecting the ceph csi image version for image "<...>/cephcsi/cephcsi:v3.5.1"
2022-01-26 09:13:13.790430 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 09:13:13.790453 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 09:13:13.808066 I | ceph-spec: detecting the ceph image version for image <...>/ceph/ceph:v16.2.7...
2022-01-26 09:13:14.118484 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:14.407798 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:23.915991 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:24.312651 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:24.613240 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:34.516530 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:34.813028 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:44.715697 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:45.013963 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:54.914538 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:13:55.214237 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:14:05.115849 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:14:05.415329 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:14:15.314798 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:14:15.612855 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 09:14:15.761341 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2022-01-26 09:14:15.761365 I | ceph-cluster-controller: validating ceph version from provided image
2022-01-26 09:14:15.763277 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "<...>/ceph/ceph:v16.2.7"
2022-01-26 09:14:15.787413 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:True Reason:ClusterProgressing Message:Configuring the Ceph cluster LastHeartbeatTime:2022-01-26 09:14:15.779372693 +0000 UTC m=+68.270002618 LastTransitionTime:2022-01-26 09:14:15.779372523 +0000 UTC m=+68.270002491}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:14:15.797328 I | op-mon: start running mons
2022-01-26 09:14:15.902987 I | op-mon: creating mon secrets for a new cluster
2022-01-26 09:14:15.915151 I | op-mon: existing maxMonID not found or failed to load. configmaps "rook-ceph-mon-endpoints" not found
2022-01-26 09:14:15.957723 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":[]}] data: mapping:{"node":{}} maxMonId:-1]
2022-01-26 09:14:16.756810 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:14:16.757107 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:14:17.359042 I | ceph-csi: Detected ceph CSI image version: "v3.5.1"
2022-01-26 09:14:17.364483 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2022-01-26 09:14:17.364504 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2022-01-26 09:14:17.364511 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2022-01-26 09:14:17.364517 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2022-01-26 09:14:17.364521 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2022-01-26 09:14:17.364525 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2022-01-26 09:14:17.364528 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2022-01-26 09:14:17.364532 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2022-01-26 09:14:17.364536 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2022-01-26 09:14:17.364539 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2022-01-26 09:14:17.364544 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2022-01-26 09:14:17.364551 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="" (default)
2022-01-26 09:14:17.364555 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="" (default)
2022-01-26 09:14:17.364559 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (configmap)
2022-01-26 09:14:17.364563 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2022-01-26 09:14:17.364569 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2022-01-26 09:14:17.364574 I | op-k8sutil: CSI_ENABLE_VOLUME_REPLICATION="false" (configmap)
2022-01-26 09:14:17.364579 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap)
2022-01-26 09:14:17.364586 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2022-01-26 09:14:17.364593 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2022-01-26 09:14:17.364608 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2022-01-26 09:14:17.364631 I | ceph-csi: Kubernetes version is 1.21
2022-01-26 09:14:17.364642 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="<...>/sig-storage/csi-resizer:v1.3.0" (configmap)
2022-01-26 09:14:17.364648 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2022-01-26 09:14:17.761773 I | op-k8sutil: CSI_PROVISIONER_REPLICAS="2" (configmap)
2022-01-26 09:14:17.768180 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 09:14:17.768210 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 09:14:17.768229 I | op-k8sutil: CSI_PLUGIN_TOLERATIONS="" (default)
2022-01-26 09:14:17.768236 I | op-k8sutil: CSI_PLUGIN_NODE_AFFINITY="" (default)
2022-01-26 09:14:17.768241 I | op-k8sutil: CSI_RBD_PLUGIN_TOLERATIONS="" (default)
2022-01-26 09:14:17.768248 I | op-k8sutil: CSI_RBD_PLUGIN_NODE_AFFINITY="" (default)
2022-01-26 09:14:17.768256 I | op-k8sutil: CSI_RBD_PLUGIN_RESOURCE="" (default)
2022-01-26 09:14:17.924571 I | op-k8sutil: CSI_RBD_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 09:14:17.924600 I | op-k8sutil: CSI_RBD_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 09:14:17.924614 I | op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default)
op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default) 2022-01-26 09:14:18.154901 I | ceph-csi: successfully started CSI Ceph RBD driver 2022-01-26 09:14:18.363993 I | op-k8sutil: CSI_CEPHFS_PLUGIN_TOLERATIONS="" (default) 2022-01-26 09:14:18.364023 I | op-k8sutil: CSI_CEPHFS_PLUGIN_NODE_AFFINITY="" (default) 2022-01-26 09:14:18.364032 I | op-k8sutil: CSI_CEPHFS_PLUGIN_RESOURCE="" (default) 2022-01-26 09:14:18.631122 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_TOLERATIONS="" (default) 2022-01-26 09:14:18.631154 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_NODE_AFFINITY="role=compute" (configmap) 2022-01-26 09:14:18.631172 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_RESOURCE="" (default) 2022-01-26 09:14:18.815608 I | ceph-csi: successfully started CSI CephFS driver 2022-01-26 09:14:19.162955 I | op-k8sutil: CSI_RBD_FSGROUPPOLICY="ReadWriteOnceWithFSType" (configmap) 2022-01-26 09:14:19.181695 I | ceph-csi: CSIDriver object created for driver "rook-ceph.rbd.csi.ceph.com" 2022-01-26 09:14:19.181724 I | op-k8sutil: CSI_CEPHFS_FSGROUPPOLICY="None" (configmap) 2022-01-26 09:14:19.207503 I | ceph-csi: CSIDriver object created for driver "rook-ceph.cephfs.csi.ceph.com" 2022-01-26 09:14:19.357527 I | op-mon: targeting the mon count 3 2022-01-26 09:14:19.490206 I | op-mon: created canary deployment rook-ceph-mon-a-canary 2022-01-26 09:14:19.574693 I | op-mon: waiting for canary pod creation rook-ceph-mon-a-canary 2022-01-26 09:14:19.669576 I | op-mon: created canary deployment rook-ceph-mon-b-canary 2022-01-26 09:14:19.759706 I | op-mon: waiting for canary pod creation rook-ceph-mon-b-canary 2022-01-26 09:14:19.816172 I | op-mon: created canary deployment rook-ceph-mon-c-canary 2022-01-26 09:14:20.157795 I | op-mon: canary monitor deployment rook-ceph-mon-c-canary scheduled to dev1-cmp3l 2022-01-26 09:14:20.157822 I | op-mon: mon c assigned to node dev1-cmp3l 2022-01-26 09:14:23.723817 I | op-mon: parsing mon endpoints: 2022-01-26 09:14:23.723841 W | op-mon: ignoring invalid monitor 2022-01-26 09:14:23.723866 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap) 2022-01-26 09:14:23.723874 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket" 2022-01-26 09:14:23.725228 I | op-bucket-prov: successfully reconciled bucket provisioner I0126 09:14:23.725328 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph.ceph.rook.io/bucket" 2022-01-26 09:14:24.593133 I | op-mon: canary monitor deployment rook-ceph-mon-a-canary scheduled to dev1-cmp2l 2022-01-26 09:14:24.593157 I | op-mon: mon a assigned to node dev1-cmp2l 2022-01-26 09:14:24.770267 I | op-mon: canary monitor deployment rook-ceph-mon-b-canary scheduled to dev1-cmp1l 2022-01-26 09:14:24.770291 I | op-mon: mon b assigned to node dev1-cmp1l 2022-01-26 09:14:24.775407 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-a-canary" 2022-01-26 09:14:24.779505 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-b-canary" 2022-01-26 09:14:24.785637 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-c-canary" 2022-01-26 09:14:24.801544 I | op-mon: creating mon a 2022-01-26 09:14:24.822495 I | op-mon: mon "a" endpoint is [v2:10.43.146.221:3300,v1:10.43.146.221:6789] 2022-01-26 09:14:24.854517 I | op-mon: monitor endpoints changed, updating the bootstrap peer token 2022-01-26 09:14:24.854949 I | op-mon: monitor endpoints changed, updating the bootstrap peer token 2022-01-26 09:14:24.855213 I | op-mon: saved mon endpoints 
to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789"]}] data:a=10.43.146.221:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:-1] 2022-01-26 09:14:24.892119 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config 2022-01-26 09:14:24.892310 I | cephclient: generated admin config in /var/lib/rook/rook-ceph 2022-01-26 09:14:25.185574 I | op-mon: 0 of 1 expected mons are ready. creating or updating deployments without checking quorum in attempt to achieve a healthy mon cluster 2022-01-26 09:14:25.486794 I | op-mon: updating maxMonID from -1 to 0 after committing mon "a" 2022-01-26 09:14:26.683015 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789"]}] data:a=10.43.146.221:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:0] 2022-01-26 09:14:26.683038 I | op-mon: waiting for mon quorum with [a] 2022-01-26 09:14:26.692774 I | op-mon: mons running: [a] 2022-01-26 09:14:40.594975 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:14:46.996183 I | op-mon: mons running: [a] 2022-01-26 09:14:55.821657 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:15:11.023854 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:15:13.607433 I | op-mon: mons running: [a] 2022-01-26 09:15:26.226519 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:15:33.836087 I | op-mon: mon a is not yet running 2022-01-26 09:15:33.836112 I | op-mon: mons running: [] 2022-01-26 09:15:41.427150 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:15:54.026581 I | op-mon: mons running: [a] 2022-01-26 09:15:56.628104 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1 2022-01-26 09:16:09.011132 I | clusterdisruption-controller: all PGs are active+clean. 
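The repeated "failed to check cluster health: failed to get status. . timed out" errors are the disruption controller probing cluster status before the first mon answers; they stop once quorum forms. The same probe can be run by hand, assuming the optional toolbox deployment (rook-ceph-tools) is installed:

  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph quorum_status --format json-pretty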
2022-01-26 09:16:09.011161 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 09:16:09.018638 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:16:09.600696 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:16:10.126556 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:16:14.289896 I | op-mon: mons running: [a]
2022-01-26 09:16:14.795239 I | op-mon: Monitors in quorum: [a]
2022-01-26 09:16:14.795274 I | op-mon: mons created: 1
2022-01-26 09:16:15.323533 I | op-mon: waiting for mon quorum with [a]
2022-01-26 09:16:15.331103 I | op-mon: mons running: [a]
2022-01-26 09:16:15.898504 I | op-mon: Monitors in quorum: [a]
2022-01-26 09:16:15.898558 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:16.424516 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:16.424542 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:16.996713 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:16.996741 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:17.528641 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:17.528673 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:18.096950 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:18.096976 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:18.618719 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:18.618745 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:19.132262 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:19.132298 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 09:16:19.698252 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 09:16:19.698281 I | op-mon: creating mon b
2022-01-26 09:16:19.720226 I | op-mon: mon "a" endpoint is [v2:10.43.146.221:3300,v1:10.43.146.221:6789]
2022-01-26 09:16:19.726233 I | op-mon: mon "b" endpoint is [v2:10.43.194.68:3300,v1:10.43.194.68:6789]
2022-01-26 09:16:19.739717 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789","10.43.194.68:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:0]
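The default PodDisruptionBudget the controller keeps reconciling can be inspected directly; rook-ceph-osd is the name logged above, and with no OSDs up yet its allowed disruptions stay at 0:

  kubectl -n rook-ceph get pdb rook-ceph-osd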
2022-01-26 09:16:19.901894 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:16:19.902123 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:16:20.307070 I | op-mon: 1 of 2 expected mon deployments exist. creating new deployment(s).
2022-01-26 09:16:20.311192 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2022-01-26 09:16:20.322348 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2022-01-26 09:16:21.547746 I | op-mon: updating maxMonID from 0 to 1 after committing mon "b"
2022-01-26 09:16:21.558348 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789","10.43.194.68:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:1]
2022-01-26 09:16:21.558372 I | op-mon: waiting for mon quorum with [a b]
2022-01-26 09:16:21.707114 I | op-mon: mon b is not yet running
2022-01-26 09:16:21.707146 I | op-mon: mons running: [a]
2022-01-26 09:16:22.217053 I | op-mon: Monitors in quorum: [a]
2022-01-26 09:16:22.217077 I | op-mon: mons created: 2
2022-01-26 09:16:22.795623 I | op-mon: waiting for mon quorum with [a b]
2022-01-26 09:16:22.809332 I | op-mon: mon b is not yet running
2022-01-26 09:16:22.809364 I | op-mon: mons running: [a]
2022-01-26 09:16:27.822549 I | op-mon: mons running: [a b]
2022-01-26 09:16:31.219799 I | op-mon: Monitors in quorum: [a b]
2022-01-26 09:16:31.219842 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:31.793789 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:31.793827 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:32.315701 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:32.315739 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:32.885464 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:32.885491 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:33.405602 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:33.405629 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:33.920219 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:33.920253 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:34.491288 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:34.491317 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 09:16:35.019143 I | op-config: successfully deleted "log file" option from the mon configuration database
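Each op-config block re-applies the operator's desired mon settings after a new mon joins, which is why the same run repeats. The entries map one-to-one onto the ceph config CLI against the mon configuration database; for example, from the toolbox:

  ceph config get mon mon_allow_pool_delete          # read back one of the values set above
  ceph config set global mon_allow_pool_delete true  # what a "setting ... option" entry does
  ceph config dump                                   # the whole configuration database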
2022-01-26 09:16:35.019170 I | op-mon: creating mon c
2022-01-26 09:16:35.037611 I | op-mon: mon "a" endpoint is [v2:10.43.146.221:3300,v1:10.43.146.221:6789]
2022-01-26 09:16:35.051895 I | op-mon: mon "b" endpoint is [v2:10.43.194.68:3300,v1:10.43.194.68:6789]
2022-01-26 09:16:35.056704 I | op-mon: mon "c" endpoint is [v2:10.43.98.67:3300,v1:10.43.98.67:6789]
2022-01-26 09:16:35.423124 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789","10.43.194.68:6789","10.43.98.67:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:1]
2022-01-26 09:16:36.024091 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:16:36.024315 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:16:36.429568 I | op-mon: 2 of 3 expected mon deployments exist. creating new deployment(s).
2022-01-26 09:16:36.433231 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2022-01-26 09:16:36.448875 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2022-01-26 09:16:36.453374 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2022-01-26 09:16:36.484694 I | op-k8sutil: deployment "rook-ceph-mon-b" did not change, nothing to update
2022-01-26 09:16:36.746023 I | op-mon: updating maxMonID from 1 to 2 after committing mon "c"
2022-01-26 09:16:37.423355 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789","10.43.194.68:6789","10.43.98.67:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:2]
2022-01-26 09:16:37.423383 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:16:38.026805 I | op-mon: mon c is not yet running
2022-01-26 09:16:38.026875 I | op-mon: mons running: [a b]
2022-01-26 09:16:38.498067 I | op-mon: Monitors in quorum: [a b]
2022-01-26 09:16:38.498093 I | op-mon: mons created: 3
2022-01-26 09:16:39.023391 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:16:39.094223 I | op-mon: mon c is not yet running
2022-01-26 09:16:39.094252 I | op-mon: mons running: [a b]
2022-01-26 09:16:39.606813 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:16:40.127626 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:16:44.115395 I | op-mon: mons running: [a b c]
2022-01-26 09:16:45.905523 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 09:16:45.905559 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:46.427265 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:16:46.427293 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:46.999099 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:16:46.999124 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:47.519032 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:16:47.519056 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:48.088328 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:16:48.088359 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:48.602547 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:16:48.602580 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:49.123585 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:16:49.123615 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 09:16:49.686065 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 09:16:49.691000 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2022-01-26 09:16:50.231874 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2022-01-26 09:16:50.798518 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2022-01-26 09:16:51.339056 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2022-01-26 09:16:51.946623 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2022-01-26 09:16:51.946658 I | cephclient: getting or creating ceph auth key "client.crash"
2022-01-26 09:16:52.503141 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2022-01-26 09:16:53.021216 I | cephclient: successfully enabled msgr2 protocol
2022-01-26 09:16:53.021264 I | op-config: deleting "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 09:16:53.589974 I | op-config: successfully deleted "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 09:16:53.590014 I | cephclient: create rbd-mirror bootstrap peer token "client.rbd-mirror-peer"
2022-01-26 09:16:53.590021 I | cephclient: getting or creating ceph auth key "client.rbd-mirror-peer"
2022-01-26 09:16:54.125034 I | cephclient: successfully created rbd-mirror bootstrap peer token for cluster "rook-ceph"
2022-01-26 09:16:54.135630 I | op-mgr: start running mgr
2022-01-26 09:16:54.135676 I | cephclient: getting or creating ceph auth key "mgr.a"
2022-01-26 09:16:55.583445 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev1-cmp1l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:17:07.416946 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev1-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:17:10.512838 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:17:11.488696 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:17:15.973521 I | op-k8sutil: finished waiting for updated deployment "rook-ceph-mgr-a"
2022-01-26 09:17:15.976559 I | op-mgr: setting services to point to mgr "a"
W0126 09:17:15.989802 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 09:17:16.002720 I | op-mgr: no need to update service "rook-ceph-mgr"
2022-01-26 09:17:16.002749 I | op-mgr: no need to update service "rook-ceph-mgr-dashboard"
2022-01-26 09:17:16.003169 I | op-mgr: successful modules: balancer
W0126 09:17:16.005709 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 09:17:16.082551 I | op-mgr: prometheusRule deployed
2022-01-26 09:17:16.094082 I | op-osd: start running osds in namespace "rook-ceph"
2022-01-26 09:17:16.094112 I | op-osd: wait timeout for healthy OSDs during upgrade or restart is "10m0s"
2022-01-26 09:17:16.097465 I | op-osd: start provisioning the OSDs on PVCs, if needed
2022-01-26 09:17:16.182276 I | op-osd: no storageClassDeviceSets defined to configure OSDs on PVCs
2022-01-26 09:17:16.182305 I | op-osd: start provisioning the OSDs on nodes, if needed
2022-01-26 09:17:16.182321 W | op-osd: useAllNodes is TRUE, but nodes are specified. NODES in the cluster CR will be IGNORED unless useAllNodes is FALSE.
2022-01-26 09:17:16.282744 I | op-osd: 3 of the 7 storage nodes are valid
2022-01-26 09:17:16.437527 I | op-osd: started OSD provisioning job for node "dev1-cmp1l"
2022-01-26 09:17:16.648196 I | op-osd: started OSD provisioning job for node "dev1-cmp2l"
2022-01-26 09:17:16.881874 I | op-osd: started OSD provisioning job for node "dev1-cmp3l"
2022-01-26 09:17:16.884787 I | op-osd: OSD orchestration status for node dev1-cmp1l is "starting"
2022-01-26 09:17:16.884814 I | op-osd: OSD orchestration status for node dev1-cmp2l is "starting"
2022-01-26 09:17:16.884828 I | op-osd: OSD orchestration status for node dev1-cmp3l is "starting"
2022-01-26 09:17:17.977322 I | op-osd: OSD orchestration status for node dev1-cmp1l is "orchestrating"
2022-01-26 09:17:19.081061 I | op-osd: OSD orchestration status for node dev1-cmp3l is "orchestrating"
2022-01-26 09:17:19.089407 I | op-osd: OSD orchestration status for node dev1-cmp2l is "orchestrating"
2022-01-26 09:17:20.396319 I | op-config: setting "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 09:17:20.396842 I | op-mgr: successful modules: prometheus
2022-01-26 09:17:21.331160 I | op-config: successfully set "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 09:17:21.331183 I | op-config: setting "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 09:17:22.391806 I | op-config: successfully set "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 09:17:22.391833 I | op-mgr: successful modules: mgr module(s) from the spec
2022-01-26 09:17:23.339281 I | op-osd: OSD orchestration status for node dev1-cmp1l is "failed"
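The useAllNodes warning above means the explicit nodes list in the CephCluster CR is ignored while spec.storage.useAllNodes is true; only one of the two should be set. The per-node provisioning detail that follows is also captured by the osd-prepare job pods, retrievable with Rook's default labels:

  kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.storage.useAllNodes}'; echo
  kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --tail=-1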
&{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command. Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b184ec48-083d-426c-9659-7e40b5d33faf Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /dev/sdd1 Running command: /usr/bin/ln -s /dev/sdd1 /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap stderr: got monmap epoch 3 Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCgEfFhSuBxJxAAC3WFZ4G+tzOzLLiIG/Ix7w== stdout: creating /var/lib/ceph/osd/ceph-0/keyring added entity osd.0 auth(key=AQCgEfFhSuBxJxAAC3WFZ4G+tzOzLLiIG/Ix7w==) Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/ Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b184ec48-083d-426c-9659-7e40b5d33faf --setuser ceph --setgroup ceph stderr: 2022-01-26T09:17:21.852+0000 7fd9c3d7e080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid stderr: 2022-01-26T09:17:21.871+0000 7fd9c3d7e080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 882e46d9-3b42-4b17-b7f5-3c8d6152827e, block dump: stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: * stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: 00001000 stderr: 2022-01-26T09:17:22.366+0000 7fd9c3d7e080 -1 rocksdb: verify_sharding unable to list column families: NotFound: stderr: 2022-01-26T09:17:22.366+0000 7fd9c3d7e080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _open_db erroring opening db: stderr: 2022-01-26T09:17:22.868+0000 7fd9c3d7e080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error stderr: 2022-01-26T09:17:22.868+0000 7fd9c3d7e080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0/: (5) Input/output error --> Was unable to complete a new OSD, will rollback changes Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it stderr: purged osd.0 Traceback (most recent call last): File "/usr/sbin/ceph-volume", line 11, in load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')() File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__ self.main(self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main terminal.dispatch(self.mapper, self.argv) File 
"/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main self.safe_prepare(self.args) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare tmpfs, File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore db=db File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command))) RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b184ec48-083d-426c-9659-7e40b5d33faf --setuser ceph --setgroup ceph: exit status 1} 2022-01-26 09:17:23.983959 I | op-osd: OSD orchestration status for node dev1-cmp1l is "orchestrating" 2022-01-26 09:17:24.059584 I | op-osd: OSD orchestration status for node dev1-cmp2l is "failed" 2022-01-26 09:17:24.059633 E | op-osd: failed to provision OSD(s) on node dev1-cmp2l. &{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command. Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9e3b839-41b9-4d99-b0a5-02fe705e0eaa Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2 Running command: /usr/bin/chown -R ceph:ceph /dev/sde1 Running command: /usr/bin/ln -s /dev/sde1 /var/lib/ceph/osd/ceph-2/block Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap stderr: got monmap epoch 3 Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQChEfFhpi95JBAA3rVav0Gjd6jkCAW+F0r+bQ== stdout: creating /var/lib/ceph/osd/ceph-2/keyring added entity osd.2 auth(key=AQChEfFhpi95JBAA3rVav0Gjd6jkCAW+F0r+bQ==) Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/ Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b9e3b839-41b9-4d99-b0a5-02fe705e0eaa --setuser ceph --setgroup ceph stderr: 2022-01-26T09:17:22.596+0000 7ff75f812080 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid stderr: 2022-01-26T09:17:22.603+0000 7ff75f812080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 22ca17d0-a094-4133-a973-fd2f984b06bb, block dump: stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: * stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: 00001000 stderr: 2022-01-26T09:17:23.108+0000 7ff75f812080 -1 rocksdb: 
verify_sharding unable to list column families: NotFound: stderr: 2022-01-26T09:17:23.108+0000 7ff75f812080 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _open_db erroring opening db: stderr: 2022-01-26T09:17:23.610+0000 7ff75f812080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error stderr: 2022-01-26T09:17:23.610+0000 7ff75f812080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-2/: (5) Input/output error --> Was unable to complete a new OSD, will rollback changes Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.2 --yes-i-really-mean-it stderr: purged osd.2 Traceback (most recent call last): File "/usr/sbin/ceph-volume", line 11, in load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')() File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__ self.main(self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main terminal.dispatch(self.mapper, self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main self.safe_prepare(self.args) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare tmpfs, File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore db=db File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command))) RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b9e3b839-41b9-4d99-b0a5-02fe705e0eaa --setuser ceph --setgroup ceph: exit status 1} 2022-01-26 09:17:24.071419 I | op-osd: OSD orchestration status for node dev1-cmp3l is "failed" 2022-01-26 09:17:24.071458 E | op-osd: failed to provision OSD(s) on node dev1-cmp3l. &{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command. 
Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 32d76e4a-c2c8-473c-aa6c-01a5162619ee Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1 Running command: /usr/bin/chown -R ceph:ceph /dev/sdd1 Running command: /usr/bin/ln -s /dev/sdd1 /var/lib/ceph/osd/ceph-1/block Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap stderr: got monmap epoch 3 Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQChEfFhySGhIxAA2gy980hrAEJp4CkIzUNvEA== stdout: creating /var/lib/ceph/osd/ceph-1/keyring added entity osd.1 auth(key=AQChEfFhySGhIxAA2gy980hrAEJp4CkIzUNvEA==) Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/ Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 32d76e4a-c2c8-473c-aa6c-01a5162619ee --setuser ceph --setgroup ceph stderr: 2022-01-26T09:17:22.583+0000 7f47d5116080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid stderr: 2022-01-26T09:17:22.602+0000 7f47d5116080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 0c49ae3c-9b9a-44be-982f-a2cdb37329de, block dump: stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: * stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: 00001000 stderr: 2022-01-26T09:17:23.102+0000 7f47d5116080 -1 rocksdb: verify_sharding unable to list column families: NotFound: stderr: 2022-01-26T09:17:23.102+0000 7f47d5116080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _open_db erroring opening db: stderr: 2022-01-26T09:17:23.605+0000 7f47d5116080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error stderr: 2022-01-26T09:17:23.605+0000 7f47d5116080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-1/: (5) Input/output error --> Was unable to complete a new OSD, will rollback changes Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it stderr: purged osd.1 Traceback (most recent call last): File "/usr/sbin/ceph-volume", line 11, in load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')() File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__ self.main(self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main terminal.dispatch(self.mapper, self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 
169, in main self.safe_prepare(self.args) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare tmpfs, File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore db=db File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command))) RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 32d76e4a-c2c8-473c-aa6c-01a5162619ee --setuser ceph --setgroup ceph: exit status 1} 2022-01-26 09:17:24.087217 E | ceph-cluster-controller: failed to reconcile CephCluster "rook-ceph/rook-ceph". failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: failed to start ceph osds: 3 failures encountered while running osds on nodes in namespace "rook-ceph". failed to provision OSD(s) on node dev1-cmp1l. &{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command. Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b184ec48-083d-426c-9659-7e40b5d33faf Running command: /usr/bin/ceph-authtool --gen-print-key Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /dev/sdd1 Running command: /usr/bin/ln -s /dev/sdd1 /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap stderr: got monmap epoch 3 Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCgEfFhSuBxJxAAC3WFZ4G+tzOzLLiIG/Ix7w== stdout: creating /var/lib/ceph/osd/ceph-0/keyring added entity osd.0 auth(key=AQCgEfFhSuBxJxAAC3WFZ4G+tzOzLLiIG/Ix7w==) Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/ Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b184ec48-083d-426c-9659-7e40b5d33faf --setuser ceph --setgroup ceph stderr: 2022-01-26T09:17:21.852+0000 7fd9c3d7e080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid stderr: 2022-01-26T09:17:21.871+0000 7fd9c3d7e080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 882e46d9-3b42-4b17-b7f5-3c8d6152827e, block dump: stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: * stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| stderr: 00001000 stderr: 2022-01-26T09:17:22.366+0000 7fd9c3d7e080 -1 rocksdb: verify_sharding unable to list column families: NotFound: stderr: 2022-01-26T09:17:22.366+0000 
7fd9c3d7e080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _open_db erroring opening db: stderr: 2022-01-26T09:17:22.868+0000 7fd9c3d7e080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error stderr: 2022-01-26T09:17:22.868+0000 7fd9c3d7e080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0/: (5) Input/output error --> Was unable to complete a new OSD, will rollback changes Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it stderr: purged osd.0 Traceback (most recent call last): File "/usr/sbin/ceph-volume", line 11, in load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')() File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__ self.main(self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main terminal.dispatch(self.mapper, self.argv) File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch instance.main() File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main self.safe_prepare(self.args) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare self.prepare() File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root return func(*a, **kw) File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare tmpfs, File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore db=db File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command))) RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b184ec48-083d-426c-9659-7e40b5d33faf --setuser ceph --setgroup ceph: exit status 1} failed to provision OSD(s) on node dev1-cmp2l. &{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command. 
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9e3b839-41b9-4d99-b0a5-02fe705e0eaa
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/chown -R ceph:ceph /dev/sde1
Running command: /usr/bin/ln -s /dev/sde1 /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
 stderr: got monmap epoch 3
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQChEfFhpi95JBAA3rVav0Gjd6jkCAW+F0r+bQ==
 stdout: creating /var/lib/ceph/osd/ceph-2/keyring
added entity osd.2 auth(key=AQChEfFhpi95JBAA3rVav0Gjd6jkCAW+F0r+bQ==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b9e3b839-41b9-4d99-b0a5-02fe705e0eaa --setuser ceph --setgroup ceph
 stderr: 2022-01-26T09:17:22.596+0000 7ff75f812080 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
 stderr: 2022-01-26T09:17:22.603+0000 7ff75f812080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 22ca17d0-a094-4133-a973-fd2f984b06bb, block dump:
 stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
 stderr: *
 stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
 stderr: 00001000
 stderr: 2022-01-26T09:17:23.108+0000 7ff75f812080 -1 rocksdb: verify_sharding unable to list column families: NotFound:
 stderr: 2022-01-26T09:17:23.108+0000 7ff75f812080 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _open_db erroring opening db:
 stderr: 2022-01-26T09:17:23.610+0000 7ff75f812080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error
 stderr: 2022-01-26T09:17:23.610+0000 7ff75f812080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-2/: (5) Input/output error
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.2 --yes-i-really-mean-it
 stderr: purged osd.2
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main
    self.safe_prepare(self.args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare
    tmpfs,
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore
    db=db
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b9e3b839-41b9-4d99-b0a5-02fe705e0eaa --setuser ceph --setgroup ceph: exit status 1}
failed to provision OSD(s) on node dev1-cmp3l. &{OSDs:[] Status:failed PvcBackedOSD:false Message:failed to configure devices: failed to initialize osd: failed to run ceph-volume raw command.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 32d76e4a-c2c8-473c-aa6c-01a5162619ee
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: /usr/bin/chown -R ceph:ceph /dev/sdd1
Running command: /usr/bin/ln -s /dev/sdd1 /var/lib/ceph/osd/ceph-1/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
 stderr: got monmap epoch 3
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQChEfFhySGhIxAA2gy980hrAEJp4CkIzUNvEA==
 stdout: creating /var/lib/ceph/osd/ceph-1/keyring
added entity osd.1 auth(key=AQChEfFhySGhIxAA2gy980hrAEJp4CkIzUNvEA==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 32d76e4a-c2c8-473c-aa6c-01a5162619ee --setuser ceph --setgroup ceph
 stderr: 2022-01-26T09:17:22.583+0000 7f47d5116080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
 stderr: 2022-01-26T09:17:22.602+0000 7f47d5116080 -1 bluefs _replay 0x0: stop: uuid 00000000-0000-0000-0000-000000000000 != super.uuid 0c49ae3c-9b9a-44be-982f-a2cdb37329de, block dump:
 stderr: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
 stderr: *
 stderr: 00000ff0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
 stderr: 00001000
 stderr: 2022-01-26T09:17:23.102+0000 7f47d5116080 -1 rocksdb: verify_sharding unable to list column families: NotFound:
 stderr: 2022-01-26T09:17:23.102+0000 7f47d5116080 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _open_db erroring opening db:
 stderr: 2022-01-26T09:17:23.605+0000 7f47d5116080 -1 OSD::mkfs: ObjectStore::mkfs failed with error (5) Input/output error
 stderr: 2022-01-26T09:17:23.605+0000 7f47d5116080 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-1/: (5) Input/output error
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1 --yes-i-really-mean-it
 stderr: purged osd.1
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main
    self.safe_prepare(self.args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare
    tmpfs,
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 68, in prepare_bluestore
    db=db
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 32d76e4a-c2c8-473c-aa6c-01a5162619ee --setuser ceph --setgroup ceph: exit status 1}
2022-01-26 09:17:24.092575 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2022-01-26 09:17:24.096217 I | op-mon: parsing mon endpoints: a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789
2022-01-26 09:17:24.100730 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2022-01-26 09:17:24.100770 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2022-01-26 09:17:24.100777 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2022-01-26 09:17:24.100790 I | ceph-cluster-controller: ceph status check interval is 1m0s
2022-01-26 09:17:24.100796 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2022-01-26 09:17:24.114891 I | ceph-spec: detecting the ceph image version for image <...>/ceph/ceph:v16.2.7...
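All three mkfs failures above share one signature: _read_fsid finds no parsable fsid, bluefs replay then trips over a leftover superblock UUID (882e46d9-..., 22ca17d0-..., 0c49ae3c-...), and ceph-osd --mkfs aborts with (5) Input/output error. That pattern usually means /dev/sdd1 and /dev/sde1 still carry BlueStore metadata from an earlier deployment and were never wiped. A minimal cleanup sketch in the spirit of Rook's disk-zapping guidance, to be run on each node; DISK is a placeholder, and the commands assume the partition holds nothing you want to keep:

    DISK="/dev/sdd1"   # placeholder: repeat for every device named in the prepare output above
    wipefs --all "$DISK"                                            # drop filesystem/BlueStore signatures
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync   # zero the region holding the stale superblock

The osd purge-new rollbacks above already removed the half-created OSD IDs from the cluster, so after wiping, deleting the rook-ceph-osd-prepare-* jobs (or restarting the operator) should be enough to trigger a clean retry.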
2022-01-26 09:17:24.494776 I | op-mgr: the dashboard secret was already generated
2022-01-26 09:17:24.494805 I | op-mgr: setting ceph dashboard "admin" login creds
2022-01-26 09:17:26.584306 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2022-01-26 09:17:26.584337 I | ceph-cluster-controller: validating ceph version from provided image
2022-01-26 09:17:26.588728 I | op-mon: parsing mon endpoints: a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789
2022-01-26 09:17:26.590792 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:17:26.590998 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:17:28.082762 I | ceph-cluster-controller: Disabling the insecure global ID as no legacy clients are currently connected. If you still require the insecure connections, see the CVE to suppress the health warning and re-enable the insecure connections. https://docs.ceph.com/en/latest/security/CVE-2021-20288/
2022-01-26 09:17:28.082799 I | op-config: setting "mon"="auth_allow_insecure_global_id_reclaim"="false" option to the mon configuration database
2022-01-26 09:17:28.588958 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "<...>/ceph/ceph:v16.2.7"
2022-01-26 09:17:28.690331 I | op-mon: start running mons
2022-01-26 09:17:28.694718 I | op-mon: parsing mon endpoints: a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789
2022-01-26 09:17:28.713710 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.194.68:6789","10.43.98.67:6789","10.43.146.221:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:2]
2022-01-26 09:17:28.814958 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:17:28.815165 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:17:29.281184 I | op-config: successfully set "mon"="auth_allow_insecure_global_id_reclaim"="false" option to the mon configuration database
2022-01-26 09:17:29.281206 I | ceph-cluster-controller: insecure global ID is now disabled
2022-01-26 09:17:30.414733 I | op-mon: targeting the mon count 3
2022-01-26 09:17:30.418404 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:17:31.128513 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 09:17:31.128545 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:17:31.983204 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 09:17:31.983229 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:17:32.795082 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 09:17:32.795116 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:17:33.597029 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 09:17:33.597065 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:17:33.704949 I | op-mon: parsing mon endpoints: a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789
2022-01-26 09:17:33.783518 I | op-mon: parsing mon endpoints: a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789
2022-01-26 09:17:33.783583 I | ceph-spec: detecting the ceph image version for image <...>/ceph/ceph:v16.2.7...
2022-01-26 09:17:35.181219 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 09:17:35.181257 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:17:35.505675 I | ceph-block-pool-controller: creating pool "ceph-blockpool" in namespace "rook-ceph"
2022-01-26 09:17:36.291882 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2022-01-26 09:17:37.092363 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 09:17:37.092413 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 09:17:37.684016 I | op-mgr: successfully set ceph dashboard creds
2022-01-26 09:17:40.090055 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 09:17:40.090080 I | op-mon: checking for basic quorum with existing mons
2022-01-26 09:17:40.181365 I | op-mon: mon "a" endpoint is [v2:10.43.146.221:3300,v1:10.43.146.221:6789]
2022-01-26 09:17:40.201062 I | op-mon: mon "b" endpoint is [v2:10.43.194.68:3300,v1:10.43.194.68:6789]
2022-01-26 09:17:40.496138 I | op-mon: mon "c" endpoint is [v2:10.43.98.67:3300,v1:10.43.98.67:6789]
2022-01-26 09:17:41.094688 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.146.221:6789","10.43.194.68:6789","10.43.98.67:6789"]}] data:a=10.43.146.221:6789,b=10.43.194.68:6789,c=10.43.98.67:6789 mapping:{"node":{"a":{"Name":"dev1-cmp2l","Hostname":"dev1-cmp2l","Address":"10.246.143.76"},"b":{"Name":"dev1-cmp1l","Hostname":"dev1-cmp1l","Address":"10.246.143.70"},"c":{"Name":"dev1-cmp3l","Hostname":"dev1-cmp3l","Address":"10.246.143.67"}}} maxMonId:2]
2022-01-26 09:17:41.388354 I | op-config: setting "mgr.a"="mgr/dashboard/url_prefix"="/ceph/dashboard" option to the mon configuration database
2022-01-26 09:17:41.781252 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 09:17:41.781484 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 09:17:42.281128 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
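All of these settings land in the mon configuration database rather than in a ceph.conf file. If the optional rook-ceph-tools toolbox is deployed (an assumption; it does not appear in this log), the applied values can be verified directly, for example:

    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph config get mon auth_allow_insecure_global_id_reclaim
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph config dump   # full view of operator-managed options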
2022-01-26 09:17:42.387318 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2022-01-26 09:17:42.387368 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:17:42.782083 I | op-mon: mons running: [a b c]
2022-01-26 09:17:44.185044 I | ceph-file-controller: start running mdses for filesystem "ceph-filesystem"
2022-01-26 09:17:44.185084 W | ceph-spec: running the "mds" daemon(s) with 2048MB of ram, but at least 4096MB is recommended
2022-01-26 09:17:46.185881 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:17:46.788047 I | op-config: successfully set "mgr.a"="mgr/dashboard/url_prefix"="/ceph/dashboard" option to the mon configuration database
2022-01-26 09:17:49.281191 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 09:17:49.287712 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2022-01-26 09:17:49.488668 I | op-k8sutil: deployment "rook-ceph-mon-b" did not change, nothing to update
2022-01-26 09:17:49.488704 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:17:49.781493 I | op-mon: mons running: [a b c]
2022-01-26 09:17:50.684184 I | cephclient: getting or creating ceph auth key "mds.ceph-filesystem-a"
2022-01-26 09:17:51.284060 I | op-config: setting "mgr.a"="mgr/dashboard/ssl"="false" option to the mon configuration database
2022-01-26 09:17:51.986973 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:17:54.191208 I | cephclient: creating replicated pool ceph-blockpool succeeded
2022-01-26 09:17:54.683522 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 09:17:54.689994 I | op-mon: deployment for mon rook-ceph-mon-c already exists. updating if needed
2022-01-26 09:17:54.885784 I | op-k8sutil: deployment "rook-ceph-mon-c" did not change, nothing to update
2022-01-26 09:17:54.885818 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:17:54.989106 I | op-mon: mons running: [a b c]
2022-01-26 09:17:55.681293 I | op-config: successfully set "mgr.a"="mgr/dashboard/ssl"="false" option to the mon configuration database
2022-01-26 09:17:55.786350 I | op-mds: setting mds config flags
2022-01-26 09:17:55.786413 I | op-config: setting "mds.ceph-filesystem-a"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 09:17:59.285301 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 09:17:59.285327 I | op-mon: mons created: 3
2022-01-26 09:17:59.287655 I | op-config: setting "mgr.a"="mgr/dashboard/server_port"="7000" option to the mon configuration database
2022-01-26 09:17:59.481261 I | op-config: successfully set "mds.ceph-filesystem-a"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 09:17:59.481371 I | op-config: setting "mds.ceph-filesystem-a"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 09:18:01.181117 I | ceph-block-pool-controller: initializing pool "ceph-blockpool"
2022-01-26 09:18:02.784157 I | op-config: successfully set "mgr.a"="mgr/dashboard/server_port"="7000" option to the mon configuration database
2022-01-26 09:18:02.784194 I | op-mgr: dashboard config has changed. restarting the dashboard module
2022-01-26 09:18:02.784201 I | op-mgr: restarting the mgr module
2022-01-26 09:18:02.883362 I | op-config: successfully set "mds.ceph-filesystem-a"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 09:18:02.998738 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 09:18:03.091132 I | op-mon: mons running: [a b c]
2022-01-26 09:18:03.091961 I | cephclient: getting or creating ceph auth key "mds.ceph-filesystem-b"
2022-01-26 09:18:03.949516 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev1-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:18:05.990235 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 09:18:05.992950 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2022-01-26 09:18:06.248737 I | op-mds: setting mds config flags
2022-01-26 09:18:06.248768 I | op-config: setting "mds.ceph-filesystem-b"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 09:18:07.259720 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev1-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:18:09.286139 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2022-01-26 09:18:09.792040 I | op-config: successfully set "mds.ceph-filesystem-b"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 09:18:09.792073 I | op-config: setting "mds.ceph-filesystem-b"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 09:18:11.181661 I | op-mgr: successful modules: dashboard
2022-01-26 09:18:12.792883 I | op-mon: checking if multiple mons are on the same node
2022-01-26 09:18:13.001981 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2022-01-26 09:18:13.002126 I | op-config: successfully set "mds.ceph-filesystem-b"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 09:18:14.984653 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2022-01-26 09:18:17.498382 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2022-01-26 09:18:17.498412 I | cephclient: getting or creating ceph auth key "client.crash"
2022-01-26 09:18:19.284239 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:32}]"
2022-01-26 09:18:19.290555 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:18:19.702715 I | ceph-file-controller: creating filesystem "ceph-filesystem"
2022-01-26 09:18:20.105422 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2022-01-26 09:18:22.596043 I | cephclient: successfully enabled msgr2 protocol
2022-01-26 09:18:22.596084 I | op-config: deleting "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 09:18:24.791170 I | clusterdisruption-controller: all "host" failure domains: []. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:32}]"
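The warning above about running the "mds" daemon(s) with 2048MB of ram traces back to the memory limit on the CephFilesystem's metadata server; the mds_cache_memory_limit of 1073741824 the operator sets is consistent with half of that 2048MB limit. A hedged sketch of raising it to the recommended 4096MB, assuming the CR is named ceph-filesystem as in this log and that the limits live under spec.metadataServer.resources:

    kubectl -n rook-ceph patch cephfilesystem ceph-filesystem --type merge \
      -p '{"spec":{"metadataServer":{"resources":{"limits":{"memory":"4Gi"},"requests":{"memory":"4Gi"}}}}}'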
2022-01-26 09:18:24.882745 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 09:18:24.983823 I | op-config: successfully deleted "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 09:18:24.983850 I | cephclient: create rbd-mirror bootstrap peer token "client.rbd-mirror-peer"
2022-01-26 09:18:24.983856 I | cephclient: getting or creating ceph auth key "client.rbd-mirror-peer"
2022-01-26 09:18:27.981082 I | cephclient: successfully created rbd-mirror bootstrap peer token for cluster "rook-ceph"
2022-01-26 09:18:28.000932 I | op-mgr: start running mgr
2022-01-26 09:18:28.000967 I | cephclient: getting or creating ceph auth key "mgr.a"
2022-01-26 09:18:28.483666 I | cephclient: creating replicated pool ceph-filesystem-metadata succeeded
2022-01-26 09:18:31.185882 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2022-01-26 09:18:31.283495 I | op-k8sutil: deployment "rook-ceph-mgr-a" did not change, nothing to update
2022-01-26 09:18:31.292511 I | op-mgr: setting services to point to mgr "a"
W0126 09:18:31.320811 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 09:18:31.391413 I | op-mgr: no need to update service "rook-ceph-mgr"
2022-01-26 09:18:31.391435 I | op-mgr: no need to update service "rook-ceph-mgr-dashboard"
2022-01-26 09:18:31.391560 I | op-mgr: successful modules: balancer
W0126 09:18:31.394514 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 09:18:31.582092 I | op-mgr: prometheusRule deployed
2022-01-26 09:18:31.681313 I | op-osd: start running osds in namespace "rook-ceph"
2022-01-26 09:18:31.681350 I | op-osd: wait timeout for healthy OSDs during upgrade or restart is "10m0s"
2022-01-26 09:18:31.686319 I | op-osd: start provisioning the OSDs on PVCs, if needed
2022-01-26 09:18:31.688506 I | op-osd: no storageClassDeviceSets defined to configure OSDs on PVCs
2022-01-26 09:18:31.688534 I | op-osd: start provisioning the OSDs on nodes, if needed
2022-01-26 09:18:31.688544 W | op-osd: useAllNodes is TRUE, but nodes are specified. NODES in the cluster CR will be IGNORED unless useAllNodes is FALSE.
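The useAllNodes warning is harmless if consuming every node is intended; otherwise the explicit nodes list in the CephCluster spec is being silently ignored. A sketch of switching to the explicit list, assuming the CR is rook-ceph in namespace rook-ceph as throughout this log:

    kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.storage.useAllNodes}'   # confirm the current setting
    kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"storage":{"useAllNodes":false}}}'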
2022-01-26 09:18:31.883279 I | op-osd: 3 of the 7 storage nodes are valid
2022-01-26 09:18:31.898273 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-dev1-cmp1l to start a new one
2022-01-26 09:18:31.984248 I | op-k8sutil: batch job rook-ceph-osd-prepare-dev1-cmp1l deleted
2022-01-26 09:18:32.181902 I | op-osd: started OSD provisioning job for node "dev1-cmp1l"
2022-01-26 09:18:32.298817 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-dev1-cmp2l to start a new one
2022-01-26 09:18:32.481708 I | op-k8sutil: batch job rook-ceph-osd-prepare-dev1-cmp2l still exists
2022-01-26 09:18:35.486249 I | op-k8sutil: batch job rook-ceph-osd-prepare-dev1-cmp2l deleted
2022-01-26 09:18:35.681749 I | op-osd: started OSD provisioning job for node "dev1-cmp2l"
2022-01-26 09:18:35.690587 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-dev1-cmp3l to start a new one
2022-01-26 09:18:35.696772 I | op-k8sutil: batch job rook-ceph-osd-prepare-dev1-cmp3l still exists
2022-01-26 09:18:38.585056 I | op-config: setting "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 09:18:38.681223 I | op-mgr: successful modules: prometheus
2022-01-26 09:18:38.699794 I | op-k8sutil: batch job rook-ceph-osd-prepare-dev1-cmp3l deleted
2022-01-26 09:18:38.881526 I | op-osd: started OSD provisioning job for node "dev1-cmp3l"
2022-01-26 09:18:38.884360 I | op-osd: OSD orchestration status for node dev1-cmp1l is "completed"
2022-01-26 09:18:38.884395 I | op-osd: creating OSD 0 on node "dev1-cmp1l"
2022-01-26 09:18:39.297597 I | op-osd: OSD orchestration status for node dev1-cmp2l is "orchestrating"
2022-01-26 09:18:39.297620 I | op-osd: OSD orchestration status for node dev1-cmp3l is "starting"
2022-01-26 09:18:39.511798 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:40.081423 I | op-osd: OSD orchestration status for node dev1-cmp2l is "completed"
2022-01-26 09:18:40.081449 I | op-osd: creating OSD 1 on node "dev1-cmp2l"
2022-01-26 09:18:40.181288 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev1-cmp1l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 09:18:41.090386 I | op-config: successfully set "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 09:18:41.090416 I | op-config: setting "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 09:18:41.328483 I | op-osd: OSD orchestration status for node dev1-cmp3l is "orchestrating"
2022-01-26 09:18:41.381151 I | cephclient: creating replicated pool ceph-filesystem-data0 succeeded
2022-01-26 09:18:43.185153 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:32}]"
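This pass succeeds where the 09:17 run failed. When a prepare job does fail, its pod log carries the full ceph-volume transcript seen earlier; a sketch for pulling it, using the job names from the entries above:

    kubectl -n rook-ceph logs job/rook-ceph-osd-prepare-dev1-cmp1l
    kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --tail=200   # or all prepare pods at once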
2022-01-26 09:18:43.186642 I | clusterdisruption-controller: deleting the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 09:18:43.194928 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 09:18:43.195071 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:43.588866 I | op-mgr: the dashboard secret was already generated
2022-01-26 09:18:43.588898 I | op-mgr: setting ceph dashboard "admin" login creds
2022-01-26 09:18:44.241141 I | op-osd: OSD orchestration status for node dev1-cmp3l is "completed"
2022-01-26 09:18:44.241167 I | op-osd: creating OSD 2 on node "dev1-cmp3l"
2022-01-26 09:18:44.301133 I | op-config: successfully set "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 09:18:44.301164 I | op-mgr: successful modules: mgr module(s) from the spec
2022-01-26 09:18:47.385174 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:96}]"
2022-01-26 09:18:47.386901 I | clusterdisruption-controller: creating temporary blocking pdb "rook-ceph-osd-host-dev1-cmp2l" with maxUnavailable=0 for "host" failure domain "dev1-cmp2l"
2022-01-26 09:18:47.490894 I | cephclient: creating filesystem "ceph-filesystem" with metadata pool "ceph-filesystem-metadata" and data pools [ceph-filesystem-data0]
2022-01-26 09:18:48.183067 I | op-osd: finished running OSDs in namespace "rook-ceph"
2022-01-26 09:18:48.183089 I | ceph-cluster-controller: done reconciling ceph cluster in namespace "rook-ceph"
2022-01-26 09:18:49.293995 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 09:18:49.294124 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:49.294233 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 09:18:51.186820 I | ceph-file-controller: created filesystem "ceph-filesystem" on 1 data pool(s) and metadata pool "ceph-filesystem-metadata"
2022-01-26 09:18:51.186849 I | cephclient: setting allow_standby_replay for filesystem "ceph-filesystem"
2022-01-26 09:18:51.190466 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:96}]"
2022-01-26 09:18:51.195636 I | clusterdisruption-controller: creating temporary blocking pdb "rook-ceph-osd-host-dev1-cmp3l" with maxUnavailable=0 for "host" failure domain "dev1-cmp3l"
2022-01-26 09:18:52.133023 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 09:18:52.133114 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 09:18:52.133187 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:52.590228 I | op-mgr: successfully set ceph dashboard creds
2022-01-26 09:18:53.599678 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:creating+peering Count:96}]"
2022-01-26 09:18:54.114798 I | ceph-block-pool-controller: successfully initialized pool "ceph-blockpool"
2022-01-26 09:18:54.114876 I | op-config: deleting "mgr/prometheus/rbd_stats_pools" option from the mon configuration database
2022-01-26 09:18:54.887593 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 09:18:54.887680 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:54.887751 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 09:18:56.381254 I | op-config: successfully deleted "mgr/prometheus/rbd_stats_pools" option from the mon configuration database
2022-01-26 09:18:57.385320 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:peering Count:62} {StateName:active+clean Count:34} {StateName:creating+peering Count:1}]"
2022-01-26 09:18:57.395216 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 09:18:57.395317 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 09:18:57.395406 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 09:18:57.694554 I | op-mgr: successful modules: dashboard
2022-01-26 09:18:59.184441 I | clusterdisruption-controller: osd is down in failure domain "dev1-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:peering Count:62} {StateName:active+clean Count:34} {StateName:creating+peering Count:1}]"
2022-01-26 09:19:22.094112 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2022-01-26 09:19:22.094143 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 09:19:22.102099 I | clusterdisruption-controller: deleting temporary blocking pdb with "rook-ceph-osd-host-dev1-cmp2l" with maxUnavailable=0 for "host" failure domain "dev1-cmp2l"
2022-01-26 09:19:22.107867 I | clusterdisruption-controller: deleting temporary blocking pdb with "rook-ceph-osd-host-dev1-cmp3l" with maxUnavailable=0 for "host" failure domain "dev1-cmp3l"
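At this point the reconcile has converged: all three OSDs are up, PGs are active+clean, and the disruption controller has swapped the temporary per-host blocking PDBs back for the single default one. A quick final check, again assuming the optional toolbox deployment:

    kubectl -n rook-ceph get pdb                                        # expect only rook-ceph-osd with maxUnavailable=1
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status     # expect 3 osds up and in, all PGs active+clean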