Bug #54019 » operator-no-buffering.log

Paul Bormans, 01/26/2022 01:25 PM

2022-01-26 08:52:04.591775 I | rookcmd: starting Rook v1.8.3 with arguments '/usr/local/bin/rook ceph operator'
2022-01-26 08:52:04.591915 I | rookcmd: flag values: --enable-machine-disruption-budget=false, --help=false, --kubeconfig=, --log-level=INFO, --operator-image=, --service-account=
2022-01-26 08:52:04.591920 I | cephcmd: starting Rook-Ceph operator
2022-01-26 08:52:04.760428 I | cephcmd: base ceph version inside the rook operator image is "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)"
2022-01-26 08:52:04.767307 I | op-k8sutil: ROOK_CURRENT_NAMESPACE_ONLY="false" (env var)
2022-01-26 08:52:04.767336 I | operator: watching all namespaces for Ceph CRs
2022-01-26 08:52:04.767405 I | operator: setting up schemes
2022-01-26 08:52:04.769449 I | operator: setting up the controller-runtime manager
I0126 08:52:05.820774 1 request.go:665] Waited for 1.048246386s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s
2022-01-26 08:52:05.973398 I | operator: looking for admission webhook secret "rook-ceph-admission-controller"
2022-01-26 08:52:05.975880 I | operator: admission webhook secret "rook-ceph-admission-controller" not found. proceeding without the admission controller
2022-01-26 08:52:05.975961 I | ceph-cluster-controller: successfully started
2022-01-26 08:52:05.976053 I | ceph-cluster-controller: enabling hotplug orchestration
2022-01-26 08:52:05.976093 I | ceph-crashcollector-controller: successfully started
2022-01-26 08:52:05.976150 I | ceph-block-pool-controller: successfully started
2022-01-26 08:52:05.976184 I | ceph-object-store-user-controller: successfully started
2022-01-26 08:52:05.976218 I | ceph-object-realm-controller: successfully started
2022-01-26 08:52:05.976244 I | ceph-object-zonegroup-controller: successfully started
2022-01-26 08:52:05.976269 I | ceph-object-zone-controller: successfully started
2022-01-26 08:52:05.976393 I | ceph-object-controller: successfully started
2022-01-26 08:52:05.976441 I | ceph-file-controller: successfully started
2022-01-26 08:52:05.976486 I | ceph-nfs-controller: successfully started
2022-01-26 08:52:05.976530 I | ceph-rbd-mirror-controller: successfully started
2022-01-26 08:52:05.976559 I | ceph-client-controller: successfully started
2022-01-26 08:52:05.976584 I | ceph-filesystem-mirror-controller: successfully started
2022-01-26 08:52:05.976619 I | operator: rook-ceph-operator-config-controller successfully started
2022-01-26 08:52:05.976650 I | ceph-csi: rook-ceph-operator-csi-controller successfully started
2022-01-26 08:52:05.976675 I | op-bucket-prov: rook-ceph-operator-bucket-controller successfully started
2022-01-26 08:52:05.976707 I | ceph-bucket-topic: successfully started
2022-01-26 08:52:05.976728 I | ceph-bucket-notification: successfully started
2022-01-26 08:52:05.976748 I | ceph-bucket-notification: successfully started
2022-01-26 08:52:05.976758 I | ceph-fs-subvolumegroup-controller: successfully started
2022-01-26 08:52:05.977732 I | operator: starting the controller-runtime manager
2022-01-26 08:52:06.081235 I | op-k8sutil: ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS="15" (configmap)
2022-01-26 08:52:06.081268 I | op-k8sutil: ROOK_LOG_LEVEL="INFO" (configmap)
2022-01-26 08:52:06.081281 I | op-k8sutil: ROOK_ENABLE_DISCOVERY_DAEMON="false" (env var)
2022-01-26 08:52:06.084774 I | operator: rook-ceph-operator-config-controller done reconciling
2022-01-26 08:52:11.123855 E | clusterdisruption-controller: cephcluster "rook-ceph/" seems to be deleted, not requeuing until triggered again
2022-01-26 08:52:11.123967 I | ceph-spec: adding finalizer "cephblockpool.ceph.rook.io" on "ceph-blockpool"
2022-01-26 08:52:11.129147 E | clusterdisruption-controller: cephcluster "rook-ceph/" seems to be deleted, not requeuing until triggered again
2022-01-26 08:52:11.134349 I | ceph-spec: adding finalizer "cephcluster.ceph.rook.io" on "rook-ceph"
2022-01-26 08:52:11.136894 I | clusterdisruption-controller: deleted all legacy node drain canary pods
2022-01-26 08:52:11.138700 I | ceph-spec: adding finalizer "cephfilesystem.ceph.rook.io" on "ceph-filesystem"
2022-01-26 08:52:11.151300 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2022-01-26 08:52:11.153997 I | ceph-cluster-controller: clusterInfo not yet found, must be a new cluster.
2022-01-26 08:52:11.154042 I | ceph-csi: successfully created csi config map "rook-ceph-csi-config"
2022-01-26 08:52:11.161005 I | op-k8sutil: ROOK_CSI_ENABLE_RBD="true" (configmap)
2022-01-26 08:52:11.161060 I | op-k8sutil: ROOK_CSI_ENABLE_CEPHFS="true" (configmap)
2022-01-26 08:52:11.161069 I | op-k8sutil: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION="false" (default)
2022-01-26 08:52:11.161075 I | op-k8sutil: ROOK_CSI_ENABLE_GRPC_METRICS="false" (configmap)
2022-01-26 08:52:11.161081 I | op-k8sutil: CSI_ENABLE_HOST_NETWORK="true" (configmap)
2022-01-26 08:52:11.161090 I | op-k8sutil: ROOK_CSI_CEPH_IMAGE="<...>/cephcsi/cephcsi:v3.5.1" (configmap)
2022-01-26 08:52:11.161098 I | op-k8sutil: ROOK_CSI_REGISTRAR_IMAGE="<...>/sig-storage/csi-node-driver-registrar:v2.4.0" (configmap)
2022-01-26 08:52:11.161103 I | op-k8sutil: ROOK_CSI_PROVISIONER_IMAGE="<...>/sig-storage/csi-provisioner:v3.1.0" (configmap)
2022-01-26 08:52:11.161108 I | op-k8sutil: ROOK_CSI_ATTACHER_IMAGE="<...>/sig-storage/csi-attacher:v3.4.0" (configmap)
2022-01-26 08:52:11.161113 I | op-k8sutil: ROOK_CSI_SNAPSHOTTER_IMAGE="<...>/sig-storage/csi-snapshotter:v4.2.0" (configmap)
2022-01-26 08:52:11.161117 I | op-k8sutil: ROOK_CSI_KUBELET_DIR_PATH="/var/lib/kubelet" (default)
2022-01-26 08:52:11.161141 I | op-k8sutil: CSI_VOLUME_REPLICATION_IMAGE="quay.io/csiaddons/volumereplication-operator:v0.1.0" (default)
2022-01-26 08:52:11.161149 I | op-k8sutil: ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.2.1" (default)
2022-01-26 08:52:11.161153 I | op-k8sutil: ROOK_CSI_CEPHFS_POD_LABELS="" (default)
2022-01-26 08:52:11.161159 I | op-k8sutil: ROOK_CSI_RBD_POD_LABELS="" (default)
2022-01-26 08:52:11.161169 I | ceph-csi: detecting the ceph csi image version for image "<...>/cephcsi/cephcsi:v3.5.1"
2022-01-26 08:52:11.161239 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 08:52:11.161251 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 08:52:11.178040 I | ceph-spec: detecting the ceph image version for image <...>/ceph/ceph:v16.2.7...
2022-01-26 08:52:11.267493 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:11.403313 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:11.538740 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:21.398394 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:21.534152 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:31.525775 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:31.661609 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:41.653257 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:41.790909 I | clusterdisruption-controller: Ceph "rook-ceph" cluster not ready, cannot check Ceph status yet.
2022-01-26 08:52:47.348841 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2022-01-26 08:52:47.348862 I | ceph-cluster-controller: validating ceph version from provided image
2022-01-26 08:52:47.350510 I | ceph-cluster-controller: cluster "rook-ceph": version "16.2.7-0 pacific" detected for image "<...>/ceph/ceph:v16.2.7"
2022-01-26 08:52:47.376518 E | ceph-spec: failed to update cluster condition to {Type:Progressing Status:True Reason:ClusterProgressing Message:Configuring the Ceph cluster LastHeartbeatTime:2022-01-26 08:52:47.367828862 +0000 UTC m=+42.798709157 LastTransitionTime:2022-01-26 08:52:47.367828691 +0000 UTC m=+42.798708769}. failed to update object "rook-ceph/rook-ceph" status: Operation cannot be fulfilled on cephclusters.ceph.rook.io "rook-ceph": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:52:47.395471 I | op-mon: start running mons
2022-01-26 08:52:47.460648 I | op-mon: creating mon secrets for a new cluster
2022-01-26 08:52:47.468609 I | ceph-csi: Detected ceph CSI image version: "v3.5.1"
2022-01-26 08:52:47.472595 I | op-k8sutil: CSI_FORCE_CEPHFS_KERNEL_CLIENT="true" (configmap)
2022-01-26 08:52:47.472614 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2022-01-26 08:52:47.472619 I | op-k8sutil: CSI_CEPHFS_GRPC_METRICS_PORT="9091" (default)
2022-01-26 08:52:47.472625 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2022-01-26 08:52:47.472630 I | op-k8sutil: CSI_CEPHFS_LIVENESS_METRICS_PORT="9081" (default)
2022-01-26 08:52:47.472634 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2022-01-26 08:52:47.472638 I | op-k8sutil: CSI_RBD_GRPC_METRICS_PORT="9090" (default)
2022-01-26 08:52:47.472642 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2022-01-26 08:52:47.472646 I | op-k8sutil: CSIADDONS_PORT="9070" (default)
2022-01-26 08:52:47.472650 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2022-01-26 08:52:47.472654 I | op-k8sutil: CSI_RBD_LIVENESS_METRICS_PORT="9080" (default)
2022-01-26 08:52:47.472659 I | op-k8sutil: CSI_PLUGIN_PRIORITY_CLASSNAME="" (default)
2022-01-26 08:52:47.472664 I | op-k8sutil: CSI_PROVISIONER_PRIORITY_CLASSNAME="" (default)
2022-01-26 08:52:47.472668 I | op-k8sutil: CSI_ENABLE_OMAP_GENERATOR="false" (configmap)
2022-01-26 08:52:47.472673 I | op-k8sutil: CSI_ENABLE_RBD_SNAPSHOTTER="true" (configmap)
2022-01-26 08:52:47.472677 I | op-k8sutil: CSI_ENABLE_CEPHFS_SNAPSHOTTER="true" (configmap)
2022-01-26 08:52:47.472680 I | op-k8sutil: CSI_ENABLE_VOLUME_REPLICATION="false" (configmap)
2022-01-26 08:52:47.472684 I | op-k8sutil: CSI_ENABLE_CSIADDONS="false" (configmap)
2022-01-26 08:52:47.472688 I | op-k8sutil: CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2022-01-26 08:52:47.472694 I | op-k8sutil: CSI_RBD_PLUGIN_UPDATE_STRATEGY="RollingUpdate" (default)
2022-01-26 08:52:47.472706 I | op-k8sutil: CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT="false" (configmap)
2022-01-26 08:52:47.472712 I | ceph-csi: Kubernetes version is 1.21
2022-01-26 08:52:47.472720 I | op-k8sutil: ROOK_CSI_RESIZER_IMAGE="<...>/sig-storage/csi-resizer:v1.3.0" (configmap)
2022-01-26 08:52:47.472726 I | op-k8sutil: CSI_LOG_LEVEL="" (default)
2022-01-26 08:52:47.748179 I | op-k8sutil: CSI_PROVISIONER_REPLICAS="2" (configmap)
2022-01-26 08:52:47.755183 I | op-k8sutil: CSI_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 08:52:47.755206 I | op-k8sutil: CSI_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 08:52:47.755231 I | op-k8sutil: CSI_PLUGIN_TOLERATIONS="" (default)
2022-01-26 08:52:47.755235 I | op-k8sutil: CSI_PLUGIN_NODE_AFFINITY="" (default)
2022-01-26 08:52:47.755239 I | op-k8sutil: CSI_RBD_PLUGIN_TOLERATIONS="" (default)
2022-01-26 08:52:47.755245 I | op-k8sutil: CSI_RBD_PLUGIN_NODE_AFFINITY="" (default)
2022-01-26 08:52:47.755251 I | op-k8sutil: CSI_RBD_PLUGIN_RESOURCE="" (default)
2022-01-26 08:52:47.900387 I | op-k8sutil: CSI_RBD_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 08:52:47.900415 I | op-k8sutil: CSI_RBD_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 08:52:47.900427 I | op-k8sutil: CSI_RBD_PROVISIONER_RESOURCE="" (default)
2022-01-26 08:52:47.942582 I | op-mon: existing maxMonID not found or failed to load. configmaps "rook-ceph-mon-endpoints" not found
2022-01-26 08:52:48.144374 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":[]}] data: mapping:{"node":{}} maxMonId:-1]
2022-01-26 08:52:48.151026 I | ceph-csi: successfully started CSI Ceph RBD driver
2022-01-26 08:52:48.549271 I | op-k8sutil: CSI_CEPHFS_PLUGIN_TOLERATIONS="" (default)
2022-01-26 08:52:48.549298 I | op-k8sutil: CSI_CEPHFS_PLUGIN_NODE_AFFINITY="" (default)
2022-01-26 08:52:48.549305 I | op-k8sutil: CSI_CEPHFS_PLUGIN_RESOURCE="" (default)
2022-01-26 08:52:48.696163 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_TOLERATIONS="" (default)
2022-01-26 08:52:48.696192 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_NODE_AFFINITY="role=compute" (configmap)
2022-01-26 08:52:48.696207 I | op-k8sutil: CSI_CEPHFS_PROVISIONER_RESOURCE="" (default)
2022-01-26 08:52:48.890702 I | ceph-csi: successfully started CSI CephFS driver
2022-01-26 08:52:49.152757 I | op-k8sutil: CSI_RBD_FSGROUPPOLICY="ReadWriteOnceWithFSType" (configmap)
2022-01-26 08:52:49.184874 I | ceph-csi: CSIDriver object created for driver "rook-ceph.rbd.csi.ceph.com"
2022-01-26 08:52:49.184910 I | op-k8sutil: CSI_CEPHFS_FSGROUPPOLICY="None" (configmap)
2022-01-26 08:52:49.193558 I | ceph-csi: CSIDriver object created for driver "rook-ceph.cephfs.csi.ceph.com"
2022-01-26 08:52:49.342906 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 08:52:49.343116 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 08:52:50.944781 I | op-mon: targeting the mon count 3
2022-01-26 08:52:51.101861 I | op-mon: created canary deployment rook-ceph-mon-a-canary
2022-01-26 08:52:51.162682 I | op-mon: waiting for canary pod creation rook-ceph-mon-a-canary
2022-01-26 08:52:51.451441 I | op-mon: created canary deployment rook-ceph-mon-b-canary
2022-01-26 08:52:51.590391 I | op-mon: created canary deployment rook-ceph-mon-c-canary
2022-01-26 08:52:51.596286 I | op-mon: waiting for canary pod creation rook-ceph-mon-b-canary
2022-01-26 08:52:51.755183 I | op-mon: parsing mon endpoints:
2022-01-26 08:52:51.755204 W | op-mon: ignoring invalid monitor
2022-01-26 08:52:51.755229 I | op-k8sutil: ROOK_OBC_WATCH_OPERATOR_NAMESPACE="true" (configmap)
2022-01-26 08:52:51.755238 I | op-bucket-prov: ceph bucket provisioner launched watching for provisioner "rook-ceph.ceph.rook.io/bucket"
2022-01-26 08:52:51.756702 I | op-bucket-prov: successfully reconciled bucket provisioner
I0126 08:52:51.756798 1 manager.go:135] objectbucket.io/provisioner-manager "msg"="starting provisioner" "name"="rook-ceph.ceph.rook.io/bucket"
2022-01-26 08:52:51.815459 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:51.812+0000 7f0dac8dc700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:51.812+0000 7f0dac8dc700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:51.958648 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:51.955+0000 7f6498711700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:51.955+0000 7f6498711700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.098820 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.095+0000 7fc1df409700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.095+0000 7fc1df409700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.144937 I | op-mon: canary monitor deployment rook-ceph-mon-c-canary scheduled to dev2-cmp2l
2022-01-26 08:52:52.144968 I | op-mon: mon c assigned to node dev2-cmp2l
2022-01-26 08:52:52.255701 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.252+0000 7fbec47e1700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.252+0000 7fbec47e1700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.399372 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.396+0000 7f8569ebc700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.396+0000 7f8569ebc700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.538067 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.534+0000 7fd08116a700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.535+0000 7fd08116a700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.676355 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.673+0000 7fc60c76c700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.673+0000 7fc60c76c700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.811886 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.808+0000 7f5c5df6b700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.808+0000 7f5c5df6b700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:52.948339 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:52.945+0000 7f0c1b38c700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:52.945+0000 7f0c1b38c700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:53.112735 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:53.109+0000 7f8771682700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:53.109+0000 7f8771682700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:53.278064 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:53.274+0000 7f2376935700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:53.275+0000 7f2376935700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:53.416120 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:53.412+0000 7f9033db6700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:53.412+0000 7f9033db6700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:53.596108 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:53.592+0000 7f8b780aa700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:53.592+0000 7f8b780aa700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:53.743961 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:53.740+0000 7f5ab2b9d700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:53.740+0000 7f5ab2b9d700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:54.065711 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:54.062+0000 7f7c0cba4700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:54.062+0000 7f7c0cba4700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:54.206553 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:54.203+0000 7f037593c700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:54.203+0000 7f037593c700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:54.862264 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:54.855+0000 7f8ce76d6700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:54.855+0000 7f8ce76d6700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:55.037498 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:55.033+0000 7fbd413b0700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:55.033+0000 7fbd413b0700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:56.277177 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:56.274+0000 7f9269554700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:56.274+0000 7f9269554700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:56.461212 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:56.457+0000 7f96b2c3f700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:56.458+0000 7f96b2c3f700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:58.879632 I | op-mon: canary monitor deployment rook-ceph-mon-a-canary scheduled to dev2-cmp3l
2022-01-26 08:52:58.879659 I | op-mon: mon a assigned to node dev2-cmp3l
2022-01-26 08:52:58.885963 I | op-mon: canary monitor deployment rook-ceph-mon-b-canary scheduled to dev2-cmp1l
2022-01-26 08:52:58.885983 I | op-mon: mon b assigned to node dev2-cmp1l
2022-01-26 08:52:58.896096 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-a-canary"
2022-01-26 08:52:58.913634 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-b-canary"
2022-01-26 08:52:58.927760 I | op-mon: cleaning up canary monitor deployment "rook-ceph-mon-c-canary"
2022-01-26 08:52:58.939529 I | op-mon: creating mon a
2022-01-26 08:52:58.971006 I | op-mon: mon "a" endpoint is [v2:10.43.61.18:3300,v1:10.43.61.18:6789]
2022-01-26 08:52:58.981534 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . unable to get monitor info from DNS SRV with service name: ceph-mon
2022-01-26T08:52:58.979+0000 7f6a44c1a700 -1 failed for service _ceph-mon._tcp
2022-01-26T08:52:58.979+0000 7f6a44c1a700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster): exit status 1
2022-01-26 08:52:59.009833 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.61.18:6789"]}] data:a=10.43.61.18:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:-1]
2022-01-26 08:52:59.010341 I | op-mon: monitor endpoints changed, updating the bootstrap peer token
2022-01-26 08:52:59.010370 I | op-mon: monitor endpoints changed, updating the bootstrap peer token
2022-01-26 08:52:59.034994 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 08:52:59.035185 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 08:52:59.054363 I | op-mon: 0 of 1 expected mons are ready. creating or updating deployments without checking quorum in attempt to achieve a healthy mon cluster
2022-01-26 08:52:59.358322 I | op-mon: updating maxMonID from -1 to 0 after committing mon "a"
2022-01-26 08:53:00.031363 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.61.18:6789"]}] data:a=10.43.61.18:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:0]
2022-01-26 08:53:00.031391 I | op-mon: waiting for mon quorum with [a]
2022-01-26 08:53:00.240551 I | op-mon: mons running: [a]
2022-01-26 08:53:14.159836 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:53:20.390077 I | op-mon: mons running: [a]
2022-01-26 08:53:29.296247 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:53:41.223516 I | op-mon: mons running: [a]
2022-01-26 08:53:44.426934 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:53:59.563829 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:54:01.365553 I | op-mon: mon a is not yet running
2022-01-26 08:54:01.365577 I | op-mon: mons running: []
2022-01-26 08:54:14.697009 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:54:21.503556 I | op-mon: mons running: [a]
2022-01-26 08:54:33.765700 I | op-mon: Monitors in quorum: [a]
2022-01-26 08:54:33.765725 I | op-mon: mons created: 1
2022-01-26 08:54:34.026699 I | op-mon: waiting for mon quorum with [a]
2022-01-26 08:54:34.033388 I | op-mon: mons running: [a]
2022-01-26 08:54:34.290449 I | op-mon: Monitors in quorum: [a]
2022-01-26 08:54:34.290496 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:34.554209 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:34.554238 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:34.812893 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:34.812921 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:35.071682 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:35.071708 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:54:35.180069 E | clusterdisruption-controller: failed to check cluster health: failed to get status. . timed out: exit status 1
2022-01-26 08:54:35.328498 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:54:35.328526 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:54:35.436661 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2022-01-26 08:54:35.436690 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 08:54:35.445374 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 08:54:35.592808 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:54:35.592832 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:54:35.712208 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 08:54:35.847359 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:54:35.847381 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 08:54:36.099147 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 08:54:36.099170 I | op-mon: creating mon b
2022-01-26 08:54:36.124929 I | op-mon: mon "a" endpoint is [v2:10.43.61.18:3300,v1:10.43.61.18:6789]
2022-01-26 08:54:36.130345 I | op-mon: mon "b" endpoint is [v2:10.43.45.143:3300,v1:10.43.45.143:6789]
2022-01-26 08:54:36.146929 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.61.18:6789","10.43.45.143:6789"]}] data:a=10.43.61.18:6789,b=10.43.45.143:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:0]
2022-01-26 08:54:36.302489 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 08:54:36.302714 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 08:54:36.708909 I | op-mon: 1 of 2 expected mon deployments exist. creating new deployment(s).
2022-01-26 08:54:36.713216 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2022-01-26 08:54:36.724558 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2022-01-26 08:54:37.951890 I | op-mon: updating maxMonID from 0 to 1 after committing mon "b"
2022-01-26 08:54:37.967667 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.61.18:6789","10.43.45.143:6789"]}] data:a=10.43.61.18:6789,b=10.43.45.143:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:1]
2022-01-26 08:54:37.967687 I | op-mon: waiting for mon quorum with [a b]
2022-01-26 08:54:38.105788 I | op-mon: mon b is not yet running
2022-01-26 08:54:38.105819 I | op-mon: mons running: [a]
2022-01-26 08:54:38.365301 I | op-mon: Monitors in quorum: [a]
2022-01-26 08:54:38.365326 I | op-mon: mons created: 2
2022-01-26 08:54:38.626593 I | op-mon: waiting for mon quorum with [a b]
2022-01-26 08:54:38.639099 I | op-mon: mon b is not yet running
2022-01-26 08:54:38.639143 I | op-mon: mons running: [a]
2022-01-26 08:54:43.690705 I | op-mon: mons running: [a b]
2022-01-26 08:54:46.947216 I | op-mon: Monitors in quorum: [a b]
2022-01-26 08:54:46.947245 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:47.211373 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:47.211398 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:47.471328 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:47.471354 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:47.728653 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:47.728677 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:54:47.983364 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:54:47.983395 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:54:48.240655 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:54:48.240687 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:54:48.508390 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:54:48.508415 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 08:54:48.770713 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 08:54:48.770738 I | op-mon: creating mon c
2022-01-26 08:54:48.791679 I | op-mon: mon "a" endpoint is [v2:10.43.61.18:3300,v1:10.43.61.18:6789]
2022-01-26 08:54:48.807390 I | op-mon: mon "b" endpoint is [v2:10.43.45.143:3300,v1:10.43.45.143:6789]
2022-01-26 08:54:48.813164 I | op-mon: mon "c" endpoint is [v2:10.43.193.126:3300,v1:10.43.193.126:6789]
2022-01-26 08:54:49.175500 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.45.143:6789","10.43.193.126:6789","10.43.61.18:6789"]}] data:a=10.43.61.18:6789,b=10.43.45.143:6789,c=10.43.193.126:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:1]
2022-01-26 08:54:49.775299 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2022-01-26 08:54:49.775668 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2022-01-26 08:54:50.185387 I | op-mon: 2 of 3 expected mon deployments exist. creating new deployment(s).
2022-01-26 08:54:50.189706 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2022-01-26 08:54:50.201017 I | op-k8sutil: deployment "rook-ceph-mon-a" did not change, nothing to update
2022-01-26 08:54:50.205591 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2022-01-26 08:54:50.216623 I | op-k8sutil: deployment "rook-ceph-mon-b" did not change, nothing to update
2022-01-26 08:54:50.492589 I | op-mon: updating maxMonID from 1 to 2 after committing mon "c"
2022-01-26 08:54:51.175238 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.43.61.18:6789","10.43.45.143:6789","10.43.193.126:6789"]}] data:a=10.43.61.18:6789,b=10.43.45.143:6789,c=10.43.193.126:6789 mapping:{"node":{"a":{"Name":"dev2-cmp3l","Hostname":"dev2-cmp3l","Address":"10.246.142.14"},"b":{"Name":"dev2-cmp1l","Hostname":"dev2-cmp1l","Address":"10.246.142.27"},"c":{"Name":"dev2-cmp2l","Hostname":"dev2-cmp2l","Address":"10.246.142.29"}}} maxMonId:2]
2022-01-26 08:54:51.175263 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 08:54:51.778551 I | op-mon: mon c is not yet running
2022-01-26 08:54:51.778576 I | op-mon: mons running: [a b]
2022-01-26 08:54:52.045398 I | op-mon: Monitors in quorum: [a b]
2022-01-26 08:54:52.045431 I | op-mon: mons created: 3
2022-01-26 08:54:52.309908 I | op-mon: waiting for mon quorum with [a b c]
2022-01-26 08:54:52.378092 I | op-mon: mon c is not yet running
2022-01-26 08:54:52.378118 I | op-mon: mons running: [a b]
2022-01-26 08:54:57.399794 I | op-mon: mons running: [a b c]
2022-01-26 08:54:59.073409 I | op-mon: Monitors in quorum: [a b c]
2022-01-26 08:54:59.073456 I | op-config: setting "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:59.331758 I | op-config: successfully set "global"="mon allow pool delete"="true" option to the mon configuration database
2022-01-26 08:54:59.331779 I | op-config: setting "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:59.590219 I | op-config: successfully set "global"="mon cluster log file"="" option to the mon configuration database
2022-01-26 08:54:59.590240 I | op-config: setting "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:59.842992 I | op-config: successfully set "global"="mon allow pool size one"="true" option to the mon configuration database
2022-01-26 08:54:59.843020 I | op-config: setting "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:55:00.097789 I | op-config: successfully set "global"="osd scrub auto repair"="true" option to the mon configuration database
2022-01-26 08:55:00.097882 I | op-config: setting "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:55:00.355669 I | op-config: successfully set "global"="log to file"="false" option to the mon configuration database
2022-01-26 08:55:00.355699 I | op-config: setting "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:55:00.616855 I | op-config: successfully set "global"="rbd_default_features"="3" option to the mon configuration database
2022-01-26 08:55:00.616888 I | op-config: deleting "log file" option from the mon configuration database
2022-01-26 08:55:00.878921 I | op-config: successfully deleted "log file" option from the mon configuration database
2022-01-26 08:55:00.884244 I | cephclient: getting or creating ceph auth key "client.csi-rbd-provisioner"
2022-01-26 08:55:01.167171 I | cephclient: getting or creating ceph auth key "client.csi-rbd-node"
2022-01-26 08:55:01.444799 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-provisioner"
2022-01-26 08:55:01.718447 I | cephclient: getting or creating ceph auth key "client.csi-cephfs-node"
2022-01-26 08:55:02.023090 I | ceph-csi: created kubernetes csi secrets for cluster "rook-ceph"
2022-01-26 08:55:02.023134 I | cephclient: getting or creating ceph auth key "client.crash"
2022-01-26 08:55:02.301786 I | ceph-crashcollector-controller: created kubernetes crash collector secret for cluster "rook-ceph"
2022-01-26 08:55:02.564876 I | cephclient: successfully enabled msgr2 protocol
2022-01-26 08:55:02.564905 I | op-config: deleting "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 08:55:02.821481 I | op-config: successfully deleted "mon_mds_skip_sanity" option from the mon configuration database
2022-01-26 08:55:02.821517 I | cephclient: create rbd-mirror bootstrap peer token "client.rbd-mirror-peer"
2022-01-26 08:55:02.821532 I | cephclient: getting or creating ceph auth key "client.rbd-mirror-peer"
2022-01-26 08:55:03.099790 I | cephclient: successfully created rbd-mirror bootstrap peer token for cluster "rook-ceph"
2022-01-26 08:55:03.113435 I | op-mgr: start running mgr
2022-01-26 08:55:03.113466 I | cephclient: getting or creating ceph auth key "mgr.a"
2022-01-26 08:55:04.317938 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:05.715262 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 08:55:08.708172 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp1l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:16.601290 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 08:55:21.273286 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp2l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:24.659058 I | op-k8sutil: finished waiting for updated deployment "rook-ceph-mgr-a"
2022-01-26 08:55:24.661668 I | op-mgr: setting services to point to mgr "a"
W0126 08:55:24.674561 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 08:55:24.689028 I | op-mgr: no need to update service "rook-ceph-mgr"
2022-01-26 08:55:24.689054 I | op-mgr: no need to update service "rook-ceph-mgr-dashboard"
2022-01-26 08:55:24.689152 I | op-mgr: successful modules: balancer
W0126 08:55:24.691738 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2022-01-26 08:55:24.703787 I | op-mgr: prometheusRule deployed
2022-01-26 08:55:24.715735 I | op-osd: start running osds in namespace "rook-ceph"
2022-01-26 08:55:24.715757 I | op-osd: wait timeout for healthy OSDs during upgrade or restart is "10m0s"
2022-01-26 08:55:24.719572 I | op-osd: start provisioning the OSDs on PVCs, if needed
2022-01-26 08:55:24.723000 I | op-osd: no storageClassDeviceSets defined to configure OSDs on PVCs
2022-01-26 08:55:24.723016 I | op-osd: start provisioning the OSDs on nodes, if needed
2022-01-26 08:55:24.723027 W | op-osd: useAllNodes is TRUE, but nodes are specified. NODES in the cluster CR will be IGNORED unless useAllNodes is FALSE.
2022-01-26 08:55:24.734454 I | op-osd: 3 of the 7 storage nodes are valid
2022-01-26 08:55:24.903164 I | op-osd: started OSD provisioning job for node "dev2-cmp1l"
2022-01-26 08:55:25.103147 I | op-osd: started OSD provisioning job for node "dev2-cmp2l"
2022-01-26 08:55:25.234537 I | op-config: setting "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 08:55:25.310562 I | op-osd: started OSD provisioning job for node "dev2-cmp3l"
2022-01-26 08:55:25.313155 I | op-osd: OSD orchestration status for node dev2-cmp1l is "starting"
2022-01-26 08:55:25.313181 I | op-osd: OSD orchestration status for node dev2-cmp2l is "starting"
2022-01-26 08:55:25.313193 I | op-osd: OSD orchestration status for node dev2-cmp3l is "starting"
2022-01-26 08:55:25.632087 I | op-config: successfully set "global"="osd_pool_default_pg_autoscale_mode"="on" option to the mon configuration database
2022-01-26 08:55:25.632114 I | op-config: setting "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 08:55:26.110084 I | op-config: successfully set "global"="mon_pg_warn_min_per_osd"="0" option to the mon configuration database
2022-01-26 08:55:26.110111 I | op-mgr: successful modules: mgr module(s) from the spec
2022-01-26 08:55:26.311828 I | op-mgr: successful modules: prometheus
2022-01-26 08:55:26.800286 I | op-osd: OSD orchestration status for node dev2-cmp2l is "orchestrating"
2022-01-26 08:55:26.960083 I | op-osd: OSD orchestration status for node dev2-cmp1l is "orchestrating"
2022-01-26 08:55:26.982875 I | op-osd: OSD orchestration status for node dev2-cmp3l is "orchestrating"
2022-01-26 08:55:31.318007 I | op-mgr: the dashboard secret was already generated
2022-01-26 08:55:31.318039 I | op-mgr: setting ceph dashboard "admin" login creds
2022-01-26 08:55:36.174268 I | clusterdisruption-controller: reconciling osd pdb reconciler as the allowed disruptions in default pdb is 0
2022-01-26 08:55:36.537766 I | op-mgr: successfully set ceph dashboard creds
2022-01-26 08:55:36.918569 I | op-config: setting "mgr.a"="mgr/dashboard/url_prefix"="/ceph/dashboard" option to the mon configuration database
2022-01-26 08:55:37.331314 I | op-config: successfully set "mgr.a"="mgr/dashboard/url_prefix"="/ceph/dashboard" option to the mon configuration database
2022-01-26 08:55:37.710898 I | op-config: setting "mgr.a"="mgr/dashboard/ssl"="false" option to the mon configuration database
2022-01-26 08:55:38.109510 I | op-config: successfully set "mgr.a"="mgr/dashboard/ssl"="false" option to the mon configuration database
2022-01-26 08:55:38.514279 I | op-config: setting "mgr.a"="mgr/dashboard/server_port"="7000" option to the mon configuration database
2022-01-26 08:55:38.905755 I | op-config: successfully set "mgr.a"="mgr/dashboard/server_port"="7000" option to the mon configuration database
2022-01-26 08:55:38.905785 I | op-mgr: dashboard config has changed. restarting the dashboard module
2022-01-26 08:55:38.905793 I | op-mgr: restarting the mgr module
2022-01-26 08:55:41.302692 I | op-mgr: successful modules: dashboard
2022-01-26 08:55:42.208459 I | op-osd: OSD orchestration status for node dev2-cmp2l is "completed"
2022-01-26 08:55:42.208484 I | op-osd: creating OSD 0 on node "dev2-cmp2l"
2022-01-26 08:55:42.569422 I | op-osd: creating OSD 3 on node "dev2-cmp2l"
2022-01-26 08:55:42.823456 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:43.004230 I | op-osd: creating OSD 6 on node "dev2-cmp2l"
2022-01-26 08:55:43.281949 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp2l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:43.323547 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down and a possible node drain is detected
2022-01-26 08:55:43.323699 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:43.450928 I | op-osd: OSD orchestration status for node dev2-cmp1l is "completed"
2022-01-26 08:55:43.450960 I | op-osd: creating OSD 7 on node "dev2-cmp1l"
2022-01-26 08:55:43.800336 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp2l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:43.803476 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:43.803562 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:43.803640 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:43.890442 I | op-osd: creating OSD 4 on node "dev2-cmp1l"
2022-01-26 08:55:44.337915 I | op-osd: creating OSD 1 on node "dev2-cmp1l"
2022-01-26 08:55:44.342519 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp2l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:44.345603 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:44.345724 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:44.345825 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:44.345871 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down and a possible node drain is detected
2022-01-26 08:55:44.345968 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:44.796933 I | op-osd: OSD orchestration status for node dev2-cmp3l is "completed"
2022-01-26 08:55:44.796961 I | op-osd: creating OSD 8 on node "dev2-cmp3l"
2022-01-26 08:55:44.907702 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:44.910574 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:44.910609 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down and a possible node drain is detected
2022-01-26 08:55:44.910695 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:44.910807 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:44.910904 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:44.911060 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:45.222699 I | op-osd: creating OSD 5 on node "dev2-cmp3l"
2022-01-26 08:55:45.413576 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:45.416862 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:45.416997 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:45.417103 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:45.417233 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:45.417266 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down and a possible node drain is detected
2022-01-26 08:55:45.417347 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:45.417450 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:45.696967 I | op-osd: creating OSD 2 on node "dev2-cmp3l"
2022-01-26 08:55:45.953666 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:45.957233 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:45.957327 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:45.957418 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:45.957486 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:55:45.957560 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:45.957624 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:45.957681 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:45.957701 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down and a possible node drain is detected
2022-01-26 08:55:46.311382 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:46.418999 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:46.422086 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:46.422218 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:46.422300 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:46.422393 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:55:46.422470 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:46.422503 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down and a possible node drain is detected
2022-01-26 08:55:46.422586 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:46.422679 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:55:46.422767 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:46.664223 I | op-osd: finished running OSDs in namespace "rook-ceph"
2022-01-26 08:55:46.664251 I | ceph-cluster-controller: done reconciling ceph cluster in namespace "rook-ceph"
2022-01-26 08:55:46.676556 I | ceph-cluster-controller: enabling ceph mon monitoring goroutine for cluster "rook-ceph"
2022-01-26 08:55:46.676603 I | op-osd: ceph osd status in namespace "rook-ceph" check interval "1m0s"
2022-01-26 08:55:46.676611 I | ceph-cluster-controller: enabling ceph osd monitoring goroutine for cluster "rook-ceph"
2022-01-26 08:55:46.676628 I | ceph-cluster-controller: ceph status check interval is 1m0s
2022-01-26 08:55:46.676636 I | ceph-cluster-controller: enabling ceph status monitoring goroutine for cluster "rook-ceph"
2022-01-26 08:55:46.938647 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:46.941686 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:46.941809 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:46.941931 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:46.942044 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:46.942164 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:46.942269 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:55:46.942363 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:46.942463 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:55:46.942548 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 08:55:47.409473 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:47.412386 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:47.412503 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:47.412605 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:55:47.412727 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:47.412819 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:55:47.412907 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 08:55:47.412974 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:47.413040 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:47.413096 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:47.692474 I | ceph-cluster-controller: Disabling the insecure global ID as no legacy clients are currently connected. If you still require the insecure connections, see the CVE to suppress the health warning and re-enable the insecure connections. https://docs.ceph.com/en/latest/security/CVE-2021-20288/
2022-01-26 08:55:47.692509 I | op-config: setting "mon"="auth_allow_insecure_global_id_reclaim"="false" option to the mon configuration database
2022-01-26 08:55:47.903448 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:55:48.117147 I | op-config: successfully set "mon"="auth_allow_insecure_global_id_reclaim"="false" option to the mon configuration database
2022-01-26 08:55:48.117171 I | ceph-cluster-controller: insecure global ID is now disabled
2022-01-26 08:55:50.936490 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp3l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:51.155775 I | op-mon: parsing mon endpoints: a=10.43.61.18:6789,b=10.43.45.143:6789,c=10.43.193.126:6789
2022-01-26 08:55:51.181045 I | op-mon: parsing mon endpoints: a=10.43.61.18:6789,b=10.43.45.143:6789,c=10.43.193.126:6789
2022-01-26 08:55:51.181113 I | ceph-spec: detecting the ceph image version for image <...>/ceph/ceph:v16.2.7...
2022-01-26 08:55:51.533838 I | ceph-block-pool-controller: creating pool "ceph-blockpool" in namespace "rook-ceph"
2022-01-26 08:55:53.049654 I | ceph-spec: detected ceph image version: "16.2.7-0 pacific"
2022-01-26 08:55:53.883065 I | ceph-file-controller: start running mdses for filesystem "ceph-filesystem"
2022-01-26 08:55:53.883093 W | ceph-spec: running the "mds" daemon(s) with 2048MB of ram, but at least 4096MB is recommended
2022-01-26 08:55:54.348101 I | cephclient: getting or creating ceph auth key "mds.ceph-filesystem-a"
2022-01-26 08:55:54.832824 I | op-mds: setting mds config flags
2022-01-26 08:55:54.832854 I | op-config: setting "mds.ceph-filesystem-a"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 08:55:55.235405 I | op-config: successfully set "mds.ceph-filesystem-a"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 08:55:55.235440 I | op-config: setting "mds.ceph-filesystem-a"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 08:55:55.641796 I | op-config: successfully set "mds.ceph-filesystem-a"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 08:55:55.838156 I | cephclient: getting or creating ceph auth key "mds.ceph-filesystem-b"
2022-01-26 08:55:56.127306 I | cephclient: creating replicated pool ceph-blockpool succeeded
2022-01-26 08:55:56.348582 I | op-mds: setting mds config flags
2022-01-26 08:55:56.348611 I | op-config: setting "mds.ceph-filesystem-b"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 08:55:56.592499 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp2l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:55:56.809398 I | op-config: successfully set "mds.ceph-filesystem-b"="mds_join_fs"="ceph-filesystem" option to the mon configuration database
2022-01-26 08:55:56.809429 I | op-config: setting "mds.ceph-filesystem-b"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 08:55:56.978452 I | ceph-block-pool-controller: initializing pool "ceph-blockpool"
2022-01-26 08:55:57.229901 I | op-config: successfully set "mds.ceph-filesystem-b"="mds_cache_memory_limit"="1073741824" option to the mon configuration database
2022-01-26 08:55:58.287030 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 08:55:58.287184 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:55:58.287288 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:55:58.287391 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:55:58.287481 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:55:58.287566 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:55:58.287654 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:55:58.287738 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:55:58.287839 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:55:58.717802 I | ceph-file-controller: creating filesystem "ceph-filesystem"
2022-01-26 08:55:58.776341 I | clusterdisruption-controller: osd is down in the failure domain "dev2-cmp1l", but pgs are active+clean. Requeuing in case pg status is not updated yet...
2022-01-26 08:56:00.060003 I | ceph-block-pool-controller: successfully initialized pool "ceph-blockpool"
2022-01-26 08:56:00.060083 I | op-config: deleting "mgr/prometheus/rbd_stats_pools" option from the mon configuration database
2022-01-26 08:56:00.152466 E | ceph-crashcollector-controller: node reconcile failed on op "unchanged": Operation cannot be fulfilled on deployments.apps "rook-ceph-crashcollector-dev2-cmp2l": the object has been modified; please apply your changes to the latest version and try again
2022-01-26 08:56:00.533373 I | op-config: successfully deleted "mgr/prometheus/rbd_stats_pools" option from the mon configuration database
2022-01-26 08:56:00.542080 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:56:00.542229 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:56:00.542334 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:56:00.542433 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:56:00.542526 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:56:00.542619 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:56:00.542715 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:56:00.542805 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 08:56:00.542895 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:56:01.014826 I | clusterdisruption-controller: osd is down in failure domain "dev2-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:32} {StateName:unknown Count:1}]"
2022-01-26 08:56:01.017014 I | clusterdisruption-controller: creating temporary blocking pdb "rook-ceph-osd-host-dev2-cmp2l" with maxUnavailable=0 for "host" failure domain "dev2-cmp2l"
2022-01-26 08:56:01.022307 I | clusterdisruption-controller: creating temporary blocking pdb "rook-ceph-osd-host-dev2-cmp3l" with maxUnavailable=0 for "host" failure domain "dev2-cmp3l"
2022-01-26 08:56:01.026223 I | clusterdisruption-controller: deleting the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 08:56:02.413194 I | clusterdisruption-controller: osd "rook-ceph-osd-0" is down but no node drain is detected
2022-01-26 08:56:02.413285 I | clusterdisruption-controller: osd "rook-ceph-osd-1" is down but no node drain is detected
2022-01-26 08:56:02.413351 I | clusterdisruption-controller: osd "rook-ceph-osd-3" is down but no node drain is detected
2022-01-26 08:56:02.413451 I | clusterdisruption-controller: osd "rook-ceph-osd-4" is down but no node drain is detected
2022-01-26 08:56:02.413544 I | clusterdisruption-controller: osd "rook-ceph-osd-8" is down but no node drain is detected
2022-01-26 08:56:02.413613 I | clusterdisruption-controller: osd "rook-ceph-osd-7" is down but no node drain is detected
2022-01-26 08:56:02.413686 I | clusterdisruption-controller: osd "rook-ceph-osd-5" is down but no node drain is detected
2022-01-26 08:56:02.413749 I | clusterdisruption-controller: osd "rook-ceph-osd-2" is down but no node drain is detected
2022-01-26 08:56:02.413847 I | clusterdisruption-controller: osd "rook-ceph-osd-6" is down but no node drain is detected
2022-01-26 08:56:02.569276 I | cephclient: creating replicated pool ceph-filesystem-metadata succeeded
2022-01-26 08:56:02.883835 I | clusterdisruption-controller: osd is down in failure domain "dev2-cmp1l" and pgs are not active+clean. pg health: "cluster is not fully clean. PGs: [{StateName:unknown Count:33} {StateName:active+clean Count:32}]"
2022-01-26 08:56:06.056681 I | cephclient: creating replicated pool ceph-filesystem-data0 succeeded
2022-01-26 08:56:06.822613 I | cephclient: creating filesystem "ceph-filesystem" with metadata pool "ceph-filesystem-metadata" and data pools [ceph-filesystem-data0]
2022-01-26 08:56:08.524211 I | ceph-file-controller: created filesystem "ceph-filesystem" on 1 data pool(s) and metadata pool "ceph-filesystem-metadata"
2022-01-26 08:56:08.524240 I | cephclient: setting allow_standby_replay for filesystem "ceph-filesystem"
2022-01-26 08:56:10.010489 I | clusterdisruption-controller: all "host" failure domains: [dev2-cmp1l dev2-cmp2l dev2-cmp3l]. osd is down in failure domain: "". active node drains: false. pg health: "cluster is not fully clean. PGs: [{StateName:active+clean Count:65} {StateName:unknown Count:32}]"
2022-01-26 08:56:14.239881 I | clusterdisruption-controller: all PGs are active+clean. Restoring default OSD pdb settings
2022-01-26 08:56:14.239905 I | clusterdisruption-controller: creating the default pdb "rook-ceph-osd" with maxUnavailable=1 for all osd
2022-01-26 08:56:14.245713 I | clusterdisruption-controller: deleting temporary blocking pdb with "rook-ceph-osd-host-dev2-cmp2l" with maxUnavailable=0 for "host" failure domain "dev2-cmp2l"
2022-01-26 08:56:14.249756 I | clusterdisruption-controller: deleting temporary blocking pdb with "rook-ceph-osd-host-dev2-cmp3l" with maxUnavailable=0 for "host" failure domain "dev2-cmp3l"
2022-01-26 08:56:32.136113 I | op-mon: checking if multiple mons are on the same node