Bug #42668

ceph daemon osd.* fails in osd container but ceph daemon mds.* does not fail in mds container

Added by Ben England almost 2 years ago. Updated about 1 year ago.

Status: Won't Fix
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): ceph cli
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

With Kubernetes (RHOCS 4.2) OSD pods, I get this error from the `ceph daemon` command:

[bengland@bene-laptop ocs-operator]$ ocos rsh rook-ceph-osd-0-7d9466ccf4-4869k
sh-4.2# ceph daemon osd.0 version
Invalid command: unused arguments: ['-m', '[v2:172.30.217.9:3300,v1:172.30.217.9:6789],[v2:172.30.30.127:3300,v1:172.30.30.127:6789],[v2:172.30.254.228:3300,v1:172.30.254.228:6789]']
version : get ceph version
admin_socket: invalid command

but the same command in an MDS pod works:

[bengland@bene-laptop ocs-operator]$ ocos rsh rook-ceph-mds-example-storagecluster-cephfilesystem-b-666b6fj52
sh-4.2# ceph daemon mds.example-storagecluster-cephfilesystem-b version
{"version":"14.2.4","release":"nautilus","release_type":"stable"}

"ceph daemon" commands are important for debugging problems with Ceph OSDs.

History

#1 Updated by Josh Durgin almost 2 years ago

Hey Ben, I'm wondering where those extra monitor address args are coming from? Is there a local ceph.conf in the container, or are they set in an environment variable like CEPH_ARGS?
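A couple of quick checks inside the OSD container would answer this; as a sketch, `CEPH_ARGS` and `/etc/ceph/ceph.conf` are the usual places monitor addresses get injected, though neither is confirmed from this pod:

```shell
# Print CEPH_ARGS if set -- Rook-style containers often use it to pass
# monitor endpoints ("-m ...") to every ceph CLI invocation:
printenv CEPH_ARGS || echo "CEPH_ARGS is not set"
# Check for a local ceph.conf that could also supply mon addresses:
ls /etc/ceph/ceph.conf 2>/dev/null || echo "no local ceph.conf"
```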

#2 Updated by Nathan Cutler over 1 year ago

  • Subject changed from ceph daemon osd.* fails but ceph daemon mds.* does not to ceph daemon osd.* fails in osd container but ceph daemon mds.* does not fail in mds container
  • Status changed from New to Need More Info

#3 Updated by S├ębastien Han about 1 year ago

  • Status changed from Need More Info to Won't Fix

Ben, just run `unset CEPH_ARGS` once in the OSD container, then you will be able to use the socket commands.
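The workaround can be sketched as below; the `CEPH_ARGS` value shown is a placeholder standing in for the monitor list the container actually sets:

```shell
# In the Rook OSD container, CEPH_ARGS carries the monitor endpoints, e.g.:
export CEPH_ARGS="-m 172.30.217.9:3300"   # placeholder value for illustration
# Admin-socket commands ("ceph daemon ...") reject those extra arguments,
# so clear the variable for this shell session:
unset CEPH_ARGS
# Admin-socket commands such as "ceph daemon osd.0 version" now parse cleanly.
[ -z "${CEPH_ARGS:-}" ] && echo "CEPH_ARGS cleared"
```

Note that `unset` only affects the current shell session; a new `rsh` into the pod will see `CEPH_ARGS` set again.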
