Bug #20848 (closed)

upgrading from jewel to luminous - mgr create throws EACCES: access denied

Added by Vasu Kulkarni almost 7 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
High
Assignee:
Category:
ceph-mgr
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

1) Have a 4-node cluster with 3 mons and 3 OSD hosts, preinstalled with the jewel release.
2) Upgrade the nodes one by one to luminous using 'ceph-deploy install --release=luminous nodename'.
3) Initially the cluster was in a healthy state; after the first upgrade, the upgraded mon was out of quorum, but the daemon itself was running fine:

   [ubuntu@vpm161 ~]$ sudo ceph -s
    cluster 76b054f1-989f-4dab-983b-6cbe87eb5c2f
     health HEALTH_WARN
            1 mons down, quorum 0,1 vpm005,vpm089
     monmap e1: 3 mons at {vpm005=172.21.2.5:6789/0,vpm089=172.21.2.89:6789/0,vpm161=172.21.2.161:6789/0}
            election epoch 994, quorum 0,1 vpm005,vpm089
      fsmap e5: 1/1/1 up {0=vpm089=up:active}
     osdmap e46: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v129: 100 pgs, 3 pools, 2068 bytes data, 20 objects
            306 MB used, 1753 GB / 1754 GB avail
                 100 active+clean
 
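Steps 1-2 above can be sketched as a shell loop; the hostnames are the ones from this cluster, and the restart/quorum check at the end is an assumed per-mon verification step, not something the original report states was run:

```shell
# Upgrade each node in turn to luminous (steps 1-2 above);
# hostnames taken from this cluster's logs.
for node in vpm005 vpm089 vpm161 vpm123; do
    ./ceph-deploy install --release=luminous "$node"
done

# Assumed verification step: after upgrading a mon host, restart its
# mon daemon and confirm it rejoins quorum before moving on.
sudo systemctl restart ceph-mon@vpm161
sudo ceph quorum_status
```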

After upgrading all the nodes, I tried to create a mgr on one of the mon nodes, but it fails with the error below. Is this something we can handle when coming from a jewel release?

[ubuntu@vpm089 ceph-deploy]$ ./ceph-deploy mgr create vpm161
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): ./ceph-deploy mgr create vpm161
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('vpm161', 'vpm161')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x159efc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x152d410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts vpm161:vpm161
Warning: Permanently added 'vpm161,172.21.2.161' (ECDSA) to the list of known hosts.
[vpm161][DEBUG ] connection detected need for sudo
Warning: Permanently added 'vpm161,172.21.2.161' (ECDSA) to the list of known hosts.
[vpm161][DEBUG ] connected to host: vpm161 
[vpm161][DEBUG ] detect platform information from remote host
[vpm161][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to vpm161
[vpm161][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[vpm161][WARNIN] mgr keyring does not exist yet, creating one
[vpm161][DEBUG ] create a keyring file
[vpm161][DEBUG ] create path if it doesn't exist
[vpm161][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.vpm161 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-vpm161/keyring
[vpm161][ERROR ] Error EACCES: access denied
[vpm161][ERROR ] exit code from command was: 13
[ceph_deploy.mgr][ERROR ] could not create mgr
[ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs

Current state: the require_jewel_osds flag is still set.

[ubuntu@vpm161 ~]$ sudo ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 1.71263 root default                                      
-2 0.57088     host vpm123                                   
 0 0.19029         osd.0        up  1.00000          1.00000 
 1 0.19029         osd.1        up  1.00000          1.00000 
 2 0.19029         osd.2        up  1.00000          1.00000 
-3 0.57088     host vpm089                                   
 3 0.19029         osd.3      down        0          1.00000 
 4 0.19029         osd.4      down        0          1.00000 
 5 0.19029         osd.5      down        0          1.00000 
-4 0.57088     host vpm005                                   
 6 0.19029         osd.6      down        0          1.00000 
 7 0.19029         osd.7      down        0          1.00000 
 8 0.19029         osd.8      down        0          1.00000 
[ubuntu@vpm161 ~]$ sudo ceph -s
    cluster 76b054f1-989f-4dab-983b-6cbe87eb5c2f
     health HEALTH_ERR
            66 pgs are stuck inactive for more than 300 seconds
            100 pgs degraded
            100 pgs stuck degraded
            66 pgs stuck inactive
            100 pgs stuck unclean
            100 pgs stuck undersized
            100 pgs undersized
            3 requests are blocked > 32 sec
            recovery 36/60 objects degraded (60.000%)
            recovery 4/60 objects misplaced (6.667%)
            mds cluster is degraded
            1 mons down, quorum 0,1 vpm005,vpm089
     monmap e1: 3 mons at {vpm005=172.21.2.5:6789/0,vpm089=172.21.2.89:6789/0,vpm161=172.21.2.161:6789/0}
            election epoch 1562, quorum 0,1 vpm005,vpm089
      fsmap e8: 1/1/1 up {0=vpm089=up:replay}
     osdmap e54: 9 osds: 3 up, 3 in; 100 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v152: 100 pgs, 3 pools, 2068 bytes data, 20 objects
            104 MB used, 584 GB / 584 GB avail
            36/60 objects degraded (60.000%)
            4/60 objects misplaced (6.667%)
                  66 undersized+degraded+peered
                  34 active+undersized+degraded
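For reference, the key that `mgr create` authenticates with can be inspected from a node holding the admin keyring. This is a diagnostic sketch, not a confirmed fix for this bug; note that the mgr auth profiles only exist once the mons are running luminous, so jewel-era mons will not recognize them:

```shell
# Show the caps currently attached to the bootstrap-mgr key
# (this is the identity the failing `auth get-or-create` ran as).
sudo ceph auth get client.bootstrap-mgr

# If the key exists but lacks the mgr bootstrap profile, its caps can
# be updated -- this assumes all mons are already on luminous, since
# jewel mons do not know the bootstrap-mgr profile.
sudo ceph auth caps client.bootstrap-mgr mon 'allow profile bootstrap-mgr'
```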
