Ceph : Issues
https://tracker.ceph.com/
2019-01-10T09:50:27Z
rgw - Bug #37855 (Resolved): only first subuser can be exported to nfs
https://tracker.ceph.com/issues/37855
2019-01-10T09:50:27Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>I have an S3 user with two subusers:</p>
<pre>
[vagrant@admin ~]$ sudo radosgw-admin user info --uid fe707977-8225-4d56-8382-42dfaa397cfc
{
"user_id": "fe707977-8225-4d56-8382-42dfaa397cfc",
"display_name": "MST107300",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "fe707977-8225-4d56-8382-42dfaa397cfc:admin",
"permissions": "full-control"
},
{
"id": "fe707977-8225-4d56-8382-42dfaa397cfc:fychao68",
"permissions": "full-control"
}
],
"keys": [
{
"user": "fe707977-8225-4d56-8382-42dfaa397cfc:fychao68",
"access_key": "2ILESNIW35DYIR8BRC8K",
"secret_key": "v5WiTzCI0CKHnm6aVPTJbo22rmhy8r6hOyJ6mUog"
},
{
"user": "fe707977-8225-4d56-8382-42dfaa397cfc",
"access_key": "AM4J6WUHYEASJBND6IGO",
"secret_key": "KDMh5CMsrXgiEJNnc5pN1PWqk31esNXGDA4p3ORL"
},
{
"user": "fe707977-8225-4d56-8382-42dfaa397cfc:admin",
"access_key": "KD2QF2LRSSJGAHTULF0D",
"secret_key": "uUuIvo6AGSTPMCnmhLS2kJxdcE3VoVwQXUxUn5LD"
}
],
</pre>
<p>When I use the following config to export the S3 user over NFS, I get the error "Authorization Failed for user fe707977-8225-4d56-8382-42dfaa397cfc":</p>
<pre>
Export {
Export_ID = 55688;
Path = "/";
Pseudo = "/MST107300";
Access_Type = RW;
Protocols = 3,4;
Transports = UDP,TCP;
Squash = No_Root_Squash;
FSAL {
Name = RGW;
User_Id = "fe707977-8225-4d56-8382-42dfaa397cfc";
Access_Key_Id ="AM4J6WUHYEASJBND6IGO";
Secret_Access_Key = "KDMh5CMsrXgiEJNnc5pN1PWqk31esNXGDA4p3ORL";
}
}
RGW {
ceph_conf = "/etc/ceph/ceph.conf";
name = "client.admin";
cluster = "ceph";
init_args = "--keyring=/etc/ceph/ceph.client.admin.keyring";
# init_args = "-d --debug-rgw=16";
}
</pre>
<pre>
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] create_export :FSAL :CRIT :Unable to mount RGW cluster for /.
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] create_export :FSAL :CRIT :Authorization Failed for user fe707977-8225-4d56-8382-42dfaa397cfc
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] mdcache_fsal_create_export :FSAL :MAJ :Failed to call create_export on underlying FSAL RGW
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] fsal_cfg_commit :CONFIG :CRIT :Could not create export for (/MST107300) to (/)
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:9): 1 validation errors in block FSAL
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:9): Errors processing block (FSAL)
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:1): 1 validation errors in block EXPORT
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:1): Errors processing block (EXPORT)
26/12/2018 12:38:55 : epoch 5c23765f : admin : ganesha.nfsd-21680[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
</pre>
<p>After deleting the subuser fe707977-8225-4d56-8382-42dfaa397cfc:fychao68, the export succeeds:</p>
<pre>
[vagrant@admin ~]$ sudo radosgw-admin subuser rm --subuser fe707977-8225-4d56-8382-42dfaa397cfc:fychao68
{
"user_id": "fe707977-8225-4d56-8382-42dfaa397cfc",
"display_name": "MST107300",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "fe707977-8225-4d56-8382-42dfaa397cfc:admin",
"permissions": "full-control"
}
],
"keys": [
{
"user": "fe707977-8225-4d56-8382-42dfaa397cfc",
"access_key": "AM4J6WUHYEASJBND6IGO",
"secret_key": "KDMh5CMsrXgiEJNnc5pN1PWqk31esNXGDA4p3ORL"
},
{
"user": "fe707977-8225-4d56-8382-42dfaa397cfc:admin",
"access_key": "KD2QF2LRSSJGAHTULF0D",
"secret_key": "uUuIvo6AGSTPMCnmhLS2kJxdcE3VoVwQXUxUn5LD"
}
],
</pre>
<pre>
26/12/2018 12:44:55 : epoch 5c2377c7 : admin : ganesha.nfsd-22268[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 2.5.5
26/12/2018 12:44:55 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
26/12/2018 12:44:55 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
26/12/2018 12:44:55 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap+ep
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs4_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :9P/TCP dispatcher thread was started successfully
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_start :NFS STARTUP :EVENT : NFS SERVER INITIALIZED
26/12/2018 12:44:56 : epoch 5c2377c7 : admin : ganesha.nfsd-22269[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
</pre>
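<p>If the subuser is still needed once the export is up, it can presumably be recreated afterwards. A sketch of that step (untested here; note radosgw-admin generates a fresh key pair, so the subuser's old credentials are gone):</p>
<pre>
# recreate the subuser deleted above; new S3 keys are generated for it
sudo radosgw-admin subuser create --uid fe707977-8225-4d56-8382-42dfaa397cfc \
    --subuser fe707977-8225-4d56-8382-42dfaa397cfc:fychao68 \
    --access full --key-type s3 --gen-access-key --gen-secret
</pre>
<p>Whether the export then breaks again on the next ganesha restart (because the subuser's key may once more precede the parent user's key in the key list) is exactly what this bug is about.</p>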
rgw - Bug #36233 (Resolved): when using nfs-ganesha to upload file, rgw es sync module get failed
https://tracker.ceph.com/issues/36233
2018-09-27T03:53:04Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>We use S3FS and nfs-ganesha to upload files to S3 and sync their metadata to Elasticsearch via the ES sync module. When a file is uploaded through nfs-ganesha, the rgw sync status shows "1 shards are recovering" and never changes.</p>
<pre>
[vagrant@client-1 build]$ radosgw-admin sync status --rgw-zone us-east-es
2018-09-27 03:00:36.936575 7f12c16c6dc0 -1 WARNING: all dangerous and experimental features are enabled.
2018-09-27 03:00:36.936821 7f12c16c6dc0 -1 WARNING: all dangerous and experimental features are enabled.
2018-09-27 03:00:36.956403 7f12c16c6dc0 -1 WARNING: all dangerous and experimental features are enabled.
realm 64a495cc-e622-44ae-b0d2-c48edfb6e395 (gold)
zonegroup 84e47fe5-6153-4046-a10a-17d000a15d84 (us)
zone 5313469f-4381-4a26-9ef4-f570d188d2fb (us-east-es)
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
data sync source: b4cda113-46e3-4662-b22e-ec427f11458c (us-east-1)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
1 shards are recovering
recovering shards: [69]
</pre>
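<p>A general first step when a shard never leaves recovery is to dump the sync error list (a standard radosgw-admin subcommand, not specific to this report):</p>
<pre>
# show entries that failed to sync into the us-east-es zone
radosgw-admin sync error list --rgw-zone us-east-es
</pre>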
<p>The radosgw log for zone us-east-es shows that it failed to sync the object:</p>
<pre>
2018-09-27 02:53:47.500655 7efe374ce700 20 stat of remote obj: z=b4cda113-46e3-4662-b22e-ec427f11458c b=test[b4cda113-46e3-4662-b22e-ec427f11458c.4205.1] k=test_from_nfs10 size=1048576 mtime=2018-09-27 02:53:46.0.495825s attrs={user.rgw.acl=buffer::list(len=135,
buffer::ptr(0~135 0x557dce0e50e0 in raw 0x557dce0e50e0 len 139 nref 1)
),user.rgw.etag=buffer::list(len=33,
buffer::ptr(0~33 0x557dce187100 in raw 0x557dce187100 len 37 nref 1)
),user.rgw.idtag=buffer::list(len=46,
buffer::ptr(0~46 0x557dce782240 in raw 0x557dce782240 len 52 nref 1)
),user.rgw.pg_ver=buffer::list(len=8,
buffer::ptr(0~8 0x557dce69d880 in raw 0x557dce69d880 len 13 nref 1)
),user.rgw.source_zone=buffer::list(len=4,
buffer::ptr(0~4 0x557dce8fb420 in raw 0x557dce8fb420 len 10 nref 1)
),user.rgw.tail_tag=buffer::list(len=46,
buffer::ptr(0~46 0x557dce7822d0 in raw 0x557dce7822d0 len 52 nref 1)
),user.rgw.unix-key1=buffer::list(len=26,
buffer::ptr(0~26 0x557dce187d00 in raw 0x557dce187d00 len 31 nref 1)
),user.rgw.unix1=buffer::list(len=74,
buffer::ptr(0~74 0x557dcd9c11e0 in raw 0x557dcd9c11e0 len 79 nref 1)
)}
2018-09-27 02:53:47.500702 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce72b800:29RGWElasticHandleRemoteObjCBCR: operate()
2018-09-27 02:53:47.500709 7efe374ce700 10 : stat of remote obj: z=b4cda113-46e3-4662-b22e-ec427f11458c b=test[b4cda113-46e3-4662-b22e-ec427f11458c.4205.1] k=test_from_nfs10 size=1048576 mtime=2018-09-27 02:53:46.0.495825s attrs={user.rgw.acl=buffer::list(len=135,
buffer::ptr(0~135 0x557dce0e50e0 in raw 0x557dce0e50e0 len 139 nref 1)
),user.rgw.etag=buffer::list(len=33,
buffer::ptr(0~33 0x557dce187100 in raw 0x557dce187100 len 37 nref 1)
),user.rgw.idtag=buffer::list(len=46,
buffer::ptr(0~46 0x557dce782240 in raw 0x557dce782240 len 52 nref 1)
),user.rgw.pg_ver=buffer::list(len=8,
buffer::ptr(0~8 0x557dce69d880 in raw 0x557dce69d880 len 13 nref 1)
),user.rgw.source_zone=buffer::list(len=4,
buffer::ptr(0~4 0x557dce8fb420 in raw 0x557dce8fb420 len 10 nref 1)
),user.rgw.tail_tag=buffer::list(len=46,
buffer::ptr(0~46 0x557dce7822d0 in raw 0x557dce7822d0 len 52 nref 1)
),user.rgw.unix-key1=buffer::list(len=26,
buffer::ptr(0~26 0x557dce187d00 in raw 0x557dce187d00 len 31 nref 1)
),user.rgw.unix1=buffer::list(len=74,
buffer::ptr(0~74 0x557dcd9c11e0 in raw 0x557dcd9c11e0 len 79 nref 1)
)}
2018-09-27 02:53:47.500752 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce71d800:20RGWPutRESTResourceCRI15es_obj_metadataiE: operate()
2018-09-27 02:53:47.500755 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce71d800:20RGWPutRESTResourceCRI15es_obj_metadataiE: operate()
2018-09-27 02:53:47.500865 7efe374ce700 20 sending request to http://192.168.15.11:9200/rgw-gold-6dcfa3f5/object/b4cda113-46e3-4662-b22e-ec427f11458c.4205.1%3Atest_from_nfs10%3Anull
2018-09-27 02:53:47.500876 7efe374ce700 20 register_request mgr=0x557dcd8f39e0 req_data->id=1156, easy_handle=0x557dcea76000
2018-09-27 02:53:47.500884 7efe374ce700 20 run: stack=0x557dcde8c540 is io blocked
2018-09-27 02:53:47.500894 7efe324c4700 20 link_request req_data=0x557dce415c00 req_data->id=1156, easy_handle=0x557dcea76000
2018-09-27 02:53:47.502798 7efe374ce700 20 cr:s=0x557dcdbac540:op=0x557dce9db800:27RGWReadRemoteDataLogShardCR: operate()
2018-09-27 02:53:47.502921 7efe374ce700 20 cr:s=0x557dcdbac540:op=0x557dce33b600:18RGWDataSyncShardCR: operate()
2018-09-27 02:53:47.502927 7efe374ce700 20 data sync: incremental_sync:1396: shard_id=69 log_entry: 1_1538016826.301943_43113.1:2018-09-27 02:53:46.0.301943s:test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1
2018-09-27 02:53:47.502957 7efe374ce700 20 data sync: incremental_sync:1432: shard_id=69 datalog_marker=1_1538016826.301943_43113.1 sync_marker.marker=1_1538016826.301943_43113.1
2018-09-27 02:53:47.502965 7efe374ce700 20 run: stack=0x557dcdbac540 is io blocked
2018-09-27 02:53:47.502968 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce3e4300:24RGWDataSyncSingleEntryCR: operate()
2018-09-27 02:53:47.502977 7efe374ce700 5 data sync: Sync:b4cda113:data:Bucket:test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1:start
2018-09-27 02:53:47.502980 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce71d000:25RGWRunBucketSyncCoroutine: operate()
2018-09-27 02:53:47.503002 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce3e3c00:20RGWContinuousLeaseCR: operate()
2018-09-27 02:53:47.503009 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce71d000:25RGWRunBucketSyncCoroutine: operate()
2018-09-27 02:53:47.503012 7efe374ce700 20 run: stack=0x557dcdcb55e0 is_blocked_by_stack()=0 is_sleeping=1 waiting_for_child()=0
2018-09-27 02:53:47.503013 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce9db800:20RGWSimpleRadosLockCR: operate()
2018-09-27 02:53:47.503014 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce9db800:20RGWSimpleRadosLockCR: operate()
2018-09-27 02:53:47.503030 7efe374ce700 20 enqueued request req=0x557dce7dfc70
2018-09-27 02:53:47.503032 7efe374ce700 20 RGWWQ:
2018-09-27 02:53:47.503033 7efe374ce700 20 req: 0x557dce7dfc70
2018-09-27 02:53:47.503036 7efe374ce700 20 run: stack=0x557dce1af960 is io blocked
2018-09-27 02:53:47.503043 7efe3b4d6700 20 dequeued request req=0x557dce7dfc70
2018-09-27 02:53:47.503046 7efe3b4d6700 20 RGWWQ: empty
2018-09-27 02:53:47.503356 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.503362 7efe324c4700 10 received header:HTTP/1.1 100 Continue
2018-09-27 02:53:47.503363 7efe324c4700 10 received header:HTTP/1.1
2018-09-27 02:53:47.503364 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.503365 7efe324c4700 10 received header:
2018-09-27 02:53:47.524418 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce9db800:20RGWSimpleRadosLockCR: operate()
2018-09-27 02:53:47.524446 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce9db800:20RGWSimpleRadosLockCR: operate() returned r=-16
2018-09-27 02:53:47.524454 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce3e3c00:20RGWContinuousLeaseCR: operate()
2018-09-27 02:53:47.524456 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce3e3c00:20RGWContinuousLeaseCR: couldn't lock us-east-es.rgw.log:bucket.sync-status.b4cda113-46e3-4662-b22e-ec427f11458c:test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1:sync_lock: retcode=-16
2018-09-27 02:53:47.524460 7efe374ce700 20 cr:s=0x557dce1af960:op=0x557dce3e3c00:20RGWContinuousLeaseCR: operate() returned r=-16
2018-09-27 02:53:47.524461 7efe374ce700 20 stack->operate() returned ret=-16
2018-09-27 02:53:47.524461 7efe374ce700 20 run: stack=0x557dce1af960 is done
2018-09-27 02:53:47.524465 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce71d000:25RGWRunBucketSyncCoroutine: operate()
2018-09-27 02:53:47.524467 7efe374ce700 5 data sync: lease cr failed, done early
2018-09-27 02:53:47.524470 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce71d000:25RGWRunBucketSyncCoroutine: operate() returned r=-16
2018-09-27 02:53:47.524474 7efe374ce700 5 data sync: Sync:b4cda113:data:Bucket:test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1:finish
2018-09-27 02:53:47.524478 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce3e4300:24RGWDataSyncSingleEntryCR: operate()
2018-09-27 02:53:47.524494 7efe374ce700 20 data sync: store_marker(): updating marker marker_oid=datalog.sync-status.shard.b4cda113-46e3-4662-b22e-ec427f11458c.69 marker=1_1538016826.301943_43113.1
2018-09-27 02:53:47.524510 7efe374ce700 20 cr:s=0x557dce6135e0:op=0x557dce59ec00:13RGWOmapAppend: operate()
2018-09-27 02:53:47.524522 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce9db800:21RGWSimpleRadosWriteCRI20rgw_data_sync_markerE: operate()
2018-09-27 02:53:47.524525 7efe374ce700 20 cr:s=0x557dce6135e0:op=0x557dce838600:21RGWRadosSetOmapKeysCR: operate()
2018-09-27 02:53:47.524529 7efe374ce700 20 cr:s=0x557dcdcb55e0:op=0x557dce9db800:21RGWSimpleRadosWriteCRI20rgw_data_sync_markerE: operate()
2018-09-27 02:53:47.524544 7efe374ce700 20 enqueued request req=0x557dcdfeb440
2018-09-27 02:53:47.524547 7efe374ce700 20 RGWWQ:
2018-09-27 02:53:47.524548 7efe374ce700 20 req: 0x557dcdfeb440
2018-09-27 02:53:47.524551 7efe374ce700 20 run: stack=0x557dcdcb55e0 is io blocked
2018-09-27 02:53:47.524552 7efe374ce700 20 cr:s=0x557dce6135e0:op=0x557dce838600:21RGWRadosSetOmapKeysCR: operate()
2018-09-27 02:53:47.524571 7efe414e2700 20 dequeued request req=0x557dcdfeb440
2018-09-27 02:53:47.524576 7efe414e2700 20 RGWWQ: empty
2018-09-27 02:53:47.524634 7efe374ce700 20 run: stack=0x557dce6135e0 is io blocked
2018-09-27 02:53:47.536832 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dce42aa00:21RGWReadRESTResourceCRISt4listI16rgw_bi_log_entrySaIS1_EEE: operate()
2018-09-27 02:53:47.536898 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dce42aa00:21RGWReadRESTResourceCRISt4listI16rgw_bi_log_entrySaIS1_EEE: operate()
2018-09-27 02:53:47.536907 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dce42aa00:21RGWReadRESTResourceCRISt4listI16rgw_bi_log_entrySaIS1_EEE: operate()
2018-09-27 02:53:47.536908 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dce42aa00:21RGWReadRESTResourceCRISt4listI16rgw_bi_log_entrySaIS1_EEE: operate()
2018-09-27 02:53:47.536911 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dcde64c00:23RGWListBucketIndexLogCR: operate()
2018-09-27 02:53:47.536913 7efe374ce700 20 cr:s=0x557dcdc23dc0:op=0x557dce7a0800:31RGWBucketShardIncrementalSyncCR: operate()
2018-09-27 02:53:47.536915 7efe374ce700 20 run: stack=0x557dcdc23dc0 is_blocked_by_stack()=0 is_sleeping=0 waiting_for_child()=1
2018-09-27 02:53:47.549960 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.549973 7efe324c4700 10 received header:HTTP/1.1 400 Bad Request
2018-09-27 02:53:47.549986 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.549987 7efe324c4700 10 received header:Warning: 299 Elasticsearch-5.6.12-cfe3d9f "Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header." "Thu, 27 Sep 2018 02:53:47 GMT"
2018-09-27 02:53:47.549995 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.549995 7efe324c4700 10 received header:content-type: application/json; charset=UTF-8
2018-09-27 02:53:47.549998 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.549998 7efe324c4700 10 received header:content-length: 373
2018-09-27 02:53:47.550000 7efe324c4700 10 receive_http_header
2018-09-27 02:53:47.550001 7efe324c4700 10 received header:
2018-09-27 02:53:47.550038 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce71d800:20RGWPutRESTResourceCRI15es_obj_metadataiE: operate()
2018-09-27 02:53:47.550063 7efe374ce700 5 failed to wait for op, ret=-22: PUT http://192.168.15.11:9200/rgw-gold-6dcfa3f5/object/b4cda113-46e3-4662-b22e-ec427f11458c.4205.1%3Atest_from_nfs10%3Anull
2018-09-27 02:53:47.550141 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce71d800:20RGWPutRESTResourceCRI15es_obj_metadataiE: operate() returned r=-22
2018-09-27 02:53:47.550164 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce72b800:29RGWElasticHandleRemoteObjCBCR: operate()
2018-09-27 02:53:47.550165 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce72b800:29RGWElasticHandleRemoteObjCBCR: operate() returned r=-22
2018-09-27 02:53:47.550167 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce72b000:27RGWElasticHandleRemoteObjCR: operate()
2018-09-27 02:53:47.550168 7efe374ce700 0 RGWStatRemoteObjCR() callback returned -22
2018-09-27 02:53:47.550169 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce72b000:27RGWElasticHandleRemoteObjCR: operate() returned r=-22
2018-09-27 02:53:47.550171 7efe374ce700 20 cr:s=0x557dcde8c540:op=0x557dce7a1000:26RGWBucketSyncSingleEntryCRISs11rgw_obj_keyE: operate()
2018-09-27 02:53:47.550174 7efe374ce700 5 data sync: Sync:b4cda113:data:Object:test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1/test_from_nfs10[0]:done, retcode=-22
2018-09-27 02:53:47.550184 7efe374ce700 0 data sync: ERROR: failed to sync object: test:b4cda113-46e3-4662-b22e-ec427f11458c.4205.1/test_from_nfs10
</pre>
<p>The Elasticsearch log shows a parse failure:</p>
<pre>
[2018-09-27T02:53:47,546][DEBUG][o.e.a.b.TransportShardBulkAction] [Q8LOYBG] [rgw-gold-6dcfa3f5][9] failed to execute bulk item (index) BulkShardRequest [[rgw-gold-6dcfa3f5][9]] containing [index {[rgw-gold-6dcfa3f5][object][b4cda113-46e3-4662-b22e-ec427f11458c.4205.1:test_from_nfs10:null], source[{"bucket":"test","name":"test_from_nfs10","instance":"","versioned_epoch":0,"owner":{"id":"admin","display_name":"admin"},"permissions":["admin"],"meta":{"size":1048576,"mtime":"2018-09-27T02:53:46.495Z","etag":"68a1b9ffc7c1aab5f5cd155a0e251dbd","tail_tag":"b4cda113-46e3-4662-b22e-ec427f11458c.134118.0","unix-key1":"\u0002\u0001\u0014\u0000\u0000\u0000\u0019\u001d�?�����4�s�\u0004\u0014�\u0002\u0000\u0000","unix1":"\u0002\u0001D\u0000\u0000\u0000\u0001\u0000\u0000\u0000�I�F�n��\u0000\u0000\u0010\u0000\u0000\u0000\u0000\u0000\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000�\u0003\u0000\u0000�\u0003\u0000\u0000��\u0000\u0000:F�[Eӌ\u001d:F�[Eӌ\u001d:F�[\u0018��\u0016\u0002\u0000\u0000"}}]}]
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
at org.elasticsearch.index.mapper.DocumentParser.wrapInMapperParsingException(DocumentParser.java:176) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:530) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:507) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:458) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:466) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:662) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675) [elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.12.jar:5.6.12]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x87
at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@1839db50; line: 1, column: 368]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1702) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:558) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidInitial(UTF8StreamJsonParser.java:3544) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidChar(UTF8StreamJsonParser.java:3538) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishString2(UTF8StreamJsonParser.java:2543) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishAndReturnString(UTF8StreamJsonParser.java:2469) ~[jackson-core-2.8.6.jar:2.8.6]
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getText(UTF8StreamJsonParser.java:315) ~[jackson-core-2.8.6.jar:2.8.6]
at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:86) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.createBuilderFromDynamicValue(DocumentParser.java:699) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseDynamicValue(DocumentParser.java:807) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:598) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:465) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:484) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:383) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.6.12.jar:5.6.12]
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.6.12.jar:5.6.12]
... 30 more
</pre>
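<p>The "Invalid UTF-8 start byte" at the bottom of the trace points at the binary user.rgw.unix* attributes (written by the NFS path) being forwarded as raw JSON string values. That the document never made it into the index can presumably be confirmed with a direct query (URI search against the index name from the log above):</p>
<pre>
# search for the object's metadata document; given the parse failure it should return no hits
curl -s 'http://192.168.15.11:9200/rgw-gold-6dcfa3f5/object/_search?q=name:test_from_nfs10&pretty'
</pre>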
rgw - Bug #36106 (Triaged): rgw_file: list directory can only get 1000 files on NFS-Ganesha-RGW w...
https://tracker.ceph.com/issues/36106
2018-09-21T09:53:27Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>When using nfs-ganesha-rgw to export a bucket over NFS, only the first 1000 files are listed:</p>
<pre>
[vagrant@admin ~]$ ls -l /mnt/test/ | grep test | wc -l
1000
</pre>
<p>Using s3cmd correctly lists all the files:</p>
<pre>
[vagrant@admin ~]$ s3cmd ls s3://test | wc -l
1500
</pre>
<p>In the rgw log, the marker (offset) changes from <em>test548</em> to <em>0</em> when the cls_bucket_list_ordered function executes:</p>
<pre>
2018-09-21 06:16:43.350 7fc1582e0700 15 READDIR offset: (nil) next marker: test548 is_truncated: 1
2018-09-21 06:16:43.351 7fc1582e0700 15 readdir final link count=1002
2018-09-21 06:16:43.374 7fc1266e6700 15 rgw_readdir2 offset=test548
2018-09-21 06:16:43.374 7fc1266e6700 1 ====== starting new request req=0x7fc1266e1cd0 ======
2018-09-21 06:16:43.374 7fc1266e6700 20 HTTP_HOST=
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000005:: :list_bucket:initializing for trans_id = tx000000000000000000000-005ba48ccb-1b1e-default
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000016:: :list_bucket:authorizing
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000018:: :list_bucket:reading op permissions
2018-09-21 06:16:43.374 7fc1266e6700 15 decode_policy Read AccessControlPolicy<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>admin</ID><DisplayName>admin</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>admin</ID><DisplayName>admin</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000054:: :list_bucket:init op
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000055:: :list_bucket:verifying op mask
2018-09-21 06:16:43.374 7fc1266e6700 20 required_mask= 1 user.op_mask=7
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000056:: :list_bucket:verifying op permissions
2018-09-21 06:16:43.374 7fc1266e6700 20 -- Getting permissions begin with perm_mask=49
2018-09-21 06:16:43.374 7fc1266e6700 5 Searching permissions for identity=RGWDummyIdentityApplier(auth_id=admin, perm_mask=15, is_admin=0) mask=49
2018-09-21 06:16:43.374 7fc1266e6700 5 Searching permissions for uid=admin
2018-09-21 06:16:43.374 7fc1266e6700 5 Found permission: 15
2018-09-21 06:16:43.374 7fc1266e6700 5 Searching permissions for group=1 mask=49
2018-09-21 06:16:43.374 7fc1266e6700 5 Permissions for group not found
2018-09-21 06:16:43.374 7fc1266e6700 5 Searching permissions for group=2 mask=49
2018-09-21 06:16:43.374 7fc1266e6700 5 Permissions for group not found
2018-09-21 06:16:43.374 7fc1266e6700 5 -- Getting permissions done for identity=RGWDummyIdentityApplier(auth_id=admin, perm_mask=15, is_admin=0), owner=admin, perm=1
2018-09-21 06:16:43.374 7fc1266e6700 10 identity=RGWDummyIdentityApplier(auth_id=admin, perm_mask=15, is_admin=0) requested perm (type)=1, policy perm=1, user_perm_mask=1, acl perm=1
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000067:: :list_bucket:verifying op params
2018-09-21 06:16:43.374 7fc1266e6700 2 req 0:0.000068:: :list_bucket:executing
2018-09-21 06:16:43.374 7fc1266e6700 10 cls_bucket_list_ordered test[81509a51-c0fd-4af9-90f9-c09d6139738e.4138.1] start 0[] num_entries 1001
</pre>
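<p>For comparison, listing over plain S3 pages through the bucket by carrying the marker forward instead of resetting it. A sketch with the AWS CLI (hypothetical host/port, assuming credentials for this RGW are configured):</p>
<pre>
# first page: up to 1000 keys, response carries IsTruncated=true
aws --endpoint-url http://rgw-host:7480 s3api list-objects --bucket test --max-keys 1000
# next page: pass the last key of the previous page (here test548) as the marker
aws --endpoint-url http://rgw-host:7480 s3api list-objects --bucket test --marker test548
</pre>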
Linux kernel client - Bug #21220 (Closed): mount error 110 = Connection timed out
https://tracker.ceph.com/issues/21220
2017-09-04T08:16:02Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>Ceph and kernel versions on the client:</p>
<pre>
root@ceph-admin:/var/log# uname -r
4.2.0-41-generic
root@ceph-admin:/var/log# ceph -v
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
</pre>
<p>Mounting CephFS fails with a timeout error:</p>
<pre>
vagrant@ceph-admin:~$ sudo mount -vvvv -t ceph ceph-mon-1,ceph-mon-2,ceph-mon-3:6789:/ /mnt -o name=admin,secret=AQBz\/KxZLh1xERAAfVubu6RxePWyZKZNDh1QjQ==
mount: fstab path: "/etc/fstab"
mount: mtab path: "/etc/mtab"
mount: lock path: "/etc/mtab~"
mount: temp path: "/etc/mtab.tmp"
mount: UID: 0
mount: eUID: 0
mount: spec: "ceph-mon-1,ceph-mon-2,ceph-mon-3:6789:/"
mount: node: "/mnt"
mount: types: "ceph"
mount: opts: "name=admin,secret=AQBz/KxZLh1xERAAfVubu6RxePWyZKZNDh1QjQ=="
mount: external mount: argv[0] = "/sbin/mount.ceph"
mount: external mount: argv[1] = "ceph-mon-1,ceph-mon-2,ceph-mon-3:6789:/"
mount: external mount: argv[2] = "/mnt"
mount: external mount: argv[3] = "-v"
mount: external mount: argv[4] = "-o"
mount: external mount: argv[5] = "rw,name=admin,secret=AQBz/KxZLh1xERAAfVubu6RxePWyZKZNDh1QjQ=="
parsing options: rw,name=admin,secret=AQBz/KxZLh1xERAAfVubu6RxePWyZKZNDh1QjQ==
mount error 110 = Connection timed out
</pre>
<p>The ceph-fuse client works fine:</p>
<pre>
vagrant@ceph-admin:~$ sudo ceph-fuse -m 192.168.15.11,192.168.15.12,192.168.15.13 /mnt --keyring /etc/ceph/ceph.client.admin.keyring --name client.admin
ceph-fuse[20117]: starting ceph client
2017-09-04 08:13:27.163183 7f70162ac000 -1 init, newargv = 0x55c88431c540 newargc=9
ceph-fuse[20117]: starting fuse
vagrant@ceph-admin:~$ df | grep ceph
ceph-fuse 56201216 0 56201216 0% /mnt
</pre>
<p>"missing required protocol features" messages appear in /var/log/syslog:</p>
<pre>
Sep 4 07:45:17 vagrant kernel: [ 3086.769690] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:45:17 vagrant kernel: [ 3086.769738] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:45:27 vagrant kernel: [ 3096.800767] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:45:27 vagrant kernel: [ 3096.800818] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:45:37 vagrant kernel: [ 3106.816588] libceph: mon0 192.168.15.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:45:37 vagrant kernel: [ 3106.816639] libceph: mon0 192.168.15.11:6789 missing required protocol features
Sep 4 07:45:47 vagrant kernel: [ 3116.832675] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:45:47 vagrant kernel: [ 3116.832724] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:45:57 vagrant kernel: [ 3126.848665] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:45:57 vagrant kernel: [ 3126.848716] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:46:07 vagrant kernel: [ 3136.864960] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:46:07 vagrant kernel: [ 3136.865038] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:48:10 vagrant kernel: [ 3259.952657] libceph: mon0 192.168.15.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:48:10 vagrant kernel: [ 3259.952704] libceph: mon0 192.168.15.11:6789 missing required protocol features
Sep 4 07:48:20 vagrant kernel: [ 3269.984679] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:48:20 vagrant kernel: [ 3269.984729] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:48:30 vagrant kernel: [ 3280.000791] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:48:30 vagrant kernel: [ 3280.000841] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:48:40 vagrant kernel: [ 3290.016840] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:48:40 vagrant kernel: [ 3290.016890] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:48:50 vagrant kernel: [ 3300.032587] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:48:50 vagrant kernel: [ 3300.035900] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:49:00 vagrant kernel: [ 3310.048834] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:00 vagrant kernel: [ 3310.052296] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:49:10 vagrant kernel: [ 3319.953886] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:10 vagrant kernel: [ 3319.957168] libceph: mon1 192.168.15.12:6789 missing required protocol features
Sep 4 07:49:20 vagrant kernel: [ 3329.984892] libceph: mon0 192.168.15.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:20 vagrant kernel: [ 3329.988368] libceph: mon0 192.168.15.11:6789 missing required protocol features
Sep 4 07:49:30 vagrant kernel: [ 3340.000906] libceph: mon0 192.168.15.11:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:30 vagrant kernel: [ 3340.004269] libceph: mon0 192.168.15.11:6789 missing required protocol features
Sep 4 07:49:40 vagrant kernel: [ 3350.016806] libceph: mon2 192.168.15.13:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:40 vagrant kernel: [ 3350.020464] libceph: mon2 192.168.15.13:6789 missing required protocol features
Sep 4 07:49:50 vagrant kernel: [ 3360.032610] libceph: mon1 192.168.15.12:6789 feature set mismatch, my 103b84a842aca < server's 40103b84a842aca, missing 400000000000000
Sep 4 07:49:50 vagrant kernel: [ 3360.035993] libceph: mon1 192.168.15.12:6789 missing required protocol features
</pre>
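<p>The missing bit 400000000000000 appears to correspond to the newer CRUSH tunables, which a 4.2 kernel client does not implement. Upgrading the client kernel is the clean fix; the commonly cited workaround, at the cost of the newer placement behavior, is to lower the cluster's CRUSH tunables profile:</p>
<pre>
# let old kernel clients connect by reverting to hammer-era CRUSH tunables
sudo ceph osd crush tunables hammer
</pre>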
mgr - Bug #20415 (Can't reproduce): dashboard: Health is not updated
https://tracker.ceph.com/issues/20415
2017-06-26T08:12:08Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>When I try the mgr dashboard on Ceph 12.1.0, the health status does not change after shutting down one monitor:</p>
<p><img src="https://tracker.ceph.com/attachments/download/2856/download.png" alt="" /></p>
<p>You can see that the health has changed to a warning in cluster.log, but not under 'Overall status'.</p>
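<p>As a cross-check outside the dashboard, the CLI reports the state the cluster itself believes; if it shows HEALTH_WARN while the dashboard still shows OK, the staleness is in the mgr dashboard module:</p>
<pre>
# health as reported by the cluster, independent of the dashboard
ceph health detail
</pre>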
Ceph - Documentation #19972 (Closed): unknown target name in erasure-code.rst
https://tracker.ceph.com/issues/19972
2017-05-18T01:47:36Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>An error occurs when building the docs:</p>
<blockquote>
<p>/home/vincent/Downloads/ceph/doc/rados/operations/erasure-code.rst:141: ERROR: Unknown target name: "file layout<../../cephfs/file-layouts>".</p>
</blockquote>
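<p>That message usually means the reference was written as plain interpreted text rather than a Sphinx role. One likely fix, assuming a cross-document link was intended, is:</p>
<pre>
:doc:`file layout <../../cephfs/file-layouts>`
</pre>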
rbd - Bug #17951 (Resolved): AdminSocket::bind_and_listen failed after rbd-nbd mapping
https://tracker.ceph.com/issues/17951
2016-11-18T08:06:34Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>Hi, I get the following error message, but it looks like list-mapped and unmap still work:</p>
<pre>
vagrant@ceph-client-1:~$ sudo rbd create test --size 100M
vagrant@ceph-client-1:~$ sudo rbd-nbd map test
/dev/nbd0
vagrant@ceph-client-1:~$ sudo rbd-nbd list-mapped
2016-11-18 07:54:29.743393 7fe6145c9e00 -1 asok(0x55df5e257960) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists
/dev/nbd0
vagrant@ceph-client-1:~$ sudo rbd-nbd unmap /dev/nbd0
2016-11-18 07:54:48.238738 7f6243b3be00 -1 asok(0x561ed2730960) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists
vagrant@ceph-client-1:~$ sudo rbd-nbd list-mapped
</pre>
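<p>The collision is on the shared client.admin admin-socket path. One possible workaround (a sketch, assuming the stock /etc/ceph/ceph.conf) is to make the client admin socket unique per process:</p>
<pre>
# /etc/ceph/ceph.conf: give each client process its own admin socket
[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok
</pre>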
Ceph-deploy - Cleanup #17053 (New): load_raw method is more suitable for reading config file
https://tracker.ceph.com/issues/17053
2016-08-17T03:02:06Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>The load_raw method was added in <a class="external" href="https://github.com/ceph/ceph-deploy/pull/158">https://github.com/ceph/ceph-deploy/pull/158</a>; it can replace the old practice of loading the config and writing it to a StringIO.</p>
Ceph - Documentation #17041 (Resolved): Remove the description of deleted options in document mon...
https://tracker.ceph.com/issues/17041
2016-08-16T10:03:15Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>option "paxos_trim_disabled_max_versions" and "paxos_trim_tolerance" has been deleted, but the description is not removed yet in document mon-config-ref.</p>
Ceph - Documentation #17038 (Won't Fix): typo in mon-config-ref
https://tracker.ceph.com/issues/17038
2016-08-16T08:05:39Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>The description of the config option "paxos trim disabled max versions" contains the misspelling "maximimum".</p>
Ceph-deploy - Bug #16443 (Resolved): ceph-deploy mon create-initial command failed with hammer ve...
https://tracker.ceph.com/issues/16443
2016-06-23T08:50:45Z
min-sheng Lin
minsheng.l@inwinstack.com
<p>Hi,<br />I ran into a problem when running ceph-deploy mon create-initial:</p>
<pre>
[2016-06-23 08:42:37,995][ceph_deploy.mon][INFO ] Running gatherkeys...
[2016-06-23 08:42:37,996][ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpDoQGwl
[2016-06-23 08:42:38,010][ceph-mon-1][DEBUG ] connection detected need for sudo
[2016-06-23 08:42:38,023][ceph-mon-1][DEBUG ] connected to host: ceph-mon-1
[2016-06-23 08:42:38,024][ceph-mon-1][DEBUG ] detect platform information from remote host
[2016-06-23 08:42:38,037][ceph-mon-1][DEBUG ] detect machine type
[2016-06-23 08:42:38,038][ceph-mon-1][DEBUG ] find the location of an executable
[2016-06-23 08:42:38,039][ceph-mon-1][INFO ] Running command: sudo /sbin/initctl version
[2016-06-23 08:42:38,047][ceph-mon-1][DEBUG ] get remote short hostname
[2016-06-23 08:42:38,048][ceph-mon-1][DEBUG ] fetch remote file
[2016-06-23 08:42:38,049][ceph-mon-1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon-1.asok mon_status
[2016-06-23 08:42:38,115][ceph-mon-1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[2016-06-23 08:42:38,229][ceph-mon-1][ERROR ] "ceph auth get-or-create for keytype admin returned 22
[2016-06-23 08:42:38,230][ceph-mon-1][DEBUG ] Error EINVAL: key for client.admin exists but cap mds does not match
[2016-06-23 08:42:38,434][ceph-mon-2][DEBUG ] connection detected need for sudo
[2016-06-23 08:42:38,631][ceph-mon-2][DEBUG ] connected to host: ceph-mon-2
[2016-06-23 08:42:38,632][ceph-mon-2][DEBUG ] detect platform information from remote host
[2016-06-23 08:42:38,642][ceph-mon-2][DEBUG ] detect machine type
[2016-06-23 08:42:38,644][ceph-mon-2][DEBUG ] find the location of an executable
[2016-06-23 08:42:38,646][ceph-mon-2][INFO ] Running command: sudo /sbin/initctl version
[2016-06-23 08:42:38,654][ceph-mon-2][DEBUG ] get remote short hostname
[2016-06-23 08:42:38,655][ceph-mon-2][DEBUG ] fetch remote file
[2016-06-23 08:42:38,657][ceph-mon-2][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon-2.asok mon_status
[2016-06-23 08:42:38,723][ceph-mon-2][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-2/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[2016-06-23 08:42:38,837][ceph-mon-2][ERROR ] "ceph auth get-or-create for keytype admin returned 22
[2016-06-23 08:42:38,838][ceph-mon-2][DEBUG ] Error EINVAL: key for client.admin exists but cap mds does not match
[2016-06-23 08:42:39,036][ceph-mon-3][DEBUG ] connection detected need for sudo
[2016-06-23 08:42:39,234][ceph-mon-3][DEBUG ] connected to host: ceph-mon-3
[2016-06-23 08:42:39,234][ceph-mon-3][DEBUG ] detect platform information from remote host
[2016-06-23 08:42:39,245][ceph-mon-3][DEBUG ] detect machine type
[2016-06-23 08:42:39,247][ceph-mon-3][DEBUG ] find the location of an executable
[2016-06-23 08:42:39,249][ceph-mon-3][INFO ] Running command: sudo /sbin/initctl version
[2016-06-23 08:42:39,257][ceph-mon-3][DEBUG ] get remote short hostname
[2016-06-23 08:42:39,258][ceph-mon-3][DEBUG ] fetch remote file
[2016-06-23 08:42:39,260][ceph-mon-3][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon-3.asok mon_status
[2016-06-23 08:42:39,325][ceph-mon-3][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-3/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[2016-06-23 08:42:39,490][ceph-mon-3][ERROR ] "ceph auth get-or-create for keytype admin returned 22
[2016-06-23 08:42:39,490][ceph-mon-3][DEBUG ] Error EINVAL: key for client.admin exists but cap mds does not match
[2016-06-23 08:42:39,492][ceph_deploy.gatherkeys][ERROR ] Failed to connect to host:ceph-mon-1, ceph-mon-2, ceph-mon-3
[2016-06-23 08:42:39,492][ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpDoQGwl
[2016-06-23 08:42:39,492][ceph_deploy][ERROR ] RuntimeError: Failed to connect any mon
</pre>
<p>It looks like the problem occurs in the ceph auth get-or-create command.</p>
<p>ceph-deploy version: 1.5.34<br />ceph version: 0.94.7</p>
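<p>The error "key for client.admin exists but cap mds does not match" suggests the existing hammer-era client.admin key carries different mds caps than the ones this ceph-deploy version requests. One possible workaround (run against a monitor; note it rewrites the admin key's caps) is:</p>
<pre>
# align client.admin caps with what ceph-deploy gatherkeys asks for
ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'
</pre>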