ceph-osd-err-16.04-luminous.txt

Error log for OSD creation on Ubuntu 16.04 with Ceph Luminous - Rainer Krienke, 03/14/2019 09:45 AM

root@ac1:~/mycluster# ceph-deploy  osd create --bluestore --data /dev/sdg ceph4
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --bluestore --data /dev/sdg ceph4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : True
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6d48c9a200>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph4
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f6d490eac80>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdg
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdg
Password: 
Password: 
[ceph4][DEBUG ] connected to host: ceph4 
[ceph4][DEBUG ] detect platform information from remote host
[ceph4][DEBUG ] detect machine type
[ceph4][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph4
[ceph4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph4][DEBUG ] find the location of an executable
[ceph4][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
[ceph4][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph4][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2aef5735-543b-4f0c-9974-bf5efa1a41f2
[ceph4][DEBUG ] Running command: vgcreate --force --yes ceph-b3e24831-a357-436b-a982-16a00cb6c849 /dev/sdg
[ceph4][DEBUG ]  stdout: Physical volume "/dev/sdg" successfully created
[ceph4][DEBUG ]  stdout: Volume group "ceph-b3e24831-a357-436b-a982-16a00cb6c849" successfully created
[ceph4][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-2aef5735-543b-4f0c-9974-bf5efa1a41f2 ceph-b3e24831-a357-436b-a982-16a00cb6c849
[ceph4][DEBUG ]  stdout: Wiping VMFS_volume_member signature on /dev/ceph-b3e24831-a357-436b-a982-16a00cb6c849/osd-block-2aef5735-543b-4f0c-9974-bf5efa1a41f2.
[ceph4][DEBUG ]  stdout: Logical volume "osd-block-2aef5735-543b-4f0c-9974-bf5efa1a41f2" created.
[ceph4][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph4][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph4][DEBUG ] --> Absolute path not found for executable: restorecon
[ceph4][DEBUG ] --> Ensure $PATH environment variable contains common executable locations
[ceph4][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-b3e24831-a357-436b-a982-16a00cb6c849/osd-block-2aef5735-543b-4f0c-9974-bf5efa1a41f2
[ceph4][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-0
[ceph4][DEBUG ] Running command: ln -s /dev/ceph-b3e24831-a357-436b-a982-16a00cb6c849/osd-block-2aef5735-543b-4f0c-9974-bf5efa1a41f2 /var/lib/ceph/osd/ceph-0/block
[ceph4][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph4][DEBUG ]  stderr: got monmap epoch 2
[ceph4][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQD7Y4ZcCY6WCxAAvHl7o5INk8UPONL7B/d2FA==
[ceph4][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph4][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQD7Y4ZcCY6WCxAAvHl7o5INk8UPONL7B/d2FA== with 0 caps)
[ceph4][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph4][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph4][DEBUG ] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 2aef5735-543b-4f0c-9974-bf5efa1a41f2 --setuser ceph --setgroup ceph
[ceph4][DEBUG ]  stderr: 2019-03-11 14:34:54.820641 7f5fb957ce00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph4][DEBUG ]  stderr: /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread 7f5fb957ce00 time 2019-03-11 14:34:54.884072
[ceph4][DEBUG ]  stderr: /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: 1000: FAILED assert(r == 0)
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x55d9d68a3b32]
[ceph4][DEBUG ]  stderr: 2: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 3: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 4: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 5: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 6: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 7: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 8: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 9: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 10: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph4][DEBUG ]  stderr: 2019-03-11 14:34:54.889986 7f5fb957ce00 -1 /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread 7f5fb957ce00 time 2019-03-11 14:34:54.884072
[ceph4][DEBUG ]  stderr: /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: 1000: FAILED assert(r == 0)
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x55d9d68a3b32]
[ceph4][DEBUG ]  stderr: 2: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 3: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 4: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 5: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 6: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 7: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 8: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 9: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 10: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph4][DEBUG ]  stderr: -15> 2019-03-11 14:34:54.820641 7f5fb957ce00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph4][DEBUG ]  stderr: 0> 2019-03-11 14:34:54.889986 7f5fb957ce00 -1 /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)' thread 7f5fb957ce00 time 2019-03-11 14:34:54.884072
[ceph4][DEBUG ]  stderr: /build/ceph-12.2.11/src/os/bluestore/BlueFS.cc: 1000: FAILED assert(r == 0)
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x55d9d68a3b32]
[ceph4][DEBUG ]  stderr: 2: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 3: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 4: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 5: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 6: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 7: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 8: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 9: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 10: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph4][DEBUG ]  stderr: *** Caught signal (Aborted) **
[ceph4][DEBUG ]  stderr: in thread 7f5fb957ce00 thread_name:ceph-osd
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (()+0xaa0214) [0x55d9d6860214]
[ceph4][DEBUG ]  stderr: 2: (()+0x11390) [0x7f5fb7856390]
[ceph4][DEBUG ]  stderr: 3: (gsignal()+0x38) [0x7f5fb67f1428]
[ceph4][DEBUG ]  stderr: 4: (abort()+0x16a) [0x7f5fb67f302a]
[ceph4][DEBUG ]  stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x55d9d68a3cbe]
[ceph4][DEBUG ]  stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 7: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 8: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 9: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 10: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 11: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 12: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 13: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 14: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: 2019-03-11 14:34:54.897043 7f5fb957ce00 -1 *** Caught signal (Aborted) **
[ceph4][DEBUG ]  stderr: in thread 7f5fb957ce00 thread_name:ceph-osd
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (()+0xaa0214) [0x55d9d6860214]
[ceph4][DEBUG ]  stderr: 2: (()+0x11390) [0x7f5fb7856390]
[ceph4][DEBUG ]  stderr: 3: (gsignal()+0x38) [0x7f5fb67f1428]
[ceph4][DEBUG ]  stderr: 4: (abort()+0x16a) [0x7f5fb67f302a]
[ceph4][DEBUG ]  stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x55d9d68a3cbe]
[ceph4][WARNIN] -->  RuntimeError: Command failed with exit code -6: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 2aef5735-543b-4f0c-9974-bf5efa1a41f2 --setuser ceph --setgroup ceph
[ceph4][DEBUG ]  stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 7: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 8: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 9: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 10: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 11: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 12: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 13: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 14: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph4][DEBUG ]  stderr: 0> 2019-03-11 14:34:54.897043 7f5fb957ce00 -1 *** Caught signal (Aborted) **
[ceph4][DEBUG ]  stderr: in thread 7f5fb957ce00 thread_name:ceph-osd
[ceph4][DEBUG ]  stderr: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
[ceph4][DEBUG ]  stderr: 1: (()+0xaa0214) [0x55d9d6860214]
[ceph4][DEBUG ]  stderr: 2: (()+0x11390) [0x7f5fb7856390]
[ceph4][DEBUG ]  stderr: 3: (gsignal()+0x38) [0x7f5fb67f1428]
[ceph4][DEBUG ]  stderr: 4: (abort()+0x16a) [0x7f5fb67f302a]
[ceph4][DEBUG ]  stderr: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x28e) [0x55d9d68a3cbe]
[ceph4][DEBUG ]  stderr: 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xf6a) [0x55d9d680fdfa]
[ceph4][DEBUG ]  stderr: 7: (BlueFS::_replay(bool)+0x242) [0x55d9d68182b2]
[ceph4][DEBUG ]  stderr: 8: (BlueFS::mount()+0x209) [0x55d9d681c4a9]
[ceph4][DEBUG ]  stderr: 9: (BlueStore::_open_db(bool)+0x169c) [0x55d9d672baec]
[ceph4][DEBUG ]  stderr: 10: (BlueStore::mkfs()+0x106d) [0x55d9d67614bd]
[ceph4][DEBUG ]  stderr: 11: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x164) [0x55d9d62990d4]
[ceph4][DEBUG ]  stderr: 12: (main()+0x11aa) [0x55d9d61bba0a]
[ceph4][DEBUG ]  stderr: 13: (__libc_start_main()+0xf0) [0x7f5fb67dc830]
[ceph4][DEBUG ]  stderr: 14: (_start()+0x29) [0x55d9d624aeb9]
[ceph4][DEBUG ]  stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[ceph4][DEBUG ] --> Was unable to complete a new OSD, will rollback changes
[ceph4][DEBUG ] --> OSD will be fully purged from the cluster, because the ID was generated
[ceph4][DEBUG ] Running command: ceph osd purge osd.0 --yes-i-really-mean-it
[ceph4][DEBUG ]  stderr: purged osd.0
[ceph4][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
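
The abort happens while ceph-osd runs --mkfs on the freshly created logical volume, right after lvcreate reports wiping a leftover VMFS_volume_member signature on /dev/sdg. For reference only (not part of the captured session), a minimal cleanup-and-retry sketch, assuming /dev/sdg on ceph4 holds no data worth keeping and the half-created ceph-* volume group can be discarded:

# On ceph4: destroy the leftover LVM metadata and stale signatures on the device (destructive).
ceph-volume lvm zap --destroy /dev/sdg

# On the admin node (ac1): retry the same create command from the log.
ceph-deploy osd create --bluestore --data /dev/sdg ceph4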