Bug #22482 (closed)

qa: MDS can apparently journal new file on "full" metadata pool

Added by Patrick Donnelly over 6 years ago. Updated about 6 years ago.

Status: Won't Fix
Priority: High
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport: luminous
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2017-12-19T06:19:08.951 INFO:teuthology.orchestra.run.smithi202:Running: 'cd /home/ubuntu/cephtest/mnt.0 && sudo dd if=/dev/urandom of=large_file_b bs=1M conv=fdatasync count=50 seek=280'
2017-12-19T06:19:09.086 INFO:teuthology.orchestra.run.smithi202.stderr:dd: error writing 'large_file_b': No space left on device
2017-12-19T06:19:09.086 INFO:teuthology.orchestra.run.smithi202.stderr:1+0 records in
2017-12-19T06:19:09.086 INFO:teuthology.orchestra.run.smithi202.stderr:0+0 records out
2017-12-19T06:19:09.087 INFO:teuthology.orchestra.run.smithi202.stderr:0 bytes copied, 0.0103972 s, 0.0 kB/s
2017-12-19T06:19:09.087 INFO:teuthology.orchestra.run.smithi005:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json-pretty'
...
2017-12-19T06:19:09.375 INFO:teuthology.orchestra.run.smithi005.stdout:        {
2017-12-19T06:19:09.375 INFO:teuthology.orchestra.run.smithi005.stdout:            "pool": 6,
2017-12-19T06:19:09.375 INFO:teuthology.orchestra.run.smithi005.stdout:            "pool_name": "cephfs_metadata",
2017-12-19T06:19:09.375 INFO:teuthology.orchestra.run.smithi005.stdout:            "flags": 3,
2017-12-19T06:19:09.375 INFO:teuthology.orchestra.run.smithi005.stdout:            "flags_names": "hashpspool,full",
...
2017-12-19T06:19:09.559 INFO:teuthology.orchestra.run.smithi202:Running: 'cd /home/ubuntu/cephtest/mnt.0 && sudo dd if=/dev/urandom of=small_file_1 bs=1M conv=fdatasync count=0 seek=0'
2017-12-19T06:19:09.659 INFO:teuthology.orchestra.run.smithi202.stderr:0+0 records in
2017-12-19T06:19:09.660 INFO:teuthology.orchestra.run.smithi202.stderr:0+0 records out
2017-12-19T06:19:09.660 INFO:teuthology.orchestra.run.smithi202.stderr:0 bytes copied, 0.000493095 s, 0.0 kB/s
2017-12-19T06:19:09.667 INFO:tasks.cephfs_test_runner:test_full_different_file (tasks.cephfs.test_full.TestClusterFull) ... FAIL

From: /ceph/teuthology-archive/pdonnell-2017-12-19_06:00:26-fs-mimic-dev1-testing-basic-smithi/1979446/teuthology.log
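
For reference, a minimal sketch (not taken from the test suite) of the pool-full check visible in the log above: it runs `ceph osd dump --format=json` and looks at the pool's flags_names, exactly the fields shown in the excerpt. The helper name and the standalone script form are just illustrative.

    #!/usr/bin/env python3
    # Sketch: report whether a pool carries the "full" flag in the OSD map,
    # mirroring the `ceph osd dump` check seen in the teuthology log above.
    import json
    import subprocess

    def pool_is_full(pool_name):
        """Return True if pool_name has the 'full' flag set in the OSD map."""
        out = subprocess.check_output(['ceph', 'osd', 'dump', '--format=json'])
        osdmap = json.loads(out)
        for pool in osdmap['pools']:
            if pool['pool_name'] == pool_name:
                # e.g. "hashpspool,full" for cephfs_metadata in the log above
                return 'full' in pool['flags_names'].split(',')
        raise ValueError('pool %r not found in OSD map' % pool_name)

    if __name__ == '__main__':
        print(pool_is_full('cephfs_metadata'))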

2017-12-19 06:19:09.651 7f5c11f64700  5 mds.0.log _submit_thread 4231952~1606 : EUpdate openc [metablob 0x1, 1 dirs]
2017-12-19 06:19:10.663 7f5c18771700  1 -- 172.21.15.5:6818/1076285882 <== client.4274 172.21.15.202:0/3127382166 36 ==== client_session(???) v1 ==== 28+0+0 (2172597671 0 0) 0x55d3ab46d680 con 0x55d3ab304a00
2017-12-19 06:19:10.663 7f5c18771700 20 mds.0.server get_session have 0x55d3ab246380 client.4274 172.21.15.202:0/3127382166 state open
2017-12-19 06:19:10.663 7f5c18771700  3 mds.0.server handle_client_session client_session(???) v1 from client.4274
2017-12-19 06:19:10.663 7f5c18771700  1 -- 172.21.15.5:6818/1076285882 --> 172.21.15.202:6800/21923 -- osd_op(unknown.0.24:62 6.4 6:2e2fa760:::200.00000001:head [write 37648~1626 [fadvise_dontneed]] snapc 0=[] ondisk+write+known_if_redirected+full_force e42) v9 -- 0x55d3ab3caa40 con 0
2017-12-19 06:19:10.667 7f5c1a712700  1 -- 172.21.15.5:6818/1076285882 <== osd.7 172.21.15.202:6800/21923 41 ==== osd_op_reply(62 200.00000001 [write 37648~1626 [fadvise_dontneed]] v42'33 uv33 ondisk = 0) v9 ==== 157+0+0 (885685870 0 0) 0x55d3ab3caa40 con 0x55d3ab295800

From: /ceph/teuthology-archive/pdonnell-2017-12-19_06:00:26-fs-mimic-dev1-testing-basic-smithi/1979446/remote/smithi005/log/ceph-mds.b.log.gz
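
A rough sketch (again, not part of the QA suite) of how one could confirm from the archived ceph-mds log that the journal write went out with the full_force flag, as in the osd_op line above. It assumes the log format matches the excerpt; the gzipped path handling is there because teuthology archives the MDS log as .gz.

    #!/usr/bin/env python3
    # Sketch: scan a ceph-mds log for osd_op write lines carrying full_force,
    # i.e. writes the OSDs accept even though the pool is flagged full.
    import gzip
    import re
    import sys

    FULL_FORCE_RE = re.compile(r'osd_op\(.*\[write .*\].*full_force')

    def full_force_writes(path):
        opener = gzip.open if path.endswith('.gz') else open
        with opener(path, 'rt', errors='replace') as log:
            return [line.rstrip() for line in log if FULL_FORCE_RE.search(line)]

    if __name__ == '__main__':
        for line in full_force_writes(sys.argv[1]):
            print(line)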


Related issues 1 (0 open, 1 closed)

Related to CephFS - Bug #22483: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete (Resolved, Patrick Donnelly, 12/19/2017)

#1

Updated by Patrick Donnelly over 6 years ago

  • Description updated (diff)
#2

Updated by Patrick Donnelly over 6 years ago

  • Related to Bug #22483: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete added
#3

Updated by Patrick Donnelly about 6 years ago

  • Status changed from New to Won't Fix

This is expected. The MDS is treated specially by the OSDs to allow some writes when the pool is full.
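
To illustrate the distinction, a hedged sketch (not an official reproducer): an ordinary RADOS client writing to the full pool is refused with ENOSPC, whereas the MDS's journal writes carry full_force (see the osd_op line above) and are still accepted by the OSDs. The object name, the probe payload, and the use of the `rados put` CLI here are only an example and assume a client with a working ceph.conf and admin keyring.

    #!/usr/bin/env python3
    # Sketch: a plain client write to the full metadata pool is expected to
    # fail with ENOSPC while the pool's "full" flag is set; only the MDS's
    # own full_force-tagged journal writes get through.
    import subprocess
    import tempfile

    def try_client_write(pool='cephfs_metadata', obj='probe_object'):
        with tempfile.NamedTemporaryFile() as payload:
            payload.write(b'x' * 4096)
            payload.flush()
            ret = subprocess.call(['rados', '-p', pool, 'put', obj, payload.name])
            # Non-zero return (ENOSPC) is expected while the pool is full.
            return ret == 0

    if __name__ == '__main__':
        print('client write accepted:', try_client_write())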

