Bug #816 (closed): fs size underflowed

Added by Josh Durgin about 13 years ago. Updated about 13 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: Josh Durgin
Category: -
Target version: -
% Done: 0%

Description

ceph version 0.25~rc (0c97d79056ba982f571ef8e720c9d488e3982f81)

joshd@pudgy:~$ ceph -s
2011-02-17 15:49:53.826901    pg v2595: 2056 pgs: 2056 active+clean+degraded; 16777215 PB data, 151 GB used, 1246 GB / 1397 GB avail; 18446744073709551557/18446744073709551498 degraded (100.000%)
2011-02-17 15:49:53.830120   mds e18: 1/1/1 up, 3 up:standby
2011-02-17 15:49:53.830157   osd e56: 1 osds: 1 up, 1 in
2011-02-17 15:49:53.830213   log 2011-02-17 15:41:49.832997 mon0 10.0.1.247:6789/0 7 : [INF] mds? 10.0.1.247:6802/6808 up:boot
2011-02-17 15:49:53.830274   class rbd (v1.3 [x86-64])
2011-02-17 15:49:53.830290   mon e1: 3 mons at {a=10.0.1.247:6789/0,b=10.0.1.247:6790/0,c=10.0.1.247:6791/0}

This happened at some point while I was testing snapshot operations with an rbd-backed VM: creating snapshots through qemu and deleting them with the rbd tool while the VM was writing from /dev/random to a file.
The rbd tool also deadlocked a couple of times after a snapshot was deleted.
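
The numbers in the output above are consistent with unsigned 64-bit wraparound rather than real totals: 18446744073709551557 is 2^64 - 59 and 18446744073709551498 is 2^64 - 118, so the degraded/total object counts look like small counters that were decremented below zero. Similarly, if the data total is tracked in kilobytes (an assumption), a counter that wrapped to just under 2^64 KB would display as 16777215 PB, which matches. A minimal sketch of the failure mode (hypothetical counter and values, not Ceph's actual accounting code):

#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical per-PG accounting: 10 objects degraded, but snapshot
    // deletion decrements the counter 69 times (e.g. clones discounted twice).
    uint64_t degraded_objects = 10;
    degraded_objects -= 69;  // unsigned subtraction wraps modulo 2^64

    // Prints 18446744073709551557, i.e. 2^64 - 59 -- the same magnitude
    // as the degraded count in the ceph -s output above.
    printf("%llu\n", (unsigned long long)degraded_objects);
    return 0;
}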

Actions #1

Updated by Sage Weil about 13 years ago

  • Assignee set to Josh Durgin

The first step here is to figure out how to reproduce this, and/or to find or generate full osd logs of it happening.

Actions #2

Updated by Sage Weil about 13 years ago

  • Status changed from New to Can't reproduce