Jun 20 18:18:12 test10 kernel: imklog 4.6.2, log source = /proc/kmsg started.
Jun 20 18:18:12 test10 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1373" x-info="http://www.rsyslog.com"] (re)start
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys cpuset
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys cpu
Jun 20 18:18:12 test10 kernel: Linux version 2.6.32-44.1.el6.x86_64 (mockbuild@x86-010.build.bos.redhat.com) (gcc version 4.4.4 20100713 (Red Hat 4.4.4-12) (GCC) ) #1 SMP Wed Jul 14 18:51:29 EDT 2010
Jun 20 18:18:12 test10 kernel: Command line: ro root=UUID=32f21c01-33d1-4669-98d8-b6df549ad97d rd_MD_UUID=4ed9ac80:54166382:27eda738:00495ef4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us crashkernel=auto rhgb quiet
Jun 20 18:18:12 test10 kernel: KERNEL supported cpus:
Jun 20 18:18:12 test10 kernel: Intel GenuineIntel
Jun 20 18:18:12 test10 kernel: AMD AuthenticAMD
Jun 20 18:18:12 test10 kernel: Centaur CentaurHauls
Jun 20 18:18:12 test10 kernel: BIOS-provided physical RAM map:
Jun 20 18:18:12 test10 kernel: BIOS-e820: 0000000000000000 - 000000000009dc00 (usable)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 000000000009dc00 - 00000000000a0000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 00000000000e4000 - 0000000000100000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 0000000000100000 - 000000003f680000 (usable)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 000000003f680000 - 000000003f68b000 (ACPI data)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 000000003f68b000 - 000000003f700000 (ACPI NVS)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 000000003f700000 - 0000000040000000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 00000000e0000000 - 00000000e4000000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
Jun 20 18:18:12 test10 kernel: BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)
Jun 20 18:18:12 test10 kernel: DMI present.
Jun 20 18:18:12 test10 kernel: Phoenix BIOS detected: BIOS may corrupt low RAM, working around it.
Jun 20 18:18:12 test10 kernel: last_pfn = 0x3f680 max_arch_pfn = 0x400000000
Jun 20 18:18:12 test10 kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Jun 20 18:18:12 test10 kernel: total RAM covered: 1015M
Jun 20 18:18:12 test10 kernel: Found optimal setting for mtrr clean up
Jun 20 18:18:12 test10 kernel: gran_size: 64K chunk_size: 16M num_reg: 3 lose cover RAM: 0G
Jun 20 18:18:12 test10 kernel: init_memory_mapping: 0000000000000000-000000003f680000
Jun 20 18:18:12 test10 kernel: RAMDISK: 373e6000 - 37fefd88
Jun 20 18:18:12 test10 kernel: ACPI: RSDP 00000000000f6ee0 00014 (v00 INTELB)
Jun 20 18:18:12 test10 kernel: ACPI: RSDT 000000003f6841bb 0003C (v01 PTLTD RSDT 06040000 LTP 00000000)
Jun 20 18:18:12 test10 kernel: ACPI: FACP 000000003f68ae7e 00084 (v01 SUPRMC 06040000 PTL 00000003)
Jun 20 18:18:12 test10 kernel: ACPI: DSDT 000000003f6855e3 0589B (v01 INTEL BR_WATER 06040000 MSFT 03000001)
Jun 20 18:18:12 test10 kernel: ACPI: FACS 000000003f68bfc0 00040
Jun 20 18:18:12 test10 kernel: ACPI: TCPA 000000003f68af02 00032 (v01 SMC 06040000 PTL 00000000)
Jun 20 18:18:12 test10 kernel: ACPI: MCFG 000000003f68af34 0003C (v01 INTELB R_WATERP 06040000 LTP 00000000)
Jun 20 18:18:12 test10 kernel: ACPI: APIC 000000003f68af70 00068 (v01 INTELB R_WATERP 06040000 LTP 00000000)
Jun 20 18:18:12 test10 kernel: ACPI: BOOT 000000003f68afd8 00028 (v01 INTELB R_WATERP 06040000 LTP 00000001)
Jun 20 18:18:12 test10 kernel: ACPI: SSDT 000000003f6841f7 013EC (v01 INTELB R_WATERW 00003000 INTL 20061109)
Jun 20 18:18:12 test10 kernel: No NUMA configuration found
Jun 20 18:18:12 test10 kernel: Faking a node at 0000000000000000-000000003f680000
Jun 20 18:18:12 test10 kernel: Bootmem setup node 0 0000000000000000-000000003f680000
Jun 20 18:18:12 test10 kernel: NODE_DATA [0000000000011000 - 0000000000044fff]
Jun 20 18:18:12 test10 kernel: bootmap [0000000000045000 - 000000000004cecf] pages 8
Jun 20 18:18:12 test10 kernel: (7 early reservations) ==> bootmem [0000000000 - 003f680000]
Jun 20 18:18:12 test10 kernel: #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000]
Jun 20 18:18:12 test10 kernel: #1 [0000006000 - 0000008000] TRAMPOLINE ==> [0000006000 - 0000008000]
Jun 20 18:18:12 test10 kernel: #2 [0001000000 - 0001cb90d8] TEXT DATA BSS ==> [0001000000 - 0001cb90d8]
Jun 20 18:18:12 test10 kernel: #3 [00373e6000 - 0037fefd88] RAMDISK ==> [00373e6000 - 0037fefd88]
Jun 20 18:18:12 test10 kernel: #4 [000009dc00 - 0000100000] BIOS reserved ==> [000009dc00 - 0000100000]
Jun 20 18:18:12 test10 kernel: #5 [0001cba000 - 0001cba178] BRK ==> [0001cba000 - 0001cba178]
Jun 20 18:18:12 test10 kernel: #6 [0000010000 - 0000011000] PGTABLE ==> [0000010000 - 0000011000]
Jun 20 18:18:12 test10 kernel: found SMP MP-table at [ffff8800000f6f10] f6f10
Jun 20 18:18:12 test10 kernel: Zone PFN ranges:
Jun 20 18:18:12 test10 kernel: DMA 0x00000010 -> 0x00001000
Jun 20 18:18:12 test10 kernel: DMA32 0x00001000 -> 0x00100000
Jun 20 18:18:12 test10 kernel: Normal 0x00100000 -> 0x00100000
Jun 20 18:18:12 test10 kernel: Movable zone start PFN for each node
Jun 20 18:18:12 test10 kernel: early_node_map[2] active PFN ranges
Jun 20 18:18:12 test10 kernel: 0: 0x00000010 -> 0x0000009d
Jun 20 18:18:12 test10 kernel: 0: 0x00000100 -> 0x0003f680
Jun 20 18:18:12 test10 kernel: ACPI: PM-Timer IO Port: 0x1008
Jun 20 18:18:12 test10 kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Jun 20 18:18:12 test10 kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Jun 20 18:18:12 test10 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jun 20 18:18:12 test10 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jun 20 18:18:12 test10 kernel: ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
Jun 20 18:18:12 test10 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
Jun 20 18:18:12 test10 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jun 20 18:18:12 test10 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 18:18:12 test10 kernel: Using ACPI (MADT) for SMP configuration information
Jun 20 18:18:12 test10 kernel: SMP: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 18:18:12 test10 kernel: PM: Registered nosave memory: 000000000009d000 - 000000000009e000
Jun 20 18:18:12 test10 kernel: PM: Registered nosave memory: 000000000009e000 - 00000000000a0000
Jun 20 18:18:12 test10 kernel: PM: Registered nosave memory: 00000000000a0000 - 00000000000e4000
Jun 20 18:18:12 test10 kernel: PM: Registered nosave memory: 00000000000e4000 - 0000000000100000
Jun 20 18:18:12 test10 kernel: Allocating PCI resources starting at 40000000 (gap: 40000000:a0000000)
Jun 20 18:18:12 test10 kernel: Booting paravirtualized kernel on bare hardware
Jun 20 18:18:12 test10 kernel: NR_CPUS:4096 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 18:18:12 test10 kernel: PERCPU: Embedded 31 pages/cpu @ffff880001e00000 s94744 r8192 d24040 u1048576
Jun 20 18:18:12 test10 kernel: pcpu-alloc: s94744 r8192 d24040 u1048576 alloc=1*2097152
Jun 20 18:18:12 test10 kernel: pcpu-alloc: [0] 0 1
Jun 20 18:18:12 test10 kernel: Built 1 zonelists in Node order, mobility grouping on. Total pages: 255944
Jun 20 18:18:12 test10 kernel: Policy zone: DMA32
Jun 20 18:18:12 test10 kernel: Kernel command line: ro root=UUID=32f21c01-33d1-4669-98d8-b6df549ad97d rd_MD_UUID=4ed9ac80:54166382:27eda738:00495ef4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
Jun 20 18:18:12 test10 kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)
Jun 20 18:18:12 test10 kernel: Checking aperture...
Jun 20 18:18:12 test10 kernel: No AGP bridge found
Jun 20 18:18:12 test10 kernel: Memory: 997948k/1038848k available (4999k kernel code, 460k absent, 40440k reserved, 3971k data, 1220k init)
Jun 20 18:18:12 test10 kernel: Hierarchical RCU implementation.
Jun 20 18:18:12 test10 kernel: NR_IRQS:33024 nr_irqs:424
Jun 20 18:18:12 test10 kernel: Console: colour VGA+ 80x25
Jun 20 18:18:12 test10 kernel: console [tty0] enabled
Jun 20 18:18:12 test10 kernel: allocated 10485760 bytes of page_cgroup
Jun 20 18:18:12 test10 kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
Jun 20 18:18:12 test10 kernel: Fast TSC calibration using PIT
Jun 20 18:18:12 test10 kernel: Detected 1995.119 MHz processor.
Jun 20 18:18:12 test10 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 3990.23 BogoMIPS (lpj=1995119)
Jun 20 18:18:12 test10 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:18:12 test10 kernel: Security Framework initialized
Jun 20 18:18:12 test10 kernel: SELinux: Initializing.
Jun 20 18:18:12 test10 kernel: Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Jun 20 18:18:12 test10 kernel: Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
Jun 20 18:18:12 test10 kernel: Mount-cache hash table entries: 256
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys ns
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys cpuacct
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys memory
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys devices
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys freezer
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys net_cls
Jun 20 18:18:12 test10 kernel: Initializing cgroup subsys blkio
Jun 20 18:18:12 test10 kernel: CPU: Physical Processor ID: 0
Jun 20 18:18:12 test10 kernel: CPU: Processor Core ID: 0
Jun 20 18:18:12 test10 kernel: mce: CPU supports 6 MCE banks
Jun 20 18:18:12 test10 kernel: using mwait in idle threads.
Jun 20 18:18:12 test10 kernel: Performance Events: PEBS fmt0+, Core2 events, Intel PMU driver.
Jun 20 18:18:12 test10 kernel: PEBS disabled due to CPU errata.
Jun 20 18:18:12 test10 kernel: ... version: 2
Jun 20 18:18:12 test10 kernel: ... bit width: 40
Jun 20 18:18:12 test10 kernel: ... generic registers: 2
Jun 20 18:18:12 test10 kernel: ... value mask: 000000ffffffffff
Jun 20 18:18:12 test10 kernel: ... max period: 000000007fffffff
Jun 20 18:18:12 test10 kernel: ... fixed-purpose events: 3
Jun 20 18:18:12 test10 kernel: ... event mask: 0000000700000003
Jun 20 18:18:12 test10 kernel: ACPI: Core revision 20090903
Jun 20 18:18:12 test10 kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Jun 20 18:18:12 test10 kernel: ftrace: allocating 20453 entries in 81 pages
Jun 20 18:18:12 test10 kernel: Setting APIC routing to flat
Jun 20 18:18:12 test10 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 18:18:12 test10 kernel: CPU0: Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz stepping 0d
Jun 20 18:18:12 test10 kernel: Booting Node 0, Processors #1 Ok.
Jun 20 18:18:12 test10 kernel: Brought up 2 CPUs
Jun 20 18:18:12 test10 kernel: Total of 2 processors activated (7979.40 BogoMIPS).
Jun 20 18:18:12 test10 kernel: Testing NMI watchdog ... OK.
Jun 20 18:18:12 test10 kernel: devtmpfs: initialized
Jun 20 18:18:12 test10 kernel: regulator: core version 0.5
Jun 20 18:18:12 test10 kernel: NET: Registered protocol family 16
Jun 20 18:18:12 test10 kernel: ACPI: bus type pci registered
Jun 20 18:18:12 test10 kernel: PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 7
Jun 20 18:18:12 test10 kernel: PCI: MCFG area at e0000000 reserved in E820
Jun 20 18:18:12 test10 kernel: PCI: Using MMCONFIG at e0000000 - e07fffff
Jun 20 18:18:12 test10 kernel: PCI: Using configuration type 1 for base access
Jun 20 18:18:12 test10 kernel: bio: create slab at 0
Jun 20 18:18:12 test10 kernel: ACPI: Interpreter enabled
Jun 20 18:18:12 test10 kernel: ACPI: (supports S0 S1 S4 S5)
Jun 20 18:18:12 test10 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 18:18:12 test10 kernel: ACPI: No dock devices found.
Jun 20 18:18:12 test10 kernel: ACPI: PCI Root Bridge [PCI0] (0000:00)
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:00:1d.7: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:00:1f.0: quirk: region 1000-107f claimed by ICH6 ACPI/GPIO/TCO
Jun 20 18:18:12 test10 kernel: pci 0000:00:1f.0: quirk: region 1180-11bf claimed by ICH6 GPIO
Jun 20 18:18:12 test10 kernel: pci 0000:00:1f.0: ICH7 LPC Generic IO decode 1 PIO at 0294 (mask 0097)
Jun 20 18:18:12 test10 kernel: pci 0000:00:1f.2: PME# supported from D3hot
Jun 20 18:18:12 test10 kernel: pci 0000:00:1f.2: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:06:00.0: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:06:00.0: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:06:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 18:18:12 test10 kernel: pci 0000:07:00.0: PME# supported from D0 D3hot D3cold
Jun 20 18:18:12 test10 kernel: pci 0000:07:00.0: PME# disabled
Jun 20 18:18:12 test10 kernel: pci 0000:07:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: transparent bridge
Jun 20 18:18:12 test10 kernel: Unable to assume PCIe control: Disabling ASPM
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *10 11 12 14 15)
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 10 *11 12 14 15)
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 *7 10 11 12 14 15)
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:18:12 test10 kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:18:12 test10 kernel: vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
Jun 20 18:18:12 test10 kernel: vgaarb: loaded
Jun 20 18:18:12 test10 kernel: SCSI subsystem initialized
Jun 20 18:18:12 test10 kernel: usbcore: registered new interface driver usbfs
Jun 20 18:18:12 test10 kernel: usbcore: registered new interface driver hub
Jun 20 18:18:12 test10 kernel: usbcore: registered new device driver usb
Jun 20 18:18:12 test10 kernel: PCI: Using ACPI for IRQ routing
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: BAR 13: can't allocate resource
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: BAR 14: can't allocate resource
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: BAR 15: can't allocate resource
Jun 20 18:18:12 test10 kernel: NetLabel: Initializing
Jun 20 18:18:12 test10 kernel: NetLabel: domain hash size = 128
Jun 20 18:18:12 test10 kernel: NetLabel: protocols = UNLABELED CIPSOv4
Jun 20 18:18:12 test10 kernel: NetLabel: unlabeled traffic allowed by default
Jun 20 18:18:12 test10 kernel: HPET: 3 timers in total, 0 timers will be used for per-cpu timer
Jun 20 18:18:12 test10 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 18:18:12 test10 kernel: hpet0: 3 comparators, 64-bit 14.318180 MHz counter
Jun 20 18:18:12 test10 kernel: Switching to clocksource tsc
Jun 20 18:18:12 test10 kernel: pnp: PnP ACPI init
Jun 20 18:18:12 test10 kernel: ACPI: bus type pnp registered
Jun 20 18:18:12 test10 kernel: pnp: PnP ACPI: found 10 devices
Jun 20 18:18:12 test10 kernel: ACPI: ACPI bus type pnp unregistered
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x295-0x296 has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x800-0x83f has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x900-0x90f has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x1000-0x107f has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x1180-0x11bf has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0x4d0-0x4d1 has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: ioport range 0xfe00-0xfe00 has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: iomem range 0xfed14000-0xfed17fff has been reserved
Jun 20 18:18:12 test10 kernel: system 00:01: iomem range 0xe0000000-0xefffffff could not be reserved
Jun 20 18:18:12 test10 kernel: system 00:01: iomem range 0xfef00000-0xfeffffff has been reserved
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: PCI bridge to [bus 02-02]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: PCI bridge, secondary bus 0000:02
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: bridge window [0x2000-0x2fff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: bridge window [0x40000000-0x401fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: bridge window [0x40200000-0x403fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: PCI bridge to [bus 06-06]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: PCI bridge, secondary bus 0000:06
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: bridge window [0x4000-0x4fff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: bridge window [0xd0100000-0xd01fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: bridge window [0x40400000-0x405fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: PCI bridge to [bus 07-07]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: PCI bridge, secondary bus 0000:07
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: bridge window [0x5000-0x5fff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: bridge window [0xd0200000-0xd02fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: bridge window [0x40600000-0x407fffff]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: PCI bridge, secondary bus 0000:08
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: bridge window [io disabled]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: bridge window [mem disabled]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1e.0: bridge window [mem pref disabled]
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: enabling device (0000 -> 0003)
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: pci 0000:00:1c.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:18:12 test10 kernel: NET: Registered protocol family 2
Jun 20 18:18:12 test10 kernel: IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
Jun 20 18:18:12 test10 kernel: TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
Jun 20 18:18:12 test10 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Jun 20 18:18:12 test10 kernel: TCP: Hash tables configured (established 131072 bind 65536)
Jun 20 18:18:12 test10 kernel: TCP reno registered
Jun 20 18:18:12 test10 kernel: NET: Registered protocol family 1
Jun 20 18:18:12 test10 kernel: Trying to unpack rootfs image as initramfs...
Jun 20 18:18:12 test10 kernel: Freeing initrd memory: 12327k freed
Jun 20 18:18:12 test10 kernel: Simple Boot Flag at 0x37 set to 0x1
Jun 20 18:18:12 test10 kernel: audit: initializing netlink socket (disabled)
Jun 20 18:18:12 test10 kernel: type=2000 audit(1308593864.511:1): initialized
Jun 20 18:18:12 test10 kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages
Jun 20 18:18:12 test10 kernel: VFS: Disk quotas dquot_6.5.2
Jun 20 18:18:12 test10 kernel: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:18:12 test10 kernel: msgmni has been set to 1973
Jun 20 18:18:12 test10 kernel: alg: No test for stdrng (krng)
Jun 20 18:18:12 test10 kernel: ksign: Installing public key data
Jun 20 18:18:12 test10 kernel: Loading keyring
Jun 20 18:18:12 test10 kernel: - Added public key D74F9483339158D
Jun 20 18:18:12 test10 kernel: - User ID: Red Hat, Inc. (Kernel Module GPG key)
Jun 20 18:18:12 test10 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
Jun 20 18:18:12 test10 kernel: io scheduler noop registered
Jun 20 18:18:12 test10 kernel: io scheduler anticipatory registered
Jun 20 18:18:12 test10 kernel: io scheduler deadline registered
Jun 20 18:18:12 test10 kernel: io scheduler cfq registered (default)
Jun 20 18:18:12 test10 kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Jun 20 18:18:12 test10 kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Jun 20 18:18:12 test10 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:18:12 test10 kernel: pci-stub: invalid id string ""
Jun 20 18:18:12 test10 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/PNP0C0C:00/input/input0
Jun 20 18:18:12 test10 kernel: ACPI: Power Button [PWRB]
Jun 20 18:18:12 test10 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jun 20 18:18:12 test10 kernel: ACPI: Power Button [PWRF]
Jun 20 18:18:12 test10 kernel: processor LNXCPU:00: registered as cooling_device0
Jun 20 18:18:12 test10 kernel: processor LNXCPU:01: registered as cooling_device1
Jun 20 18:18:12 test10 kernel: xen-platform-pci: failed Xen IOPORT backend handshake: unrecognised magic value
Jun 20 18:18:12 test10 kernel: Non-volatile memory driver v1.3
Jun 20 18:18:12 test10 kernel: Linux agpgart interface v0.103
Jun 20 18:18:12 test10 kernel: agpgart-intel 0000:00:00.0: Intel 946GZ Chipset
Jun 20 18:18:12 test10 kernel: agpgart-intel 0000:00:00.0: detected 7676K stolen memory
Jun 20 18:18:12 test10 kernel: agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xc0000000
Jun 20 18:18:12 test10 kernel: crash memory driver: version 1.0
Jun 20 18:18:12 test10 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:18:12 test10 kernel: serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jun 20 18:18:12 test10 kernel: serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Jun 20 18:18:12 test10 kernel: 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jun 20 18:18:12 test10 kernel: 00:08: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Jun 20 18:18:12 test10 kernel: brd: module loaded
Jun 20 18:18:12 test10 kernel: loop: module loaded
Jun 20 18:18:12 test10 kernel: input: Macintosh mouse button emulation as /devices/virtual/input/input2
Jun 20 18:18:12 test10 kernel: Fixed MDIO Bus: probed
Jun 20 18:18:12 test10 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: EHCI Host Controller
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: using broken periodic workaround
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: debug port 1
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: irq 16, io mem 0xd0500000
Jun 20 18:18:12 test10 kernel: ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00
Jun 20 18:18:12 test10 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Jun 20 18:18:12 test10 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:18:12 test10 kernel: usb usb1: Product: EHCI Host Controller
Jun 20 18:18:12 test10 kernel: usb usb1: Manufacturer: Linux 2.6.32-44.1.el6.x86_64 ehci_hcd
Jun 20 18:18:12 test10 kernel: usb usb1: SerialNumber: 0000:00:1d.7
Jun 20 18:18:12 test10 kernel: usb usb1: configuration #1 chosen from 1 choice
Jun 20 18:18:12 test10 kernel: hub 1-0:1.0: USB hub found
Jun 20 18:18:12 test10 kernel: hub 1-0:1.0: 8 ports detected
Jun 20 18:18:12 test10 kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jun 20 18:18:12 test10 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.0: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.0: irq 16, io base 0x00003000
Jun 20 18:18:12 test10 kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:18:12 test10 kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:18:12 test10 kernel: usb usb2: Product: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: usb usb2: Manufacturer: Linux 2.6.32-44.1.el6.x86_64 uhci_hcd
Jun 20 18:18:12 test10 kernel: usb usb2: SerialNumber: 0000:00:1d.0
Jun 20 18:18:12 test10 kernel: usb usb2: configuration #1 chosen from 1 choice
Jun 20 18:18:12 test10 kernel: hub 2-0:1.0: USB hub found
Jun 20 18:18:12 test10 kernel: hub 2-0:1.0: 2 ports detected
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.1: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.1: irq 17, io base 0x00003020
Jun 20 18:18:12 test10 kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:18:12 test10 kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:18:12 test10 kernel: usb usb3: Product: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: usb usb3: Manufacturer: Linux 2.6.32-44.1.el6.x86_64 uhci_hcd
Jun 20 18:18:12 test10 kernel: usb usb3: SerialNumber: 0000:00:1d.1
Jun 20 18:18:12 test10 kernel: usb usb3: configuration #1 chosen from 1 choice
Jun 20 18:18:12 test10 kernel: hub 3-0:1.0: USB hub found
Jun 20 18:18:12 test10 kernel: hub 3-0:1.0: 2 ports detected
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.2: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.2: irq 18, io base 0x00003040
Jun 20 18:18:12 test10 kernel: usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:18:12 test10 kernel: usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:18:12 test10 kernel: usb usb4: Product: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: usb usb4: Manufacturer: Linux 2.6.32-44.1.el6.x86_64 uhci_hcd
Jun 20 18:18:12 test10 kernel: usb usb4: SerialNumber: 0000:00:1d.2
Jun 20 18:18:12 test10 kernel: usb usb4: configuration #1 chosen from 1 choice
Jun 20 18:18:12 test10 kernel: hub 4-0:1.0: USB hub found
Jun 20 18:18:12 test10 kernel: hub 4-0:1.0: 2 ports detected
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 19 (level, low) -> IRQ 19
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.3: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5
Jun 20 18:18:12 test10 kernel: uhci_hcd 0000:00:1d.3: irq 19, io base 0x00003060
Jun 20 18:18:12 test10 kernel: usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:18:12 test10 kernel: usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:18:12 test10 kernel: usb usb5: Product: UHCI Host Controller
Jun 20 18:18:12 test10 kernel: usb usb5: Manufacturer: Linux 2.6.32-44.1.el6.x86_64 uhci_hcd
Jun 20 18:18:12 test10 kernel: usb usb5: SerialNumber: 0000:00:1d.3
Jun 20 18:18:12 test10 kernel: usb usb5: configuration #1 chosen from 1 choice
Jun 20 18:18:12 test10 kernel: hub 5-0:1.0: USB hub found
Jun 20 18:18:12 test10 kernel: hub 5-0:1.0: 2 ports detected
Jun 20 18:18:12 test10 kernel: PNP: No PS/2 controller found. Probing ports directly.
Jun 20 18:18:12 test10 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 18:18:12 test10 kernel: mice: PS/2 mouse device common for all mice
Jun 20 18:18:12 test10 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 18:18:12 test10 kernel: rtc_cmos 00:04: rtc core: registered rtc_cmos as rtc0
Jun 20 18:18:12 test10 kernel: rtc0: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
Jun 20 18:18:12 test10 kernel: cpuidle: using governor ladder
Jun 20 18:18:12 test10 kernel: cpuidle: using governor menu
Jun 20 18:18:12 test10 kernel: usbcore: registered new interface driver hiddev
Jun 20 18:18:12 test10 kernel: usbcore: registered new interface driver usbhid
Jun 20 18:18:12 test10 kernel: usbhid: v2.6:USB HID core driver
Jun 20 18:18:12 test10 kernel: nf_conntrack version 0.5.0 (7892 buckets, 31568 max)
Jun 20 18:18:12 test10 kernel: CONFIG_NF_CT_ACCT is deprecated and will be removed soon. Please use
Jun 20 18:18:12 test10 kernel: nf_conntrack.acct=1 kernel parameter, acct=1 nf_conntrack module option or
Jun 20 18:18:12 test10 kernel: sysctl net.netfilter.nf_conntrack_acct=1 to enable it.
Jun 20 18:18:12 test10 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jun 20 18:18:12 test10 kernel: TCP cubic registered
Jun 20 18:18:12 test10 kernel: Initializing XFRM netlink socket
Jun 20 18:18:12 test10 kernel: NET: Registered protocol family 17
Jun 20 18:18:12 test10 kernel: registered taskstats version 1
Jun 20 18:18:12 test10 kernel: IMA: No TPM chip found, activating TPM-bypass!
Jun 20 18:18:12 test10 kernel: rtc_cmos 00:04: setting system clock to 2011-06-20 18:17:46 UTC (1308593866)
Jun 20 18:18:12 test10 kernel: Initalizing network drop monitor service
Jun 20 18:18:12 test10 kernel: Freeing unused kernel memory: 1220k freed
Jun 20 18:18:12 test10 kernel: Write protecting the kernel read-only data: 7284k
Jun 20 18:18:12 test10 kernel: dracut: dracut-004-23.el6
Jun 20 18:18:12 test10 kernel: dracut: rd_NO_LUKS: removing cryptoluks activation
Jun 20 18:18:12 test10 kernel: dracut: rd_NO_LVM: removing LVM activation
Jun 20 18:18:12 test10 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:18:12 test10 kernel: device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised: dm-devel@redhat.com
Jun 20 18:18:12 test10 kernel: udev: starting version 147
Jun 20 18:18:12 test10 kernel: [drm] Initialized drm 1.1.0 20060810
Jun 20 18:18:12 test10 kernel: i915 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: [drm] set up 7M of stolen space
Jun 20 18:18:12 test10 kernel: [drm] initialized overlay support
Jun 20 18:18:12 test10 kernel: No connectors reported connected with modes
Jun 20 18:18:12 test10 kernel: [drm] Cannot find any crtc or sizes - going 1024x768
Jun 20 18:18:12 test10 kernel: fbcon: inteldrmfb (fb0) is primary device
Jun 20 18:18:12 test10 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:18:12 test10 kernel: fb0: inteldrmfb frame buffer device
Jun 20 18:18:12 test10 kernel: drm: registered panic notifier
Jun 20 18:18:12 test10 kernel: Slow work thread pool: Starting up
Jun 20 18:18:12 test10 kernel: Slow work thread pool: Ready
Jun 20 18:18:12 test10 kernel: [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
Jun 20 18:18:12 test10 kernel: dracut: Starting plymouth daemon
Jun 20 18:18:12 test10 kernel: dracut: rd_NO_DM: removing DM RAID activation
Jun 20 18:18:12 test10 kernel: ahci 0000:00:1f.2: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:18:12 test10 kernel: ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 4 ports 3 Gbps 0xf impl SATA mode
Jun 20 18:18:12 test10 kernel: ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part
Jun 20 18:18:12 test10 kernel: scsi0 : ahci
Jun 20 18:18:12 test10 kernel: scsi1 : ahci
Jun 20 18:18:12 test10 kernel: scsi2 : ahci
Jun 20 18:18:12 test10 kernel: scsi3 : ahci
Jun 20 18:18:12 test10 kernel: ata1: SATA max UDMA/133 abar m1024@0xd0500400 port 0xd0500500 irq 28
Jun 20 18:18:12 test10 kernel: ata2: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 28
Jun 20 18:18:12 test10 kernel: ata3: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 28
Jun 20 18:18:12 test10 kernel: ata4: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 28
Jun 20 18:18:12 test10 kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:18:12 test10 kernel: ata1.00: ATA-8: WDC WD2002FYPS-01U1B1, 04.05G05, max UDMA/133
Jun 20 18:18:12 test10 kernel: ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:18:12 test10 kernel: ata1.00: configured for UDMA/133
Jun 20 18:18:12 test10 kernel: scsi 0:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:18:12 test10 kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:18:12 test10 kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:18:12 test10 kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:18:12 test10 kernel: ata3.00: ATA-8: WDC WD1002FBYS-02A6B0, 03.00C06, max UDMA/133
Jun 20 18:18:12 test10 kernel: ata3.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:18:12 test10 kernel: ata3.00: configured for UDMA/133
Jun 20 18:18:12 test10 kernel: ata2.00: ATA-8: WDC WD2002FYPS-01U1B1, 04.05G05, max UDMA/133
Jun 20 18:18:12 test10 kernel: ata2.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:18:12 test10 kernel: ata4.00: ATA-8: WDC WD2002FYPS-01U1B0, 04.05G04, max UDMA/133
Jun 20 18:18:12 test10 kernel: ata4.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:18:12 test10 kernel: ata2.00: configured for UDMA/133
Jun 20 18:18:12 test10 kernel: ata4.00: configured for UDMA/133
Jun 20 18:18:12 test10 kernel: scsi 1:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:18:12 test10 kernel: scsi 2:0:0:0: Direct-Access ATA WDC WD1002FBYS-0 03.0 PQ: 0 ANSI: 5
Jun 20 18:18:12 test10 kernel: scsi 3:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:18:12 test10 kernel: sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:18:12 test10 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:18:12 test10 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:18:12 test10 kernel: sda:
Jun 20 18:18:12 test10 kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:18:12 test10 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jun 20 18:18:12 test10 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:18:12 test10 kernel: sdb:
Jun 20 18:18:12 test10 kernel: sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
Jun 20 18:18:12 test10 kernel: sd 3:0:0:0: [sdd] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:18:12 test10 kernel: sd 2:0:0:0: [sdc] Write Protect is off
Jun 20 18:18:12 test10 kernel: sd 3:0:0:0: [sdd] Write Protect is off
Jun 20 18:18:12 test10 kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:18:12 test10 kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:18:12 test10 kernel: sdd:
Jun 20 18:18:12 test10 kernel: sdc: sdb1 sdb2 sdb3 sdb4 < sdc1 sdc2 sdc3 sdc4 < sda1 sda2 sda3 sda4 < sdc5 sdb5 sda5 sdc6 sdd1 sdd2 sdd3 sdd4 < sda6 sdb6 sdc7 >
Jun 20 18:18:12 test10 kernel: sd 2:0:0:0: [sdc] Attached SCSI disk
Jun 20 18:18:12 test10 kernel: sda7 >
Jun 20 18:18:12 test10 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:18:12 test10 kernel: sdd5 sdb7 >
Jun 20 18:18:12 test10 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jun 20 18:18:12 test10 kernel: sdd6 sdd7 >
Jun 20 18:18:12 test10 kernel: sd 3:0:0:0: [sdd] Attached SCSI disk
Jun 20 18:18:12 test10 kernel: dracut: Autoassembling MD Raid
Jun 20 18:18:12 test10 kernel: md: md0 stopped.
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: raid1 personality registered for level 1
Jun 20 18:18:12 test10 kernel: raid1: raid set md0 active with 4 out of 4 mirrors
Jun 20 18:18:12 test10 kernel: md0: detected capacity change from 0 to 104845312
Jun 20 18:18:12 test10 kernel: dracut: mdadm: /dev/md0 has been started with 4 drives.
Jun 20 18:18:12 test10 kernel: md: md1 stopped.
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: raid1: md1 is not clean -- starting background reconstruction
Jun 20 18:18:12 test10 kernel: raid1: raid set md1 active with 4 out of 4 mirrors
Jun 20 18:18:12 test10 kernel: md1: bitmap initialized from disk: read 1/1 pages, set 64 bits
Jun 20 18:18:12 test10 kernel: created bitmap (1 pages) for device md1
Jun 20 18:18:12 test10 kernel: md1: detected capacity change from 0 to 4293910528
Jun 20 18:18:12 test10 kernel: md: resync of RAID array md1
Jun 20 18:18:12 test10 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Jun 20 18:18:12 test10 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jun 20 18:18:12 test10 kernel: md: using 128k window, over a total of 4193272 blocks.
Jun 20 18:18:12 test10 kernel: dracut: mdadm: /dev/md1 has been started with 4 drives.
Jun 20 18:18:12 test10 kernel: md: md2 stopped.
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: md: bind
Jun 20 18:18:12 test10 kernel: raid1: md2 is not clean -- starting background reconstruction
Jun 20 18:18:12 test10 kernel: raid1: raid set md2 active with 4 out of 4 mirrors
Jun 20 18:18:12 test10 kernel: md2: bitmap initialized from disk: read 1/1 pages, set 1419 bits
Jun 20 18:18:12 test10 kernel: created bitmap (1 pages) for device md2
Jun 20 18:18:12 test10 kernel: md2: detected capacity change from 0 to 107373064192
Jun 20 18:18:12 test10 kernel: dracut: mdadm: /dev/md2 has been started with 4 drives.
Jun 20 18:18:12 test10 kernel: md: delaying resync of md2 until md1 has finished (they share one or more physical units)
Jun 20 18:18:12 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:18:12 test10 kernel: EXT3-fs: mounted filesystem with ordered data mode.
Jun 20 18:18:12 test10 kernel: dracut: Mounted root filesystem /dev/md1
Jun 20 18:18:12 test10 kernel: dracut: Loading SELinux policy
Jun 20 18:18:12 test10 kernel: type=1404 audit(1308593869.187:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
Jun 20 18:18:12 test10 kernel: type=1403 audit(1308593869.861:3): policy loaded auid=4294967295 ses=4294967295
Jun 20 18:18:12 test10 kernel: dracut: Switching root
Jun 20 18:18:12 test10 kernel: udev: starting version 147
Jun 20 18:18:12 test10 kernel: e1000e: Intel(R) PRO/1000 Network Driver - 1.2.7-k2
Jun 20 18:18:12 test10 kernel: e1000e: Copyright (c) 1999 - 2009 Intel Corporation.
Jun 20 18:18:12 test10 kernel: e1000e 0000:06:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:18:12 test10 kernel: 0000:06:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:30:48:b0:c9:c6
Jun 20 18:18:12 test10 kernel: 0000:06:00.0: eth0: Intel(R) PRO/1000 Network Connection
Jun 20 18:18:12 test10 kernel: 0000:06:00.0: eth0: MAC: 2, PHY: 2, PBA No: ffffff-0ff
Jun 20 18:18:12 test10 kernel: e1000e 0000:07:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:18:12 test10 kernel: 0000:07:00.0: eth1: (PCI Express:2.5GB/s:Width x1) 00:30:48:b0:c9:c7
Jun 20 18:18:12 test10 kernel: 0000:07:00.0: eth1: Intel(R) PRO/1000 Network Connection
Jun 20 18:18:12 test10 kernel: 0000:07:00.0: eth1: MAC: 2, PHY: 2, PBA No: ffffff-0ff
Jun 20 18:18:12 test10 kernel: intel_rng: FWH not detected
Jun 20 18:18:12 test10 kernel: iTCO_vendor_support: vendor-support=0
Jun 20 18:18:12 test10 kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.05
Jun 20 18:18:12 test10 kernel: iTCO_wdt: Found a ICH7 or ICH7R TCO device (Version=2, TCOBASE=0x1060)
Jun 20 18:18:12 test10 kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jun 20 18:18:12 test10 kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
Jun 20 18:18:12 test10 kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0
Jun 20 18:18:12 test10 kernel: sd 2:0:0:0: Attached scsi generic sg2 type 0
Jun 20 18:18:12 test10 kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0
Jun 20 18:18:12 test10 kernel: i801_smbus 0000:00:1f.3: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:18:12 test10 kernel: Marking TSC unstable due to TSC halts in idle
Jun 20 18:18:12 test10 kernel: Switching to clocksource hpet
Jun 20 18:18:12 test10 kernel: EXT3 FS on md1, internal journal
Jun 20 18:18:12 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:18:12 test10 kernel: EXT3 FS on md0, internal journal
Jun 20 18:18:12 test10 kernel: EXT3-fs: mounted filesystem with ordered data mode.
Jun 20 18:18:12 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:18:12 test10 kernel: EXT3 FS on md2, internal journal
Jun 20 18:18:12 test10 kernel: EXT3-fs: mounted filesystem with ordered data mode.
Jun 20 18:18:12 test10 kernel: NET: Registered protocol family 10
Jun 20 18:18:12 test10 kernel: lo: Disabled Privacy Extensions
Jun 20 18:18:12 test10 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Jun 20 18:18:12 test10 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Jun 20 18:18:12 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 20 18:18:12 test10 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Jun 20 18:18:12 test10 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun 20 18:18:12 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Jun 20 18:18:12 test10 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Jun 20 18:18:12 test10 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun 20 18:18:13 test10 kernel: RPC: Registered udp transport module.
Jun 20 18:18:13 test10 kernel: RPC: Registered tcp transport module.
Jun 20 18:18:13 test10 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jun 20 18:18:45 test10 kernel: shutdown[2916]: segfault at 7fff7b152bbf ip 00007ff6b52b7e68 sp 00007fff7b1472c0 error 6 in libnss_files-2.12.so[7ff6b52b1000+c000]
Jun 20 18:18:45 test10 init: rc main process (1036) killed by TERM signal
Jun 20 18:18:46 test10 init: Disconnected from system bus
Jun 20 18:18:46 test10 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Jun 20 18:18:46 test10 auditd[1333]: The audit daemon is exiting.
Jun 20 18:18:46 test10 kernel: type=1305 audit(1308593926.505:12): audit_pid=0 old=1333 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
Jun 20 18:18:46 test10 netconsole: : inserting netconsole module with arguments netconsole=6666@10.200.98.110/eth0,514@10.200.2.33/00:1F:45:68:64:D7
Jun 20 18:18:46 test10 kernel: netconsole: local port 6666
Jun 20 18:18:46 test10 kernel: netconsole: local IP 10.200.98.110
Jun 20 18:18:46 test10 kernel: netconsole: interface eth0
Jun 20 18:18:46 test10 kernel: netconsole: remote port 514
Jun 20 18:18:46 test10 kernel: netconsole: remote IP 10.200.2.33
Jun 20 18:18:46 test10 kernel: netconsole: remote ethernet address 00:1f:45:68:64:d7
Jun 20 18:18:46 test10 kernel: console [netcon0] enabled
Jun 20 18:18:46 test10 kernel: netconsole: network logging started
Jun 20 18:18:47 test10 NET[3076]: /sbin/dhclient-script : updated /etc/resolv.conf
Jun 20 18:18:47 test10 kernel: IPv6 over IPv4 tunneling driver
Jun 20 18:18:47 test10 kernel: sit0: Disabled Privacy Extensions
Jun 20 18:18:47 test10 kernel: Kernel logging (proc) stopped.
Jun 20 18:18:47 test10 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1373" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 18:19:58 test10 kernel: imklog 4.6.2, log source = /proc/kmsg started.
Jun 20 18:19:58 test10 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1239" x-info="http://www.rsyslog.com"] (re)start
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys cpuset
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys cpu
Jun 20 18:19:58 test10 kernel: Linux version 2.6.39-sc8.el6 (bchrisman@buildrhel6.sm.scalecomputing.com) (gcc version 4.4.4 20100630 (Red Hat 4.4.4-10) (GCC) ) #19 SMP Wed Jun 8 16:35:45 PDT 2011
Jun 20 18:19:58 test10 kernel: Command line: ro root=UUID=32f21c01-33d1-4669-98d8-b6df549ad97d rd_MD_UUID=4ed9ac80:54166382:27eda738:00495ef4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us crashkernel=auto rhgb quiet
Jun 20 18:19:58 test10 kernel: BIOS-provided physical RAM map:
Jun 20 18:19:58 test10 kernel: BIOS-e820: 0000000000000000 - 000000000009dc00 (usable)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 000000000009dc00 - 00000000000a0000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 00000000000e4000 - 0000000000100000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 0000000000100000 - 000000003f680000 (usable)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 000000003f680000 - 000000003f68b000 (ACPI data)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 000000003f68b000 - 000000003f700000 (ACPI NVS)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 000000003f700000 - 0000000040000000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 00000000e0000000 - 00000000e4000000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
Jun 20 18:19:58 test10 kernel: BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)
Jun 20 18:19:58 test10 kernel: NX (Execute Disable) protection: active
Jun 20 18:19:58 test10 kernel: DMI present.
Jun 20 18:19:58 test10 kernel: No AGP bridge found
Jun 20 18:19:58 test10 kernel: last_pfn = 0x3f680 max_arch_pfn = 0x400000000
Jun 20 18:19:58 test10 kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Jun 20 18:19:58 test10 kernel: total RAM covered: 1015M
Jun 20 18:19:58 test10 kernel: Found optimal setting for mtrr clean up
Jun 20 18:19:58 test10 kernel: gran_size: 64K chunk_size: 16M num_reg: 3 lose cover RAM: 0G
Jun 20 18:19:58 test10 kernel: found SMP MP-table at [ffff8800000f6f10] f6f10
Jun 20 18:19:58 test10 kernel: init_memory_mapping: 0000000000000000-000000003f680000
Jun 20 18:19:58 test10 kernel: RAMDISK: 373df000 - 37ff0000
Jun 20 18:19:58 test10 kernel: crashkernel: memory value expected
Jun 20 18:19:58 test10 kernel: ACPI: RSDP 00000000000f6ee0 00014 (v00 INTELB)
Jun 20 18:19:58 test10 kernel: ACPI: RSDT 000000003f6841bb 0003C (v01 PTLTD RSDT 06040000 LTP 00000000)
Jun 20 18:19:58 test10 kernel: ACPI: FACP 000000003f68ae7e 00084 (v01 SUPRMC 06040000 PTL 00000003)
Jun 20 18:19:58 test10 kernel: ACPI: DSDT 000000003f6855e3 0589B (v01 INTEL BR_WATER 06040000 MSFT 03000001)
Jun 20 18:19:58 test10 kernel: ACPI: FACS 000000003f68bfc0 00040
Jun 20 18:19:58 test10 kernel: ACPI: TCPA 000000003f68af02 00032 (v01 SMC 06040000 PTL 00000000)
Jun 20 18:19:58 test10 kernel: ACPI: MCFG 000000003f68af34 0003C (v01 INTELB R_WATERP 06040000 LTP 00000000)
Jun 20 18:19:58 test10 kernel: ACPI: APIC 000000003f68af70 00068 (v01 INTELB R_WATERP 06040000 LTP 00000000)
Jun 20 18:19:58 test10 kernel: ACPI: BOOT 000000003f68afd8 00028 (v01 INTELB R_WATERP 06040000 LTP 00000001)
Jun 20 18:19:58 test10 kernel: ACPI: SSDT 000000003f6841f7 013EC (v01 INTELB R_WATERW 00003000 INTL 20061109)
Jun 20 18:19:58 test10 kernel: No NUMA configuration found
Jun 20 18:19:58 test10 kernel: Faking a node at 0000000000000000-000000003f680000
Jun 20 18:19:58 test10 kernel: Initmem setup node 0 0000000000000000-000000003f680000
Jun 20 18:19:58 test10 kernel: NODE_DATA [000000003f656000 - 000000003f67cfff]
Jun 20 18:19:58 test10 kernel: Zone PFN ranges:
Jun 20 18:19:58 test10 kernel: DMA 0x00000010 -> 0x00001000
Jun 20 18:19:58 test10 kernel: DMA32 0x00001000 -> 0x00100000
Jun 20 18:19:58 test10 kernel: Normal empty
Jun 20 18:19:58 test10 kernel: Movable zone start PFN for each node
Jun 20 18:19:58 test10 kernel: early_node_map[2] active PFN ranges
Jun 20 18:19:58 test10 kernel: 0: 0x00000010 -> 0x0000009d
Jun 20 18:19:58 test10 kernel: 0: 0x00000100 -> 0x0003f680
Jun 20 18:19:58 test10 kernel: ACPI: PM-Timer IO Port: 0x1008
Jun 20 18:19:58 test10 kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Jun 20 18:19:58 test10 kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Jun 20 18:19:58 test10 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jun 20 18:19:58 test10 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jun 20 18:19:58 test10 kernel: ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
Jun 20 18:19:58 test10 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
Jun 20 18:19:58 test10 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jun 20 18:19:58 test10 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 18:19:58 test10 kernel: Using ACPI (MADT) for SMP configuration information
Jun 20 18:19:58 test10 kernel: SMP: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 18:19:58 test10 kernel: PM: Registered nosave memory: 000000000009d000 - 000000000009e000
Jun 20 18:19:58 test10 kernel: PM: Registered nosave memory: 000000000009e000 - 00000000000a0000
Jun 20 18:19:58 test10 kernel: PM: Registered nosave memory: 00000000000a0000 - 00000000000e4000
Jun 20 18:19:58 test10 kernel: PM: Registered nosave memory: 00000000000e4000 - 0000000000100000
Jun 20 18:19:58 test10 kernel: Allocating PCI resources starting at 40000000 (gap: 40000000:a0000000)
Jun 20 18:19:58 test10 kernel: Booting paravirtualized kernel on bare hardware
Jun 20 18:19:58 test10 kernel: setup_percpu: NR_CPUS:4096 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 18:19:58 test10 kernel: PERCPU: Embedded 29 pages/cpu @ffff88003f400000 s89600 r8192 d20992 u1048576
Jun 20 18:19:58 test10 kernel: Built 1 zonelists in Node order, mobility grouping on. Total pages: 256041
Jun 20 18:19:58 test10 kernel: Policy zone: DMA32
Jun 20 18:19:58 test10 kernel: Kernel command line: ro root=UUID=32f21c01-33d1-4669-98d8-b6df549ad97d rd_MD_UUID=4ed9ac80:54166382:27eda738:00495ef4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us crashkernel=auto rhgb quiet
Jun 20 18:19:58 test10 kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)
Jun 20 18:19:58 test10 kernel: Checking aperture...
Jun 20 18:19:58 test10 kernel: No AGP bridge found
Jun 20 18:19:58 test10 kernel: Memory: 995188k/1038848k available (5010k kernel code, 460k absent, 43200k reserved, 7302k data, 1472k init)
Jun 20 18:19:58 test10 kernel: Hierarchical RCU implementation.
Jun 20 18:19:58 test10 kernel: RCU-based detection of stalled CPUs is disabled.
Jun 20 18:19:58 test10 kernel: NR_IRQS:262400 nr_irqs:512 16
Jun 20 18:19:58 test10 kernel: Console: colour VGA+ 80x25
Jun 20 18:19:58 test10 kernel: console [tty0] enabled
Jun 20 18:19:58 test10 kernel: allocated 8388608 bytes of page_cgroup
Jun 20 18:19:58 test10 kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
Jun 20 18:19:58 test10 kernel: Fast TSC calibration using PIT
Jun 20 18:19:58 test10 kernel: Detected 1995.003 MHz processor.
Jun 20 18:19:58 test10 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 3990.00 BogoMIPS (lpj=1995003)
Jun 20 18:19:58 test10 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:19:58 test10 kernel: Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Jun 20 18:19:58 test10 kernel: Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
Jun 20 18:19:58 test10 kernel: Mount-cache hash table entries: 256
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys ns
Jun 20 18:19:58 test10 kernel: ns_cgroup deprecated: consider using the 'clone_children' flag without the ns_cgroup.
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys cpuacct
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys memory
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys devices
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys freezer
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys net_cls
Jun 20 18:19:58 test10 kernel: Initializing cgroup subsys blkio
Jun 20 18:19:58 test10 kernel: CPU: Physical Processor ID: 0
Jun 20 18:19:58 test10 kernel: CPU: Processor Core ID: 0
Jun 20 18:19:58 test10 kernel: mce: CPU supports 6 MCE banks
Jun 20 18:19:58 test10 kernel: using mwait in idle threads.
Jun 20 18:19:58 test10 kernel: ACPI: Core revision 20110316
Jun 20 18:19:58 test10 kernel: ftrace: allocating 18497 entries in 73 pages
Jun 20 18:19:58 test10 kernel: Setting APIC routing to flat
Jun 20 18:19:58 test10 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 18:19:58 test10 kernel: CPU0: Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz stepping 0d
Jun 20 18:19:58 test10 kernel: Performance Events: PEBS fmt0+, Core2 events, Intel PMU driver.
Jun 20 18:19:58 test10 kernel: PEBS disabled due to CPU errata.
Jun 20 18:19:58 test10 kernel: ... version: 2
Jun 20 18:19:58 test10 kernel: ... bit width: 40
Jun 20 18:19:58 test10 kernel: ... generic registers: 2
Jun 20 18:19:58 test10 kernel: ... value mask: 000000ffffffffff
Jun 20 18:19:58 test10 kernel: ... max period: 000000007fffffff
Jun 20 18:19:58 test10 kernel: ... fixed-purpose events: 3
Jun 20 18:19:58 test10 kernel: ... event mask: 0000000700000003
Jun 20 18:19:58 test10 kernel: Booting Node 0, Processors #1 Ok.
Jun 20 18:19:58 test10 kernel: Brought up 2 CPUs
Jun 20 18:19:58 test10 kernel: Total of 2 processors activated (7979.17 BogoMIPS).
Jun 20 18:19:58 test10 kernel: devtmpfs: initialized
Jun 20 18:19:58 test10 kernel: PM: Registering ACPI NVS region at 3f68b000 (479232 bytes)
Jun 20 18:19:58 test10 kernel: print_constraints: dummy:
Jun 20 18:19:58 test10 kernel: NET: Registered protocol family 16
Jun 20 18:19:58 test10 kernel: ACPI: bus type pci registered
Jun 20 18:19:58 test10 kernel: PCI: MMCONFIG for domain 0000 [bus 00-07] at [mem 0xe0000000-0xe07fffff] (base 0xe0000000)
Jun 20 18:19:58 test10 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xe07fffff] reserved in E820
Jun 20 18:19:58 test10 kernel: PCI: Using configuration type 1 for base access
Jun 20 18:19:58 test10 kernel: bio: create slab at 0
Jun 20 18:19:58 test10 kernel: ACPI: Interpreter enabled
Jun 20 18:19:58 test10 kernel: ACPI: (supports S0 S1 S4 S5)
Jun 20 18:19:58 test10 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 18:19:58 test10 kernel: ACPI: No dock devices found.
Jun 20 18:19:58 test10 kernel: PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
Jun 20 18:19:58 test10 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 18:19:58 test10 kernel: pci 0000:00:1f.0: quirk: [io 0x1000-0x107f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 18:19:58 test10 kernel: pci 0000:00:1f.0: quirk: [io 0x1180-0x11bf] claimed by ICH6 GPIO
Jun 20 18:19:58 test10 kernel: pci 0000:00:1f.0: ICH7 LPC Generic IO decode 1 PIO at 0294 (mask 0097)
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: PCI bridge to [bus 02-02]
Jun 20 18:19:58 test10 kernel: pci 0000:06:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: PCI bridge to [bus 06-06]
Jun 20 18:19:58 test10 kernel: pci 0000:07:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: PCI bridge to [bus 07-07]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08] (subtractive decode)
Jun 20 18:19:58 test10 kernel: pci0000:00: Requesting ACPI _OSC control (0x1d)
Jun 20 18:19:58 test10 kernel: Unable to assume _OSC PCIe control. Disabling ASPM
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *10 11 12 14 15)
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 10 *11 12 14 15)
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 *7 10 11 12 14 15)
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:19:58 test10 kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
Jun 20 18:19:58 test10 kernel: vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
Jun 20 18:19:58 test10 kernel: vgaarb: loaded
Jun 20 18:19:58 test10 kernel: SCSI subsystem initialized
Jun 20 18:19:58 test10 kernel: usbcore: registered new interface driver usbfs
Jun 20 18:19:58 test10 kernel: usbcore: registered new interface driver hub
Jun 20 18:19:58 test10 kernel: usbcore: registered new device driver usb
Jun 20 18:19:58 test10 kernel: PCI: Using ACPI for IRQ routing
Jun 20 18:19:58 test10 kernel: HPET: 3 timers in total, 0 timers will be used for per-cpu timer
Jun 20 18:19:58 test10 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 18:19:58 test10 kernel: hpet0: 3 comparators, 64-bit 14.318180 MHz counter
Jun 20 18:19:58 test10 kernel: Switching to clocksource hpet
Jun 20 18:19:58 test10 kernel: Switched to NOHz mode on CPU #0
Jun 20 18:19:58 test10 kernel: Switched to NOHz mode on CPU #1
Jun 20 18:19:58 test10 kernel: pnp: PnP ACPI init
Jun 20 18:19:58 test10 kernel: ACPI: bus type pnp registered
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x0295-0x0296] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x0800-0x083f] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x0900-0x090f] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x1000-0x107f] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x1180-0x11bf] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0x04d0-0x04d1] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [io 0xfe00] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [mem 0xfed14000-0xfed17fff] has been reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [mem 0xe0000000-0xefffffff] could not be reserved
Jun 20 18:19:58 test10 kernel: system 00:01: [mem 0xfef00000-0xfeffffff] has been reserved
Jun 20 18:19:58 test10 kernel: pnp: PnP ACPI: found 10 devices
Jun 20 18:19:58 test10 kernel: ACPI: ACPI bus type pnp unregistered
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: BAR 15: assigned [mem 0x40000000-0x401fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: BAR 15: assigned [mem 0x40200000-0x403fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: BAR 15: assigned [mem 0x40600000-0x407fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: BAR 13: assigned [io 0x2000-0x2fff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: PCI bridge to [bus 02-02]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: bridge window [io 0x2000-0x2fff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: bridge window [mem 0x40400000-0x405fffff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: bridge window [mem 0x40600000-0x407fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: PCI bridge to [bus 06-06]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: bridge window [io 0x4000-0x4fff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: bridge window [mem 0xd0100000-0xd01fffff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: bridge window [mem 0x40200000-0x403fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: PCI bridge to [bus 07-07]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: bridge window [io 0x5000-0x5fff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: bridge window [mem 0xd0200000-0xd02fffff]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: bridge window [mem 0x40000000-0x401fffff 64bit pref]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1e.0: bridge window [io disabled]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1e.0: bridge window [mem disabled]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1e.0: bridge window [mem pref disabled]
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: enabling device (0000 -> 0003)
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: pci 0000:00:1c.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:19:58 test10 kernel: NET: Registered protocol family 2
Jun 20 18:19:58 test10 kernel: IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
Jun 20 18:19:58 test10 kernel: TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
Jun 20 18:19:58 test10 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Jun 20 18:19:58 test10 kernel: TCP: Hash tables configured (established 131072 bind 65536)
Jun 20 18:19:58 test10 kernel: TCP reno registered
Jun 20 18:19:58 test10 kernel: UDP hash table entries: 512 (order: 2, 16384 bytes)
Jun 20 18:19:58 test10 kernel: UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
Jun 20 18:19:58 test10 kernel: NET: Registered protocol family 1
Jun 20 18:19:58 test10 kernel: Trying to unpack rootfs image as initramfs...
Jun 20 18:19:58 test10 kernel: Freeing initrd memory: 12356k freed
Jun 20 18:19:58 test10 kernel: Simple Boot Flag at 0x37 set to 0x1
Jun 20 18:19:58 test10 kernel: audit: initializing netlink socket (disabled)
Jun 20 18:19:58 test10 kernel: type=2000 audit(1308593972.470:1): initialized
Jun 20 18:19:58 test10 kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages
Jun 20 18:19:58 test10 kernel: VFS: Disk quotas dquot_6.5.2
Jun 20 18:19:58 test10 kernel: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:19:58 test10 kernel: msgmni has been set to 1967
Jun 20 18:19:58 test10 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
Jun 20 18:19:58 test10 kernel: io scheduler noop registered
Jun 20 18:19:58 test10 kernel: io scheduler deadline registered
Jun 20 18:19:58 test10 kernel: io scheduler cfq registered (default)
Jun 20 18:19:58 test10 kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Jun 20 18:19:58 test10 kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Jun 20 18:19:58 test10 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:19:58 test10 kernel: input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0A03:00/PNP0C0C:00/input/input0
Jun 20 18:19:58 test10 kernel: ACPI: Power Button [PWRB]
Jun 20 18:19:58 test10 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jun 20 18:19:58 test10 kernel: ACPI: Power Button [PWRF]
Jun 20 18:19:58 test10 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:19:58 test10 kernel: serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jun 20 18:19:58 test10 kernel: serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Jun 20 18:19:58 test10 kernel: 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jun 20 18:19:58 test10 kernel: 00:08: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Jun 20 18:19:58 test10 kernel: Non-volatile memory driver v1.3
Jun 20 18:19:58 test10 kernel: Linux agpgart interface v0.103
Jun 20 18:19:58 test10 kernel: agpgart-intel 0000:00:00.0: Intel 946GZ Chipset
Jun 20 18:19:58 test10 kernel: agpgart-intel 0000:00:00.0: detected gtt size: 524288K total, 262144K mappable
Jun 20 18:19:58 test10 kernel: agpgart-intel 0000:00:00.0: detected 8192K stolen memory
Jun 20 18:19:58 test10 kernel: agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xc0000000
Jun 20 18:19:58 test10 kernel: brd: module loaded
Jun 20 18:19:58 test10 kernel: loop: module loaded
Jun 20 18:19:58 test10 kernel: Fixed MDIO Bus: probed
Jun 20 18:19:58 test10 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: EHCI Host Controller
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: using broken periodic workaround
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: debug port 1
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: irq 16, io mem 0xd0500000
Jun 20 18:19:58 test10 kernel: ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00
Jun 20 18:19:58 test10 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Jun 20 18:19:58 test10 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:19:58 test10 kernel: usb usb1: Product: EHCI Host Controller
Jun 20 18:19:58 test10 kernel: usb usb1: Manufacturer: Linux 2.6.39-sc8.el6 ehci_hcd
Jun 20 18:19:58 test10 kernel: usb usb1: SerialNumber: 0000:00:1d.7
Jun 20 18:19:58 test10 kernel: hub 1-0:1.0: USB hub found
Jun 20 18:19:58 test10 kernel: hub 1-0:1.0: 8 ports detected
Jun 20 18:19:58 test10 kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jun 20 18:19:58 test10 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.0: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.0: irq 16, io base 0x00003000
Jun 20 18:19:58 test10 kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:19:58 test10 kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:19:58 test10 kernel: usb usb2: Product: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: usb usb2: Manufacturer: Linux 2.6.39-sc8.el6 uhci_hcd
Jun 20 18:19:58 test10 kernel: usb usb2: SerialNumber: 0000:00:1d.0
Jun 20 18:19:58 test10 kernel: hub 2-0:1.0: USB hub found
Jun 20 18:19:58 test10 kernel: hub 2-0:1.0: 2 ports detected
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.1: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.1: irq 17, io base 0x00003020
Jun 20 18:19:58 test10 kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:19:58 test10 kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:19:58 test10 kernel: usb usb3: Product: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: usb usb3: Manufacturer: Linux 2.6.39-sc8.el6 uhci_hcd
Jun 20 18:19:58 test10 kernel: usb usb3: SerialNumber: 0000:00:1d.1
Jun 20 18:19:58 test10 kernel: hub 3-0:1.0: USB hub found
Jun 20 18:19:58 test10 kernel: hub 3-0:1.0: 2 ports detected
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.2: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.2: irq 18, io base 0x00003040
Jun 20 18:19:58 test10 kernel: usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:19:58 test10 kernel: usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:19:58 test10 kernel: usb usb4: Product: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: usb usb4: Manufacturer: Linux 2.6.39-sc8.el6 uhci_hcd
Jun 20 18:19:58 test10 kernel: usb usb4: SerialNumber: 0000:00:1d.2
Jun 20 18:19:58 test10 kernel: hub 4-0:1.0: USB hub found
Jun 20 18:19:58 test10 kernel: hub 4-0:1.0: 2 ports detected
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 19 (level, low) -> IRQ 19
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.3: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5
Jun 20 18:19:58 test10 kernel: uhci_hcd 0000:00:1d.3: irq 19, io base 0x00003060
Jun 20 18:19:58 test10 kernel: usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
Jun 20 18:19:58 test10 kernel: usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jun 20 18:19:58 test10 kernel: usb usb5: Product: UHCI Host Controller
Jun 20 18:19:58 test10 kernel: usb usb5: Manufacturer: Linux 2.6.39-sc8.el6 uhci_hcd
Jun 20 18:19:58 test10 kernel: usb usb5: SerialNumber: 0000:00:1d.3
Jun 20 18:19:58 test10 kernel: hub 5-0:1.0: USB hub found
Jun 20 18:19:58 test10 kernel: hub 5-0:1.0: 2 ports detected
Jun 20 18:19:58 test10 kernel: i8042: PNP: No PS/2 controller found. Probing ports directly.
Jun 20 18:19:58 test10 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 18:19:58 test10 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 18:19:58 test10 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 18:19:58 test10 kernel: rtc_cmos 00:04: rtc core: registered rtc_cmos as rtc0
Jun 20 18:19:58 test10 kernel: rtc0: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
Jun 20 18:19:58 test10 kernel: cpuidle: using governor ladder
Jun 20 18:19:58 test10 kernel: cpuidle: using governor menu
Jun 20 18:19:58 test10 kernel: dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2)
Jun 20 18:19:58 test10 kernel: usbcore: registered new interface driver usbhid
Jun 20 18:19:58 test10 kernel: usbhid: USB HID core driver
Jun 20 18:19:58 test10 kernel: nf_conntrack version 0.5.0 (7871 buckets, 31484 max)
Jun 20 18:19:58 test10 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jun 20 18:19:58 test10 kernel: TCP cubic registered
Jun 20 18:19:58 test10 kernel: NET: Registered protocol family 17
Jun 20 18:19:58 test10 kernel: Registering the dns_resolver key type
Jun 20 18:19:58 test10 kernel: registered taskstats version 1
Jun 20 18:19:58 test10 kernel: rtc_cmos 00:04: setting system clock to 2011-06-20 18:19:34 UTC (1308593974)
Jun 20 18:19:58 test10 kernel: Initializing network drop monitor service
Jun 20 18:19:58 test10 kernel: Freeing unused kernel memory: 1472k freed
Jun 20 18:19:58 test10 kernel: Write protecting the kernel read-only data: 10240k
Jun 20 18:19:58 test10 kernel: Freeing unused kernel memory: 1116k freed
Jun 20 18:19:58 test10 kernel: Refined TSC clocksource calibration: 1994.999 MHz.
Jun 20 18:19:58 test10 kernel: Switching to clocksource tsc
Jun 20 18:19:58 test10 kernel: Freeing unused kernel memory: 1792k freed
Jun 20 18:19:58 test10 kernel: dracut: dracut-004-23.el6
Jun 20 18:19:58 test10 kernel: dracut: rd_NO_LUKS: removing cryptoluks activation
Jun 20 18:19:58 test10 kernel: udev: starting version 147
Jun 20 18:19:58 test10 kernel: udevd (62): /proc/62/oom_adj is deprecated, please use /proc/62/oom_score_adj instead.
Jun 20 18:19:58 test10 kernel: [drm] Initialized drm 1.1.0 20060810
Jun 20 18:19:58 test10 kernel: i915 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Jun 20 18:19:58 test10 kernel: [drm] Driver supports precise vblank timestamp query.
Jun 20 18:19:58 test10 kernel: vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 20 18:19:58 test10 kernel: [drm] initialized overlay support
Jun 20 18:19:58 test10 kernel: No connectors reported connected with modes
Jun 20 18:19:58 test10 kernel: [drm] Cannot find any crtc or sizes - going 1024x768
Jun 20 18:19:58 test10 kernel: fbcon: inteldrmfb (fb0) is primary device
Jun 20 18:19:58 test10 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:19:58 test10 kernel: fb0: inteldrmfb frame buffer device
Jun 20 18:19:58 test10 kernel: drm: registered panic notifier
Jun 20 18:19:58 test10 kernel: [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
Jun 20 18:19:58 test10 kernel: dracut: Starting plymouth daemon
Jun 20 18:19:58 test10 kernel: ahci 0000:00:1f.2: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:19:58 test10 kernel: ahci 0000:00:1f.2: AHCI 0001.0100 32 slots 4 ports 3 Gbps 0xf impl SATA mode
Jun 20 18:19:58 test10 kernel: ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part
Jun 20 18:19:58 test10 kernel: scsi0 : ahci
Jun 20 18:19:58 test10 kernel: scsi1 : ahci
Jun 20 18:19:58 test10 kernel: scsi2 : ahci
Jun 20 18:19:58 test10 kernel: scsi3 : ahci
Jun 20 18:19:58 test10 kernel: ata1: SATA max UDMA/133 abar m1024@0xd0500400 port 0xd0500500 irq 44
Jun 20 18:19:58 test10 kernel: ata2: SATA max UDMA/133 abar m1024@0xd0500400 port 0xd0500580 irq 44
Jun 20 18:19:58 test10 kernel: ata3: SATA max UDMA/133 abar m1024@0xd0500400 port 0xd0500600 irq 44
Jun 20 18:19:58 test10 kernel: ata4: SATA max UDMA/133 abar m1024@0xd0500400 port 0xd0500680 irq 44
Jun 20 18:19:58 test10 kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:19:58 test10 kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:19:58 test10 kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:19:58 test10 kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jun 20 18:19:58 test10 kernel: ata3.00: ATA-8: WDC WD1002FBYS-02A6B0, 03.00C06, max UDMA/133
Jun 20 18:19:58 test10 kernel: ata3.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:19:58 test10 kernel: ata3.00: configured for UDMA/133
Jun 20 18:19:58 test10 kernel: ata1.00: ATA-8: WDC WD2002FYPS-01U1B1, 04.05G05, max UDMA/133
Jun 20 18:19:58 test10 kernel: ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:19:58 test10 kernel: ata4.00: ATA-8: WDC WD2002FYPS-01U1B0, 04.05G04, max UDMA/133
Jun 20 18:19:58 test10 kernel: ata4.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:19:58 test10 kernel: ata2.00: ATA-8: WDC WD2002FYPS-01U1B1, 04.05G05, max UDMA/133
Jun 20 18:19:58 test10 kernel: ata2.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jun 20 18:19:58 test10 kernel: ata1.00: configured for UDMA/133
Jun 20 18:19:58 test10 kernel: scsi 0:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:19:58 test10 kernel: ata4.00: configured for UDMA/133
Jun 20 18:19:58 test10 kernel: ata2.00: configured for UDMA/133
Jun 20 18:19:58 test10 kernel: scsi 1:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:19:58 test10 kernel: scsi 2:0:0:0: Direct-Access ATA WDC WD1002FBYS-0 03.0 PQ: 0 ANSI: 5
Jun 20 18:19:58 test10 kernel: scsi 3:0:0:0: Direct-Access ATA WDC WD2002FYPS-0 04.0 PQ: 0 ANSI: 5
Jun 20 18:19:58 test10 kernel: sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:19:58 test10 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:19:58 test10 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:19:58 test10 kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:19:58 test10 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jun 20 18:19:58 test10 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:19:58 test10 kernel: sdb: detected capacity change from 0 to 2000398934016
Jun 20 18:19:58 test10 kernel: sda: detected capacity change from 0 to 2000398934016
Jun 20 18:19:58 test10 kernel: sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
Jun 20 18:19:58 test10 kernel: sd 2:0:0:0: [sdc] Write Protect is off
Jun 20 18:19:58 test10 kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:19:58 test10 kernel: sd 3:0:0:0: [sdd] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Jun 20 18:19:58 test10 kernel: sd 3:0:0:0: [sdd] Write Protect is off
Jun 20 18:19:58 test10 kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 18:19:58 test10 kernel: sdd: detected capacity change from 0 to 2000398934016
Jun 20 18:19:58 test10 kernel: sdc: detected capacity change from 0 to 1000204886016
Jun 20 18:19:58 test10 kernel: sdc: sdc1 sdc2 sdc3 sdc4 < sdc5 sdc6 sdc7 >
Jun 20 18:19:58 test10 kernel: sd 2:0:0:0: [sdc] Attached SCSI disk
Jun 20 18:19:58 test10 kernel: sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 >
Jun 20 18:19:58 test10 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:19:58 test10 kernel: sdb: sdb1 sdb2 sdb3 sdb4 < sdb5 sdb6 sdb7 >
Jun 20 18:19:58 test10 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jun 20 18:19:58 test10 kernel: sdd: sdd1 sdd2 sdd3 sdd4 < sdd5 sdd6 sdd7 >
Jun 20 18:19:58 test10 kernel: sd 3:0:0:0: [sdd] Attached SCSI disk
Jun 20 18:19:58 test10 kernel: dracut: Autoassembling MD Raid
Jun 20 18:19:58 test10 kernel: dracut: mdadm: AUTO line may only be give once. Subsequent lines ignored
Jun 20 18:19:58 test10 kernel: md: md0 stopped.
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: raid1 personality registered for level 1
Jun 20 18:19:58 test10 kernel: bio: create slab at 1
Jun 20 18:19:58 test10 kernel: md/raid1:md0: active with 4 out of 4 mirrors
Jun 20 18:19:58 test10 kernel: md0: detected capacity change from 0 to 104845312
Jun 20 18:19:58 test10 kernel: md0: detected capacity change from 0 to 104845312
Jun 20 18:19:58 test10 kernel: md0: unknown partition table
Jun 20 18:19:58 test10 kernel: dracut: mdadm: /dev/md0 has been started with 4 drives.
Jun 20 18:19:58 test10 kernel: md: md1 stopped.
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md/raid1:md1: not clean -- starting background reconstruction
Jun 20 18:19:58 test10 kernel: md/raid1:md1: active with 4 out of 4 mirrors
Jun 20 18:19:58 test10 kernel: created bitmap (1 pages) for device md1
Jun 20 18:19:58 test10 kernel: md1: bitmap initialized from disk: read 1/1 pages, set 26 bits
Jun 20 18:19:58 test10 kernel: md1: detected capacity change from 0 to 4293910528
Jun 20 18:19:58 test10 kernel: dracut: mdadm: /dev/md1 has been started with 4 drives.
Jun 20 18:19:58 test10 kernel: md: resync of RAID array md1
Jun 20 18:19:58 test10 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Jun 20 18:19:58 test10 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jun 20 18:19:58 test10 kernel: md: using 128k window, over a total of 4193272 blocks.
Jun 20 18:19:58 test10 kernel: md1: detected capacity change from 0 to 4293910528
Jun 20 18:19:58 test10 kernel: md1: unknown partition table
Jun 20 18:19:58 test10 kernel: md: md2 stopped.
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md: bind
Jun 20 18:19:58 test10 kernel: md/raid1:md2: not clean -- starting background reconstruction
Jun 20 18:19:58 test10 kernel: md/raid1:md2: active with 4 out of 4 mirrors
Jun 20 18:19:58 test10 kernel: created bitmap (1 pages) for device md2
Jun 20 18:19:58 test10 kernel: md2: bitmap initialized from disk: read 1/1 pages, set 1419 bits
Jun 20 18:19:58 test10 kernel: md2: detected capacity change from 0 to 107373064192
Jun 20 18:19:58 test10 kernel: dracut: mdadm: /dev/md2 has been started with 4 drives.
Jun 20 18:19:58 test10 kernel: md: delaying resync of md2 until md1 has finished (they share one or more physical units)
Jun 20 18:19:58 test10 kernel: md2: detected capacity change from 0 to 107373064192
Jun 20 18:19:58 test10 kernel: md2: unknown partition table
Jun 20 18:19:58 test10 kernel: EXT3-fs: barriers not enabled
Jun 20 18:19:58 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:19:58 test10 kernel: EXT3-fs (md1): mounted filesystem with ordered data mode
Jun 20 18:19:58 test10 kernel: dracut: Mounted root filesystem /dev/md1
Jun 20 18:19:58 test10 kernel: dracut: Loading SELinux policy
Jun 20 18:19:58 test10 kernel: dracut: /sbin/load_policy: Can't load policy: No such device
Jun 20 18:19:58 test10 kernel: dracut: Switching root
Jun 20 18:19:58 test10 kernel: udev: starting version 147
Jun 20 18:19:58 test10 kernel: e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10-k2
Jun 20 18:19:58 test10 kernel: e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: Disabling ASPM L1
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: Disabling ASPM L0s
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:30:48:b0:c9:c6
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: eth0: Intel(R) PRO/1000 Network Connection
Jun 20 18:19:58 test10 kernel: e1000e 0000:06:00.0: eth0: MAC: 2, PHY: 2, PBA No: FFFFFF-0FF
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: Disabling ASPM L1
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: Disabling ASPM L0s
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: eth1: (PCI Express:2.5GB/s:Width x1) 00:30:48:b0:c9:c7
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: eth1: Intel(R) PRO/1000 Network Connection
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: eth1: MAC: 2, PHY: 2, PBA No: FFFFFF-0FF
Jun 20 18:19:58 test10 kernel: iTCO_vendor_support: vendor-support=0
Jun 20 18:19:58 test10 kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.06
Jun 20 18:19:58 test10 kernel: iTCO_wdt: Found a ICH7 or ICH7R TCO device (Version=2, TCOBASE=0x1060)
Jun 20 18:19:58 test10 kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jun 20 18:19:58 test10 kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
Jun 20 18:19:58 test10 kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0
Jun 20 18:19:58 test10 kernel: sd 2:0:0:0: Attached scsi generic sg2 type 0
Jun 20 18:19:58 test10 kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0
Jun 20 18:19:58 test10 kernel: i801_smbus 0000:00:1f.3: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Jun 20 18:19:58 test10 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:19:58 test10 kernel: device-mapper: ioctl: 4.20.0-ioctl (2011-02-02) initialised: dm-devel@redhat.com
Jun 20 18:19:58 test10 kernel: EXT3-fs (md1): using internal journal
Jun 20 18:19:58 test10 kernel: EXT3-fs: barriers not enabled
Jun 20 18:19:58 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:19:58 test10 kernel: EXT3-fs (md0): using internal journal
Jun 20 18:19:58 test10 kernel: EXT3-fs (md0): mounted filesystem with ordered data mode
Jun 20 18:19:58 test10 kernel: EXT3-fs: barriers not enabled
Jun 20 18:19:58 test10 kernel: kjournald starting. Commit interval 5 seconds
Jun 20 18:19:58 test10 kernel: EXT3-fs (md2): using internal journal
Jun 20 18:19:58 test10 kernel: EXT3-fs (md2): mounted filesystem with ordered data mode
Jun 20 18:19:58 test10 kernel: NET: Registered protocol family 10
Jun 20 18:19:58 test10 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Jun 20 18:19:58 test10 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Jun 20 18:19:58 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 20 18:19:58 test10 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Jun 20 18:19:58 test10 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun 20 18:19:58 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Jun 20 18:19:58 test10 kernel: e1000e 0000:07:00.0: eth1: Reset adapter
Jun 20 18:19:58 test10 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Jun 20 18:19:58 test10 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun 20 18:19:58 test10 kernel: RPC: Registered udp transport module.
Jun 20 18:19:58 test10 kernel: RPC: Registered tcp transport module.
Jun 20 18:19:58 test10 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jun 20 18:19:59 test10 netconsole: : inserting netconsole module with arguments netconsole=6666@10.200.98.110/eth0,514@10.200.2.33/00:1F:45:68:64:D7
Jun 20 18:20:00 test10 kernel: netconsole: local port 6666
Jun 20 18:20:00 test10 kernel: netconsole: local IP 10.200.98.110
Jun 20 18:20:00 test10 kernel: netconsole: interface 'eth0'
Jun 20 18:20:00 test10 kernel: netconsole: remote port 514
Jun 20 18:20:00 test10 kernel: netconsole: remote IP 10.200.2.33
Jun 20 18:20:00 test10 kernel: netconsole: remote ethernet address 00:1f:45:68:64:d7
Jun 20 18:20:00 test10 kernel: console [netcon0] enabled
Jun 20 18:20:00 test10 kernel: netconsole: network logging started
Jun 20 18:20:02 test10 xinetd[1389]: Server /usr/sbin/swat is not executable [file=/etc/xinetd.d/samba] [line=8]
Jun 20 18:20:02 test10 xinetd[1389]: Error parsing attribute server - DISABLING SERVICE [file=/etc/xinetd.d/samba] [line=8]
Jun 20 18:20:02 test10 xinetd[1389]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
Jun 20 18:20:02 test10 xinetd[1389]: Started working: 3 available services
Jun 20 18:20:16 test10 ntpdate[1404]: step time server 128.10.254.6 offset 10.045116 sec
Jun 20 18:20:17 test10 ntpd[1407]: ntpd 4.2.4p8@1.1612-o Thu May 13 14:38:25 UTC 2010 (1)
Jun 20 18:20:17 test10 ntpd[1408]: precision = 0.120 usec
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #1 wildcard, ::#123 Disabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #2 eth1, fe80::230:48ff:feb0:c9c7#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #3 eth0, fe80::230:48ff:feb0:c9c6#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #4 lo, ::1#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #5 lo, 127.0.0.1#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #6 eth0, 10.200.98.110#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on interface #7 eth1, 169.254.10.1#123 Enabled
Jun 20 18:20:17 test10 ntpd[1408]: Listening on routing socket on fd #24 for interface updates
Jun 20 18:20:17 test10 ntpd[1408]: kernel time sync status 2040
Jun 20 18:20:19 test10 monit[1538]: monit: generated unique Monit id 131a291b34e94ca400612ffc90950db5 and stored to '/root/.monit.id'
Jun 20 18:20:19 test10 monit[1538]: Starting monit daemon
Jun 20 18:20:19 test10 monit[1540]: 'system_test10.sm.scalecomputing.com' Monit started
Jun 20 18:20:19 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:20:19 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:20:19 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:20:19 test10 dhclient: dhclient(1108) is already running - exiting.
Jun 20 18:20:19 test10 dhclient:
Jun 20 18:20:19 test10 dhclient: This version of ISC DHCP is based on the release available
Jun 20 18:20:19 test10 dhclient: on ftp.isc.org. Features have been added and other changes
Jun 20 18:20:19 test10 dhclient: have been made to the base software release in order to make
Jun 20 18:20:19 test10 dhclient: it work better with this distribution.
Jun 20 18:20:19 test10 dhclient:
Jun 20 18:20:19 test10 dhclient: Please report for this software via the Red Hat Bugzilla site:
Jun 20 18:20:19 test10 dhclient: http://bugzilla.redhat.com
Jun 20 18:20:19 test10 dhclient:
Jun 20 18:20:19 test10 dhclient: exiting.
Jun 20 18:20:19 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:20:19 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:20:19 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:20:19 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:20:19 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:20:19 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:20:19 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:20:20 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:20:20 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:20:20 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:20:20 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:20:20 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:20:20 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:20:20 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:20:20 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:20:20 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:20:20 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:20:20 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:20:21 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:20:21 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:20:21 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:20:21 test10 scconfigd: INFO [rollout-2939148048] Performing rollout for batch ID: startup
Jun 20 18:20:21 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:20:21 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:20:21 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:20:21 test10 scconfigd: ERR [rollout-2939148048] Server is shutting down... abandoning child process 1621
Jun 20 18:20:21 test10 scconfigd: ERR [rollout-2939148048] Server is shutting down... abandoning child process 1622
Jun 20 18:20:21 test10 scconfigd: ERR [rollout-2939148048] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:20:21 test10 scconfigd: INFO [resolve-2930755344] Performing impact resolution for batch ID: startup
Jun 20 18:20:21 test10 scconfigd: WARN [resolve-2930755344] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:20:21 test10 scconfigd: WARN [resolve-2930755344] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:20:21 test10 scconfigd: WARN [resolve-2930755344] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:20:21 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:20:22 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:20:22 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:20:22 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:20:22 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:20:32 test10 kernel: md: md1: resync done.
Jun 20 18:20:32 test10 kernel: md: resync of RAID array md2
Jun 20 18:20:32 test10 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Jun 20 18:20:32 test10 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jun 20 18:20:32 test10 kernel: md: using 128k window, over a total of 104856508 blocks.
Jun 20 18:20:52 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:21:02 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:21:02 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:21:02 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:21:02 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:21:02 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:21:02 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:02 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:21:02 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:21:02 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:21:02 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:03 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:21:03 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:21:03 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:21:03 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:21:03 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:03 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:21:03 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:21:03 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:03 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:21:03 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:21:03 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:21:04 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:21:04 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:21:04 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:21:04 test10 scconfigd: INFO [rollout-562939664] Performing rollout for batch ID: startup
Jun 20 18:21:04 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:04 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:21:04 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:21:04 test10 scconfigd: ERR [rollout-562939664] Server is shutting down... abandoning child process 2017
Jun 20 18:21:04 test10 scconfigd: ERR [rollout-562939664] Server is shutting down... abandoning child process 2018
Jun 20 18:21:04 test10 scconfigd: ERR [rollout-562939664] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:21:04 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:21:04 test10 scconfigd: INFO [resolve-554546960] Performing impact resolution for batch ID: startup
Jun 20 18:21:04 test10 scconfigd: WARN [resolve-554546960] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:21:04 test10 scconfigd: WARN [resolve-554546960] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:21:04 test10 scconfigd: WARN [resolve-554546960] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:21:05 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:21:05 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:21:05 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:21:05 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:21:35 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:21:45 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:21:45 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:21:45 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:21:45 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:21:45 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:21:45 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:45 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:21:45 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:21:45 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:21:45 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:46 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:21:46 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:21:46 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:21:46 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:21:46 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:46 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:21:46 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:21:46 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:46 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:21:46 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:21:46 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:21:47 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:21:47 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:21:47 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:21:47 test10 scconfigd: INFO [rollout-2649818896] Performing rollout for batch ID: startup
Jun 20 18:21:47 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:21:47 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:21:47 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:21:47 test10 scconfigd: ERR [rollout-2649818896] Server is shutting down... abandoning child process 2413
Jun 20 18:21:47 test10 scconfigd: ERR [rollout-2649818896] Server is shutting down... abandoning child process 2414
Jun 20 18:21:47 test10 scconfigd: ERR [rollout-2649818896] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:21:47 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:21:47 test10 scconfigd: INFO [resolve-2641426192] Performing impact resolution for batch ID: startup
Jun 20 18:21:47 test10 scconfigd: WARN [resolve-2641426192] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:21:47 test10 scconfigd: WARN [resolve-2641426192] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:21:47 test10 scconfigd: WARN [resolve-2641426192] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:21:48 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:21:48 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:21:48 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:21:48 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:22:18 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:22:28 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:22:28 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:22:28 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:22:28 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:22:28 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:22:28 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:22:28 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:22:28 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:22:28 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:22:28 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:22:29 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:22:29 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:22:29 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:22:29 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:22:29 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:22:29 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:22:29 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:22:29 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:22:29 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:22:29 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:22:29 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:22:30 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:22:30 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:22:30 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:22:30 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:22:30 test10 scconfigd: INFO [rollout-861275920] Performing rollout for batch ID: startup
Jun 20 18:22:30 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:22:30 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:22:30 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:22:30 test10 scconfigd: ERR [rollout-861275920] Server is shutting down... abandoning child process 2809
Jun 20 18:22:30 test10 scconfigd: ERR [rollout-861275920] Server is shutting down... abandoning child process 2810
Jun 20 18:22:30 test10 scconfigd: ERR [rollout-861275920] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:22:30 test10 scconfigd: INFO [resolve-852883216] Performing impact resolution for batch ID: startup
Jun 20 18:22:30 test10 scconfigd: WARN [resolve-852883216] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:22:30 test10 scconfigd: WARN [resolve-852883216] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:22:30 test10 scconfigd: WARN [resolve-852883216] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:22:31 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:22:31 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:22:31 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:22:31 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:23:01 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:23:11 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:23:11 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:23:11 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:23:11 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:23:11 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:23:11 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:11 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:23:11 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:23:11 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:23:11 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:12 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:23:12 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:23:12 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:23:12 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:23:12 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:12 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:23:12 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:23:12 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:12 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:23:12 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:23:12 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:23:13 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:23:13 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:23:13 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:23:13 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:13 test10 scconfigd: INFO [rollout-906102544] Performing rollout for batch ID: startup
Jun 20 18:23:13 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:23:13 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:23:13 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:23:13 test10 scconfigd: ERR [rollout-906102544] Server is shutting down... abandoning child process 3205
Jun 20 18:23:13 test10 scconfigd: ERR [rollout-906102544] Server is shutting down... abandoning child process 3206
Jun 20 18:23:13 test10 scconfigd: ERR [rollout-906102544] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:23:13 test10 scconfigd: INFO [resolve-897709840] Performing impact resolution for batch ID: startup
Jun 20 18:23:13 test10 scconfigd: WARN [resolve-897709840] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:23:13 test10 scconfigd: WARN [resolve-897709840] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:23:13 test10 scconfigd: WARN [resolve-897709840] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:23:14 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:23:14 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:23:14 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:23:14 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:23:44 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:23:54 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:23:54 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:23:54 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:23:54 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:23:54 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:23:54 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:54 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:23:54 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:23:54 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:23:54 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:55 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:23:55 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:23:55 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:23:55 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:23:55 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:55 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:23:55 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:23:55 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:55 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:23:55 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:23:55 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:23:56 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:23:56 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:23:56 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:23:56 test10 scconfigd: INFO [rollout-2041063184] Performing rollout for batch ID: startup
Jun 20 18:23:56 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:23:56 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:23:56 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:23:56 test10 scconfigd: ERR [rollout-2041063184] Server is shutting down... abandoning child process 3601
Jun 20 18:23:56 test10 scconfigd: ERR [rollout-2041063184] Server is shutting down... abandoning child process 3602
Jun 20 18:23:56 test10 scconfigd: ERR [rollout-2041063184] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:23:56 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:23:56 test10 scconfigd: INFO [resolve-2032670480] Performing impact resolution for batch ID: startup
Jun 20 18:23:56 test10 scconfigd: WARN [resolve-2032670480] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:23:56 test10 scconfigd: WARN [resolve-2032670480] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:23:56 test10 scconfigd: WARN [resolve-2032670480] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:23:57 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:23:57 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:23:57 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:23:57 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:24:27 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:24:33 test10 ntpd[1408]: synchronized to 198.137.202.16, stratum 2
Jun 20 18:24:33 test10 ntpd[1408]: kernel time sync status change 2001
Jun 20 18:24:37 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:24:37 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:24:37 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:24:37 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:24:37 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:24:37 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:24:37 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:24:37 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:24:37 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:24:37 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:24:38 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:24:38 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:24:38 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:24:38 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:24:38 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:24:38 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:24:38 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:24:38 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:24:38 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:24:38 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:24:38 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:24:39 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:24:39 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:24:39 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:24:39 test10 scconfigd: INFO [rollout-2067375888] Performing rollout for batch ID: startup
Jun 20 18:24:39 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:24:39 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:24:39 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:24:39 test10 scconfigd: ERR [rollout-2067375888] Server is shutting down... abandoning child process 3997
Jun 20 18:24:39 test10 scconfigd: ERR [rollout-2067375888] Server is shutting down... abandoning child process 3998
Jun 20 18:24:39 test10 scconfigd: ERR [rollout-2067375888] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:24:39 test10 scconfigd: INFO [resolve-2058983184] Performing impact resolution for batch ID: startup
Jun 20 18:24:39 test10 scconfigd: WARN [resolve-2058983184] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:24:39 test10 scconfigd: WARN [resolve-2058983184] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:24:39 test10 scconfigd: WARN [resolve-2058983184] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:24:39 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:24:40 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:24:40 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:24:40 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:24:40 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:25:10 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:25:20 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:25:20 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:25:20 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:25:20 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:25:20 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:25:20 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:25:20 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:25:20 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:25:20 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:25:20 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:25:21 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:25:21 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:25:21 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:25:21 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:25:21 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:25:21 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:25:21 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:25:21 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:25:21 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:25:21 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:25:21 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:25:22 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:25:22 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:25:22 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:25:22 test10 scconfigd: INFO [rollout-1255622416] Performing rollout for batch ID: startup
Jun 20 18:25:22 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:25:22 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:25:22 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:25:22 test10 scconfigd: ERR [rollout-1255622416] Server is shutting down... abandoning child process 4393
Jun 20 18:25:22 test10 scconfigd: ERR [rollout-1255622416] Server is shutting down... abandoning child process 4394
Jun 20 18:25:22 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:25:22 test10 scconfigd: ERR [rollout-1255622416] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:25:22 test10 scconfigd: INFO [resolve-1247229712] Performing impact resolution for batch ID: startup
Jun 20 18:25:22 test10 scconfigd: WARN [resolve-1247229712] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:25:22 test10 scconfigd: WARN [resolve-1247229712] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:25:22 test10 scconfigd: WARN [resolve-1247229712] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:25:23 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:25:23 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:25:23 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:25:23 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:25:53 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:26:03 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:26:03 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:26:03 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:26:03 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:26:03 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:26:03 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:26:03 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:26:03 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:26:03 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:26:03 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:26:04 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:26:04 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:26:04 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:26:04 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:26:04 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:26:04 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:26:04 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:26:04 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:26:04 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:26:04 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:26:04 test10 scstoraged: INFO [unknown] logging halted Jun 20 18:26:05 test10 monit[1540]: 'scconfigd' process is not running Jun 20 18:26:05 test10 monit[1540]: 'scconfigd' trying to restart Jun 20 18:26:05 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd Jun 20 18:26:05 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd Jun 20 18:26:05 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:26:05 test10 scconfigd: INFO [unknown] soapd - rev SC001 Jun 20 18:26:05 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP Jun 20 18:26:05 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf Jun 20 18:26:05 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf Jun 20 18:26:05 test10 scconfigd: INFO [unknown] Loading event subscriptions database Jun 20 18:26:05 test10 scconfigd: INFO [rollout-2642237200] Performing rollout for batch ID: startup Jun 20 18:26:05 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:26:05 test10 scconfigd: CRIT [unknown] ABORT: std::exception Jun 20 18:26:05 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait... Jun 20 18:26:05 test10 scconfigd: ERR [rollout-2642237200] Server is shutting down... abandoning child process 4789 Jun 20 18:26:05 test10 scconfigd: ERR [rollout-2642237200] Server is shutting down... 
abandoning child process 4790 Jun 20 18:26:05 test10 scconfigd: ERR [rollout-2642237200] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc' Jun 20 18:26:05 test10 scconfigd: INFO [resolve-2633844496] Performing impact resolution for batch ID: startup Jun 20 18:26:05 test10 scconfigd: WARN [resolve-2633844496] fire_event_async: server is shutting down - dropped event: config.resolve_started Jun 20 18:26:05 test10 scconfigd: WARN [resolve-2633844496] fire_event_async: server is shutting down - dropped event: config.commit_status Jun 20 18:26:05 test10 scconfigd: WARN [resolve-2633844496] fire_event_async: server is shutting down - dropped event: config.rollout_complete Jun 20 18:26:05 test10 scmanaged: INFO [unknown] logging halted Jun 20 18:26:06 test10 monit[1540]: 'scclusterd' process is not running Jun 20 18:26:06 test10 monit[1540]: 'scclusterd' trying to restart Jun 20 18:26:06 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd Jun 20 18:26:06 test10 scconfigd: INFO [unknown] logging halted Jun 20 18:26:36 test10 monit[1540]: 'scclusterd' failed to start Jun 20 18:26:46 test10 monit[1540]: 'scstoraged' process is not running Jun 20 18:26:46 test10 monit[1540]: 'scstoraged' trying to restart Jun 20 18:26:46 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged Jun 20 18:26:46 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up Jun 20 18:26:46 test10 scstoraged: INFO [unknown] Log mask set to default Jun 20 18:26:46 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:26:46 test10 scstoraged: INFO [unknown] Mounting ceph Jun 20 18:26:46 test10 scstoraged: ERR [unknown] unable to mount ceph. 
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:26:46 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:26:46 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:26:47 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:26:47 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:26:47 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:26:47 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:26:47 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:26:47 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:26:47 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:26:47 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:26:47 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:26:47 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:26:47 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:26:48 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:26:48 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:26:48 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:26:48 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:26:48 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:26:48 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:26:48 test10 scconfigd: INFO [rollout-3988748048] Performing rollout for batch ID: startup
Jun 20 18:26:48 test10 scconfigd: WARN [rollout-3988748048] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:26:48 test10 scconfigd: ERR [rollout-3988748048] Server is shutting down... abandoning child process 5185
Jun 20 18:26:48 test10 scconfigd: ERR [rollout-3988748048] Server is shutting down... abandoning child process 5186
Jun 20 18:26:48 test10 scconfigd: ERR [rollout-3988748048] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:26:48 test10 scconfigd: INFO [resolve-3980355344] Performing impact resolution for batch ID: startup
Jun 20 18:26:48 test10 scconfigd: WARN [resolve-3980355344] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:26:48 test10 scconfigd: WARN [resolve-3980355344] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:26:48 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:26:48 test10 scconfigd: WARN [resolve-3980355344] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:26:49 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:26:49 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:26:49 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:26:49 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:27:19 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:27:29 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:27:29 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:27:29 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:27:29 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:27:29 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:27:29 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:27:29 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:27:29 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:27:29 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:27:29 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:27:30 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:27:30 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:27:30 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:27:30 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:27:30 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:27:30 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:27:30 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:27:30 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:27:30 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:27:30 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:27:30 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:27:31 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:27:31 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:27:31 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:27:31 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:27:31 test10 scconfigd: INFO [rollout-2615818000] Performing rollout for batch ID: startup
Jun 20 18:27:31 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:27:31 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:27:31 test10 scconfigd: ERR [rollout-2615818000] Server is shutting down... abandoning child process 5581
Jun 20 18:27:31 test10 scconfigd: ERR [rollout-2615818000] Server is shutting down... abandoning child process 5582
Jun 20 18:27:31 test10 scconfigd: ERR [rollout-2615818000] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:27:31 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:27:31 test10 scconfigd: INFO [resolve-2607425296] Performing impact resolution for batch ID: startup
Jun 20 18:27:31 test10 scconfigd: WARN [resolve-2607425296] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:27:31 test10 scconfigd: WARN [resolve-2607425296] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:27:31 test10 scconfigd: WARN [resolve-2607425296] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:27:32 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:27:32 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:27:32 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:27:32 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:28:02 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:28:12 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:28:12 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:28:12 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:28:12 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:28:12 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:28:12 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:12 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:28:12 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:28:12 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:28:12 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:13 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:28:13 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:28:13 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:28:13 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:28:13 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:13 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:28:13 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:28:13 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:13 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:28:13 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:28:13 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:28:14 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:28:14 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:28:14 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:28:14 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:14 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:28:14 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:28:14 test10 scconfigd: INFO [rollout-3464152848] Performing rollout for batch ID: startup
Jun 20 18:28:14 test10 scconfigd: WARN [rollout-3464152848] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:28:14 test10 scconfigd: ERR [rollout-3464152848] Server is shutting down... abandoning child process 5977
Jun 20 18:28:14 test10 scconfigd: ERR [rollout-3464152848] Server is shutting down... abandoning child process 5978
Jun 20 18:28:14 test10 scconfigd: ERR [rollout-3464152848] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:28:14 test10 scconfigd: INFO [resolve-3455760144] Performing impact resolution for batch ID: startup
Jun 20 18:28:14 test10 scconfigd: WARN [resolve-3455760144] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:28:14 test10 scconfigd: WARN [resolve-3455760144] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:28:14 test10 scconfigd: WARN [resolve-3455760144] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:28:14 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:28:15 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:28:15 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:28:15 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:28:15 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:28:45 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:28:55 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:28:55 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:28:55 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:28:55 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:28:55 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:28:55 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:55 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:28:55 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:28:55 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:28:55 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:56 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:28:56 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:28:56 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:28:56 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:28:56 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:56 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:28:56 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:28:56 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:56 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:28:56 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:28:56 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:28:57 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:28:57 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:28:57 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:28:57 test10 scconfigd: INFO [rollout-952190736] Performing rollout for batch ID: startup
Jun 20 18:28:57 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:28:57 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:28:57 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:28:57 test10 scconfigd: ERR [rollout-952190736] Server is shutting down... abandoning child process 6373
Jun 20 18:28:57 test10 scconfigd: ERR [rollout-952190736] Server is shutting down... abandoning child process 6374
Jun 20 18:28:57 test10 scconfigd: ERR [rollout-952190736] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:28:57 test10 scconfigd: INFO [resolve-872412944] Performing impact resolution for batch ID: startup
Jun 20 18:28:57 test10 scconfigd: WARN [resolve-872412944] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:28:57 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:28:57 test10 scconfigd: WARN [resolve-872412944] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:28:57 test10 scconfigd: WARN [resolve-872412944] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:28:58 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:28:58 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:28:58 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:28:58 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:29:28 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:29:38 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:29:38 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:29:38 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:29:38 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:29:38 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:29:38 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:29:38 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:29:38 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:29:38 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:29:38 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:29:39 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:29:39 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:29:39 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:29:39 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:29:39 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:29:39 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:29:39 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:29:39 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:29:39 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:29:39 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:29:39 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:29:40 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:29:40 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:29:40 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:29:40 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:29:40 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:29:40 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:29:40 test10 scconfigd: INFO [rollout-4149827344] Performing rollout for batch ID: startup
Jun 20 18:29:40 test10 scconfigd: WARN [rollout-4149827344] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:29:40 test10 scconfigd: ERR [rollout-4149827344] Server is shutting down... abandoning child process 6769
Jun 20 18:29:40 test10 scconfigd: ERR [rollout-4149827344] Server is shutting down... abandoning child process 6770
Jun 20 18:29:40 test10 scconfigd: ERR [rollout-4149827344] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:29:40 test10 scconfigd: INFO [resolve-4141434640] Performing impact resolution for batch ID: startup
Jun 20 18:29:40 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:29:40 test10 scconfigd: WARN [resolve-4141434640] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:29:40 test10 scconfigd: WARN [resolve-4141434640] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:29:40 test10 scconfigd: WARN [resolve-4141434640] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:29:41 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:29:41 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:29:41 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:29:41 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:30:11 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:30:21 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:30:21 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:30:21 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:30:21 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:30:21 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:30:21 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:30:21 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:30:21 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:30:21 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:30:21 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:30:22 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:30:22 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:30:22 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:30:22 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:30:22 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:30:22 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:30:22 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:30:22 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:30:22 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:30:22 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:30:22 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:30:23 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:30:23 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:30:23 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:30:23 test10 scconfigd: INFO [rollout-2624710416] Performing rollout for batch ID: startup
Jun 20 18:30:23 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:30:23 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:30:23 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:30:23 test10 scconfigd: ERR [rollout-2624710416] Server is shutting down... abandoning child process 7177
Jun 20 18:30:23 test10 scconfigd: ERR [rollout-2624710416] Server is shutting down... abandoning child process 7178
Jun 20 18:30:23 test10 scconfigd: ERR [rollout-2624710416] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:30:23 test10 scconfigd: INFO [resolve-2616317712] Performing impact resolution for batch ID: startup
Jun 20 18:30:23 test10 scconfigd: WARN [resolve-2616317712] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:30:23 test10 scconfigd: WARN [resolve-2616317712] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:30:23 test10 scconfigd: WARN [resolve-2616317712] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:30:23 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:30:24 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:30:24 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:30:24 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:30:24 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:30:54 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:30:58 test10 ntpd[1408]: synchronized to 173.8.198.243, stratum 2
Jun 20 18:31:04 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:31:04 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:31:04 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:31:04 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:31:04 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:31:04 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:31:04 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:31:04 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012) Jun 20 18:31:04 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012 Jun 20 18:31:04 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:31:05 test10 monit[1540]: 'scmanaged' process is not running Jun 20 18:31:05 test10 monit[1540]: 'scmanaged' trying to restart Jun 20 18:31:05 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged Jun 20 18:31:05 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged Jun 20 18:31:05 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:31:05 test10 scmanaged: INFO [unknown] soapd - rev SC001 Jun 20 18:31:05 test10 scmanaged: INFO [unknown] Loading event subscriptions database Jun 20 18:31:05 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:31:05 test10 scmanaged: CRIT [unknown] ABORT: std::exception Jun 20 18:31:05 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait... 
Jun 20 18:31:05 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:31:06 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:31:06 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:31:06 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:31:06 test10 scconfigd: INFO [rollout-2958219024] Performing rollout for batch ID: startup
Jun 20 18:31:06 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:31:06 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:31:06 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:31:06 test10 scconfigd: ERR [rollout-2958219024] Server is shutting down... abandoning child process 7573
Jun 20 18:31:06 test10 scconfigd: ERR [rollout-2958219024] Server is shutting down... abandoning child process 7574
Jun 20 18:31:06 test10 scconfigd: ERR [rollout-2958219024] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:31:06 test10 scconfigd: INFO [resolve-2949826320] Performing impact resolution for batch ID: startup
Jun 20 18:31:06 test10 scconfigd: WARN [resolve-2949826320] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:31:06 test10 scconfigd: WARN [resolve-2949826320] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:31:06 test10 scconfigd: WARN [resolve-2949826320] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:31:06 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:31:07 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:31:07 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:31:07 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:31:07 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:31:37 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:31:47 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:31:47 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:31:47 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:31:47 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:31:47 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:31:47 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:31:47 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:31:47 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:31:47 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:31:47 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:31:48 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:31:48 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:31:48 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:31:48 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:31:48 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:31:48 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:31:48 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:31:48 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:31:48 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:31:48 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:31:48 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:31:49 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:31:49 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:31:49 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:31:49 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:31:49 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:31:49 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:31:49 test10 scconfigd: INFO [rollout-3970778896] Performing rollout for batch ID: startup
Jun 20 18:31:49 test10 scconfigd: WARN [rollout-3970778896] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:31:49 test10 scconfigd: ERR [rollout-3970778896] Server is shutting down... abandoning child process 7969
Jun 20 18:31:49 test10 scconfigd: ERR [rollout-3970778896] Server is shutting down... abandoning child process 7970
Jun 20 18:31:49 test10 scconfigd: ERR [rollout-3970778896] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:31:49 test10 scconfigd: INFO [resolve-3892311824] Performing impact resolution for batch ID: startup
Jun 20 18:31:49 test10 scconfigd: WARN [resolve-3892311824] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:31:49 test10 scconfigd: WARN [resolve-3892311824] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:31:49 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:31:49 test10 scconfigd: WARN [resolve-3892311824] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:31:50 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:31:50 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:31:50 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:31:50 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:32:20 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:32:30 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:32:30 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:32:30 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:32:30 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:32:30 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:32:30 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:32:30 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:32:30 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:32:30 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:32:30 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:32:31 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:32:31 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:32:31 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:32:31 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:32:31 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:32:31 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:32:31 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:32:31 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:32:31 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:32:31 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:32:31 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:32:32 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:32:32 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:32:32 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:32:32 test10 scconfigd: INFO [rollout-1867613968] Performing rollout for batch ID: startup
Jun 20 18:32:32 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:32:32 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:32:32 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:32:32 test10 scconfigd: ERR [rollout-1867613968] Server is shutting down... abandoning child process 8365
Jun 20 18:32:32 test10 scconfigd: ERR [rollout-1867613968] Server is shutting down... abandoning child process 8366
Jun 20 18:32:32 test10 scconfigd: ERR [rollout-1867613968] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:32:32 test10 scconfigd: INFO [resolve-1859221264] Performing impact resolution for batch ID: startup
Jun 20 18:32:32 test10 scconfigd: WARN [resolve-1859221264] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:32:32 test10 scconfigd: WARN [resolve-1859221264] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:32:32 test10 scconfigd: WARN [resolve-1859221264] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:32:32 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:32:33 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:32:33 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:32:33 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:32:33 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:33:03 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:33:13 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:33:13 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:33:13 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:33:13 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:33:13 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:33:13 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:13 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:33:13 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:33:13 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:33:13 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:14 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:33:14 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:33:14 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:33:14 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:33:14 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:14 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:33:14 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:33:14 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:14 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:33:14 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:33:14 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:33:15 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:33:15 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:33:15 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:33:15 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:15 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:33:15 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:33:15 test10 scconfigd: INFO [rollout-3354593040] Performing rollout for batch ID: startup
Jun 20 18:33:15 test10 scconfigd: WARN [rollout-3354593040] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:33:15 test10 scconfigd: ERR [rollout-3354593040] Server is shutting down... abandoning child process 8761
Jun 20 18:33:15 test10 scconfigd: ERR [rollout-3354593040] Server is shutting down... abandoning child process 8762
Jun 20 18:33:15 test10 scconfigd: ERR [rollout-3354593040] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:33:15 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:33:15 test10 scconfigd: INFO [resolve-3346200336] Performing impact resolution for batch ID: startup
Jun 20 18:33:15 test10 scconfigd: WARN [resolve-3346200336] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:33:15 test10 scconfigd: WARN [resolve-3346200336] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:33:15 test10 scconfigd: WARN [resolve-3346200336] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:33:16 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:33:16 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:33:16 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:33:16 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:33:46 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:33:56 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:33:56 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:33:56 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:33:56 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:33:56 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:33:56 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:56 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:33:56 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:33:56 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:33:56 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:57 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:33:57 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:33:57 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:33:57 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:33:57 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:57 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:33:57 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:33:57 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:57 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:33:57 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:33:57 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:33:58 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:33:58 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:33:58 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:33:58 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:33:58 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:33:58 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:33:58 test10 scconfigd: INFO [rollout-3873810192] Performing rollout for batch ID: startup
Jun 20 18:33:58 test10 scconfigd: WARN [rollout-3873810192] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:33:58 test10 scconfigd: ERR [rollout-3873810192] Server is shutting down... abandoning child process 9157
Jun 20 18:33:58 test10 scconfigd: ERR [rollout-3873810192] Server is shutting down... abandoning child process 9158
Jun 20 18:33:58 test10 scconfigd: ERR [rollout-3873810192] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:33:58 test10 scconfigd: INFO [resolve-3865417488] Performing impact resolution for batch ID: startup
Jun 20 18:33:58 test10 scconfigd: WARN [resolve-3865417488] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:33:58 test10 scconfigd: WARN [resolve-3865417488] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:33:58 test10 scconfigd: WARN [resolve-3865417488] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:33:58 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:33:59 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:33:59 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:33:59 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:33:59 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:34:17 test10 ntpd[1408]: synchronized to 198.137.202.16, stratum 2
Jun 20 18:34:29 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:34:39 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:34:39 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:34:39 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:34:39 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:34:39 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:34:39 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:34:39 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:34:39 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:34:39 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:34:39 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:34:40 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:34:40 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:34:40 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:34:40 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:34:40 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:34:40 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:34:40 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:34:40 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:34:40 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:34:40 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:34:40 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:34:41 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:34:41 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:34:41 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:34:41 test10 scconfigd: INFO [rollout-2148022032] Performing rollout for batch ID: startup
Jun 20 18:34:41 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:34:41 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:34:41 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:34:41 test10 scconfigd: ERR [rollout-2148022032] Server is shutting down... abandoning child process 9553
Jun 20 18:34:41 test10 scconfigd: ERR [rollout-2148022032] Server is shutting down... abandoning child process 9554
Jun 20 18:34:41 test10 scconfigd: ERR [rollout-2148022032] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:34:41 test10 scconfigd: INFO [resolve-2139629328] Performing impact resolution for batch ID: startup
Jun 20 18:34:41 test10 scconfigd: WARN [resolve-2139629328] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:34:41 test10 scconfigd: WARN [resolve-2139629328] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:34:41 test10 scconfigd: WARN [resolve-2139629328] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:34:41 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:34:42 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:34:42 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:34:42 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:34:42 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:35:12 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:35:22 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:35:22 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:35:22 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:35:22 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:35:22 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:35:22 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:35:22 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:35:22 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:35:22 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:35:22 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:35:23 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:35:23 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:35:23 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:35:23 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:35:23 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:35:23 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:35:23 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:35:23 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:35:23 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:35:23 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:35:23 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:35:24 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:35:24 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:35:24 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:35:24 test10 scconfigd: INFO [rollout-2684831504] Performing rollout for batch ID: startup
Jun 20 18:35:24 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:35:24 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:35:24 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:35:24 test10 scconfigd: ERR [rollout-2684831504] Server is shutting down... abandoning child process 9949
Jun 20 18:35:24 test10 scconfigd: ERR [rollout-2684831504] Server is shutting down... abandoning child process 9950
Jun 20 18:35:24 test10 scconfigd: ERR [rollout-2684831504] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:35:24 test10 scconfigd: INFO [resolve-2676438800] Performing impact resolution for batch ID: startup
Jun 20 18:35:24 test10 scconfigd: WARN [resolve-2676438800] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:35:24 test10 scconfigd: WARN [resolve-2676438800] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:35:24 test10 scconfigd: WARN [resolve-2676438800] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:35:24 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:35:25 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:35:25 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:35:25 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:35:25 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:35:55 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:36:05 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:36:05 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:36:05 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:36:05 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:36:05 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:36:05 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:05 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:36:05 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:36:05 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:36:05 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:06 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:36:06 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:36:06 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:36:06 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:36:06 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:06 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:36:06 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:36:06 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:06 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:36:06 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:36:06 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:36:07 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:36:07 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:36:07 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:36:07 test10 scconfigd: INFO [rollout-3016054544] Performing rollout for batch ID: startup
Jun 20 18:36:07 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:07 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:36:07 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:36:07 test10 scconfigd: ERR [rollout-3016054544] Server is shutting down... abandoning child process 10345
Jun 20 18:36:07 test10 scconfigd: ERR [rollout-3016054544] Server is shutting down... abandoning child process 10346
Jun 20 18:36:07 test10 scconfigd: ERR [rollout-3016054544] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:36:07 test10 scconfigd: INFO [resolve-3007661840] Performing impact resolution for batch ID: startup
Jun 20 18:36:07 test10 scconfigd: WARN [resolve-3007661840] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:36:07 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:36:07 test10 scconfigd: WARN [resolve-3007661840] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:36:07 test10 scconfigd: WARN [resolve-3007661840] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:36:08 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:36:08 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:36:08 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:36:08 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:36:38 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:36:48 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:36:48 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:36:48 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:36:48 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:36:48 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:36:48 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:48 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:36:48 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:36:48 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:36:48 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:49 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:36:49 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:36:49 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:36:49 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:36:49 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:49 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:36:49 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:36:49 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:49 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:36:49 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:36:49 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:36:50 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:36:50 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:36:50 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:36:50 test10 scconfigd: INFO [rollout-2781742864] Performing rollout for batch ID: startup
Jun 20 18:36:50 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:36:50 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:36:50 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:36:50 test10 scconfigd: ERR [rollout-2781742864] Server is shutting down... abandoning child process 10741
Jun 20 18:36:50 test10 scconfigd: ERR [rollout-2781742864] Server is shutting down... abandoning child process 10742
Jun 20 18:36:50 test10 scconfigd: ERR [rollout-2781742864] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:36:50 test10 scconfigd: INFO [resolve-2773350160] Performing impact resolution for batch ID: startup
Jun 20 18:36:50 test10 scconfigd: WARN [resolve-2773350160] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:36:50 test10 scconfigd: WARN [resolve-2773350160] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:36:50 test10 scconfigd: WARN [resolve-2773350160] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:36:50 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:36:51 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:36:51 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:36:51 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:36:51 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:37:21 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:37:31 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:37:31 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:37:31 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:37:31 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:37:31 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:37:31 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:37:31 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:37:31 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:37:31 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:37:31 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:37:32 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:37:32 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:37:32 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:37:32 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:37:32 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:37:32 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:37:32 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:37:32 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:37:32 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:37:32 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:37:32 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:37:33 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:37:33 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:37:33 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:37:33 test10 scconfigd: INFO [rollout-3744372496] Performing rollout for batch ID: startup
Jun 20 18:37:33 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:37:33 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:37:33 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:37:33 test10 scconfigd: ERR [rollout-3744372496] Server is shutting down... abandoning child process 11137
Jun 20 18:37:33 test10 scconfigd: ERR [rollout-3744372496] Server is shutting down... abandoning child process 11138
Jun 20 18:37:33 test10 scconfigd: ERR [rollout-3744372496] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:37:33 test10 scconfigd: INFO [resolve-3735979792] Performing impact resolution for batch ID: startup
Jun 20 18:37:33 test10 scconfigd: WARN [resolve-3735979792] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:37:33 test10 scconfigd: WARN [resolve-3735979792] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:37:33 test10 scconfigd: WARN [resolve-3735979792] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:37:33 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:37:34 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:37:34 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:37:34 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:37:34 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:38:04 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:38:14 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:38:14 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:38:14 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:38:14 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:38:14 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:38:14 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:14 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:38:14 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:38:14 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:38:14 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:15 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:38:15 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:38:15 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:38:15 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:38:15 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:15 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:38:15 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:38:15 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:15 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:38:15 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:38:15 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:38:16 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:38:16 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:38:16 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:38:16 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:16 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:38:16 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:38:16 test10 scconfigd: INFO [rollout-1700574992] Performing rollout for batch ID: startup
Jun 20 18:38:16 test10 scconfigd: WARN [rollout-1700574992] fire_event_async: server is shutting down - dropped event: config.rollout_started
Jun 20 18:38:16 test10 scconfigd: ERR [rollout-1700574992] Server is shutting down... abandoning child process 11533
Jun 20 18:38:16 test10 scconfigd: ERR [rollout-1700574992] Server is shutting down... abandoning child process 11534
Jun 20 18:38:16 test10 scconfigd: ERR [rollout-1700574992] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:38:16 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:38:16 test10 scconfigd: INFO [resolve-1692182288] Performing impact resolution for batch ID: startup
Jun 20 18:38:16 test10 scconfigd: WARN [resolve-1692182288] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:38:16 test10 scconfigd: WARN [resolve-1692182288] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:38:16 test10 scconfigd: WARN [resolve-1692182288] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:38:17 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:38:17 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:38:17 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:38:17 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:38:37 test10 kernel: md: md2: resync done.
Jun 20 18:38:47 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:38:57 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:38:57 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:38:57 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:38:57 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:38:57 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:38:57 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:57 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:38:57 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:38:57 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:38:57 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:58 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:38:58 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:38:58 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:38:58 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:38:58 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:58 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:38:58 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:38:58 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:58 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:38:58 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:38:58 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:38:59 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:38:59 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:38:59 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:38:59 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:38:59 test10 scconfigd: INFO [rollout-4111546128] Performing rollout for batch ID: startup
Jun 20 18:38:59 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:38:59 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:38:59 test10 scconfigd: ERR [rollout-4111546128] Server is shutting down... abandoning child process 11929
Jun 20 18:38:59 test10 scconfigd: ERR [rollout-4111546128] Server is shutting down... abandoning child process 11930
Jun 20 18:38:59 test10 scconfigd: ERR [rollout-4111546128] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:38:59 test10 scconfigd: INFO [resolve-4103153424] Performing impact resolution for batch ID: startup
Jun 20 18:38:59 test10 scconfigd: WARN [resolve-4103153424] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:38:59 test10 scconfigd: WARN [resolve-4103153424] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:38:59 test10 scconfigd: WARN [resolve-4103153424] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:38:59 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:39:00 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:39:00 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:39:00 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:39:00 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:39:30 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:39:40 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:39:40 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:39:40 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:39:40 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:39:40 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:39:40 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:39:40 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:39:40 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:39:40 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:39:40 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:39:41 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:39:41 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:39:41 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:39:41 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:39:41 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:39:41 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:39:41 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:39:41 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:39:41 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:39:41 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:39:41 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:39:42 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:39:42 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:39:42 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:39:42 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:39:42 test10 scconfigd: INFO [rollout-3690305296] Performing rollout for batch ID: startup
Jun 20 18:39:42 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:39:42 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:39:42 test10 scconfigd: ERR [rollout-3690305296] Server is shutting down... abandoning child process 12325
Jun 20 18:39:42 test10 scconfigd: ERR [rollout-3690305296] Server is shutting down... abandoning child process 12326
Jun 20 18:39:42 test10 scconfigd: ERR [rollout-3690305296] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:39:42 test10 scconfigd: INFO [resolve-3681912592] Performing impact resolution for batch ID: startup
Jun 20 18:39:42 test10 scconfigd: WARN [resolve-3681912592] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:39:42 test10 scconfigd: WARN [resolve-3681912592] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:39:42 test10 scconfigd: WARN [resolve-3681912592] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:39:42 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:39:43 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:39:43 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:39:43 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:39:43 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:40:13 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:40:23 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:40:23 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:40:23 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:40:23 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:40:23 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:40:23 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:40:23 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:40:23 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:40:23 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:40:23 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:40:24 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:40:24 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:40:24 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:40:24 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:40:24 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:40:24 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:40:24 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:40:24 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:40:24 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:40:24 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:40:24 test10 scstoraged: INFO [unknown] logging halted Jun 20 18:40:25 test10 monit[1540]: 'scconfigd' process is not running Jun 20 18:40:25 test10 monit[1540]: 'scconfigd' trying to restart Jun 20 18:40:25 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd Jun 20 18:40:25 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd Jun 20 18:40:25 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:40:25 test10 scconfigd: INFO [unknown] soapd - rev SC001 Jun 20 18:40:25 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP Jun 20 18:40:25 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf Jun 20 18:40:25 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf Jun 20 18:40:25 test10 scconfigd: INFO [unknown] Loading event subscriptions database Jun 20 18:40:25 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:40:25 test10 scconfigd: INFO [rollout-521045776] Performing rollout for batch ID: startup Jun 20 18:40:25 test10 scconfigd: CRIT [unknown] ABORT: std::exception Jun 20 18:40:25 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait... Jun 20 18:40:25 test10 scconfigd: ERR [rollout-521045776] Server is shutting down... abandoning child process 12733 Jun 20 18:40:25 test10 scconfigd: ERR [rollout-521045776] Server is shutting down... 
abandoning child process 12734 Jun 20 18:40:25 test10 scconfigd: ERR [rollout-521045776] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc' Jun 20 18:40:25 test10 scconfigd: INFO [resolve-512653072] Performing impact resolution for batch ID: startup Jun 20 18:40:25 test10 scconfigd: WARN [resolve-512653072] fire_event_async: server is shutting down - dropped event: config.resolve_started Jun 20 18:40:25 test10 scconfigd: WARN [resolve-512653072] fire_event_async: server is shutting down - dropped event: config.commit_status Jun 20 18:40:25 test10 scconfigd: WARN [resolve-512653072] fire_event_async: server is shutting down - dropped event: config.rollout_complete Jun 20 18:40:25 test10 scmanaged: INFO [unknown] logging halted Jun 20 18:40:26 test10 monit[1540]: 'scclusterd' process is not running Jun 20 18:40:26 test10 monit[1540]: 'scclusterd' trying to restart Jun 20 18:40:26 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd Jun 20 18:40:26 test10 scconfigd: INFO [unknown] logging halted Jun 20 18:40:56 test10 monit[1540]: 'scclusterd' failed to start Jun 20 18:41:06 test10 monit[1540]: 'scstoraged' process is not running Jun 20 18:41:06 test10 monit[1540]: 'scstoraged' trying to restart Jun 20 18:41:06 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged Jun 20 18:41:06 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up Jun 20 18:41:06 test10 scstoraged: INFO [unknown] Log mask set to default Jun 20 18:41:06 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:41:06 test10 scstoraged: INFO [unknown] Mounting ceph Jun 20 18:41:06 test10 scstoraged: ERR [unknown] unable to mount ceph. 
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:41:06 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:41:06 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:07 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:41:07 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:41:07 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:41:07 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:41:07 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:41:07 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:41:07 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:41:07 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:07 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:41:07 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:41:07 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:41:08 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:41:08 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:41:08 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:41:08 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:08 test10 scconfigd: INFO [rollout-1577674512] Performing rollout for batch ID: startup
Jun 20 18:41:08 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:41:08 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:41:08 test10 scconfigd: ERR [rollout-1577674512] Server is shutting down... abandoning child process 13129
Jun 20 18:41:08 test10 scconfigd: ERR [rollout-1577674512] Server is shutting down... abandoning child process 13130
Jun 20 18:41:08 test10 scconfigd: ERR [rollout-1577674512] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:41:08 test10 scconfigd: INFO [resolve-1569281808] Performing impact resolution for batch ID: startup
Jun 20 18:41:08 test10 scconfigd: WARN [resolve-1569281808] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:41:08 test10 scconfigd: WARN [resolve-1569281808] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:41:08 test10 scconfigd: WARN [resolve-1569281808] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:41:08 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:41:09 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:41:09 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:41:09 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:41:09 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:41:39 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:41:49 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:41:49 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:41:49 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:41:49 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:41:49 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:41:49 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:41:49 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:41:49 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:41:49 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:41:49 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:50 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:41:50 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:41:50 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:41:50 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:41:50 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:41:50 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:41:50 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:41:50 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:50 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:41:50 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:41:50 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:41:51 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:41:51 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:41:51 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:41:51 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:41:51 test10 scconfigd: INFO [rollout-350836496] Performing rollout for batch ID: startup
Jun 20 18:41:51 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:41:51 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:41:51 test10 scconfigd: ERR [rollout-350836496] Server is shutting down... abandoning child process 13525
Jun 20 18:41:51 test10 scconfigd: ERR [rollout-350836496] Server is shutting down... abandoning child process 13526
Jun 20 18:41:51 test10 scconfigd: ERR [rollout-350836496] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:41:51 test10 scconfigd: INFO [resolve-268433168] Performing impact resolution for batch ID: startup
Jun 20 18:41:51 test10 scconfigd: WARN [resolve-268433168] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:41:51 test10 scconfigd: WARN [resolve-268433168] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:41:51 test10 scconfigd: WARN [resolve-268433168] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:41:51 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:41:52 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:41:52 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:41:52 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:41:52 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:42:22 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:42:32 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:42:32 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:42:32 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:42:32 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:42:32 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:42:32 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:42:32 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:42:32 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:42:32 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:42:32 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:42:33 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:42:33 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:42:33 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:42:33 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:42:33 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:42:33 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:42:33 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:42:33 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:42:33 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:42:33 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:42:33 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:42:34 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:42:34 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:42:34 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:42:34 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:42:34 test10 scconfigd: INFO [rollout-3700832016] Performing rollout for batch ID: startup
Jun 20 18:42:34 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:42:34 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:42:34 test10 scconfigd: ERR [rollout-3700832016] Server is shutting down... abandoning child process 13921
Jun 20 18:42:34 test10 scconfigd: ERR [rollout-3700832016] Server is shutting down... abandoning child process 13922
Jun 20 18:42:34 test10 scconfigd: ERR [rollout-3700832016] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:42:34 test10 scconfigd: INFO [resolve-3623876368] Performing impact resolution for batch ID: startup
Jun 20 18:42:34 test10 scconfigd: WARN [resolve-3623876368] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:42:34 test10 scconfigd: WARN [resolve-3623876368] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:42:34 test10 scconfigd: WARN [resolve-3623876368] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:42:34 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:42:35 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:42:35 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:42:35 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:42:35 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:43:05 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:43:15 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:43:15 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:43:15 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:43:15 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:43:15 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:43:15 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:43:15 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:43:15 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:43:15 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:43:15 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:43:16 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:43:16 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:43:16 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:43:16 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:43:16 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:43:16 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:43:16 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:43:16 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:43:16 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:43:16 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:43:16 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:43:17 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:43:17 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:43:17 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:43:17 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:43:17 test10 scconfigd: INFO [rollout-289031952] Performing rollout for batch ID: startup
Jun 20 18:43:17 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:43:17 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:43:17 test10 scconfigd: ERR [rollout-289031952] Server is shutting down... abandoning child process 14317
Jun 20 18:43:17 test10 scconfigd: ERR [rollout-289031952] Server is shutting down... abandoning child process 14318
Jun 20 18:43:17 test10 scconfigd: ERR [rollout-289031952] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:43:17 test10 scconfigd: INFO [resolve-280639248] Performing impact resolution for batch ID: startup
Jun 20 18:43:17 test10 scconfigd: WARN [resolve-280639248] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:43:17 test10 scconfigd: WARN [resolve-280639248] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:43:17 test10 scconfigd: WARN [resolve-280639248] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:43:17 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:43:18 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:43:18 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:43:18 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:43:18 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:43:48 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:43:58 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:43:58 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:43:58 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:43:58 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:43:58 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:43:58 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:43:58 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:43:58 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:43:58 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:43:58 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:43:59 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:43:59 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:43:59 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:43:59 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:43:59 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:43:59 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:43:59 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:43:59 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:43:59 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:43:59 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:43:59 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:44:00 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:44:00 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:44:00 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:44:00 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:44:00 test10 scconfigd: INFO [rollout-1695201040] Performing rollout for batch ID: startup
Jun 20 18:44:00 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:44:00 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:44:00 test10 scconfigd: ERR [rollout-1695201040] Server is shutting down... abandoning child process 14713
Jun 20 18:44:00 test10 scconfigd: ERR [rollout-1695201040] Server is shutting down... abandoning child process 14714
Jun 20 18:44:00 test10 scconfigd: ERR [rollout-1695201040] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:44:00 test10 scconfigd: INFO [resolve-1686808336] Performing impact resolution for batch ID: startup
Jun 20 18:44:00 test10 scconfigd: WARN [resolve-1686808336] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:44:00 test10 scconfigd: WARN [resolve-1686808336] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:44:00 test10 scconfigd: WARN [resolve-1686808336] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:44:00 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:44:01 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:44:01 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:44:01 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:44:01 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:44:31 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:44:41 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:44:41 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:44:41 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:44:41 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:44:41 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:44:41 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:44:41 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:44:41 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:44:41 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:44:41 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:44:42 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:44:42 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:44:42 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:44:42 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:44:42 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:44:42 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:44:42 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:44:42 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:44:42 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:44:42 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:44:42 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:44:43 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:44:43 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:44:43 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:44:43 test10 scconfigd: INFO [rollout-3061032720] Performing rollout for batch ID: startup
Jun 20 18:44:43 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:44:43 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:44:43 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:44:43 test10 scconfigd: ERR [rollout-3061032720] Server is shutting down... abandoning child process 15113
Jun 20 18:44:43 test10 scconfigd: ERR [rollout-3061032720] Server is shutting down... abandoning child process 15114
Jun 20 18:44:43 test10 scconfigd: ERR [rollout-3061032720] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:44:43 test10 scconfigd: INFO [resolve-3052640016] Performing impact resolution for batch ID: startup
Jun 20 18:44:43 test10 scconfigd: WARN [resolve-3052640016] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:44:43 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:44:43 test10 scconfigd: WARN [resolve-3052640016] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:44:43 test10 scconfigd: WARN [resolve-3052640016] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:44:44 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:44:44 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:44:44 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:44:44 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:45:14 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:45:24 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:45:24 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:45:24 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:45:24 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:45:24 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:45:24 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:45:24 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:45:24 test10 scstoraged: ERR [unknown] unable to mount ceph.
/bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012) Jun 20 18:45:24 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012 Jun 20 18:45:24 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:45:25 test10 monit[1540]: 'scmanaged' process is not running Jun 20 18:45:25 test10 monit[1540]: 'scmanaged' trying to restart Jun 20 18:45:25 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged Jun 20 18:45:25 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged Jun 20 18:45:25 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:45:25 test10 scmanaged: INFO [unknown] soapd - rev SC001 Jun 20 18:45:25 test10 scmanaged: INFO [unknown] Loading event subscriptions database Jun 20 18:45:25 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:45:25 test10 scmanaged: CRIT [unknown] ABORT: std::exception Jun 20 18:45:25 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait... 
Jun 20 18:45:25 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:45:26 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:45:26 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:45:26 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:45:26 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:45:26 test10 scconfigd: INFO [rollout-2057533200] Performing rollout for batch ID: startup
Jun 20 18:45:26 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:45:26 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:45:26 test10 scconfigd: ERR [rollout-2057533200] Server is shutting down... abandoning child process 15525
Jun 20 18:45:26 test10 scconfigd: ERR [rollout-2057533200] Server is shutting down... abandoning child process 15526
Jun 20 18:45:26 test10 scconfigd: ERR [rollout-2057533200] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:45:26 test10 scconfigd: INFO [resolve-2049140496] Performing impact resolution for batch ID: startup
Jun 20 18:45:26 test10 scconfigd: WARN [resolve-2049140496] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:45:26 test10 scconfigd: WARN [resolve-2049140496] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:45:26 test10 scconfigd: WARN [resolve-2049140496] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:45:26 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:45:27 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:45:27 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:45:27 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:45:27 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:45:57 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:46:07 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:46:07 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:46:07 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:46:07 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:46:07 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:46:07 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:07 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:46:07 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 169.254.10.1:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:46:07 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:46:07 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:46:08 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:46:08 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:46:08 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:46:08 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:46:08 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:08 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:46:08 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:46:08 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:46:08 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:46:08 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:46:08 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:46:09 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:46:09 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:46:09 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] I have chosen 169.254.10.1 as my private IP
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:46:09 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:46:09 test10 scconfigd: INFO [rollout-1815181072] Performing rollout for batch ID: startup
Jun 20 18:46:09 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:46:09 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:46:09 test10 scconfigd: ERR [rollout-1815181072] Server is shutting down... abandoning child process 15921
Jun 20 18:46:09 test10 scconfigd: ERR [rollout-1815181072] Server is shutting down... abandoning child process 15922
Jun 20 18:46:09 test10 scconfigd: ERR [rollout-1815181072] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:46:09 test10 scconfigd: INFO [resolve-1806788368] Performing impact resolution for batch ID: startup
Jun 20 18:46:09 test10 scconfigd: WARN [resolve-1806788368] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:46:09 test10 scconfigd: WARN [resolve-1806788368] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:46:09 test10 scconfigd: WARN [resolve-1806788368] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:46:09 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:46:10 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:46:10 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:46:10 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:46:10 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:46:40 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:46:50 test10 NET[17550]: /sbin/dhclient-script : updated /etc/resolv.conf
Jun 20 18:46:50 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:46:50 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:46:50 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:46:50 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:46:50 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:46:50 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:50 test10 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Jun 20 18:46:51 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:46:51 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:46:51 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:46:51 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:46:51 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:51 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:46:51 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:46:51 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:46:51 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:46:51 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:46:51 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:46:52 test10 ntpd[1408]: Deleting interface #2 eth1, fe80::230:48ff:feb0:c9c7#123, interface stats: received=0, sent=0, dropped=0, active_time=1595 secs
Jun 20 18:46:52 test10 ntpd[1408]: Deleting interface #3 eth0, fe80::230:48ff:feb0:c9c6#123, interface stats: received=0, sent=0, dropped=0, active_time=1595 secs
Jun 20 18:46:52 test10 ntpd[1408]: Deleting interface #6 eth0, 10.200.98.110#123, interface stats: received=75, sent=75, dropped=0, active_time=1595 secs
Jun 20 18:46:52 test10 ntpd[1408]: Deleting interface #7 eth1, 169.254.10.1#123, interface stats: received=0, sent=0, dropped=0, active_time=1595 secs
Jun 20 18:46:52 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:46:52 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:46:52 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:46:52 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:46:52 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:46:52 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:46:52 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:46:52 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:46:52 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:46:52 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:46:53 test10 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Jun 20 18:46:53 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 20 18:46:55 test10 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Jun 20 18:46:56 test10 ntpd[1408]: Listening on interface #8 eth0, fe80::230:48ff:feb0:c9c6#123 Enabled
Jun 20 18:46:56 test10 ntpd[1408]: Listening on interface #9 eth0, 10.200.98.110#123 Enabled
Jun 20 18:46:58 test10 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Jun 20 18:46:58 test10 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Jun 20 18:47:00 test10 ntpd[1408]: Listening on interface #10 eth1, fe80::230:48ff:feb0:c9c7#123 Enabled
Jun 20 18:47:00 test10 ntpd[1408]: Listening on interface #11 eth1, 192.168.98.110#123 Enabled
Jun 20 18:47:19 test10 ntpd[1408]: synchronized to 173.8.198.243, stratum 2
Jun 20 18:47:22 test10 monit[1540]: 'scconfigd' failed to start
Jun 20 18:47:22 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:47:22 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:47:22 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:47:48 test10 xinetd[1389]: Starting reconfiguration
Jun 20 18:47:48 test10 xinetd[1389]: Server /usr/sbin/swat is not executable [file=/etc/xinetd.d/samba] [line=8]
Jun 20 18:47:48 test10 xinetd[1389]: Error parsing attribute server - DISABLING SERVICE [file=/etc/xinetd.d/samba] [line=8]
Jun 20 18:47:48 test10 xinetd[1389]: Swapping defaults
Jun 20 18:47:48 test10 xinetd[1389]: readjusting service exec
Jun 20 18:47:48 test10 xinetd[1389]: readjusting service login
Jun 20 18:47:48 test10 xinetd[1389]: readjusting service shell
Jun 20 18:47:48 test10 xinetd[1389]: Reconfigured: new=0 old=3 dropped=0 (services)
Jun 20 18:47:52 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:48:02 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:48:02 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:48:02 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:48:02 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:48:02 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:48:02 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:02 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:48:02 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 192.168.98.110:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:48:02 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:48:02 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:03 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:48:03 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:48:03 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:48:03 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:48:03 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:03 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:48:03 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:48:03 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:03 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:48:03 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:48:03 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:48:04 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:48:04 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:48:04 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:48:04 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:04 test10 scconfigd: INFO [rollout-295442192] Performing rollout for batch ID: startup
Jun 20 18:48:04 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:48:04 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:48:04 test10 scconfigd: ERR [rollout-295442192] Server is shutting down... abandoning child process 920
Jun 20 18:48:04 test10 scconfigd: ERR [rollout-295442192] Server is shutting down... abandoning child process 921
Jun 20 18:48:04 test10 scconfigd: ERR [rollout-295442192] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:48:04 test10 scconfigd: INFO [resolve-287049488] Performing impact resolution for batch ID: startup
Jun 20 18:48:04 test10 scconfigd: WARN [resolve-287049488] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:48:04 test10 scconfigd: WARN [resolve-287049488] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:48:04 test10 scconfigd: WARN [resolve-287049488] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:48:04 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:48:05 test10 monit[1540]: 'scconfigd' started
Jun 20 18:48:05 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:48:05 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:48:05 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:48:05 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:48:10 test10 xinetd[1389]: START: shell pid=1414 from=192.168.98.109
Jun 20 18:48:10 test10 rshd[1415]: root@scale-192-168-98-109 as root: cmd='echo hello'
Jun 20 18:48:10 test10 xinetd[1389]: EXIT: shell status=0 pid=1414 duration=0(sec)
Jun 20 18:48:32 test10 yum: Installed: scale-qa-2.4.0.6270-1.x86_64
Jun 20 18:48:35 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:48:36 test10 yum: Installed: scqad-2.4.0.6270-1.x86_64
Jun 20 18:48:45 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:48:45 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:48:45 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:48:45 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:48:45 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:48:45 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:45 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:48:45 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 192.168.98.110:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:48:45 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:48:45 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:46 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:48:46 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:48:46 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:48:46 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:48:46 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:46 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:48:46 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:48:46 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:46 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:48:46 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:48:46 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:48:47 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:48:47 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:48:47 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:48:47 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:48:47 test10 scconfigd: INFO [rollout-4040185616] Performing rollout for batch ID: startup
Jun 20 18:48:47 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:48:47 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:48:47 test10 scconfigd: ERR [rollout-4040185616] Server is shutting down... abandoning child process 1900
Jun 20 18:48:47 test10 scconfigd: ERR [rollout-4040185616] Server is shutting down... abandoning child process 1901
Jun 20 18:48:47 test10 scconfigd: ERR [rollout-4040185616] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:48:47 test10 scconfigd: INFO [resolve-3959420688] Performing impact resolution for batch ID: startup
Jun 20 18:48:47 test10 scconfigd: WARN [resolve-3959420688] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:48:47 test10 scconfigd: WARN [resolve-3959420688] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:48:47 test10 scconfigd: WARN [resolve-3959420688] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:48:47 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:48:48 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:48:48 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:48:48 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:48:48 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:49:18 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:49:28 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:49:28 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:49:28 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:49:28 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:49:28 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:49:28 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:49:28 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:49:28 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 192.168.98.110:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:49:28 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:49:28 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:49:29 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:49:29 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:49:29 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:49:29 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:49:29 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:49:29 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:49:29 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:49:29 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:49:29 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:49:29 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:49:29 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:49:30 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:49:30 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:49:30 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:49:30 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:49:30 test10 scconfigd: INFO [rollout-1649174288] Performing rollout for batch ID: startup
Jun 20 18:49:30 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:49:30 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:49:30 test10 scconfigd: ERR [rollout-1649174288] Server is shutting down... abandoning child process 2296
Jun 20 18:49:30 test10 scconfigd: ERR [rollout-1649174288] Server is shutting down... abandoning child process 2297
Jun 20 18:49:30 test10 scconfigd: ERR [rollout-1649174288] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:49:30 test10 scconfigd: INFO [resolve-1640781584] Performing impact resolution for batch ID: startup
Jun 20 18:49:30 test10 scconfigd: WARN [resolve-1640781584] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:49:30 test10 scconfigd: WARN [resolve-1640781584] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:49:30 test10 scconfigd: WARN [resolve-1640781584] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:49:30 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:49:31 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:49:31 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:49:31 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:49:31 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:50:01 test10 monit[1540]: 'scclusterd' failed to start
Jun 20 18:50:11 test10 monit[1540]: 'scstoraged' process is not running
Jun 20 18:50:11 test10 monit[1540]: 'scstoraged' trying to restart
Jun 20 18:50:11 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged
Jun 20 18:50:11 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up
Jun 20 18:50:11 test10 scstoraged: INFO [unknown] Log mask set to default
Jun 20 18:50:11 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:50:11 test10 scstoraged: INFO [unknown] Mounting ceph
Jun 20 18:50:11 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 192.168.98.110:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012)
Jun 20 18:50:11 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012
Jun 20 18:50:11 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:50:12 test10 monit[1540]: 'scmanaged' process is not running
Jun 20 18:50:12 test10 monit[1540]: 'scmanaged' trying to restart
Jun 20 18:50:12 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged
Jun 20 18:50:12 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged
Jun 20 18:50:12 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:50:12 test10 scmanaged: INFO [unknown] soapd - rev SC001
Jun 20 18:50:12 test10 scmanaged: INFO [unknown] Loading event subscriptions database
Jun 20 18:50:12 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:50:12 test10 scmanaged: CRIT [unknown] ABORT: std::exception
Jun 20 18:50:12 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:50:12 test10 scstoraged: INFO [unknown] logging halted
Jun 20 18:50:13 test10 monit[1540]: 'scconfigd' process is not running
Jun 20 18:50:13 test10 monit[1540]: 'scconfigd' trying to restart
Jun 20 18:50:13 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] soapd - rev SC001
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] Loading event subscriptions database
Jun 20 18:50:13 test10 scconfigd: INFO [rollout-412739344] Performing rollout for batch ID: startup
Jun 20 18:50:13 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf
Jun 20 18:50:13 test10 scconfigd: CRIT [unknown] ABORT: std::exception
Jun 20 18:50:13 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait...
Jun 20 18:50:13 test10 scconfigd: ERR [rollout-412739344] Server is shutting down... abandoning child process 2704
Jun 20 18:50:13 test10 scconfigd: ERR [rollout-412739344] Server is shutting down... abandoning child process 2705
Jun 20 18:50:13 test10 scconfigd: ERR [rollout-412739344] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc'
Jun 20 18:50:13 test10 scconfigd: INFO [resolve-335542032] Performing impact resolution for batch ID: startup
Jun 20 18:50:13 test10 scconfigd: WARN [resolve-335542032] fire_event_async: server is shutting down - dropped event: config.resolve_started
Jun 20 18:50:13 test10 scconfigd: WARN [resolve-335542032] fire_event_async: server is shutting down - dropped event: config.commit_status
Jun 20 18:50:13 test10 scconfigd: WARN [resolve-335542032] fire_event_async: server is shutting down - dropped event: config.rollout_complete
Jun 20 18:50:13 test10 scmanaged: INFO [unknown] logging halted
Jun 20 18:50:14 test10 monit[1540]: 'scclusterd' process is not running
Jun 20 18:50:14 test10 monit[1540]: 'scclusterd' trying to restart
Jun 20 18:50:14 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd
Jun 20 18:50:14 test10 scconfigd: INFO [unknown] logging halted
Jun 20 18:50:14 test10 xinetd[1389]: START: shell pid=2915 from=192.168.98.111
Jun 20 18:50:15 test10 rshd[2928]: root@scale-192-168-98-111 as root: cmd='/opt/scale/lib/scmenu/legacy/scale-rsh 192.168.98.111 "echo hello"; echo EXIT$?'
Jun 20 18:50:16 test10 xinetd[1389]: EXIT: shell status=0 pid=2915 duration=2(sec) Jun 20 18:50:44 test10 monit[1540]: 'scclusterd' failed to start Jun 20 18:50:54 test10 monit[1540]: 'scstoraged' process is not running Jun 20 18:50:54 test10 monit[1540]: 'scstoraged' trying to restart Jun 20 18:50:54 test10 monit[1540]: 'scstoraged' start: /etc/init.d/scstoraged Jun 20 18:50:54 test10 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up Jun 20 18:50:54 test10 scstoraged: INFO [unknown] Log mask set to default Jun 20 18:50:54 test10 scstoraged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:50:54 test10 scstoraged: INFO [unknown] Mounting ceph Jun 20 18:50:54 test10 scstoraged: ERR [unknown] unable to mount ceph. /bin/mount -t ceph -o name=admin,secretfile=/etc/ceph/filesystem.key 192.168.98.110:/ /scale (sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012) Jun 20 18:50:54 test10 scstoraged: ERR [unknown] Unable to mount Ceph: Unable to mount Ceph: sh: modprobe: command not found#012mount.ceph: modprobe failed, exit status 127#012unable to read secretfile: No such file or directory#012error reading secret file#012failed to parse ceph_options#012 Jun 20 18:50:54 test10 scstoraged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:50:55 test10 monit[1540]: 'scmanaged' process is not running Jun 20 18:50:55 test10 monit[1540]: 'scmanaged' trying to restart Jun 20 18:50:55 test10 monit[1540]: 'scmanaged' start: /etc/init.d/scmanaged Jun 20 18:50:55 test10 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged Jun 20 18:50:55 test10 scmanaged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:50:55 test10 scmanaged: INFO [unknown] soapd - rev SC001 Jun 20 18:50:55 test10 scmanaged: INFO [unknown] Loading event 
subscriptions database Jun 20 18:50:55 test10 scmanaged: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:50:55 test10 scmanaged: CRIT [unknown] ABORT: std::exception Jun 20 18:50:55 test10 scmanaged: INFO [unknown] soapd is shutting down, please wait... Jun 20 18:50:55 test10 scstoraged: INFO [unknown] logging halted Jun 20 18:50:56 test10 monit[1540]: 'scconfigd' process is not running Jun 20 18:50:56 test10 monit[1540]: 'scconfigd' trying to restart Jun 20 18:50:56 test10 monit[1540]: 'scconfigd' start: /etc/init.d/scconfigd Jun 20 18:50:56 test10 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd Jun 20 18:50:56 test10 scconfigd: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:50:56 test10 scconfigd: INFO [unknown] soapd - rev SC001 Jun 20 18:50:56 test10 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP Jun 20 18:50:56 test10 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf Jun 20 18:50:56 test10 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf Jun 20 18:50:56 test10 scconfigd: INFO [unknown] Loading event subscriptions database Jun 20 18:50:56 test10 scconfigd: ERR [unknown] Can't open event_db_conf file: /fsscale0/lib/event_subscriptions.conf Jun 20 18:50:56 test10 scconfigd: INFO [rollout-999241488] Performing rollout for batch ID: startup Jun 20 18:50:56 test10 scconfigd: CRIT [unknown] ABORT: std::exception Jun 20 18:50:56 test10 scconfigd: INFO [unknown] soapd is shutting down, please wait... Jun 20 18:50:56 test10 scconfigd: ERR [rollout-999241488] Server is shutting down... abandoning child process 3110 Jun 20 18:50:56 test10 scconfigd: ERR [rollout-999241488] Server is shutting down... 
abandoning child process 3111 Jun 20 18:50:56 test10 scconfigd: ERR [rollout-999241488] [CHECKOUT] CONFIG_CHECKOUT_FAIL|||/opt/scale/lib/scconfigd/wc: svn co 'file:///fsscale0/lib/scconfigd/svn/config' '/opt/scale/lib/scconfigd/wc' Jun 20 18:50:56 test10 scconfigd: INFO [resolve-990848784] Performing impact resolution for batch ID: startup Jun 20 18:50:56 test10 scconfigd: WARN [resolve-990848784] fire_event_async: server is shutting down - dropped event: config.resolve_started Jun 20 18:50:56 test10 scconfigd: WARN [resolve-990848784] fire_event_async: server is shutting down - dropped event: config.commit_status Jun 20 18:50:56 test10 scconfigd: WARN [resolve-990848784] fire_event_async: server is shutting down - dropped event: config.rollout_complete Jun 20 18:50:56 test10 scmanaged: INFO [unknown] logging halted Jun 20 18:50:57 test10 monit[1540]: 'scclusterd' process is not running Jun 20 18:50:57 test10 monit[1540]: 'scclusterd' trying to restart Jun 20 18:50:57 test10 monit[1540]: 'scclusterd' start: /etc/init.d/scclusterd Jun 20 18:50:57 test10 scconfigd: INFO [unknown] logging halted Jun 20 18:51:05 test10 xinetd[1389]: START: shell pid=3478 from=192.168.98.111 Jun 20 18:51:05 test10 rshd[3479]: root@scale-192-168-98-111 as root: cmd='scshapi /opt/scale/bin/scservices stop' Jun 20 18:51:05 test10 monit[1540]: 'scclusterd' failed to start Jun 20 18:51:09 test10 xinetd[1389]: EXIT: shell status=0 pid=3478 duration=4(sec) Jun 20 18:51:10 test10 xinetd[1389]: START: shell pid=3592 from=192.168.98.111 Jun 20 18:51:10 test10 rshd[3592]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185105.log' Jun 20 18:51:10 test10 xinetd[1389]: EXIT: shell status=0 pid=3592 duration=0(sec) Jun 20 18:51:22 test10 xinetd[1389]: START: shell pid=3595 from=192.168.98.111 Jun 20 18:51:22 test10 rshd[3596]: root@scale-192-168-98-111 as root: cmd='scshapi scale_network_set_hostname scale-192-168-98-110' Jun 20 18:51:22 test10 xinetd[1389]: 
EXIT: shell status=0 pid=3595 duration=0(sec) Jun 20 18:51:23 test10 xinetd[1389]: START: shell pid=3681 from=192.168.98.111 Jun 20 18:51:23 test10 rshd[3681]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185122.log' Jun 20 18:51:23 test10 xinetd[1389]: EXIT: shell status=0 pid=3681 duration=0(sec) Jun 20 18:51:23 test10 xinetd[1389]: START: shell pid=3684 from=192.168.98.109 Jun 20 18:51:23 test10 rshd[3685]: root@scale-192-168-98-109 as root: cmd='mkdir -p /etc/ceph; echo EXIT$?' Jun 20 18:51:23 test10 xinetd[1389]: EXIT: shell status=0 pid=3684 duration=0(sec) Jun 20 18:51:24 test10 xinetd[1389]: START: shell pid=3689 from=192.168.98.109 Jun 20 18:51:24 test10 rshd[3689]: root@scale-192-168-98-109 as root: cmd='rcp -t /etc/ceph/ceph.conf' Jun 20 18:51:25 test10 xinetd[1389]: EXIT: shell status=0 pid=3689 duration=1(sec) Jun 20 18:51:26 test10 xinetd[1389]: START: shell pid=3692 from=192.168.98.111 Jun 20 18:51:26 test10 rshd[3693]: root@scale-192-168-98-111 as root: cmd='scshapi scale_ceph_add_node_to_config_file' Jun 20 18:51:27 test10 xinetd[1389]: START: shell pid=4104 from=192.168.98.110 Jun 20 18:51:27 test10 rshd[4106]: root@scale-192-168-98-110 as root: cmd='mkdir -p /etc/ceph; echo EXIT$?' 
Jun 20 18:51:27 test10 xinetd[1389]: EXIT: shell status=0 pid=4104 duration=0(sec) Jun 20 18:51:28 test10 xinetd[1389]: START: shell pid=4149 from=192.168.98.110 Jun 20 18:51:28 test10 rshd[4149]: root@scale-192-168-98-110 as root: cmd='rcp -t /etc/ceph/ceph.conf' Jun 20 18:51:28 test10 xinetd[1389]: EXIT: shell status=0 pid=4149 duration=0(sec) Jun 20 18:51:29 test10 xinetd[1389]: EXIT: shell status=0 pid=3692 duration=3(sec) Jun 20 18:51:30 test10 xinetd[1389]: START: shell pid=4211 from=192.168.98.111 Jun 20 18:51:30 test10 rshd[4211]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185126.log' Jun 20 18:51:30 test10 xinetd[1389]: EXIT: shell status=0 pid=4211 duration=0(sec) Jun 20 18:51:31 test10 xinetd[1389]: START: shell pid=4214 from=192.168.98.111 Jun 20 18:51:31 test10 rshd[4215]: root@scale-192-168-98-111 as root: cmd='scshapi scale_ceph_distribute_config_file 192.168.98.111' Jun 20 18:51:32 test10 xinetd[1389]: EXIT: shell status=0 pid=4214 duration=1(sec) Jun 20 18:51:33 test10 xinetd[1389]: START: shell pid=4304 from=192.168.98.111 Jun 20 18:51:33 test10 rshd[4304]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185131.log' Jun 20 18:51:33 test10 xinetd[1389]: EXIT: shell status=0 pid=4304 duration=0(sec) Jun 20 18:51:35 test10 xinetd[1389]: START: shell pid=4307 from=192.168.98.111 Jun 20 18:51:35 test10 rshd[4308]: root@scale-192-168-98-111 as root: cmd='mkdir -p /etc/ceph; echo EXIT$?' 
Jun 20 18:51:35 test10 xinetd[1389]: EXIT: shell status=0 pid=4307 duration=0(sec) Jun 20 18:51:36 test10 xinetd[1389]: START: shell pid=4312 from=192.168.98.111 Jun 20 18:51:36 test10 rshd[4312]: root@scale-192-168-98-111 as root: cmd='rcp -t /etc/ceph/ceph.conf' Jun 20 18:51:36 test10 xinetd[1389]: EXIT: shell status=0 pid=4312 duration=0(sec) Jun 20 18:51:49 test10 kernel: Btrfs loaded Jun 20 18:51:49 test10 kernel: device fsid 3a403f111723d4ae-a3ed6f5dd5c88ca7 devid 1 transid 441 /dev/sdd7 Jun 20 18:51:49 test10 kernel: device fsid be45deaa3752d81e-9caf1f3cfd37ce80 devid 1 transid 518 /dev/sdb7 Jun 20 18:51:49 test10 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 7 /dev/sda7 Jun 20 18:51:50 test10 kernel: device fsid 52466529a7d48c35-9cf0b4faef506db2 devid 1 transid 450 /dev/sdc7 Jun 20 18:51:50 test10 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 7 /dev/sda7 Jun 20 18:51:51 test10 kernel: device fsid 3a403f111723d4ae-a3ed6f5dd5c88ca7 devid 1 transid 441 /dev/sdd7 Jun 20 18:51:51 test10 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 7 /dev/sdb7 Jun 20 18:51:51 test10 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 12 /dev/sda7 Jun 20 18:51:51 test10 kernel: device fsid 52466529a7d48c35-9cf0b4faef506db2 devid 1 transid 450 /dev/sdc7 Jun 20 18:51:51 test10 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 7 /dev/sdb7 Jun 20 18:51:52 test10 kernel: device fsid 3a403f111723d4ae-a3ed6f5dd5c88ca7 devid 1 transid 441 /dev/sdd7 Jun 20 18:51:52 test10 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 12 /dev/sdb7 Jun 20 18:51:52 test10 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 12 /dev/sda7 Jun 20 18:51:52 test10 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 7 /dev/sdc7 Jun 20 18:51:52 test10 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 7 /dev/sdc7 Jun 20 18:51:54 
test10 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 7 /dev/sdd7 Jun 20 18:51:54 test10 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 12 /dev/sdb7 Jun 20 18:51:54 test10 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 12 /dev/sda7 Jun 20 18:51:54 test10 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 12 /dev/sdc7 Jun 20 18:51:54 test10 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 7 /dev/sdd7 Jun 20 18:52:00 test10 xinetd[1389]: START: shell pid=4874 from=192.168.98.111 Jun 20 18:52:00 test10 rshd[4874]: root@scale-192-168-98-111 as root: cmd='rcp -d -t //etc/ceph' Jun 20 18:52:00 test10 xinetd[1389]: EXIT: shell status=0 pid=4874 duration=0(sec) Jun 20 18:52:03 test10 xinetd[1389]: START: shell pid=4877 from=192.168.98.111 Jun 20 18:52:03 test10 rshd[4878]: root@scale-192-168-98-111 as root: cmd='scshapi scale_cfs_start' Jun 20 18:52:06 test10 xinetd[1389]: EXIT: shell status=0 pid=4877 duration=3(sec) Jun 20 18:52:06 test10 xinetd[1389]: START: shell pid=5722 from=192.168.98.111 Jun 20 18:52:07 test10 rshd[5722]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185203.log' Jun 20 18:52:07 test10 xinetd[1389]: EXIT: shell status=0 pid=5722 duration=1(sec) Jun 20 18:56:32 test10 xinetd[1389]: START: shell pid=6232 from=192.168.98.109 Jun 20 18:56:32 test10 rshd[6234]: root@scale-192-168-98-109 as root: cmd='rcp 192.168.98.109:/etc/ctdb/nodes /etc/ctdb/nodes' Jun 20 18:56:32 test10 xinetd[1389]: EXIT: shell status=0 pid=6232 duration=0(sec) Jun 20 18:56:33 test10 xinetd[1389]: START: shell pid=6237 from=192.168.98.109 Jun 20 18:56:33 test10 rshd[6238]: root@scale-192-168-98-109 as root: cmd='killall -SIGUSR2 scst_fileio_tgt' Jun 20 18:56:33 test10 xinetd[1389]: EXIT: shell status=0 pid=6237 duration=0(sec) Jun 20 18:56:33 test10 xinetd[1389]: START: shell pid=6241 from=192.168.98.109 Jun 20 18:56:33 test10 
rshd[6242]: root@scale-192-168-98-109 as root: cmd='scnetconsole setup' Jun 20 18:56:33 test10 kernel: Kernel logging (proc) stopped. Jun 20 18:56:33 test10 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1239" x-info="http://www.rsyslog.com"] exiting on signal 15. Jun 20 18:56:33 scale-192-168-98-110 kernel: imklog 4.6.2, log source = /proc/kmsg started. Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="6267" x-info="http://www.rsyslog.com"] (re)start Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c4 as the first rsyslogd option. Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imudp Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: UDPServerRun (null) Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: Name or service not known Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: UDP message reception disabled due to error logged in last message. 
Jun 20 18:56:33 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock Jun 20 18:56:33 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=6241 duration=0(sec) Jun 20 18:56:34 scale-192-168-98-110 xinetd[1389]: START: shell pid=6347 from=192.168.98.109 Jun 20 18:56:34 scale-192-168-98-110 rshd[6348]: root@scale-192-168-98-109 as root: cmd='scnetconsole restart' Jun 20 18:56:34 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=6347 duration=0(sec) Jun 20 18:56:36 scale-192-168-98-110 xinetd[1389]: START: shell pid=6427 from=192.168.98.109 Jun 20 18:56:36 scale-192-168-98-110 rshd[6428]: root@scale-192-168-98-109 as root: cmd='/opt/scale/bin/screloadbricklist' Jun 20 18:56:47 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=6427 duration=11(sec) Jun 20 18:56:58 scale-192-168-98-110 xinetd[1389]: START: shell pid=6431 from=192.168.98.111 Jun 20 18:56:58 scale-192-168-98-110 rshd[6432]: root@scale-192-168-98-111 as root: cmd='scshapi scale_cluster_add_node_post_cfs_setup 192.168.98.110' Jun 20 18:56:58 scale-192-168-98-110 kernel: libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6) Jun 20 18:56:58 scale-192-168-98-110 kernel: ceph: loaded (mds proto 32) Jun 20 18:56:58 scale-192-168-98-110 kernel: libceph: client4126 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4 Jun 20 18:56:58 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [unknown] soapd is starting up as /opt/scale/sbin/scclusterd Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [unknown] soapd - rev SC001 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [unknown] Loading event subscriptions database Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] soapd is starting up as /opt/scale/sbin/scconfigd Jun 20 18:56:59 
scale-192-168-98-110 scconfigd: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] soapd - rev SC001 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] I have chosen 192.168.98.110 as my private IP Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] Creating repo cluster using /opt/scale/lib/scconfigd/repos/conf//cluster.conf Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] Creating repo storage using /opt/scale/lib/scconfigd/repos/conf//storage.conf Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] Loading event subscriptions database Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [rollout-1110259472] Performing rollout for batch ID: startup Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [unknown] soapd is starting up as /opt/scale/sbin/scmanaged Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [unknown] soapd - rev SC001 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [unknown] Loading event subscriptions database Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [unknown] Starting SOAP services Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [cluster-recv-2986325776] Binding to port 5200 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [cluster-recv-2994718480] Binding to port 5200 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [control-recv-2969540368] Binding to port 5211 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [control-recv-2617243408] Binding to port 5211 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [event-recv-2592065296] Binding to port 5210 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 cluster: INFO [event-recv-2575279888] Binding to port 5210 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [unknown] Starting 
SOAP services Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [event-recv-973059856] Binding to port 5130 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [control-recv-989845264] Binding to port 5131 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [config-recv-1091323664] Binding to port 5120 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [control-recv-998237968] Binding to port 5131 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [config-recv-1082930960] Binding to port 5120 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scconfigd: INFO [event-recv-964667152] Binding to port 5130 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [unknown] /opt/scale/sbin/scstoraged is starting up Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [unknown] Log mask set to default Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [unknown] forte library version 1.1.0 revision Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [unknown] Starting SOAP services Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [manage-recv-2289137424] Binding to port 5140 on 10.200.98.110 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [manage-recv-2280744720] Binding to port 5140 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [control-recv-2331100944] Binding to port 5151 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [control-recv-2339493648] Binding to port 5151 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [event-recv-2314315536] Binding to port 5150 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scmanaged: INFO [event-recv-2305922832] Binding to port 5150 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 cluster: ERR [unknown] Can't open state_db_conf file: /fsscale0/lib/state.conf Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [control-recv-1191171856] Binding to port 5171 on 192.168.98.110 
Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [event-recv-1174386448] Binding to port 5170 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [storage-recv-1315518224] Binding to port 5160 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [control-recv-1290340112] Binding to port 5171 on 127.0.0.1 Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [storage-recv-1307125520] Binding to port 5160 on 192.168.98.110 Jun 20 18:56:59 scale-192-168-98-110 scstoraged: INFO [event-recv-1165993744] Binding to port 5170 on 192.168.98.110 Jun 20 18:57:00 scale-192-168-98-110 monit[7222]: Starting monit daemon Jun 20 18:57:00 scale-192-168-98-110 monit[7224]: 'system_scale-192-168-98-110' Monit started Jun 20 18:57:00 scale-192-168-98-110 cluster: INFO [locking-recv-1870653200] Binding to port 6121 on 127.0.0.1 Jun 20 18:57:00 scale-192-168-98-110 cluster: INFO [event-pool-2004870928] Sending alert with code BRICK (level 2) from time 1308596219 Jun 20 18:57:02 scale-192-168-98-110 scconfigd: ERR [rollout-1110259472] [DONT_RESOLVE] Path for our IP does not exist in svn: /opt/scale/lib/scconfigd/wc/192.168.98.110 Jun 20 18:57:03 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] Unable to contact event service at 192.168.98.109:5210/ Jun 20 18:57:03 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] 0 tries left Jun 20 18:57:03 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Giving up on event service at 192.168.98.109:5210/ Jun 20 18:57:03 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Failed notification: cluster.brick_status_change = 192.168.98.110,1 Jun 20 18:59:07 scale-192-168-98-110 scconfigd: INFO [rollout-1110259472] Performing rollout for batch ID: Opj/TXl1BADwhEYnsv0lajylPdzHLRxy Jun 20 18:59:11 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Performing impact resolution for batch ID: Opj/TXl1BADwhEYnsv0lajylPdzHLRxy Jun 20 18:59:11 scale-192-168-98-110 scconfigd: INFO 
[resolve-125822736] Running impact: setsid nohup service syslog restart >& /dev/null < /dev/null & Jun 20 18:59:11 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: /opt/scale/lib/scconfigd/repos/impacts/ntpdstartup Jun 20 18:59:12 scale-192-168-98-110 ntpd[1408]: ntpd exiting on signal 15 Jun 20 18:59:12 scale-192-168-98-110 ntpdate[7440]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: /opt/scale/lib/scconfigd/repos/impacts/ctdb.php Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service winbind stop Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service smb stop Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: /opt/scale/lib/scconfigd/repos/impacts/setQuotas.php Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service scst reload Jun 20 18:59:12 scale-192-168-98-110 scconfigd: ERR [resolve-125822736] Impact failed with return code 1: service scst reload#012Output:scst: unrecognized service#012#012 Jun 20 18:59:12 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: /opt/scale/lib/scconfigd/repos/impacts/joinADS.php Jun 20 18:59:14 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service smb start Jun 20 18:59:14 scale-192-168-98-110 smbd[7561]: [2011/06/20 18:59:14.734493, 0] smbd/server.c:67(smbd_messaging_context) Jun 20 18:59:14 scale-192-168-98-110 smbd[7561]: Could not init smbd messaging context. Jun 20 18:59:14 scale-192-168-98-110 nmbd[7564]: [2011/06/20 18:59:14.957346, 0] nmbd/nmbd.c:60(nmbd_messaging_context) Jun 20 18:59:14 scale-192-168-98-110 nmbd[7564]: Could not init nmbd messaging context. 
Jun 20 18:59:14 scale-192-168-98-110 scconfigd: ERR [resolve-125822736] Impact failed with return code 1: service smb start#012Output:Starting SMB services: [FAILED]#015#012Starting NMB services: [FAILED]#015#012#012 Jun 20 18:59:14 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service nfs restart Jun 20 18:59:15 scale-192-168-98-110 kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de). Jun 20 18:59:15 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 18:59:15 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 18:59:15 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service winbind start Jun 20 18:59:15 scale-192-168-98-110 winbindd[7705]: [2011/06/20 18:59:15.679143, 0] winbindd/winbindd.c:58(winbind_messaging_context) Jun 20 18:59:15 scale-192-168-98-110 winbindd[7705]: Could not init winbind messaging context. Jun 20 18:59:15 scale-192-168-98-110 scconfigd: ERR [resolve-125822736] Impact failed with return code 1: service winbind start#012Output:Starting Winbind services: [FAILED]#015#012#012 Jun 20 18:59:15 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: /opt/scale/lib/scconfigd/repos/impacts/setShareAcls Jun 20 18:59:15 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Running impact: service crond reload Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: START: shell pid=8105 from=192.168.98.109 Jun 20 18:59:48 scale-192-168-98-110 rshd[8105]: root@scale-192-168-98-109 as root: cmd='rcp -f /etc/ctdb/nodes' Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8105 duration=0(sec) Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: START: shell pid=8112 from=192.168.98.110 Jun 20 18:59:48 scale-192-168-98-110 rshd[8114]: root@scale-192-168-98-110 as root: cmd='rcp 192.168.98.110:/etc/ctdb/nodes /etc/ctdb/nodes' Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: 
START: shell pid=8117 from=192.168.98.110 Jun 20 18:59:48 scale-192-168-98-110 rshd[8117]: root@scale-192-168-98-110 as root: cmd='rcp -f /etc/ctdb/nodes' Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8117 duration=0(sec) Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8112 duration=0(sec) Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: START: shell pid=8125 from=192.168.98.111 Jun 20 18:59:48 scale-192-168-98-110 rshd[8125]: root@scale-192-168-98-111 as root: cmd='rcp -f /etc/ctdb/nodes' Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8125 duration=0(sec) Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: START: shell pid=8193 from=192.168.98.110 Jun 20 18:59:48 scale-192-168-98-110 rshd[8195]: root@scale-192-168-98-110 as root: cmd='killall -SIGUSR2 scst_fileio_tgt' Jun 20 18:59:48 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8193 duration=0(sec) Jun 20 18:59:49 scale-192-168-98-110 xinetd[1389]: START: shell pid=8271 from=192.168.98.110 Jun 20 18:59:49 scale-192-168-98-110 rshd[8273]: root@scale-192-168-98-110 as root: cmd='scnetconsole setup' Jun 20 18:59:49 scale-192-168-98-110 kernel: Kernel logging (proc) stopped. Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="6267" x-info="http://www.rsyslog.com"] exiting on signal 15. Jun 20 18:59:49 scale-192-168-98-110 kernel: imklog 4.6.2, log source = /proc/kmsg started. Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="8296" x-info="http://www.rsyslog.com"] (re)start Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c4 as the first rsyslogd option. 
Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imudp Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: UDPServerRun (null) Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: Name or service not known Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: UDP message reception disabled due to error logged in last message. Jun 20 18:59:49 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock Jun 20 18:59:49 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8271 duration=0(sec) Jun 20 18:59:51 scale-192-168-98-110 xinetd[1389]: START: shell pid=8448 from=192.168.98.110 Jun 20 18:59:51 scale-192-168-98-110 rshd[8450]: root@scale-192-168-98-110 as root: cmd='scnetconsole restart' Jun 20 18:59:52 scale-192-168-98-110 netconsole: : inserting netconsole module with arguments netconsole=6666@192.168.98.110/eth1,514@192.168.98.109/00:30:48:B1:03:FD Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: local port 6666 Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: local IP 192.168.98.110 Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: interface 'eth1' Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: remote port 514 Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: remote IP 192.168.98.109 Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: remote ethernet address 00:30:48:b1:03:fd Jun 20 18:59:52 scale-192-168-98-110 kernel: console [netcon0] enabled Jun 20 18:59:52 scale-192-168-98-110 kernel: netconsole: network logging started Jun 20 18:59:52 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8448 duration=1(sec) Jun 20 18:59:54 scale-192-168-98-110 xinetd[1389]: START: shell pid=8646 from=192.168.98.110 Jun 20 18:59:54 scale-192-168-98-110 rshd[8648]: root@scale-192-168-98-110 
as root: cmd='/opt/scale/bin/screloadbricklist' Jun 20 18:59:55 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8646 duration=1(sec) Jun 20 19:00:05 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=6431 duration=187(sec) Jun 20 19:00:06 scale-192-168-98-110 xinetd[1389]: START: shell pid=8681 from=192.168.98.111 Jun 20 19:00:06 scale-192-168-98-110 rshd[8681]: root@scale-192-168-98-111 as root: cmd='rcp -f /var/log/scale/scshapi/scshapi_20110620_185658.log' Jun 20 19:00:06 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8681 duration=0(sec) Jun 20 19:01:50 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:01:50 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:01:50 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:01:50 scale-192-168-98-110 ntpdate[8722]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 19:02:20 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:02:58 scale-192-168-98-110 xinetd[1389]: START: shell pid=8727 from=192.168.98.111 Jun 20 19:02:58 scale-192-168-98-110 rshd[8728]: root@scale-192-168-98-111 as root: cmd='rcp 192.168.98.111:/etc/ctdb/nodes /etc/ctdb/nodes' Jun 20 19:02:58 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8727 duration=0(sec) Jun 20 19:02:58 scale-192-168-98-110 xinetd[1389]: START: shell pid=8731 from=192.168.98.111 Jun 20 19:02:58 scale-192-168-98-110 rshd[8732]: root@scale-192-168-98-111 as root: cmd='killall -SIGUSR2 scst_fileio_tgt' Jun 20 19:02:58 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8731 duration=0(sec) Jun 20 19:02:59 scale-192-168-98-110 xinetd[1389]: START: shell pid=8735 from=192.168.98.111 Jun 20 19:02:59 scale-192-168-98-110 rshd[8736]: root@scale-192-168-98-111 as root: cmd='scnetconsole setup' Jun 20 19:02:59 scale-192-168-98-110 kernel: Kernel logging (proc) stopped. 
Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="8296" x-info="http://www.rsyslog.com"] exiting on signal 15. Jun 20 19:02:59 scale-192-168-98-110 kernel: imklog 4.6.2, log source = /proc/kmsg started. Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="8759" x-info="http://www.rsyslog.com"] (re)start Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c4 as the first rsyslogd option. Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imudp Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: UDPServerRun (null) Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: Name or service not known Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: UDP message reception disabled due to error logged in last message. 
Jun 20 19:02:59 scale-192-168-98-110 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock Jun 20 19:02:59 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8735 duration=0(sec) Jun 20 19:03:01 scale-192-168-98-110 xinetd[1389]: START: shell pid=8835 from=192.168.98.111 Jun 20 19:03:01 scale-192-168-98-110 rshd[8836]: root@scale-192-168-98-111 as root: cmd='scnetconsole restart' Jun 20 19:03:02 scale-192-168-98-110 netconsole: : inserting netconsole module with arguments netconsole=6666@192.168.98.110/eth1,514@192.168.98.111/00:30:48:B0:CF:8D Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: local port 6666 Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: local IP 192.168.98.110 Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: interface 'eth1' Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: remote port 514 Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: remote IP 192.168.98.111 Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: remote ethernet address 00:30:48:b0:cf:8d Jun 20 19:03:02 scale-192-168-98-110 kernel: console [netcon0] enabled Jun 20 19:03:02 scale-192-168-98-110 kernel: netconsole: network logging started Jun 20 19:03:02 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8835 duration=1(sec) Jun 20 19:03:05 scale-192-168-98-110 xinetd[1389]: START: shell pid=8963 from=192.168.98.111 Jun 20 19:03:05 scale-192-168-98-110 rshd[8964]: root@scale-192-168-98-111 as root: cmd='/opt/scale/bin/screloadbricklist' Jun 20 19:03:06 scale-192-168-98-110 cluster: INFO [cluster-pool-3011503888] Brick 192.168.98.111 was added to the cluster Jun 20 19:03:06 scale-192-168-98-110 xinetd[1389]: EXIT: shell status=0 pid=8963 duration=1(sec) Jun 20 19:03:22 scale-192-168-98-110 scconfigd: INFO [rollout-125822736] Performing rollout for batch ID: bJn/TfL+CQCZVAApKCsao9N5BXfIckFw Jun 20 19:03:53 scale-192-168-98-110 scconfigd: INFO [rollout-125822736] 
Performing rollout for batch ID: epn/TctWBgDr8+HOa2giUNOJrPAbRt6t Jun 20 19:04:28 scale-192-168-98-110 scconfigd: INFO [resolve-1110259472] Performing impact resolution for batch ID: bJn/TfL+CQCZVAApKCsao9N5BXfIckFw Jun 20 19:04:28 scale-192-168-98-110 scconfigd: INFO [resolve-1110259472] Running impact: service winbind stop Jun 20 19:04:29 scale-192-168-98-110 scconfigd: INFO [resolve-1110259472] Running impact: /opt/scale/lib/scconfigd/repos/impacts/joinADS.php Jun 20 19:04:30 scale-192-168-98-110 scconfigd: INFO [resolve-1110259472] Running impact: service winbind start Jun 20 19:04:31 scale-192-168-98-110 winbindd[9194]: [2011/06/20 19:04:31.019798, 0] winbindd/winbindd.c:58(winbind_messaging_context) Jun 20 19:04:31 scale-192-168-98-110 winbindd[9194]: Could not init winbind messaging context. Jun 20 19:04:31 scale-192-168-98-110 scconfigd: ERR [resolve-1110259472] Impact failed with return code 1: service winbind start#012Output:Starting Winbind services: [FAILED]#015#012#012 Jun 20 19:04:31 scale-192-168-98-110 scconfigd: INFO [rollout-117430032] Performing rollout for batch ID: mpn/TWwhAABPmy5pxnRp85A6rLtB05fV Jun 20 19:04:44 scale-192-168-98-110 scconfigd: INFO [resolve-125822736] Performing impact resolution for batch ID: epn/TctWBgDr8+HOa2giUNOJrPAbRt6t Jun 20 19:04:44 scale-192-168-98-110 scconfigd: INFO [rollout-1110259472] Performing rollout for batch ID: tJn/TQ3KBgBSvHxVPL2HOPvTW1QIh2TO Jun 20 19:04:48 scale-192-168-98-110 scconfigd: INFO [resolve-109037328] Performing impact resolution for batch ID: mpn/TWwhAABPmy5pxnRp85A6rLtB05fV Jun 20 19:05:13 scale-192-168-98-110 scconfigd: INFO [rollout-109037328] Performing rollout for batch ID: zJn/TZwYBgDMIUQkVjJrDuF+Ydk/oUOM Jun 20 19:05:37 scale-192-168-98-110 scconfigd: INFO [resolve-100644624] Performing impact resolution for batch ID: tJn/TQ3KBgBSvHxVPL2HOPvTW1QIh2TO Jun 20 19:05:42 scale-192-168-98-110 scconfigd: INFO [rollout-100644624] Performing rollout for batch ID: 
6Zn/TQhoCgCAcFR9HIC6CvnzE8f5TP5m Jun 20 19:05:56 scale-192-168-98-110 scconfigd: INFO [resolve-100644624] Performing impact resolution for batch ID: zJn/TZwYBgDMIUQkVjJrDuF+Ydk/oUOM Jun 20 19:05:58 scale-192-168-98-110 scconfigd: INFO [resolve-100644624] Performing impact resolution for batch ID: 6Zn/TQhoCgCAcFR9HIC6CvnzE8f5TP5m Jun 20 19:07:20 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:07:20 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:07:20 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:07:20 scale-192-168-98-110 ntpdate[9420]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 19:07:50 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:12:50 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:12:50 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:12:50 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:12:50 scale-192-168-98-110 ntpdate[9452]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 19:13:20 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:18:21 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:18:21 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:18:21 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:18:21 scale-192-168-98-110 ntpdate[9475]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 19:18:51 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:23:51 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:23:51 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:23:51 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:23:51 scale-192-168-98-110 ntpdate[17864]: name server cannot be used, reason: 
Temporary failure in name resolution Jun 20 19:24:21 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:29:21 scale-192-168-98-110 monit[7224]: 'scntpd' process is not running Jun 20 19:29:21 scale-192-168-98-110 monit[7224]: 'scntpd' trying to restart Jun 20 19:29:21 scale-192-168-98-110 monit[7224]: 'scntpd' start: /etc/init.d/scntpd Jun 20 19:29:21 scale-192-168-98-110 ntpdate[17900]: name server cannot be used, reason: Temporary failure in name resolution Jun 20 19:29:51 scale-192-168-98-110 monit[7224]: 'scntpd' failed to start Jun 20 19:30:42 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:30:42 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] status of brick 192.168.98.109 appears to have changed to 0 Jun 20 19:30:47 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Votes are in for brick 192.168.98.109: new status is 0 with 2 votes Jun 20 19:30:47 scale-192-168-98-110 cluster: ALRT [status-pool-1853867792] 10.200.98.109 is down (private IP is 192.168.98.109) Jun 20 19:30:48 scale-192-168-98-110 cluster: INFO [event-pool-1979692816] Sending alert with code BRICK (level 1) from time 1308598247 Jun 20 19:30:48 scale-192-168-98-110 cluster: INFO [status-pool-1962907408] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:30:48 scale-192-168-98-110 cluster: INFO [status-pool-1962907408] status of brick 192.168.98.109 appears to have changed to 0 Jun 20 19:30:50 scale-192-168-98-110 cluster: INFO [status-pool-1845475088] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:30:50 scale-192-168-98-110 cluster: INFO [status-pool-1845475088] status of brick 192.168.98.109 appears to have changed to 0 Jun 20 19:30:53 scale-192-168-98-110 cluster: INFO [status-pool-1962907408] Votes are in for brick 192.168.98.109: new status is 0 with 2 votes Jun 20 19:30:53 scale-192-168-98-110 cluster: ALRT [status-pool-1962907408] 
10.200.98.109 is down (private IP is 192.168.98.109) Jun 20 19:30:54 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:30:55 scale-192-168-98-110 cluster: INFO [status-pool-1828689680] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:31:00 scale-192-168-98-110 cluster: INFO [status-pool-1837082384] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:31:05 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:31:08 scale-192-168-98-110 cluster: INFO [status-pool-1845475088] Can't contact brick 192.168.98.109: [TIMEOUT] TIMEOUT: Jun 20 19:31:10 scale-192-168-98-110 cluster: INFO [status-pool-1828689680] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:15 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:20 scale-192-168-98-110 cluster: INFO [status-pool-1828689680] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:25 scale-192-168-98-110 cluster: INFO [status-pool-1845475088] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:30 scale-192-168-98-110 cluster: INFO [status-pool-1828689680] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:35 scale-192-168-98-110 cluster: INFO [status-pool-1853867792] Can't contact brick 192.168.98.109: [SOAP-ENV:Client] Connection refused: connect failed in tcp_connect() Jun 20 19:31:38 scale-192-168-98-110 monit[7224]: monit daemon with pid [7224] killed Jun 20 19:31:38 scale-192-168-98-110 monit[7224]: 'system_scale-192-168-98-110' Monit stopped Jun 20 19:31:38 
scale-192-168-98-110 cluster: INFO [unknown] Quitting due to signal 15. Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] Unable to contact event service at 192.168.98.109:5210/ Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] 0 tries left Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Giving up on event service at 192.168.98.109:5210/ Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Failed notification: cluster.brick_status_change = 192.168.98.110,4 Jun 20 19:31:38 scale-192-168-98-110 scconfigd: NOTC [event-pool-922703632] Going out-of-service Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] Unable to contact event service at 192.168.98.109:5130/ Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] 0 tries left Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Giving up on event service at 192.168.98.109:5130/ Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Failed notification: cluster.brick_status_change = 192.168.98.110,4 Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] Unable to contact event service at 192.168.98.109:5150/ Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] 0 tries left Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Giving up on event service at 192.168.98.109:5150/ Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Failed notification: cluster.brick_status_change = 192.168.98.110,4 Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] Unable to contact event service at 192.168.98.109:5170/ Jun 20 19:31:38 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] 0 tries left Jun 20 19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Giving up on event service at 192.168.98.109:5170/ Jun 20 
19:31:38 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Failed notification: cluster.brick_status_change = 192.168.98.110,4 Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] Unable to contact event service at localhost:5130/ Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1862260496] 0 tries left Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Giving up on event service at localhost:5130/ Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1862260496] Failed notification: cluster.list_updated = Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] Unable to contact event service at localhost:5210/ Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1988085520] 0 tries left Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Giving up on event service at localhost:5210/ Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1988085520] Failed notification: cluster.list_updated = Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1736435472] Unable to contact event service at localhost:5150/ Jun 20 19:31:43 scale-192-168-98-110 cluster: WARN [notify-pool-1736435472] 0 tries left Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1736435472] Giving up on event service at localhost:5150/ Jun 20 19:31:43 scale-192-168-98-110 cluster: ERR [notify-pool-1736435472] Failed notification: cluster.list_updated = Jun 20 19:31:44 scale-192-168-98-110 cluster: WARN [notify-pool-1719650064] Unable to contact event service at localhost:5170/ Jun 20 19:31:44 scale-192-168-98-110 cluster: WARN [notify-pool-1719650064] 0 tries left Jun 20 19:31:44 scale-192-168-98-110 cluster: ERR [notify-pool-1719650064] Giving up on event service at localhost:5170/ Jun 20 19:31:44 scale-192-168-98-110 cluster: ERR [notify-pool-1719650064] Failed notification: cluster.list_updated = Jun 20 19:32:08 scale-192-168-98-110 
cluster: ERR [event-pool-1979692816] Cluster:GetBrickList failed Jun 20 19:32:08 scale-192-168-98-110 cluster: ERR [event-pool-1979692816] SOAP Fault: [TIMEOUT] TIMEOUT: Jun 20 19:32:08 scale-192-168-98-110 cluster: ERR [event-pool-1979692816] Unable to get the list of servers in the cluster Jun 20 19:32:08 scale-192-168-98-110 scconfigd: ERR [event-pool-897525520] Cluster:GetBrickList failed Jun 20 19:32:08 scale-192-168-98-110 scconfigd: ERR [event-pool-897525520] SOAP Fault: [TIMEOUT] TIMEOUT: Jun 20 19:32:08 scale-192-168-98-110 scconfigd: ERR [event-pool-897525520] Unable to get the list of servers in the cluster Jun 20 19:32:08 scale-192-168-98-110 scmanaged: ERR [event-pool-2230388496] Cluster:GetBrickList failed Jun 20 19:32:08 scale-192-168-98-110 scmanaged: ERR [event-pool-2230388496] SOAP Fault: [TIMEOUT] TIMEOUT: Jun 20 19:32:08 scale-192-168-98-110 scmanaged: ERR [event-pool-2230388496] Unable to get the list of servers in the cluster Jun 20 19:32:08 scale-192-168-98-110 scstoraged: ERR [event-pool-1107244816] Cluster:GetBrickList failed Jun 20 19:32:08 scale-192-168-98-110 scstoraged: ERR [event-pool-1107244816] SOAP Fault: [TIMEOUT] TIMEOUT: Jun 20 19:32:08 scale-192-168-98-110 scstoraged: ERR [event-pool-1107244816] Unable to get the list of servers in the cluster Jun 20 19:32:08 scale-192-168-98-110 scconfigd: INFO [unknown] Quitting due to signal 15. Jun 20 19:32:09 scale-192-168-98-110 scconfigd: INFO [unknown] soapd is shutting down, please wait... Jun 20 19:32:16 scale-192-168-98-110 scconfigd: INFO [unknown] logging halted Jun 20 19:32:16 scale-192-168-98-110 scmanaged: INFO [unknown] Quitting due to signal 15. Jun 20 19:32:18 scale-192-168-98-110 scmanaged: WARN [event-pool-2230388496] fire_event_async: server is shutting down - dropped event: manage.brick_status_change Jun 20 19:32:19 scale-192-168-98-110 scmanaged: INFO [unknown] soapd is shutting down, please wait... 
Jun 20 19:32:27 scale-192-168-98-110 scmanaged: INFO [unknown] logging halted Jun 20 19:32:27 scale-192-168-98-110 scstoraged: INFO [unknown] Quitting due to signal 15. Jun 20 19:32:39 scale-192-168-98-110 scstoraged: INFO [unknown] logging halted Jun 20 19:43:01 scale-192-168-98-110 kernel: md: data-check of RAID array md2 Jun 20 19:43:01 scale-192-168-98-110 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk. Jun 20 19:43:01 scale-192-168-98-110 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check. Jun 20 19:43:01 scale-192-168-98-110 kernel: md: using 128k window, over a total of 104856508 blocks. Jun 20 19:43:01 scale-192-168-98-110 kernel: md: delaying data-check of md1 until md2 has finished (they share one or more physical units) Jun 20 19:43:01 scale-192-168-98-110 kernel: md: delaying data-check of md0 until md2 has finished (they share one or more physical units) Jun 20 19:43:01 scale-192-168-98-110 kernel: md: delaying data-check of md1 until md2 has finished (they share one or more physical units) Jun 20 20:01:33 scale-192-168-98-110 kernel: md: md2: data-check done. Jun 20 20:01:33 scale-192-168-98-110 kernel: md: data-check of RAID array md1 Jun 20 20:01:33 scale-192-168-98-110 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk. Jun 20 20:01:33 scale-192-168-98-110 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check. Jun 20 20:01:33 scale-192-168-98-110 kernel: md: delaying data-check of md0 until md1 has finished (they share one or more physical units) Jun 20 20:01:33 scale-192-168-98-110 kernel: md: using 128k window, over a total of 4193272 blocks. Jun 20 20:02:20 scale-192-168-98-110 kernel: md: md1: data-check done. Jun 20 20:02:20 scale-192-168-98-110 kernel: md: data-check of RAID array md0 Jun 20 20:02:20 scale-192-168-98-110 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk. 
Jun 20 20:02:20 scale-192-168-98-110 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check. Jun 20 20:02:20 scale-192-168-98-110 kernel: md: using 128k window, over a total of 102388 blocks. Jun 20 20:02:22 scale-192-168-98-110 kernel: md: md0: data-check done. Jun 20 20:22:26 scale-192-168-98-110 mountd[7677]: Caught signal 15, un-registering and exiting. Jun 20 20:22:26 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 20:22:28 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 20:22:28 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 20:22:46 scale-192-168-98-110 kernel: svc: 10.200.2.136, port=787: unknown version (4 for prog 100003, nfsd) Jun 20 20:22:46 scale-192-168-98-110 mountd[19076]: authenticated mount request from 10.200.2.136:776 for /scale (/scale) Jun 20 20:53:43 scale-192-168-98-110 mountd[19076]: Caught signal 15, un-registering and exiting. Jun 20 20:53:43 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 20:53:45 scale-192-168-98-110 exportfs[19308]: /etc/exports:2: unknown keyword "no_root_square" Jun 20 20:53:45 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 20:53:45 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 20:53:59 scale-192-168-98-110 mountd[19349]: Caught signal 15, un-registering and exiting. Jun 20 20:53:59 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 20:54:02 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 20:54:02 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 20:54:15 scale-192-168-98-110 mountd[19472]: Caught signal 15, un-registering and exiting. 
Jun 20 20:54:15 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 20:54:15 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 20:54:15 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 21:05:34 scale-192-168-98-110 mountd[19598]: Caught signal 15, un-registering and exiting. Jun 20 21:05:34 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 21:06:06 scale-192-168-98-110 kernel: libceph: client4129 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4 Jun 20 21:06:06 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established Jun 20 21:06:10 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 21:06:10 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 21:06:28 scale-192-168-98-110 mountd[19793]: Caught signal 15, un-registering and exiting. Jun 20 21:06:28 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 21:07:33 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 505 /dev/sdd7 Jun 20 21:07:33 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 445 /dev/sdb7 Jun 20 21:07:33 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 413 /dev/sda7 Jun 20 21:07:33 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 505 /dev/sdc7 Jun 20 21:07:33 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 413 /dev/sda7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 505 /dev/sdd7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 445 /dev/sdb7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 
d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 414 /dev/sda7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 505 /dev/sdc7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 445 /dev/sdb7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 505 /dev/sdd7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 449 /dev/sdb7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 417 /dev/sda7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 505 /dev/sdc7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 505 /dev/sdc7 Jun 20 21:07:34 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 505 /dev/sdd7 Jun 20 21:07:35 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 449 /dev/sdb7 Jun 20 21:07:35 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 417 /dev/sda7 Jun 20 21:07:35 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 507 /dev/sdc7 Jun 20 21:07:35 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 505 /dev/sdd7 Jun 20 21:08:12 scale-192-168-98-110 kernel: libceph: client4816 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4 Jun 20 21:08:12 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established Jun 20 21:08:15 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 21:08:15 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 21:08:26 scale-192-168-98-110 mountd[23113]: Caught signal 15, 
un-registering and exiting. Jun 20 21:08:26 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 21:09:51 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 516 /dev/sdd7 Jun 20 21:09:51 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 458 /dev/sdb7 Jun 20 21:09:51 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 425 /dev/sda7 Jun 20 21:09:51 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 516 /dev/sdc7 Jun 20 21:09:51 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 425 /dev/sda7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 516 /dev/sdd7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 458 /dev/sdb7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 427 /dev/sda7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 516 /dev/sdc7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 458 /dev/sdb7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 516 /dev/sdd7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 462 /dev/sdb7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 429 /dev/sda7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 516 /dev/sdc7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 516 /dev/sdc7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 
7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 516 /dev/sdd7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 462 /dev/sdb7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 429 /dev/sda7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 520 /dev/sdc7 Jun 20 21:09:52 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 516 /dev/sdd7 Jun 20 21:12:12 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 526 /dev/sdd7 Jun 20 21:12:12 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 468 /dev/sdb7 Jun 20 21:12:12 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 435 /dev/sda7 Jun 20 21:12:12 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 526 /dev/sdc7 Jun 20 21:12:12 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 435 /dev/sda7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 526 /dev/sdd7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 468 /dev/sdb7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 436 /dev/sda7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 526 /dev/sdc7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 468 /dev/sdb7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 526 /dev/sdd7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 472 /dev/sdb7 
Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 439 /dev/sda7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 526 /dev/sdc7 Jun 20 21:12:13 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 526 /dev/sdc7 Jun 20 21:12:14 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 526 /dev/sdd7 Jun 20 21:12:14 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 472 /dev/sdb7 Jun 20 21:12:14 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 439 /dev/sda7 Jun 20 21:12:14 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 530 /dev/sdc7 Jun 20 21:12:14 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 526 /dev/sdd7 Jun 20 21:12:24 scale-192-168-98-110 kernel: libceph: client5011 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4 Jun 20 21:12:24 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established Jun 20 21:12:57 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 21:12:57 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 21:15:23 scale-192-168-98-110 mountd[29501]: Caught signal 15, un-registering and exiting. 
Jun 20 21:15:23 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 21:15:58 scale-192-168-98-110 kernel: libceph: client5017 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4 Jun 20 21:15:58 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established Jun 20 21:16:02 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory Jun 20 21:16:02 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period Jun 20 21:30:56 scale-192-168-98-110 mountd[29652]: Caught signal 15, un-registering and exiting. Jun 20 21:30:56 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache Jun 20 21:31:05 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 socket closed Jun 20 21:31:05 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session lost, hunting for new mon Jun 20 21:31:05 scale-192-168-98-110 kernel: libceph: mon2 192.168.98.111:6789 session established Jun 20 21:31:06 scale-192-168-98-110 kernel: libceph: mon2 192.168.98.111:6789 socket closed Jun 20 21:31:06 scale-192-168-98-110 kernel: libceph: mon2 192.168.98.111:6789 session lost, hunting for new mon Jun 20 21:31:06 scale-192-168-98-110 kernel: libceph: mon2 192.168.98.111:6789 connection failed Jun 20 21:31:09 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 socket closed Jun 20 21:31:09 scale-192-168-98-110 kernel: libceph: mon1 192.168.98.110:6789 connection failed Jun 20 21:31:13 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed Jun 20 21:31:14 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed Jun 20 21:31:15 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed Jun 20 21:31:17 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed Jun 20 21:31:19 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.109:6789 connection failed Jun 20 21:31:21 
scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed
Jun 20 21:31:29 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed
Jun 20 21:31:29 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.109:6789 connection failed
Jun 20 21:31:39 scale-192-168-98-110 kernel: libceph: mon1 192.168.98.110:6789 connection failed
Jun 20 21:31:45 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 connection failed
Jun 20 21:31:49 scale-192-168-98-110 kernel: libceph: mon2 192.168.98.111:6789 connection failed
Jun 20 21:31:53 scale-192-168-98-110 kernel: ceph: mds0 caps stale
Jun 20 21:31:59 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.109:6789 connection failed
Jun 20 21:32:09 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.109:6789 session established
Jun 20 21:32:13 scale-192-168-98-110 kernel: ceph: mds0 caps stale
Jun 20 21:32:15 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 537 /dev/sdd7
Jun 20 21:32:15 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 488 /dev/sdb7
Jun 20 21:32:15 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 447 /dev/sda7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 538 /dev/sdc7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 447 /dev/sda7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 537 /dev/sdd7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 488 /dev/sdb7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 449 /dev/sda7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 538 /dev/sdc7
Jun 20 21:32:16 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 488 /dev/sdb7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 537 /dev/sdd7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 492 /dev/sdb7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 451 /dev/sda7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 538 /dev/sdc7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 538 /dev/sdc7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 537 /dev/sdd7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 492 /dev/sdb7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 451 /dev/sda7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 541 /dev/sdc7
Jun 20 21:32:17 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 537 /dev/sdd7
Jun 20 21:32:17 scale-192-168-98-110 kernel: libceph: wrong peer, want 192.168.98.110:6800/27702, got 192.168.98.110:6800/31207
Jun 20 21:32:17 scale-192-168-98-110 kernel: libceph: mds0 192.168.98.110:6800 wrong peer at address
Jun 20 21:32:30 scale-192-168-98-110 kernel: ceph: mds0 reconnect start
Jun 20 21:32:30 scale-192-168-98-110 kernel: ceph: mds0 reconnect success
Jun 20 21:32:30 scale-192-168-98-110 kernel: ceph: mds0 recovery completed
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:35:38 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:35:43 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:35:43 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:48 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:35:48 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880028e79800
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:35:50 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001a77c5c0 front 28
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001a77c5c0 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001a77c5c0 seq 13 type 22 len 28+0+0 0 pgs
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2327826360 data_crc 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3)
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 13 type 22 at ffff88001a77c5c0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001a77c5c0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001a77c5c0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001a77c5c0 front 28
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001a77c5c0 28 (2286841573) + 0 (0) + 0 (0)
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001a77c5c0 13 from mds0 22=client_session len 28+0 (2286841573 0 0) =====
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff880039728800 state open seq 60
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4306498020, was fresh, now stale
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001a77c5c0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001a77c5c0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 12 -> 13
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:53 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:35:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:35:58 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:35:58 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:03 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:03 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:08 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:08 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880028e79800
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:36:10 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606680 front 28
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606680 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606680 seq 14 type 22 len 28+0+0 0 pgs
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 821038390 data_crc 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3)
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 14 type 22 at ffff880037606680
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606680
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606680
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606680 front 28
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606680 28 (838970475) + 0 (0) + 0 (0)
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606680 14 from mds0 22=client_session len 28+0 (838970475 0 0) =====
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff880039728800 state open seq 61
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4306518020, was fresh, now stale
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606680
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606680
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 13 -> 14
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:13 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:18
scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840 Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:36:18 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:36:18 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : 
send_cap_releases mds0 Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:36:23 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:23 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:28 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:28 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880028e79800
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:36:30 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31dec0 front 28
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31dec0 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31dec0 seq 15 type 22 len 28+0+0 0 pgs
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 4224466005 data_crc 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3)
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 15 type 22 at ffff88001c31dec0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31dec0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31dec0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31dec0 front 28
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31dec0 28 (4181564680) + 0 (0) + 0 (0)
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31dec0 15 from mds0 22=client_session len 28+0 (4181564680 0 0) =====
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff880039728800 state open seq 62
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4306538020, was fresh, now stale
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31dec0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31dec0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 14 -> 15
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:33 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:38 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:38 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:43 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:43 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:48 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:36:48 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880028e79800
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:36:50 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003a560480 front 28
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88003a560480 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88003a560480 seq 16 type 22 len 28+0+0 0 pgs
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1099169499 data_crc 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3)
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 16 type 22 at ffff88003a560480
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003a560480
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003a560480
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003a560480 front 28
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88003a560480 28 (1131569030) + 0 (0) + 0 (0)
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88003a560480 16 from mds0 22=client_session len 28+0 (1131569030 0 0) =====
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff880039728800 state open seq 63
Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4306558020, was fresh, now stale
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003a560480
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003a560480
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 15 -> 16
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:36:53 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:36:53 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write 
out_kvec_bytes 0 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:36:58 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout Jun 20 21:36:58 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : 
add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:03 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:37:03 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write 
out_kvec_bytes 0 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:37:08 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:37:08 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=1 want_osd=0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:216 : __send_subscribe to 'mdsmap' 31+ Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c2a98c0 to mon0 15=mon_subscribe len 58+0+0 ----- Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 10000 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 
Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c2a98c0 seq 4 type 15 len 58+0+0 0 pgs Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1457082477 data_crc 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 125 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 125 left Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880028e79800 state = 29, queueing work Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 7 -> 8 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880028e79800 state = 29, queueing work Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 8 -> 9 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1956 : queue_con ffff880028e79800 - already queued Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 9 -> 8 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive 
ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 8 -> 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880028e79800 msg (null) Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 4 front 481 data 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d35c0 front 481 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d35c0 481 (2772070520) + 0 
(0) + 0 (0) Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d35c0 13 from mon0 4=mon_map len 481+0 (2772070520 0 0) ===== Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:335 : handle_monmap Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:52 : monmap_decode ffff880019f97004 ffff880019f971e1 len 477 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:76 : monmap_decode epoch 1, num_mon 3 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon0 is 192.168.98.109:6789 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon1 is 192.168.98.110:6789 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon2 is 192.168.98.111:6789 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d35c0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d35c0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880028e79800 msg (null) Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 16 front 20 data 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2a9080 20 (228457512) + 0 (0) + 0 (0) Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2a9080 14 from mon0 16=mon_subscribe_ack len 20+0 (228457512 0 0) ===== Jun 20 
21:37:10 scale-192-168-98-110 kernel: libceph: mon_client.c:255 : handle_subscribe_ack after 300 seconds Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 5 nref 7 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880028e79800 12 -> 14 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 9 left Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0 Jun 20 21:37:10 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active) Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d35c0 front 28 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800395d35c0 to mds0 22=client_session len 28+0+0 ----- Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : 
try_read tag 1 in_base_pos 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800395d35c0 seq 17 type 22 len 28+0+0 0 pgs Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1042159445 data_crc 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3) Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 17 type 22 at ffff8800395d35c0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d35c0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d35c0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null) Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d35c0 front 28 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d35c0 28 (1022342664) + 0 (0) + 0 (0) Jun 20 21:37:13 
scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d35c0 17 from mds0 22=client_session len 28+0 (1022342664 0 0) ===== Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff880039728800 state open seq 64 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4306578022, was fresh, now stale Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d35c0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d35c0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 16 -> 17 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:13 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2) Jun 20 21:37:13 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4) Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1 Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write 
out_kvec_bytes 0
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:18 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:18 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880028e45000
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880028e452d8 need=1
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880028e452d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880028e45000 tid 77
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds ffff88003bd19da0 is_hash=0 (0) mode 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:737 : choose_mds ffff88003bd19da0 1.fffffffffffffffe mds0 (auth cap ffff88003bcdf7a8)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880028e45000 tid 77 getattr (attempt 1)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1569 : inode ffff88003bd19da0 1.fffffffffffffffe
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880022a76cc0 front 114
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880022a76cc0 to mds0 24=client_request len 114+0+0 -----
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880022a76cc0 seq 18 type 24 len 114+0+0 0 pgs
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3599100673 data_crc 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 181
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 181 left
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 18 type 24 at ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 312 data 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880022a760c0 front 312
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880022a760c0 312 (3470859829) + 0 (0) + 0 (0)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880022a760c0 18 from mds0 26=client_reply len 312+0 (3470859829 0 0) =====
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880028e45000
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880028e45000 tid 77
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 77 result 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: snap.c:616 : update_snap_trace deletion=0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: snap.c:150 : lookup_snap_realm 1 ffff88003aabbb80
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: snap.c:673 : update_snap_trace 1 ffff88003aabbb80 seq 1 unchanged
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: snap.c:677 : done with 1 ffff88003aabbb80, invalidated=0, ffff880019df2d38 ffff880019df2d38
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880028e45000 is_dentry 0 is_target 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:61 : get_inode on 1=1.fffffffffffffffe got ffff88003bd19da0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:572 : fill_inode ffff88003bd19da0 ino 1.fffffffffffffffe v 3004 had 2970
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:678 : __ceph_caps_issued ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXs
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:616 : ffff88003bd19da0 mode 040755 uid.gid 0.0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:527 : add_cap ffff88003bd19da0 mds0 cap 4a4e pAsLsXsFs seq 2
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:678 : __ceph_caps_issued ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXs
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:498 : marking ffff88003bd19da0 NOT complete
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:618 : add_cap inode ffff88003bd19da0 (1.fffffffffffffffe) cap ffff88003bcdf7a8 pAsLsXsFs now pAsLsXsFs seq 2 mds0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:1179 : fill_trace done err=0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880028e452d8 count=1
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880028e45000 done, result 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880022a760c0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880022a760c0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:2199 : put_cap_refs ffff88003bd19da0 had p
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880028e452d8 count=0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:1789 : do_getattr result=0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: super.c:62 : statfs
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880022a760c0 front 34
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880022a76cc0 front 1024
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880022a760c0 to mon0 13=statfs len 34+0+0 -----
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 21 nref 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880022a760c0 seq 5 type 13 len 34+0+0 0 pgs
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1357752027 data_crc 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 101
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 101 left
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880028e79800 state = 5, queueing work
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880028e79800 msg (null)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 14 front 56 data 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: mon_client.c:441 : get_generic_reply 3 got ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880022a76cc0 56 (4242398567) + 0 (0) + 0 (0)
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880022a76cc0 15 from mon0 14=statfs_reply len 56+0 (4242398567 0 0) =====
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: mon_client.c:491 : handle_statfs_reply ffff880022a76cc0 tid 3
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 5 nref 7
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880028e79800 14 -> 15
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 9 left
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880022a76cc0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880022a760c0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880022a760c0
Jun 20 21:37:19 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jun 20 21:37:19 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 17 -> 18
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:19 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:19 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880028e79800
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 29 nref 7
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880028e79800
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 1 left
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:37:20 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:23 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:23 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: super.c:62 : statfs
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d3800 front 34
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d35c0 front 1024
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800395d3800 to mon0 13=statfs len 34+0+0 -----
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 21 nref 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800395d3800 seq 6 type 13 len 34+0+0 0 pgs
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1357752027 data_crc 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 101
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 101 left
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880028e79800 state = 21, queueing work
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 7 -> 8
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 8 -> 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880028e79800 msg (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 14 front 56 data 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: mon_client.c:441 : get_generic_reply 4 got ffff8800395d35c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d35c0 56 (4242398567) + 0 (0) + 0 (0)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d35c0 16 from mon0 14=statfs_reply len 56+0 (4242398567 0 0) =====
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: mon_client.c:491 : handle_statfs_reply ffff8800395d35c0 tid 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d35c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d35c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d3800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d3800
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880028e79800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880028e79800 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880028e79800 state 5 nref 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880028e79800 15 -> 16
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880028e79800 9 left
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880028e79800 0 left in 0 kvecs ret = 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880028e79800 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask As mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask As)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880018119000
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff8800181192d8 need=1
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff8800181192d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880018119000 tid 78
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880018119000 tid 78 lookuphash (attempt 1)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c20fb40 front 123
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c20fb40 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c20fb40 seq 19 type 24 len 123+0+0 0 pgs
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 991723506 data_crc 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 19 type 24 at ffff88001c20fb40
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606c80 front 27
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606c80 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606c80 19 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : 
handle_reply ffff880018119000 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 78 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 78 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880018119000 tid 78 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 78 result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880018119000 is_dentry 0 is_target 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty! Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181192d8 count=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880018119000 done, result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c20fb40 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c20fb40 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606c80 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606c80 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181192d8 count=0 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880019d41c00 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880019d41ed8 need=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880019d41ed8 1033 = 8 used + 1 resv + 1024 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880019d41c00 tid 79 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880019d41c00 tid 79 lookuphash (attempt 1) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606c80 front 123 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 18 -> 19 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606c80 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606c80 seq 20 type 24 len 123+0+0 0 pgs Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2785075383 data_crc 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 20 type 24 at ffff880037606c80 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606680 front 27 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606680 27 (1122951432) + 0 (0) + 0 (0) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606680 20 from mds0 26=client_reply len 27+0 (1122951432 0 0) ===== Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : 
handle_reply ffff880019d41c00 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 79 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 79 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880019d41c00 tid 79 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 79 result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880019d41c00 is_dentry 0 is_target 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty! Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880019d41c00 done, result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606c80 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606c80 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606680 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=0 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001c133c00 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001c133ed8 need=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001c133ed8 1033 = 8 used + 1 resv + 1024 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001c133c00 tid 80 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001c133c00 tid 80 lookuphash (attempt 1) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606680 front 123 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:37:24 scale-192-168-98-110 kernel: hpet1: lost 1 rtc interrupts Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 
: try_write start ffff880039728840 state 4 nref 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 19 -> 20 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606680 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : 
try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606680 seq 21 type 24 len 123+0+0 0 pgs Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2138092474 data_crc 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 21 type 24 at ffff880037606680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800376068c0 front 27 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800376068c0 27 (1122951432) + 0 (0) + 0 (0) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800376068c0 21 from mds0 26=client_reply len 27+0 (1122951432 0 0) ===== Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : 
handle_reply ffff88001c133c00 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 80 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 80 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001c133c00 tid 80 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 80 result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001c133c00 is_dentry 0 is_target 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty! Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001c133ed8 count=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001c133c00 done, result -116 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606680 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800376068c0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800376068c0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001c133ed8 count=0 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003aa43400 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003aa436d8 need=1 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003aa436d8 1033 = 8 used + 1 resv + 1024 avail Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003aa43400 tid 81 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003aa43400 tid 81 lookuphash (attempt 1) Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800376068c0 front 123 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 20 -> 21 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800376068c0 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0 Jun 20 21:37:24 
scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800376068c0 seq 22 type 24 len 123+0+0 0 pgs Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3798740223 data_crc 0 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5) Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 22 type 24 at ffff8800376068c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606800 front 27
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606800 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606800 22 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003aa43400
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 81
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 81
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003aa43400 tid 81
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 81 result -116
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003aa43400 is_dentry 0 is_target 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003aa436d8 count=1
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003aa43400 done, result -116
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800376068c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800376068c0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606800
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003aa436d8 count=0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff8800377a4c00
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff8800377a4ed8 need=1
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff8800377a4ed8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff8800377a4c00 tid 82
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff8800377a4c00 tid 82 lookuphash (attempt 1)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606800 front 123
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 21 -> 22
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606800 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606800 seq 23 type 24 len 123+0+0 0 pgs
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1084619713 data_crc 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 23 type 24 at ffff880037606800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606bc0 front 27
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606bc0 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606bc0 23 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff8800377a4c00
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 82
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 82
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff8800377a4c00 tid 82
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 82 result -116
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff8800377a4c00 is_dentry 0 is_target 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800377a4ed8 count=1
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff8800377a4c00 done, result -116
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606800
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606bc0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606bc0
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:24 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800377a4ed8 count=0
Jun 20 21:37:24 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003abe4400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003abe46d8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003abe46d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003abe4400 tid 83
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003abe4400 tid 83 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606bc0 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 22 -> 23
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606bc0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606bc0 seq 24 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3719914628 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 24 type 24 at ffff880037606bc0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606080 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880037606080 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880037606080 24 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003abe4400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 83
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 83
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003abe4400 tid 83
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 83 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003abe4400 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003abe46d8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003abe4400 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606bc0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606bc0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606080
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606080
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003abe46d8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001811b400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001811b6d8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001811b6d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001811b400 tid 84
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001811b400 tid 84 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880037606080 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 23 -> 24
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff880037606080 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff880037606080 seq 25 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 14295884 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 25 type 24 at ffff880037606080
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d5c0 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31d5c0 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31d5c0 25 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88001811b400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 84
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 84
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001811b400 tid 84
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 84 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001811b400 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001811b6d8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001811b400 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037606080
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037606080
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d5c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d5c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001811b6d8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880018119400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff8800181196d8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff8800181196d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880018119400 tid 85
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880018119400 tid 85 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d5c0 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 24 -> 25
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d5c0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d5c0 seq 26 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2647047177 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 26 type 24 at ffff88001c31d5c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31db00 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31db00 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31db00 26 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880018119400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 85
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 85
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880018119400 tid 85
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 85 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880018119400 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181196d8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880018119400 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d5c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d5c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31db00
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31db00
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181196d8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880037e3e800
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880037e3ead8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880037e3ead8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880037e3e800 tid 86
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880037e3e800 tid 86 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31db00 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 25 -> 26
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31db00 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31db00 seq 27 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1057975095 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 27 type 24 at ffff88001c31db00
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d800 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31d800 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31d800 27 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880037e3e800
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 86
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 86
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880037e3e800 tid 86
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 86 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880037e3e800 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880037e3ead8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880037e3e800 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31db00
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31db00
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880037e3ead8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880018119400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff8800181196d8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff8800181196d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880018119400 tid 87
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880018119400 tid 87 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d800 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 26 -> 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d800 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d800 seq 28 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2719216754 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 28 type 24 at ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d980 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31d980 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31d980 28 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880018119400
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 87
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 87
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880018119400 tid 87
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 87 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880018119400 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181196d8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880018119400 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d980
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d980
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff8800181196d8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003abe4000
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003abe42d8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003abe42d8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003abe4000 tid 88
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003abe4000 tid 88 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d980 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: hpet1: lost 1 rtc interrupts
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 27 -> 28
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d980 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d980 seq 29 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2149945942 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 29 type 24 at ffff88001c31d980
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d8c0 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31d8c0 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31d8c0 29 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003abe4000
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 88
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 88
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003abe4000 tid 88
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 88 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003abe4000 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003abe42d8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003abe4000 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d980
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d980
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d8c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d8c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003abe42d8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880019d41c00
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880019d41ed8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880019d41ed8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880019d41c00 tid 89
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880019d41c00 tid 89 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d8c0 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 28 -> 29
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d8c0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25
scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d8c0 seq 30 type 24 len 123+0+0 0 pgs Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 490278163 data_crc 0 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5 Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5) Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840 Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 30 type 24 at ffff88001c31d8c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31dbc0 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31dbc0 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31dbc0 30 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880019d41c00
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 89
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 89
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880019d41c00 tid 89
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 89 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880019d41c00 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880019d41c00 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d8c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d8c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31dbc0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31dbc0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880019d41c00
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880019d41ed8 need=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880019d41ed8 1033 = 8 used + 1 resv + 1024 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880019d41c00 tid 90
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff880039728800 state open
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880019d41c00 tid 90 lookuphash (attempt 1)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c31d800 front 123
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 29 -> 30
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d800 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 5 -> 6
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (6)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 6 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d800 seq 31 type 24 len 123+0+0 0 pgs
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3220232749 data_crc 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 190 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 20, queueing work
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 4 -> 5
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (5)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (4)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 5 -> 4
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 31 type 24 at ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d35c0 front 27
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d35c0 27 (1122951432) + 0 (0) + 0 (0)
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d35c0 31 from mds0 26=client_reply len 27+0 (1122951432 0 0) =====
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880019d41c00
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 90
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 90
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880019d41c00 tid 90
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 90 result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880019d41c00 is_dentry 0 is_target 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=1
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880019d41c00 done, result -116
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c31d800
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d35c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d35c0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880019d41ed8 count=0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 4 nref 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff880039728840 30 -> 31
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 9 left
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:25 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:25 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff880039728840
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff880039728800 mds0 extra 680
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 28 nref 1
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff880039728840
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 1 left
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:28 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:28 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:29 scale-192-168-98-110 mountd[494]: Caught signal 15, un-registering and exiting.
Jun 20 21:37:30 scale-192-168-98-110 kernel: nfsd: last server has exited, flushing export cache
Jun 20 21:37:30 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88003bd19da0 mask pAsLsXsFs mode 040755
Jun 20 21:37:30 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88003bd19da0 cap ffff88003bcdf7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:37:30 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88003bd19da0 cap ffff88003bcdf7a8 mds0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: super.c:869 : kill_sb ffff88003983dc00
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:3055 : pre_umount
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:2884 : drop_leases
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:2938 : flush_dirty_caps
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:2952 : flush_dirty_caps done
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: mds_client.c:3046 : wait_requests done
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff880038699540
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489550 ffff880038699540 'wc.*.old'
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff8800033c3300 Fb = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff8800033c3300 dosync=1 (mode=ALL)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data?
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff8800033c3300 ino 10000001305.fffffffffffffffe
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000001305 release to mds0 msg ffff88001c296d80 (850 left)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 1/170 (28)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff8800185aa428 from ffff8800033c3300
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff8800185aa428 1033 = 8 used + 0 resv + 1025 avail
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 8 -> 7
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff8800033c3300
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff88003b7be740
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489910 ffff88003b7be740 'wc'
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff8800033c3850 Fb = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff8800033c3850 dosync=1 (mode=ALL)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data?
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff8800033c3850 ino 10000000085.fffffffffffffffe
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000000085 release to mds0 msg ffff88001c296d80 (849 left)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 2/170 (52)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff8800185aa3a8 from ffff8800033c3850
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff8800185aa3a8 1032 = 7 used + 0 resv + 1025 avail
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 7 -> 6
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff8800033c3850
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff88003b7bea40
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489960 ffff88003b7bea40 'storage.db'
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff8800033c3da0 Fb = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff8800033c3da0 dosync=1 (mode=ALL)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data?
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff8800033c3da0 ino 10000001f54.fffffffffffffffe Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000001f54 release to mds0 msg ffff88001c296d80 (848 left) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 3/170 (76) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff8800185aa328 from ffff8800033c3da0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff8800185aa328 1031 = 6 used + 0 resv + 1025 avail Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 6 -> 5 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff8800033c3da0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff880018421140 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff8800184899b0 ffff880018421140 'cluster.db' Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff88003bd90300 Fb = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff88003bd90300 dosync=1 (mode=ALL) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data? 
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff88003bd90300 ino 10000000046.fffffffffffffffe Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000000046 release to mds0 msg ffff88001c296d80 (847 left) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 4/170 (100) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff88003bcdfe28 from ffff88003bd90300 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff88003bcdfe28 1030 = 5 used + 0 resv + 1025 avail Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 5 -> 4 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff88003bd90300 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff880038507a80 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489a00 ffff880038507a80 'repos' Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff88003bd90850 Fb = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff88003bd90850 dosync=1 (mode=ALL) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data? 
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff88003bd90850 ino 10000000015.fffffffffffffffe Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000000015 release to mds0 msg ffff88001c296d80 (846 left) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 5/170 (124) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff88003bcdfea8 from ffff88003bd90850 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff88003bcdfea8 1029 = 4 used + 0 resv + 1025 avail Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 4 -> 3 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff88003bd90850 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff88000faad740 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489a50 ffff88000faad740 'scconfigd' Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff88003bd90da0 Fb = 0 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff88003bd90da0 dosync=1 (mode=ALL) Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247 Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data? 
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff88003bd90da0 ino 10000000003.fffffffffffffffe
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000000003 release to mds0 msg ffff88001c296d80 (845 left)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 6/170 (148)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff88003bcdff28 from ffff88003bd90da0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff88003bcdff28 1028 = 3 used + 0 resv + 1025 avail
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 3 -> 2
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff88003bd90da0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff88000fb2f440
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff880018489aa0 ffff88000fb2f440 'lib'
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff88003bd19300 Fb = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff88003bd19300 dosync=1 (mode=ALL)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data?
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff88003bd19300 ino 10000000001.fffffffffffffffe
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 10000000001 release to mds0 msg ffff88001c296d80 (844 left)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 7/170 (172)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff88003bcdf928 from ffff88003bd19300
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff88003bcdf928 1027 = 2 used + 0 resv + 1025 avail
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 2 -> 1
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff88003bd19300
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1039 : d_release ffff88003b4b0300
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: dir.c:1204 : dentry_lru_del ffff8800184895a0 ffff88003b4b0300 'fsscale0'
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:808 : ceph_caps_revoking ffff88003bd19da0 Fb = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:620 : writepages_start ffff88003bd19da0 dosync=1 (mode=ALL)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:646 : not cyclic, 0 to 2251799813685247
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:657 : no snap context with dirty data?
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: addr.c:885 : writepages done, rc = 0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: inode.c:396 : destroy_inode ffff88003bd19da0 ino 1.fffffffffffffffe
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1005 : adding 1 release to mds0 msg ffff88001c296d80 (843 left)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:1026 : release msg ffff88001c296d80 at 8/170 (196)
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:881 : __ceph_remove_cap ffff88003bcdf7a8 from ffff88003bd19da0
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: caps.c:271 : put_cap ffff88003bcdf7a8 1026 = 1 used + 0 resv + 1025 avail
Jun 20 21:37:32 scale-192-168-98-110 kernel: ceph: snap.c:200 : put_snap_realm 1 ffff88003aabbb80 1 -> 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: snap.c:166 : __destroy_snap_realm ffff88003aabbb80 1
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: caps.c:465 : __cap_delay_cancel ffff88003bd19da0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: xattr.c:325 : __ceph_destroy_xattrs p= (null)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:99 : sync_fs (non-blocking)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: caps.c:2938 : flush_dirty_caps
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: caps.c:2952 : flush_dirty_caps done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:101 : sync_fs (non-blocking) done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:105 : sync_fs (blocking)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: osd_client.c:1779 : sync done (thru tid 0)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3121 : sync
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3126 : sync want tid 90 flush_seq 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: caps.c:2938 : flush_dirty_caps
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: caps.c:2952 : flush_dirty_caps done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3078 : wait_unsafe_requests want 90
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3111 : wait_unsafe_requests done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1269 : check_cap_flush want 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1306 : check_cap_flush ok, flushed thru 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:108 : sync_fs (blocking) done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:39 : put_super
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3162 : close_sessions
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff880039728800 2
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1103 : request_close_session mds0 state closing seq 1
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d3800 front 28
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800395d3800 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3179 : waiting for sessions to close
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff880039728840 state 20 nref 1
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800395d3800 seq 32 type 22 len 28+0+0 0 pgs
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 4220035607 data_crc 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff880039728840 95 left
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff880039728840 0 left in 0 kvecs ret = 1
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff880039728840 ret 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 4, queueing work
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 2 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (3)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 32 type 22 at ffff8800395d3800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d3800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d3800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff880039728840 msg (null)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d3800 front 28
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d3800 28 (3822250925) + 0 (0) + 0 (0)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d3800 32 from mds0 22=client_session len 28+0 (3822250925 0 0) =====
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:454 : __unregister_session mds0 ffff880039728800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:350 : con_close ffff880039728840 peer 192.168.98.111:6800
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 close ffff880039728800 state closing seq 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:986 : remove_session_caps on ffff880039728800
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:882 : iterate_session_caps ffff880039728800 mds0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c296d80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c296d80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c296a80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c296a80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2960c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2960c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c296600
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c296600
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c296840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c296840
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:1880 : kick_requests mds0
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3204 : stopped
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d3800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d3800
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3221 : mdsc_destroy ffff880028ff1400
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3209 : stop
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff880039728840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff880039728840 ret -11
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1993 : con_work CLOSED
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:293 : con_close_socket on ffff880039728840 sock ffff8800212d9d40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880039728840 state = 3076 sk_state = 4
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880039728840 state = 3076 sk_state = 5
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880039728840 state = 3076 sk_state = 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880039728840 state = 3076 sk_state = 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880039728840 state = 3076, queueing work
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff880039728800 3 -> 4
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff880039728800 ok (4)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1956 : queue_con ffff880039728840 - already queued
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (3)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 4 -> 3
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (2)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 3 -> 2
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1993 : con_work CLOSED
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:293 : con_close_socket on ffff880039728840 sock (null)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff880039728800 (1)
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff880039728800 2 -> 1
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: mds_client.c:3229 : mdsc_destroy ffff880028ff1400 done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:497 : destroy_fs_client ffff8800383d8c00
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:241 : destroy_mount_options ffff88001d9da780
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: debugfs.c:189 : ceph_fs_debugfs_cleanup
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: ceph_common.:476 : destroy_client ffff880037db7800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: osdmap.c:502 : osdmap_destroy ffff880003b987c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db79a8
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38240
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38240
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38e40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38e40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38840
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38b40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38b40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa389c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa389c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037d54540
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037d54540
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880037d54300
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880037d54300
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a9b00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a9b00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a9500
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a9500
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a9c80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a9c80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2c2500
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2c2500
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2c2200
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2c2200
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2c2d40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2c2d40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2c2b00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2c2b00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2c2c80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2c2c80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38540
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38540
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38f00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38f00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa383c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa383c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa386c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa386c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003aa38900
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003aa38900
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: mon_client.c:820 : stop
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: mon_client.c:120 : __close_session closing mon0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:350 : con_close ffff880028e79800 peer 192.168.98.109:6789
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: auth.c:76 : auth_reset ffff88001a74e4c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: auth.c:65 : auth_destroy ffff88001a74e4c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a92c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a92c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a9a40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a9a40
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a98c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a98c0
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2a9080
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2a9080
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1993 : con_work CLOSED
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:293 : con_close_socket on ffff880028e79800 sock ffff880038479d00
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2154 : destroy ffff880037e16000
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:2158 : destroyed messenger ffff880037e16000
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: ceph_common.:229 : destroy_options ffff88003aabbc80
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: ceph_common.:498 : destroy_client ffff880037db7800 done
Jun 20 21:37:33 scale-192-168-98-110 kernel: ceph: super.c:514 : destroy_fs_client ffff8800383d8c00 done
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880028e79800 state = 3076 sk_state = 4
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880028e79800 state = 3076 sk_state = 5
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880028e79800 state = 3076 sk_state = 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:163 : ceph_state_change ffff880028e79800 state = 3076 sk_state = 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff880028e79800 state = 3076, queueing work
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff880028e79800 nref = 6 -> 7
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff880028e79800
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 7 -> 6
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:1993 : con_work CLOSED
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:293 : con_close_socket on ffff880028e79800 sock (null)
Jun 20 21:37:33 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff880028e79800 nref = 6 -> 5
Jun 20 21:50:49 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 553 /dev/sdd7
Jun 20 21:50:49 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 507 /dev/sdb7
Jun 20 21:50:49 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 463 /dev/sda7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 554 /dev/sdc7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 463 /dev/sda7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 553 /dev/sdd7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 507 /dev/sdb7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 465 /dev/sda7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 554 /dev/sdc7
Jun 20 21:50:50 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 507 /dev/sdb7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 553 /dev/sdd7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 511 /dev/sdb7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 467 /dev/sda7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 554 /dev/sdc7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 554 /dev/sdc7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 553 /dev/sdd7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 6e45a9a0b99ee421-7076973bb0da548c devid 1 transid 511 /dev/sdb7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid d74a699deaa2d42d-cf706a9baff3419e devid 1 transid 467 /dev/sda7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 224b72bdb90f0cab-ecf3ca365c54183 devid 1 transid 558 /dev/sdc7
Jun 20 21:50:51 scale-192-168-98-110 kernel: device fsid 7340ccf3ad40a8fe-77e327ce7bba8695 devid 1 transid 553 /dev/sdd7
Jun 20 21:52:21 scale-192-168-98-110 kernel: libceph: client5215 fsid 4213fc3a-0a25-06da-6d74-b614ca1f57f4
Jun 20 21:52:21 scale-192-168-98-110 kernel: libceph: mon0 192.168.98.110:6789 session established
Jun 20 21:52:29 scale-192-168-98-110 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jun 20 21:52:29 scale-192-168-98-110 kernel: NFSD: starting 90-second grace period
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 10
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307681012, was fresh, now stale
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:55:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask pAsLsXsFs mode 040755
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask pAsLsXsFs mode 040755
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: super.c:62 : statfs
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282500 front 34
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282140 front 1024
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282500 to mon0 13=statfs len 34+0+0 -----
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 21 nref 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282500 seq 5 type 13 len 34+0+0 0 pgs
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1357752027 data_crc 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 101
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 101 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88000642e800 state = 21, queueing work
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 2 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 3 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88000642e800 msg (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 14 front 56 data 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: mon_client.c:441 : get_generic_reply 2 got ffff88001c282140
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282140 56 (539004756) + 0 (0) + 0 (0)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282140 9 from mon1 14=statfs_reply len 56+0 (539004756 0 0) =====
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: mon_client.c:491 : handle_statfs_reply ffff88001c282140 tid 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282140
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282140
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282500
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282500
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask pAsLsXsFs mode 040755
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask pAsLsXsFs)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 :
try_write start ffff88000642e800 state 5 nref 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88000642e800 8 -> 9 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 9 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask As mode 040755 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask As) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:1776 : do_getattr inode ffff88002128dda0 mask pAsLsXsFs mode 040755 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:753 : __ceph_caps_issued_mask ffff88002128dda0 cap ffff8800211dc7a8 issued pAsLsXsFs (mask pAsLsXsFs) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:717 : __touch_cap ffff88002128dda0 cap ffff8800211dc7a8 mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001c133c00 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001c133ed8 need=1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001c133ed8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001c133c00 tid 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: 
ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001c133c00 tid 2 lookuphash (attempt 1) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282200 front 123 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282200 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: 
messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282200 seq 13 type 24 len 123+0+0 0 pgs Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3582729112 data_crc 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 13 type 24 at ffff88001c282200 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282800 front 27 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282800 27 (4070192686) + 0 (0) + 0 (0) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282800 13 from mds0 26=client_reply len 27+0 (4070192686 0 0) ===== Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : 
handle_reply ffff88001c133c00 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001c133c00 tid 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 2 result -116 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001c133c00 is_dentry 0 is_target 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty! Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001c133ed8 count=1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001c133c00 done, result -116 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282200 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282200 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282800 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282800 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001c133ed8 count=0 Jun 20 21:55:41 scale-192-168-98-110 
kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003b3d3000 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003b3d32d8 need=1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003b3d32d8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003b3d3000 tid 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003b3d3000 tid 3 lookuphash (attempt 1) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282800 front 123 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: 
messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 12 -> 13 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282800 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (5) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840 Jun 20 
21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282800 seq 14 type 24 len 123+0+0 0 pgs Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1217430749 data_crc 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 28, queueing work Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 
kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 14 type 24 at ffff88001c282800 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message 
ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282e00 front 27 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282e00 27 (4070192686) + 0 (0) + 0 (0) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282e00 14 from mds0 26=client_reply len 27+0 (4070192686 0 0) ===== Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003b3d3000 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003b3d3000 tid 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 3 result -116 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003b3d3000 is_dentry 0 is_target 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty! 
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003b3d3000 done, result -116 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282800 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282800 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282e00 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282e00 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001d950800 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001d950ad8 need=1 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001d950ad8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001d950800 tid 4 Jun 20 21:55:41 scale-192-168-98-110 
kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001d950800 tid 4 lookuphash (attempt 1) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282e00 front 123 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 13 -> 14 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282e00 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (5) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282e00 seq 15 type 24 len 123+0+0 0 pgs Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2515780373 data_crc 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : 
try_write out_kvec_bytes 190 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3) Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 15 type 24 at ffff88001c282e00 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 27
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282ec0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282ec0 15 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88001d950800
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001d950800 tid 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 4 result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001d950800 is_dentry 0 is_target 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001d950800 done, result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282e00
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282e00
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003b3d3000
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003b3d32d8 need=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003b3d32d8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003b3d3000 tid 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003b3d3000 tid 5 lookuphash (attempt 1)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 123
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: hpet1: lost 1 rtc interrupts
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 14 -> 15
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282ec0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (5)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282ec0 seq 16 type 24 len 123+0+0 0 pgs
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 149904464 data_crc 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 16 type 24 at ffff88001c282ec0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 27
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2822c0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2822c0 16 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003b3d3000
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003b3d3000 tid 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 5 result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003b3d3000 is_dentry 0 is_target 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003b3d3000 done, result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88003b3d3000
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88003b3d32d8 need=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88003b3d32d8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88003b3d3000 tid 6
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88003b3d3000 tid 6 lookuphash (attempt 1)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282800 front 123
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 15 -> 16
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282800 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (5)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282800 seq 17 type 24 len 123+0+0 0 pgs
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2854652782 data_crc 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 17 type 24 at ffff88001c282800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282680 front 27
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282680 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282680 17 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88003b3d3000
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 6
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 6
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88003b3d3000 tid 6
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 6 result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88003b3d3000 is_dentry 0 is_target 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88003b3d3000 done, result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282800
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282680
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282680
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88003b3d32d8 count=0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001d950800
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001d950ad8 need=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001d950ad8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001d950800 tid 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001d950800 tid 7 lookuphash (attempt 1)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282680 front 123
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: hpet1: lost 1 rtc interrupts
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 16 -> 17
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282680 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 4 -> 5
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (5)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 5 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282680 seq 18 type 24 len 123+0+0 0 pgs
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 926553131 data_crc 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 18 type 24 at ffff88001c282680
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 27
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2822c0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:41 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2822c0 18 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88001d950800
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001d950800 tid 7
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 7 result -116
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001d950800 is_dentry 0 is_target 0
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=1
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001d950800 done, result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880039540800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880039540ad8 need=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880039540ad8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880039540800 tid 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880039540800 tid 8 lookuphash (attempt 1)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 123
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 17 -> 18
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c2822c0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c2822c0 seq 19 type 24 len 123+0+0 0 pgs
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 353137167 data_crc 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 19 type 24 at ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 27
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282ec0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282ec0 19 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880039540800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880039540800 tid 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 8 result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880039540800 is_dentry 0 is_target 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039540ad8 count=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 18 -> 19
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880039540800 done, result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039540ad8 count=0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001d950800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001d950ad8 need=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001d950ad8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001d950800 tid 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001d950800 tid 9 lookuphash (attempt 1)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 123
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282ec0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282ec0 seq 20 type 24 len 123+0+0 0 pgs
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2282810698 data_crc 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 20 type 24 at ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 27
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2822c0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2822c0 20 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88001d950800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001d950800 tid 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 9 result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001d950800 is_dentry 0 is_target 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 19 -> 20
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001d950800 done, result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880039b9c400
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880039b9c6d8 need=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880039b9c6d8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880039b9c400 tid 10
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880039b9c400 tid 10 lookuphash (attempt 1)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 123
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c2822c0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c2822c0 seq 21 type 24 len 123+0+0 0 pgs
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 718879348 data_crc 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 21 type 24 at ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 27
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282ec0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282ec0 21 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880039b9c400
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 10
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 10
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880039b9c400 tid 10
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 10 result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880039b9c400 is_dentry 0 is_target 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039b9c6d8 count=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 20 -> 21
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880039b9c400 done, result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039b9c6d8 count=0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880039540800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880039540ad8 need=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880039540ad8 2 = 1 used + 1 resv + 0 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880039540800 tid 11
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880039540800 tid 11 lookuphash (attempt 1)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 123
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282ec0 to mds0 24=client_request len 123+0+0 -----
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282ec0 seq 22 type 24 len 123+0+0 0 pgs
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3083183409 data_crc 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 22 type 24 at ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 27
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2822c0 27 (4070192686) + 0 (0) + 0 (0)
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2822c0 22 from mds0 26=client_reply len 27+0 (4070192686 0 0) =====
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880039540800
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 11
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 11
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880039540800 tid 11
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 11 result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880039540800 is_dentry 0 is_target 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace reply is empty!
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039540ad8 count=1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 21 -> 22
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880039540800 done, result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039540ad8 count=0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff88001d950800 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff88001d950ad8 need=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff88001d950ad8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff88001d950800 tid 12 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:42 scale-192-168-98-110 
kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff88001d950800 tid 12 lookuphash (attempt 1) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 123 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c2822c0 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: 
libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c2822c0 seq 23 type 24 len 123+0+0 0 pgs Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1789321977 data_crc 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 
21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 23 type 24 at ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 27 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282ec0 27 (4070192686) + 0 (0) + 0 (0) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282ec0 23 from mds0 26=client_reply len 27+0 (4070192686 0 0) ===== Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff88001d950800 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 12 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 12 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff88001d950800 tid 12 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 12 result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff88001d950800 is_dentry 0 is_target 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace 
reply is empty! Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 22 -> 23 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff88001d950800 done, result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff88001d950ad8 count=0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880039b9c400 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880039b9c6d8 need=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880039b9c6d8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880039b9c400 tid 13 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:42 scale-192-168-98-110 
kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880039b9c400 tid 13 lookuphash (attempt 1) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 123 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282ec0 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: 
libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282ec0 seq 24 type 24 len 123+0+0 0 pgs Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 4156194236 data_crc 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 
21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 24 type 24 at ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 27 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c2822c0 27 (4070192686) + 0 (0) + 0 (0) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c2822c0 24 from mds0 26=client_reply len 27+0 (4070192686 0 0) ===== Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880039b9c400 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 13 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 13 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880039b9c400 tid 13 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 13 result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880039b9c400 is_dentry 0 is_target 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace 
reply is empty! Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039b9c6d8 count=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 23 -> 24 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880039b9c400 done, result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880039b9c6d8 count=0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: export.c:148 : __cfh_to_dentry 10000001a97 (1/317bf6c3) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1913 : do_request on ffff880028ff1c00 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:160 : reserve caps ctx=ffff880028ff1ed8 need=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:199 : reserve caps ctx=ffff880028ff1ed8 2 = 1 used + 1 resv + 0 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:571 : __register_request ffff880028ff1c00 tid 14 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:679 : __choose_mds (null) is_hash=0 (0) mode 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:743 : choose_mds chose random mds0 Jun 20 21:55:42 scale-192-168-98-110 
kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1824 : do_request mds0 session ffff88003c0fc800 state open Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1727 : prepare_send_request ffff880028ff1c00 tid 14 lookuphash (attempt 1) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1578 : path 830207683 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c2822c0 front 123 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1778 : r_locked_dir = (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c2822c0 to mds0 24=client_request len 123+0+0 ----- Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 3 -> 4 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (4) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 4 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1939 : do_request waiting Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: 
libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c2822c0 seq 25 type 24 len 123+0+0 0 pgs Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1433643650 data_crc 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 190 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 190 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 4, queueing work Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 
21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 25 type 24 at ffff88001c2822c0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 26 front 27 data 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282ec0 front 27 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282ec0 27 (4070192686) + 0 (0) + 0 (0) Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282ec0 25 from mds0 26=client_reply len 27+0 (4070192686 0 0) ===== Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2033 : handle_reply ffff880028ff1c00 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2069 : got ESTALE on request 14 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2095 : have to return ESTALE on request 14 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:592 : __unregister_request ffff880028ff1c00 tid 14 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:2126 : handle_reply tid 14 result -116 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:940 : fill_trace ffff880028ff1c00 is_dentry 0 is_target 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: inode.c:977 : fill_trace 
reply is empty! Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880028ff1ed8 count=1 Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:221 : unreserve caps 2 = 1 used + 0 resv + 1 avail Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 24 -> 25 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1948 : do_request waited, got 0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:1976 : do_request ffff880028ff1c00 done, result -116
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c2822c0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282ec0
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:42 scale-192-168-98-110 kernel: ceph: caps.c:212 : unreserve caps ctx=ffff880028ff1ed8 count=0
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:46 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:55:46 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:55:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:55:51 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800394469c0 front 28
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800394469c0 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800394469c0 seq 26 type 22 len 28+0+0 0 pgs
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1502581924 data_crc 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 26 type 22 at ffff8800394469c0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800394469c0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800394469c0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800394469c0 front 28
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800394469c0 28 (713343520) + 0 (0) + 0 (0)
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800394469c0 26 from mds0 22=client_session len 28+0 (713343520 0 0) =====
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 11
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307701012, was fresh, now stale
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800394469c0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800394469c0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 25 -> 26
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:55:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:55:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:01 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:56:01 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:06 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:06 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:11 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:11 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003bac2c80 front 28
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88003bac2c80 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88003bac2c80 seq 27 type 22 len 28+0+0 0 pgs
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1884570141 data_crc 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 27 type 22 at ffff88003bac2c80
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003bac2c80
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003bac2c80
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003bac2c80 front 28
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88003bac2c80 28 (56562841) + 0 (0) + 0 (0)
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88003bac2c80 27 from mds0 22=client_session len 28+0 (56562841 0 0) =====
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 12
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307721012, was fresh, now stale
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003bac2c80
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003bac2c80
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 26 -> 27
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:16 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:21 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:56:21 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:26 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:26 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:31 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:31 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003c1a5680 front 28
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88003c1a5680 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88003c1a5680 seq 28 type 22 len 28+0+0 0 pgs
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2748837345 data_crc 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 28 type 22 at ffff88003c1a5680
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003c1a5680
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003c1a5680
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88003c1a5680 front 28
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88003c1a5680 28 (3104888343) + 0 (0) + 0 (0)
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88003c1a5680 28 from mds0 22=client_session len 28+0 (3104888343 0 0) =====
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 13
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307741012, was fresh, now stale
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88003c1a5680
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88003c1a5680
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 27 -> 28
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:36 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:56:41 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:46 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:46 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:56:51 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001a797c00 front 28
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001a797c00 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001a797c00 seq 29 type 22 len 28+0+0 0 pgs
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 19378672 data_crc 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 29 type 22 at ffff88001a797c00
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001a797c00
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001a797c00
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001a797c00 front 28
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001a797c00 28 (1915545460) + 0 (0) + 0 (0)
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001a797c00 29 from mds0 22=client_session len 28+0 (1915545460 0 0) =====
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 14
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307761012, was fresh, now stale
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001a797c00
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001a797c00
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 28 -> 29
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:56:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:56:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:01 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:57:01 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:06 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:57:06 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:11 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:11 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d3440 front 28
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff8800395d3440 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff8800395d3440 seq 30 type 22 len 28+0+0 0 pgs
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 3144184702 data_crc 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2)
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 30 type 22 at ffff8800395d3440
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d3440
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d3440
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null)
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff8800395d3440 front 28
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff8800395d3440 28 (3361940986) + 0 (0) + 0 (0)
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff8800395d3440 30 from mds0 22=client_session len 28+0 (3361940986 0 0) =====
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 15
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307781012, was fresh, now stale
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff8800395d3440
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff8800395d3440
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 29 -> 30
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:16 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:16 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:21 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: hpet1: lost 1 rtc interrupts
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=1 want_osd=0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: mon_client.c:216 : __send_subscribe to 'mdsmap' 38+
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c31d2c0 to mon0 15=mon_subscribe len 58+0+0 -----
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 10000
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c31d2c0 seq 6 type 15 len 58+0+0 0 pgs
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 417550357 data_crc 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 125
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 125 left
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88000642e800 state = 29, queueing work
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 2 -> 3
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88000642e800 state = 29, queueing work
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 3 -> 4
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1956 : queue_con ffff88000642e800 - already queued
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 4 -> 3
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 3 -> 2
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88000642e800
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88000642e800 msg (null)
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 4 front 481 data 0
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff880022a76f00 front 481
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff880022a76f00 481 (2772070520) + 0 (0) + 0 (0)
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff880022a76f00 10 from mon1 4=mon_map len 481+0 (2772070520 0 0) =====
Jun 20 21:57:21 scale-192-168-98-110 kernel: libceph: mon_client.c:335 : handle_monmap
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:52 : monmap_decode ffff880037fbb004 ffff880037fbb1e1 len 477
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:76 : monmap_decode epoch 1, num_mon 3
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon0 is 192.168.98.109:6789
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon1 is 192.168.98.110:6789
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:79 : monmap_decode mon2 is 192.168.98.111:6789
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff880022a76f00
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff880022a76f00
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88000642e800
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88000642e800
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88000642e800 msg (null)
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 16 front 20 data 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c31dc80 20 (228457512) + 0 (0) + 0 (0)
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c31dc80 11 from mon1 16=mon_subscribe_ack len 20+0 (228457512 0 0) =====
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: mon_client.c:255 : handle_subscribe_ack after 300 seconds
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88000642e800
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 5 nref 2
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88000642e800 9 -> 11
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 9 left
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:57:22 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:26 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:26 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1)
Jun 20 21:57:31 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2
Jun 20 21:57:31 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0
Jun 20 21:57:32 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active)
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c282bc0 front 28
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c282bc0 to mds0 22=client_session len 28+0+0 -----
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c282bc0 seq 31 type 22 len 28+0+0 0 pgs
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 2797727520 data_crc 0
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3
Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3)
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write.
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 31 type 22 at ffff88001c282bc0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282bc0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282bc0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1574 : got hdr type 22 front 28 data 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new 
ffff88001c282bc0 front 28 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c282bc0 28 (2754629245) + 0 (0) + 0 (0) Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c282bc0 31 from mds0 22=client_session len 28+0 (2754629245 0 0) ===== Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 16 Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307801012, was fresh, now stale Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c282bc0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c282bc0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 30 -> 31 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:57:36 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1) Jun 20 21:57:36 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write 
out_kvec_bytes 0 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:57:41 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1) Jun 20 21:57:41 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: 
mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. 
Jun 20 21:57:46 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1) Jun 20 21:57:46 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88003c0fc840 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 28 nref 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write 
out_kvec_bytes 0 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88003c0fc840 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 1 left Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1) Jun 20 21:57:51 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1 Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: osd_client.c:1138 : osds timeout Jun 20 21:57:51 scale-192-168-98-110 kernel: libceph: osd_client.c:707 : __remove_old_osds ffff880037db85a8 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: mon_client.c:698 : monc delayed_work Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:2273 : con_keepalive ffff88000642e800 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:393 : con_get ffff88000642e800 nref = 1 -> 2 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88000642e800 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: mon_client.c:190 : __send_subscribe sub_sent=0 exp=0 want_osd=0 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: mon_client.c:179 : __schedule_delayed after 20000 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88000642e800 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:52 scale-192-168-98-110 kernel: 
libceph: messenger.c:1927 : try_read done on ffff88000642e800 ret 0 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88000642e800 state 29 nref 2 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:589 : prepare_write_keepalive ffff88000642e800 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 1 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88000642e800 1 left Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88000642e800 0 left in 0 kvecs ret = 1 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88000642e800 ret 0 Jun 20 21:57:52 scale-192-168-98-110 kernel: libceph: messenger.c:402 : con_put ffff88000642e800 nref = 2 -> 1 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2919 : mdsc delayed_work Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: caps.c:2911 : check_delayed_caps Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:355 : lookup_mds_session ffff88003c0fc800 1 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 1 -> 2 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1049 : send_renew_caps to mds0 (up:active) Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c20f900 front 28 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2202 : ----- ffff88001c20f900 to mds0 22=client_session len 28+0+0 ----- Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session ffff88003c0fc800 2 -> 3 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: 
mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1212 : add_cap_releases ffff88003c0fc800 mds0 extra 680 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1318 : send_cap_releases mds0 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 20 nref 1 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:510 : prepare_write_message ffff88001c20f900 seq 32 type 22 len 28+0+0 0 pgs Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:542 : prepare_write_message front_crc 1836374647 data_crc 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:452 : prepare_write_message_footer ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 95 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 95 left Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:133 : ceph_data_ready on ffff88003c0fc840 state = 20, queueing work Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:322 : mdsc get_session 
ffff88003c0fc800 2 -> 3 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3299 : mdsc con_get ffff88003c0fc800 ok (3) Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1959 : queue_con ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (2) Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 3 -> 2 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1826 : try_read start on ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 8 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:943 : prepare_read_ack ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1401 : got ack for seq 32 type 22 at ffff88001c20f900 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c20f900 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c20f900 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1884 : try_read got tag 7 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:959 : prepare_read_message ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1521 : read_partial_message con ffff88003c0fc840 msg (null) Jun 20 21:57:56 scale-192-168-98-110 kernel: 
libceph: messenger.c:1574 : got hdr type 22 front 28 data 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2346 : ceph_msg_new ffff88001c20f900 front 28 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1658 : read_partial_message got msg ffff88001c20f900 28 (511675635) + 0 (0) + 0 (0) Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1708 : ===== ffff88001c20f900 32 from mds0 22=client_session len 28+0 (511675635 0 0) ===== Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:2269 : handle_session mds0 renewcaps ffff88003c0fc800 state open seq 17 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:1086 : renewed_caps mds0 ttl now 4307821012, was fresh, now stale Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2444 : ceph_msg_put last one on ffff88001c20f900 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:2429 : msg_kfree ffff88001c20f900 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:949 : prepare_read_tag ffff88003c0fc840 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1830 : try_read tag 1 in_base_pos 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1927 : try_read done on ffff88003c0fc840 ret 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1726 : try_write start ffff88003c0fc840 state 4 nref 1 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:569 : prepare_write_ack ffff88003c0fc840 31 -> 32 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1729 : try_write out_kvec_bytes 9 Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:708 : write_partial_kvec ffff88003c0fc840 9 left Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:736 : write_partial_kvec ffff88003c0fc840 0 left in 0 kvecs ret = 1 Jun 20 21:57:56 
scale-192-168-98-110 kernel: libceph: messenger.c:1804 : try_write nothing else to write. Jun 20 21:57:56 scale-192-168-98-110 kernel: libceph: messenger.c:1807 : try_write done on ffff88003c0fc840 ret 0 Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:3310 : mdsc con_put ffff88003c0fc800 (1) Jun 20 21:57:56 scale-192-168-98-110 kernel: ceph: mds_client.c:333 : mdsc put_session ffff88003c0fc800 2 -> 1