
Displaying 20 results from an estimated 20000 matches similar to: "kernel oops when sync after rm a file at full disk"

2010 Apr 08
2
ENOTEMPTY on "rm -rf" for snapshot and subvolume
Hi everyone, I recently created a snapshot of an existing volume that held some data. After creating the snapshot I tried to delete that same snapshot, but "rmdir" keeps returning ENOTEMPTY. When I look, the files inside have actually been deleted; only the parent directory remains. From the code it looks like if (inode->i_size >
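
For context: subvolumes and snapshots normally cannot be removed with a plain rmdir; btrfs subvolume delete is the documented route. A minimal sketch, with placeholder device and path names that do not come from the report above:

    mkfs.btrfs /dev/sdX
    mount /dev/sdX /mnt
    btrfs subvolume snapshot /mnt /mnt/snap1   # snapshot the top-level volume
    btrfs subvolume delete /mnt/snap1          # rmdir on /mnt/snap1 would return ENOTEMPTY here
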
2010 Apr 09
0
Fix for rm -r
The latest unstable git pull fixed a little problem I had with rm -r. With a filled-up filesystem, rm -r would stop after a few minutes saying "filesystem full", but a subsequent rm -r would clear the filesystem. Now it acts as it should and removes all the files and directories the first time. -- Andy Carlson
2011 Mar 24
1
2.6.38 defragment compression oops...
I found that I'm able to provoke undefined behaviour with 2.6.38 with extent defragmenting + recompression, e.g.: mkfs.btrfs /dev/sdb mount /dev/sdb /mnt cp -xa / /mnt find /mnt -print0 | xargs -0 btrfs filesystem defragment -vc After a short time, I was seeing what looked like a secondary effect [1]. Reproducing with lock instrumentation reported recursive spinlock acquisition, probably
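
For readability, the reproduction from the excerpt above, one command per line (the device name /dev/sdb is the poster's; -vc asks defragment to recompress verbosely):

    mkfs.btrfs /dev/sdb
    mount /dev/sdb /mnt
    cp -xa / /mnt
    find /mnt -print0 | xargs -0 btrfs filesystem defragment -vc
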
2010 Nov 21
0
[JFS] Kernel oops when tried to access mounted but unplugged storage
Hello. I've built a kernel from git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git (Date: Fri Nov 19 19:46:45 2010 -0800) and got a kernel oops when I tried to access an unplugged but still mounted external USB storage device formatted with JFS. Steps to reproduce: mkfs.jfs /dev/sdb1 (removable USB hard drive) mount /dev/sdb1 /mnt/drive cd /mnt/drive touch test sync
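
The reproduction from the excerpt, one command per line; the final unplug-and-access step is implied by the subject rather than quoted, since the excerpt is truncated:

    mkfs.jfs /dev/sdb1          # removable USB hard drive
    mount /dev/sdb1 /mnt/drive
    cd /mnt/drive
    touch test
    sync
    # then unplug the drive and access /mnt/drive
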
2011 Aug 23
0
[PATCH] Btrfs: fix an oops when deleting snapshots
We can reproduce this oops via the following steps: $ mkfs.btrfs /dev/sdb7 $ mount /dev/sdb7 /mnt/btrfs $ for ((i=0; i<3; i++)); do btrfs sub snap /mnt/btrfs /mnt/btrfs/s_$i; done $ rm -fr /mnt/btrfs/* $ rm -fr /mnt/btrfs/* then we'll get ------------[ cut here ]------------ kernel BUG at fs/btrfs/inode.c:2264! [...] Call Trace: [<ffffffffa05578c7>] btrfs_rmdir+0xf7/0x1b0
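
The same reproduction laid out one command per line (taken from the patch description; the second rm -fr pass is what hits the BUG):

    mkfs.btrfs /dev/sdb7
    mount /dev/sdb7 /mnt/btrfs
    for ((i=0; i<3; i++)); do btrfs sub snap /mnt/btrfs /mnt/btrfs/s_$i; done
    rm -fr /mnt/btrfs/*
    rm -fr /mnt/btrfs/*
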
2008 Dec 04
3
PROBLEM: oops when running fsstress against compressed btrfs filesystem
Chris: I'm consistently getting oopses when running fsstress against both single- and multiple-device compressed btrfs filesystems using kernels built from the current btrfs-unstable. In this report, I'm describing an incident with a single-device filesystem. Once the oops occurs, all I/O appears to stop though iowait is still reported, and fsstress does not make apparent
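
A rough sketch of the kind of run being described, assuming fsstress from xfstests and a compress-mounted filesystem; the device, directory, and flag values are illustrative, not from the report:

    mkfs.btrfs /dev/sdX
    mount -o compress /dev/sdX /mnt
    fsstress -d /mnt/stress -n 10000 -p 4   # -d working dir, -n operations per process, -p processes
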
2013 Oct 19
7
Lots of trouble hanging when rm files with many extents
Hello folks, I reported a bug here: https://bugzilla.kernel.org/show_bug.cgi?id=63071 but I am not sure if that was the right thing to do. This is producing OOM issues and leading to system crashes (including eventual panics) with such alarming frequency that I wonder whether there is something different about my setup compared to others. In a nutshell, I originally made the mistake of storing a
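
One hedged way to gauge how many extents a file has before trying to remove it is filefrag from e2fsprogs (the path here is a placeholder, not from the report):

    filefrag /path/to/bigfile       # prints "<file>: N extents found"
    filefrag -v /path/to/bigfile    # per-extent listing, if more detail is needed
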
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello, on a fs with 4 disks, raid 10 for data, one drive was failing and has been removed. After a reboot and 'mount -o degraded...', the fs looks full, even though it was almost 80% free before the failed device was removed. root@fs0:~# df -h /mnt/b Filesystem Size Used Avail Use% Mounted on /dev/sde 11T 2.5T 41M 100% /mnt/b root@fs0:~# btrfs fi df /mnt/b Data,
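
The usual commands for seeing where the space went on btrfs, using the poster's mount point (btrfs fi df is the command whose output the excerpt cuts off):

    btrfs filesystem show /mnt/b    # per-device allocation across the array
    btrfs filesystem df /mnt/b      # data / metadata / system usage by profile
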
2009 May 26
4
Oops on a converted ext4 system
I converted an ext4 filesystem with btrfs-convert, mounted it, and wanted to do "lzop -d ...". The result was an immediate Oops (btrfs is on LVM, on dm-crypt, on /dev/sdb which is USB-connected). mini-904.img.lzo dentry_open failed BUG: unable to handle kernel paging request at ffffffcd IP: [<c01b5f36>] fput+0x6/0x30 *pde = 00575067 *pte = 00000000 Oops: 0002 [#1] SMP last sysfs
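
A sketch of the conversion path being described, with a placeholder device name standing in for the poster's LVM-on-dm-crypt volume:

    btrfs-convert /dev/mapper/vg-lv       # in-place ext4 -> btrfs conversion
    mount /dev/mapper/vg-lv /mnt
    lzop -d /mnt/mini-904.img.lzo         # the decompression step that triggered the oops
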
2012 Aug 31
0
oops with btrfs on zvol
Hi, I'm experimenting with btrfs on top of a zvol block device (using zfsonlinux), and got an oops on a simple mount test. While I'm sure that zfsonlinux is somehow also at fault here (since the same test with zram works fine), the oops only shows btrfs-related things without any usable mention of zfs/zvol. Could anyone help me interpret the kernel logs, which btrfs-zvol interaction
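
A sketch of the setup being tested, assuming zfsonlinux's zvol device naming; the pool and volume names are placeholders:

    zfs create -V 10G tank/btrfsvol          # create a 10 GiB zvol
    mkfs.btrfs /dev/zvol/tank/btrfsvol
    mount /dev/zvol/tank/btrfsvol /mnt       # the simple mount test that oopsed
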
2008 Oct 31
0
Another new disk format pushed to -unstable
Hello everyone, btrfs-unstable now has Yan Zheng's fallocate support, along with disk format changes. I was hoping to roll all of this into a single format change with the compression code, but there was too much conflict between the two patches. The fallocate work is pretty neat: it allows preallocation of extents and overwrites them without triggering COW as long as there are no
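
As a small illustration of what the fallocate work enables (preallocate extents, then overwrite them in place), using util-linux's fallocate(1) with placeholder sizes and paths:

    fallocate -l 128M /mnt/prealloc                                 # preallocate 128 MiB of extents
    dd if=/dev/zero of=/mnt/prealloc bs=1M count=128 conv=notrunc   # overwrite in place, per the description above
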
2014 Jul 07
0
mount time of multi-disk arrays
Hello List, can anyone tell me how much time is acceptable and to be expected for a multi-disk btrfs array with classical hard disk drives to mount? I'm having a bit of trouble with my current systemd setup, because it couldn't mount my btrfs raid anymore after adding the 5th drive. With the 4-drive setup it failed to mount one time out of a few. Now it fails every time because the default
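
A hedged example of giving systemd more patience for a slow multi-device mount via fstab options; the option names are from systemd.mount(5), the values and UUID are placeholders, and availability depends on the systemd version in use:

    # /etc/fstab
    UUID=<fs-uuid>  /data  btrfs  defaults,x-systemd.mount-timeout=5min,x-systemd.device-timeout=5min  0 0
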
2010 Oct 28
0
RAID0 limiting disk utilization
I noticed that if I have single-device allocation for data in a multi-device btrfs filesystem, a balance operation will convert the data to RAID0. This is true even if '-d single' is specified explicitly when creating the filesystem. Then it wants to continue using RAID0 for future data allocations, and I run out of space once there are no longer two drives with space
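
On later btrfs-progs, balance accepts convert filters that pin the data profile explicitly; whether filters existed at the time of this post is unclear, so treat this as a present-day sketch with a placeholder mount point:

    btrfs balance start -dconvert=single /mnt   # convert data chunks back to the single profile
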
2008 Oct 09
0
New disk format pushed out to the git tree
Hello everyone, Yan Zheng's latest commit redoes the extent back references to make them more efficient. This includes a new disk format, so any pulls from the git unstable trees will come with a new disk format. I've bumped the magic in both the kernel and progs repos, so you can't accidentally mount an FS with the wrong format. I still hope to finish off
2012 Jan 09
2
btrfs-related kernel oops due to media error
Hi, One of my disks, partitioned into a single btrfs partition, is showing media errors. The problem is that these errors lead to a kernel panic from btrfs that makes the filesystem unusable until reboot, and therefore it is very hard for me to do a full backup of the data prior to changing the disk. My current kernel is 3.2.0-8-generic from Ubuntu/precise (based on linux 3.2-final) but I
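
A hedged sketch of pulling data off a failing btrfs device with current btrfs-progs, using placeholder device and destination paths:

    mount -o ro /dev/sdX /mnt            # try a read-only mount first
    btrfs restore /dev/sdX /backup/dir   # offline extraction if mounting keeps panicking
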
2014 May 28
0
Failed Disk RAID10 Problems
Hi, I have a Btrfs RAID 10 (data and metadata) file system that I believe suffered a disk failure. In my attempt to replace the disk, I think that I've made the problem worse and need some help recovering it. I happened to notice a lot of errors in the journal: end_request: I/O error, dev dm-11, sector 1549378344 BTRFS: bdev /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 errs: wr
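
For reference, the replacement paths that exist in current btrfs-progs, with placeholder device names; which one applies depends on whether the failed device node is still visible:

    mount -o degraded /dev/sdX /mnt                 # bring the array up without the failed member
    btrfs replace start /dev/failed /dev/new /mnt   # if the old device node is still present
    # or, adding a fresh disk first and then dropping the missing one:
    btrfs device add /dev/new /mnt
    btrfs device delete missing /mnt
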
2012 Jan 25
0
[3.2.1] kernel BUG at fs/btrfs/disk-io.c:2835!
I want to report a btrfs bug that happened this morning on kernel 3.2.1. I was copying a 5+ GB file between two external USB hard disks, one formatted with btrfs (the source) and the other with ext4 (the destination). In the meantime I decided to compile the 3.3-rc1 kernel with the output dir on the ext4 external hard disk. After some time I started to get many I/O errors coming from the ext4 drive
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel --- [1] # mkfs.btrfs /dev/sda4 # mount /dev/sda4 /mnt /mnt# iozone -a . --- [2] [ 899.118926] BUG: unable to
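
The reproduction from footnote [1], one command per line ("/mnt#" in the original is the shell prompt, i.e. the working directory is /mnt):

    mkfs.btrfs /dev/sda4
    mount /dev/sda4 /mnt
    cd /mnt && iozone -a .
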
2020 Aug 13
0
Re: [PATCH v3] appliance: extract UUID from QCOW2 disk image
On Thu, Aug 13, 2020 at 07:48:52AM +0300, Andrey Shinkevich wrote: > For the appliance of the QCOW2 format, the function get_root_uuid() > fails to get the UUID of the disk image. > In this case, let us read the first 256k bytes of the disk image with > the 'qemu-img dd' command. Then pass the read block to the 'file' > command. > > Suggested-by: Denis V.
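
A sketch of the approach described in the quoted patch text, assuming qemu-img's dd subcommand and file(1); the image and output names are placeholders:

    qemu-img dd -f qcow2 -O raw bs=256k count=1 if=appliance.qcow2 of=head.raw   # first 256k of guest-visible data
    file head.raw                                                                # let file(1) identify the contents
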
2012 Jan 22
1
Trying to mount RAID1 degraded with removed disk -> open_ctree failed
Hi, I have set up a RAID1 using 3 devices (500G each) on separate disks. After physically removing one disk, the filesystem cannot be mounted in degraded mode nor in recovery mode. When the disk is put back in, the filesystem can be mounted without errors. I did a cold swap (power cycle after removal/insertion of the disk). Here are the details: - latest kernel 3.2.1 and btrfs-tools on xubuntu
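
The usual degraded-mount sequence for a btrfs RAID1 with a missing member, with placeholder device names (whether it works here is exactly what the post is about):

    mount -o degraded /dev/sdX /mnt     # name any surviving member
    btrfs device delete missing /mnt    # once mounted, drop the record of the missing device
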