similar to: btrfs_search_slot BUG...

Displaying 17 results from an estimated 3000 matches similar to: "btrfs_search_slot BUG..."

2010 Jul 05
21
AoE or iSCSI???
Hi people... Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13 pvops... As a storage system, we use AoE devices, so we installed the VMs on AoE partitions... The "NAS" server is an Intel-based bare-metal box with SATA hard discs... However, I sometimes feel that the VMs are slow... Also, all VMs have GPLPV drivers installed... So, I am thinking about
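For context, AoE targets are typically exported with vblade and appear on initiators as /dev/etherd devices. A minimal sketch of such a setup — the disk, interface, and shelf/slot numbers are illustrative assumptions, not taken from the post:

  # On the storage server: export /dev/sdb as AoE shelf 0, slot 0 over eth0
  vblade 0 0 eth0 /dev/sdb &

  # On the Xen host: load the initiator module and discover targets
  modprobe aoe
  aoe-discover
  aoe-stat          # lists the target, e.g. as /dev/etherd/e0.0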
2014 Feb 23
3
nouveau graphical corruption in 3.13.2
On 9 February 2014 02:57, Ilia Mirkin <imirkin at alum.mit.edu> wrote:
> On Sat, Feb 8, 2014 at 10:38 AM, Daniel J Blueman <daniel at quora.org> wrote:
>> Interestingly, there was graphical failure booting 3.6.11; even
>> nvidia-current fails to initialise, but these two issues could be due
>> to running the Xorg stack in the Ubuntu 14.04 pre-release. Using
>>
2012 Mar 25
3
attempt to access beyond end of device and livelock
Hi Dongyang, Yan, When testing BTRFS with RAID 0 metadata on linux-3.3, we see discard ranges exceeding the end of the block device [1], potentially causing data loss; when this occurs, filesystem writeback becomes catatonic due to continual resubmission. The reproducer is quite simple [2]. Hope this proves useful... Thanks, Daniel --- [1] attempt to access beyond end of device ram0: rw=129,
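The referenced reproducer [2] is cut off in this excerpt; a sketch of the kind of setup it describes — RAID 0 btrfs across two ram disks, then a discard pass — might look like the following. Device names and sizes are assumptions:

  # Two ram-backed block devices (the brd module provides /dev/ram*)
  modprobe brd rd_nr=2 rd_size=262144          # 2 x 256 MiB
  # btrfs with data and metadata striped (RAID 0) across both devices
  mkfs.btrfs -m raid0 -d raid0 /dev/ram0 /dev/ram1
  mount -o discard /dev/ram0 /mnt
  # Write and delete data so discards are issued against the stripes
  dd if=/dev/zero of=/mnt/file bs=1M count=200 conv=fsync
  rm /mnt/file && fstrim -v /mnt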
2012 Apr 28
1
SMB2 write performance slower than SMB1 in 10Gb network
Hi folks: I've been testing SMB2 performance with samba 3.6.4 these days, and I found a weird benchmark result: SMB2 write performance is slower than SMB1 on a 10Gb Ethernet network.
Server
-----------------------
Linux: Redhat Enterprise 6.1 x64
Kernel: 2.6.31 x86_64
Samba: 3.6.4 (almost using the default configuration)
Network: Chelsio T4 T420-SO-CR 10GbE network adapter
RAID: Adaptec 51645 RAID
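One way to compare the two protocol levels against the same share is to cap the protocol server-side and time an identical transfer from the client; a sketch, where the file name, share, and credentials are assumptions:

  # In smb.conf [global] on the server, toggle between runs:
  #   max protocol = NT1     (SMB1)
  #   max protocol = SMB2
  # Then time a large sequential write from the client:
  dd if=/dev/zero of=bigfile bs=1M count=4096
  time smbclient //server/share -U user%pass -c 'put bigfile'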
2011 May 27
0
[PATCH] Btrfs: try to only do one btrfs_search_slot in do_setxattr
I've been watching how many btrfs_search_slot()'s we do, and I noticed that when we create a file with selinux enabled we were doing 2 each time we initialize the security context. That's because we look up the xattr first so we can delete it if we're setting a new value to an existing xattr. But in the create case we don't have any xattrs, so it is
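Per-operation call counts like these can be observed with the ftrace function profiler, independent of this patch; a sketch, assuming debugfs is mounted at the usual path and a btrfs mount at /mnt/btrfs:

  cd /sys/kernel/debug/tracing
  echo btrfs_search_slot > set_ftrace_filter
  echo 1 > function_profile_enabled
  touch /mnt/btrfs/newfile           # the operation under test
  echo 0 > function_profile_enabled
  cat trace_stat/function0           # hit count and time for btrfs_search_slot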
2016 Feb 17
2
Amount of CPUs
Quick question. In my host, I've got two processors, each with 6 cores, and each core has two threads. I use iometer to do some testing of hard drive performance. I get the idea that using more cores gives me better results in iometer. (Whether it will improve the speed of my guest is another question...) For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make
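With KVM/QEMU, the guest topology can mirror the host's 2 x 6 x 2 layout explicitly rather than presenting 24 flat sockets; a hedged sketch using the values from the question above (memory size and the remaining guest options are assumptions):

  qemu-system-x86_64 -enable-kvm -m 8192 \
      -smp 24,sockets=2,cores=6,threads=2 \
      ...                              # disk, network, etc. elided

Windows guest licensing and schedulers care about sockets versus cores, so exposing threads as threads is often preferable to 24 single-core sockets.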
2014 Feb 08
2
nouveau graphical corruption in 3.13.2
On 8 February 2014 16:33, Ilia Mirkin <imirkin at alum.mit.edu> wrote:
> On Sat, Feb 8, 2014 at 2:58 AM, Daniel J Blueman <daniel at quora.org> wrote:
>> Hi guys,
>>
>> With a GeForce 320M GPU running linux 3.13.2 and Xorg 1.15.0, I'm
>> seeing significant graphical corruption and later unrecoverable GPU
>> lockup, accompanied by thousands of
2012 Nov 06
1
[PATCH] nouveau: Fix crash after D3
In 3.7-rc4, when starting X with the integrated GPU and suspending the discrete GPU, after one or more 32-bit applications are used (e.g. Skype) and X is stopped, we hit a panic. Prevent this by testing whether the fini function is valid. The full panic boot log is at: http://quora.org/2012/nouveau/dmesg-crash.txt Xorg.log is at: http://quora.org/2012/nouveau/Xorg.0.log-crash.txt Kernel log after fix is at:
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of their exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)
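Since the ZIL only absorbs synchronous writes, a meaningful benchmark has to force sync I/O rather than buffered writes; a sketch using fio plus zpool iostat, where the pool name and job parameters are assumptions:

  # Synchronous sequential writes, fsync after every write: exercises the ZIL
  fio --name=ziltest --ioengine=sync --rw=write --bs=8k \
      --size=2g --fsync=1 --directory=/tank/test

  # Meanwhile, watch per-device throughput, including the log vdev
  zpool iostat -v tank 1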
2014 Feb 08
2
nouveau graphical corruption in 3.13.2
Hi guys, With a GeForce 320M GPU running linux 3.13.2 and Xorg 1.15.0, I'm seeing significant graphical corruption and later unrecoverable GPU lockup, accompanied by thousands of ILLEGAL_MTHD or related kernel messages [1]. I see similar issues on 3.12 also. Is there any debugging or testing I can do to help diagnose this? Many thanks, Daniel --- [1] http://quora.org/nouveau-dmesg.txt
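For this sort of report, a common first diagnostic step is booting with verbose DRM logging and capturing the kernel ring after reproducing the failure; a sketch, where the log file name is an assumption:

  # Add to the kernel command line, then reproduce the corruption:
  #   drm.debug=0xe log_buf_len=16M
  dmesg > nouveau-illegal-mthd.log   # capture the ILLEGAL_MTHD spew for the report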
2017 Feb 10
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
When running a fio sequential write test with an XFS ramdisk on a VM running on a 2-socket x86-64 system, the %CPU times as reported by perf were as follows:

  69.75%  0.59%  fio  [k] down_write
  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
  67.12%  1.12%  fio  [k] rwsem_down_write_failed
  63.48% 52.77%  fio  [k] osq_lock
   9.46%  7.88%  fio  [k]
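A profile of this shape can be gathered with something like the following; the ramdisk size, mount point, and fio parameters are assumptions:

  modprobe brd rd_nr=1 rd_size=1048576       # 1 GiB ram disk
  mkdir -p /mnt/ramdisk
  mkfs.xfs /dev/ram0 && mount /dev/ram0 /mnt/ramdisk
  perf record -g -- fio --name=seqwrite --rw=write --bs=4k \
      --size=512m --numjobs=8 --directory=/mnt/ramdisk
  perf report --children                     # children%/self% columns as above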
2013 Aug 29
23
[PATCH] Btrfs: optimize key searches in btrfs_search_slot
When the binary search returns 0 (exact match), the target key will necessarily be at slot 0 of all nodes below the current one, so in this case the binary search is not needed because it will always return 0, and we waste time doing it, holding node locks for longer than necessary, etc. Below follow histograms with the times spent on the current approach of doing a binary search when the
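The timing histograms referenced are cut off in this excerpt; per-call latencies for btrfs_search_slot can be gathered with the ftrace function_graph tracer, a sketch assuming debugfs is mounted at the usual path:

  cd /sys/kernel/debug/tracing
  echo btrfs_search_slot > set_graph_function
  echo function_graph > current_tracer
  head -50 trace_pipe        # per-invocation durations in the duration column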
2017 Feb 08
4
[PATCH 1/2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
When running a fio sequential write test with an XFS ramdisk on a 2-socket x86-64 system, the %CPU times as reported by perf were as follows:

  71.27%  0.28%  fio  [k] down_write
  70.99%  0.01%  fio  [k] call_rwsem_down_write_failed
  69.43%  1.18%  fio  [k] rwsem_down_write_failed
  65.51% 54.57%  fio  [k] osq_lock
   9.72%  7.99%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempted
   4.16%
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings to do for small files which decrease the time for
> negative lookups: meta-data caching and parallel readdir. Bumping the server
> and client event threads will help you increase small-file performance.
>
> gluster v set <vol-name> group
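The event-thread tuning mentioned above maps to volume options like these; the volume name and thread counts are assumptions, and the truncated group profile name in the quote is left as-is:

  gluster volume set <vol-name> server.event-threads 4
  gluster volume set <vol-name> client.event-threads 4
  # parallel readdir, also mentioned above:
  gluster volume set <vol-name> performance.parallel-readdir on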
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might have more insights here; and wrap long lines.

On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I have been doing some tests on VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh:
> "virsh migrate --live
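The quoted command is cut off; a typical full invocation for live migration that also mirrors local storage via drive-mirror looks like the following — the guest and host names are assumptions, not the poster's actual command:

  virsh migrate --live --copy-storage-all --verbose \
      guest1 qemu+ssh://dst-host/system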
2012 Oct 24
1
Nouveau soft lockup after switcheroo'd...
On 13 October 2012 15:12, Daniel J Blueman <daniel at quora.org> wrote:
> On my Macbook Retina, when switching to the integrated GPU, we see an
> ioread32 issued to the discrete GPU, which hangs as it is in D3 [1]
> (drm.debug is set to 14 here).
>
> Full kernel 3.6.2 boot logs with drm.debug=5 are at:
> http://quora.org/2012/mbp-i915-panel.txt
>
> What additional
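The GPU switch described here is driven through the vga_switcheroo debugfs interface; a sketch of the sequence, assuming debugfs is mounted:

  # Switch rendering to the integrated GPU (takes effect at next X restart)
  echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
  # Power off whichever GPU is now inactive (puts the discrete GPU in D3)
  echo OFF  > /sys/kernel/debug/vgaswitcheroo/switch
  cat /sys/kernel/debug/vgaswitcheroo/switch   # show current state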
2017 Feb 10
2
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
On 02/10/2017 11:19 AM, Peter Zijlstra wrote:
> On Fri, Feb 10, 2017 at 10:43:09AM -0500, Waiman Long wrote:
>> When running a fio sequential write test with an XFS ramdisk
>> on a VM running on a 2-socket x86-64 system, the %CPU times as reported
>> by perf were as follows:
>>
>> 69.75%  0.59%  fio  [k] down_write
>> 69.15%  0.01%  fio  [k]