search for: livelocked

Displaying 20 results from an estimated 43 matches for "livelocked".

Did you mean: livelock
2012 Mar 06
0
Livelock induced failure in blktap2.
We've been working on getting Xen 4.1.2 validated for internal use and have run into what appears to be a livelock-induced failure in properly freeing a blktap2 device. We ported the blktap2 driver from Dan Stodden's GIT tree into 3.2.x, which is a reasonably straightforward process. We are also running the toolchain with a patch which Ian Campbell posted in order to get xl to
2012 Mar 25
3
attempt to access beyond end of device and livelock
Hi Dongyang, Yan, When testing BTRFS with RAID 0 metadata on linux-3.3, we see discard ranges exceeding the end of the block device [1], potentially causing data loss; when this occurs, filesystem writeback becomes catatonic due to continual resubmission. The reproducer is quite simple [2]. Hope this proves useful... Thanks, Daniel --- [1] attempt to access beyond end of device ram0: rw=129,
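As a rough illustration of the fix a report like this usually calls for (a sketch, not the patch that was actually merged; clamp_and_discard() is an invented name), the discard range should be validated against the device size before submission, so an overhanging request is trimmed or dropped rather than resubmitted forever:

    /*
     * Sketch only, assuming recent-kernel helpers (bdev_nr_sectors()
     * and the four-argument blkdev_issue_discard()); clamp_and_discard()
     * is a hypothetical name, not the actual btrfs fix.
     */
    static int clamp_and_discard(struct block_device *bdev,
                                 sector_t start, sector_t nr_sects)
    {
            sector_t dev_sects = bdev_nr_sectors(bdev);

            if (start >= dev_sects)
                    return -EINVAL;         /* entirely past the device: drop */
            if (start + nr_sects > dev_sects)
                    nr_sects = dev_sects - start;   /* trim the overhang */

            return blkdev_issue_discard(bdev, start, nr_sects, GFP_NOFS);
    }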
2008 May 13
1
Hard(?) lock when reassociating ath with wpa_supplicant on RELENG_7
I seem to be able to lock my machine by going into wpa_cli and asking it to 'reassoc'. The reason for the question mark after "hard" is that debug information (caused by wlandebug and athdebug) is being printed on the console. The only way to get the machine's attention is to hold the power button for 8 seconds. Note: manual reassociation is just a handy way to reproduce the problem
2013 Jan 21
1
btrfs_start_delalloc_inodes livelocks when creating snapshot under IO
Greetings all, I see the following issue during snap creation under IO: Transaction commit calls btrfs_start_delalloc_inodes() that locks the delalloc_inodes list, fetches the first inode, unlocks the list, triggers btrfs_alloc_delalloc_work/btrfs_queue_worker for this inode and then locks the list again. Then it checks the head of the list again. In my case, this is always exactly the same
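The pattern described there reduces to roughly the following (hypothetical names, not the btrfs code): the loop drops the lock to queue work, and if the worker can put the same inode back at the head of the list, the loop never observes an empty list. Splicing the list onto a private head first is one conventional escape:

    static LIST_HEAD(delalloc_inodes);      /* guarded by 'lock' */
    static DEFINE_SPINLOCK(lock);

    struct my_entry {                       /* invented stand-in type */
            struct list_head   list;
            struct work_struct work;
    };

    /* Livelock-prone shape: the head can be the same entry every time. */
    static void start_all(struct workqueue_struct *wq)
    {
            struct my_entry *entry;

            spin_lock(&lock);
            while (!list_empty(&delalloc_inodes)) {
                    entry = list_first_entry(&delalloc_inodes,
                                             struct my_entry, list);
                    spin_unlock(&lock);
                    queue_work(wq, &entry->work); /* may re-add to the list */
                    spin_lock(&lock);
            }
            spin_unlock(&lock);
    }

    /* One escape: only process what was present when the commit began. */
    static void start_present(struct workqueue_struct *wq)
    {
            LIST_HEAD(splice);

            spin_lock(&lock);
            list_splice_init(&delalloc_inodes, &splice);
            spin_unlock(&lock);
            /* ... walk &splice without retaking the shared list head ... */
    }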
2001 May 17
0
Fwd: ext3 for 2.4
---------- Forwarded Message ---------- Subject: ext3 for 2.4 Date: Thu, 17 May 2001 21:20:38 +1000 From: Andrew Morton <andrewm@uow.edu.au> To: ext2-devel@lists.sourceforge.net, "Peter J. Braam" <braam@mountainviewdata.com>, Andreas Dilger <adilger@turbolinux.com>, "Stephen C. Tweedie" <sct@redhat.com> Cc: linux-fsdevel@vger.kernel.org Summary:
2016 Dec 14
1
AtomicExpandPass and branch weighting
On Wed, Dec 14, 2016 at 1:34 PM, James Knight <jyknight at google.com> wrote: > Seems reasonable. > > I'd note additionally that on some architectures, the success block > *must* be the fallthrough case (that is to say: you must not have any taken > branches between the load-linked and store-conditional) in order to have an > architectural guarantee that two such
2016 Dec 14
0
AtomicExpandPass and branch weighting
Seems reasonable. I'd note additionally that on some architectures, the success block *must* be the fallthrough case (that is to say: you must not have any taken branches between the load-linked and store-conditional) in order to have an architectural guarantee that two such loops on different CPUs won't livelock against each other. On Dec 12, 2016, at 12:30 PM, Kyle Butt via
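In C terms the expanded cmpxchg is just a retry loop; the point of the branch-weight annotation is to tell block layout that the success path should be the straight-line fallthrough. A minimal userspace sketch of the shape being discussed, not the AtomicExpandPass output itself:

    #include <stdatomic.h>

    /*
     * Sketch of the loop a weak cmpxchg expands to.  On an LL/SC target
     * the body becomes load-linked / compare / store-conditional; keeping
     * success as the fallthrough avoids a taken branch between LL and SC,
     * which on some cores can clear the exclusive reservation and let two
     * CPUs invalidate each other's reservations forever.
     */
    static int atomic_increment(atomic_int *v)
    {
            int old = atomic_load(v);

            /* failure is the (cold) back edge; success falls through */
            while (!atomic_compare_exchange_weak(v, &old, old + 1))
                    ;       /* 'old' was refreshed by the failed CAS */
            return old + 1;
    }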
2003 Mar 20
2
[Patch] ext3_journal_stop inode access
Hi Andrew, The patch below addresses the problem we were talking about earlier where ext3_writepage ends up accessing the inode after the page lock has been dropped (and hence at a point where it is possible for the inode to have been reclaimed.) Tested minimally (it builds and boots.) It makes ext3_journal_stop take an sb, not an inode, as its final parameter. It also sets
2012 Nov 05
25
[PATCH] IOMMU: don't disable bus mastering on faults for devices used by Xen or Dom0
Under the assumption that in these cases recurring faults aren't a security issue and it can be expected that the drivers there are going to try to take care of the problem. Signed-off-by: Jan Beulich <jbeulich@suse.com> --- a/xen/drivers/passthrough/amd/iommu_init.c +++ b/xen/drivers/passthrough/amd/iommu_init.c @@ -625,6 +625,18 @@ static void parse_event_log_entry(struct
2015 Mar 12
1
[PATCH] virtio: Remove virtio device during shutdown
On Thu, 03/12 17:22, Michael S. Tsirkin wrote: > On Wed, Mar 11, 2015 at 06:11:35PM +0800, Fam Zheng wrote: > > On Wed, 03/11 10:06, Michael S. Tsirkin wrote: > > > On Wed, Mar 11, 2015 at 04:09:17PM +0800, Fam Zheng wrote: > > > > Currently shutdown is nop for virtio devices, but the core code could > > > > remove things behind us such as MSI-X handler
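A sketch of the shape such a fix might take (hypothetical, not the posted patch; my_virtio_pci_device is an invented type, while unregister_virtio_device() and pci_get_drvdata() are real kernel APIs): give the virtio PCI driver a .shutdown hook so the device is unregistered in order, instead of the core tearing MSI-X state out from under it.

    struct my_virtio_pci_device {           /* invented stand-in */
            struct virtio_device vdev;
            /* ... */
    };

    static void virtio_pci_shutdown(struct pci_dev *pci_dev)
    {
            struct my_virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);

            /* unregister before the core frees IRQ/MSI-X resources */
            unregister_virtio_device(&vp_dev->vdev);
    }

    static struct pci_driver virtio_pci_driver = {
            .name     = "virtio-pci",
            /* ... probe/remove as before ... */
            .shutdown = virtio_pci_shutdown,
    };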
2010 Jan 21
4
dlmglue fixes
David, So here are the two patches. Remove all patches that you have and apply these. The first one is straightforward. The second one will hopefully fix the livelock issue you have been encountering. People reviewing the patches should note that the second one is slightly different from the one I posted earlier. It removes the BUG_ON in the if condition where we jump to update_holders. The
2019 Sep 06
1
[PULL] vhost, virtio: last minute fixes
Hope this can still make it. I was not sure about the virtio-net change, but it seems that it prevents livelocks for some people. The following changes since commit 089cf7f6ecb266b6a4164919a2e69bd2f938374a: Linux 5.3-rc7 (2019-09-02 09:57:40 -0700) are available in the Git repository at: git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus for you to fetch changes up to
2005 May 22
1
RE: asterisk, ztdummy, and usb (and HZ = 100 under xen ???)
> On 21 May 2005, at 08:53, James Harper wrote: > > We're going to rip the fixed ticker out of Xen and allow domains to > specify a background ticker at a rate of their choice, which ought to > fix this for you. So I could have a domain with 100HZ, and another with 1000HZ? That would be nice. > Of course, you're pretty screwed running this kind of > thing
2001 Jul 06
1
ext3-2.4-0.9.0
An update of the ext3 journalling filesystem for 2.4 kernels is available at http://www.uow.edu.au/~andrewm/linux/ext3/ Patches are against 2.4.6-ac1 and 2.4.6. Changes since 0.0.8 include: - Multiplied the version numbering by ten to cater for bugfix releases against the 0.9.0 stream. - The main thrust has been the removal of a number of changes in the core kernel which were required
2001 Jun 08
1
VALinux's 2.4.5 beta kernel with Ext3
Anyone try this yet? ftp://ftp.valinux.com/pub/software/kernel/beta/2.4.5-beta2va3.11/ List of SRPM contents follows. -- TheBS atomic-lookup.patch atomicalloc.patch byteprofiling.patch comtrol-1.23.patch configs-2.4.5.tar.gz copy-user-reschedule.patch dac960-enclosure-quiet.patch dma-livelock-fix.patch e100-1.5.5.tar.gz e1000-3.0.7.tar.gz eepro100-speedo-1.patch emu10k1-tone.patch
2016 Dec 12
3
AtomicExpandPass and branch weighting
I'm working on a change to the layout algorithm, and I noted that test/CodeGen/ARM/cmpxchg-weak.ll was affected. Normally, that would be fine, but I noted that the layout changed the fallthrough from the success case to the failure case. I was surprised to see that the success case isn't annotated with a branch weight by AtomicExpandPass.cpp. Would it make sense to annotate the success
2010 Mar 06
1
Is parallel processing automatically done on multi-core CPUs?
Hi, I just noticed this article, http://www.howtoforge.net/fully-utilizing-your-x-core-cpu which uses http://code.google.com/p/ppss/ for parallel processing. So, can anyone tell me if parallel processing happens automatically on CentOS, or would I need to use this script as well? We mainly have Dual Core & Dual CPU, Dual Core (i.e. 8 cores) servers and it would be beneficial to know whether
2010 Aug 20
0
[PATCH] ocfs2: Don't delete orphaned files if we are in the process of umount.
Generally, the orphan scan runs in ocfs2_wq and is used to replay the orphan dir. For some low-end iSCSI devices, delete_inode may take a long time (on some devices, I have seen deleting 500 files take about 15 secs). This will eventually cause umount to livelock (umount has to flush ocfs2_wq, which will wait for the orphan scan to finish). So this patch just tries to finish the orphan scan
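The shape of the change reads roughly like the following sketch (hypothetical names, not the merged ocfs2 patch): the scan bails out early once unmount has started, so flushing ocfs2_wq no longer waits behind a long chain of delete_inode calls.

    struct my_sb_info {                     /* invented stand-in */
            atomic_t           umount_in_progress;
            struct work_struct orphan_scan_work;
    };

    /*
     * Sketch only: umount_in_progress and replay_orphan_dir() are
     * invented stand-ins for the real ocfs2 state and helpers.
     */
    static void orphan_scan_work(struct work_struct *work)
    {
            struct my_sb_info *sbi = container_of(work, struct my_sb_info,
                                                  orphan_scan_work);

            if (atomic_read(&sbi->umount_in_progress))
                    return;         /* let umount win; skip this round */

            replay_orphan_dir(sbi); /* the slow per-inode delete work */
    }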
2011 Sep 07
10
[PATCH] IRQ: Group IRQ_MOVE_CLEANUP_VECTOR with other hypervisor IPIs
Also, rename it to MOVE_CLEANUP_VECTOR to be in line with the other IPI names. This requires bumping LAST_HIPRIORITY_VECTOR, but does mean that the FIRST_HIPRIORITY_VECTOR to LAST_HIPRIORITY_VECTOR range is free once again. Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/apic.c --- a/xen/arch/x86/apic.c Mon Sep 05 15:10:28 2011 +0100 +++