similar to: Error while copying/moving file

Displaying 17 results from an estimated 200 matches similar to: "Error while copying/moving file"

2020 Nov 04
4
Libvirt driver iothread property for virtio-scsi disks
The docs[1] say: - The optional iothread attribute assigns the disk to an IOThread as defined by the range for the domain iothreads value. Multiple disks may be assigned to the same IOThread and are numbered from 1 to the domain iothreads value. Available for a disk device target configured to use "virtio" bus and "pci" or "ccw" address types. Since 1.2.8
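For reference, the iothread placement differs by bus: virtio-blk disks take the attribute on their own <driver> element, while for virtio-scsi it goes on the controller's <driver> element and disks on the scsi bus are serviced by the controller's thread. A minimal sketch — paths and thread numbers are illustrative, not taken from the thread:

    <domain type='kvm'>
      <iothreads>2</iothreads>
      <devices>
        <!-- virtio-blk: per-disk iothread on the disk's own driver element -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' iothread='1'/>
          <source file='/var/lib/libvirt/images/blk.qcow2'/>  <!-- placeholder path -->
          <target dev='vda' bus='virtio'/>
        </disk>
        <!-- virtio-scsi: iothread on the controller, not on the disk -->
        <controller type='scsi' index='0' model='virtio-scsi'>
          <driver iothread='2'/>
        </controller>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/scsi.qcow2'/>  <!-- placeholder path -->
          <target dev='sda' bus='scsi'/>
        </disk>
      </devices>
    </domain>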
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off it, but not when I use rsync: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote: > Hello, > > ok, I found that the cpu pinning was wrong, so I corrected it to be 1:1. The issue > with iozone remains the same. > > The spec is running, however, it runs slower than the 1-NUMA case. > > The corrected XML looks as follows: [Reformatted XML for better reading] <cpu mode="host-passthrough">
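For reference, 1:1 pinning is expressed in the domain XML with <cputune>; a minimal sketch with placeholder vcpu and host-CPU numbers (not the poster's actual values):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- pin each guest vcpu to exactly one host cpu (1:1) -->
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='3'/>
    </cputune>
    <cpu mode='host-passthrough'/>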
2019 Jan 17
2
virt-install and IOThreads
Hello, is there any way of specifying iothreads via virt-install command? I've tried appending ",iothread='1'" but that didn't work. Thanks in advance! -- -Igor Gnatenko
2019 Jan 17
1
Re: virt-install and IOThreads
On Thu, Jan 17, 2019 at 4:35 PM Cole Robinson <crobinso@redhat.com> wrote: > > On 01/17/2019 05:58 AM, Igor Gnatenko wrote: > > Hello, > > > > is there any way of specifying iothreads via virt-install command? > > > > I've tried appending ",iothread='1'" but that didn't work. > > > > Thanks in advance! > > >
2014 Jun 17
2
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On 17/06/2014 18:00, Ming Lei wrote: >> > If you want to do queue steering based on the guest VCPU number, the number >> > of queues must be equal to the number of VCPUs, shouldn't it? >> > >> > I tried using a divisor of the number of VCPUs, but couldn't get the block >> > layer to deliver interrupts to the right VCPU. > For blk-mq's
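For context, the queue count later became tunable per disk in libvirt's XML via the queues attribute; a sketch with an illustrative value (one queue per vCPU of a hypothetical 4-vCPU guest, following the heuristic discussed above):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' queues='4'/>
      <source file='/var/lib/libvirt/images/data.img'/>  <!-- placeholder path -->
      <target dev='vdb' bus='virtio'/>
    </disk>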
2009 May 28
2
Glusterfs 2.0 hangs on high load
Hello! After upgrading to version 2.0, now using 2.0.1, I'm experiencing problems with glusterfs stability. I'm running a 2-node setup with client-side AFR, and glusterfsd is also running on the same servers. From time to time glusterfs just hangs; I can reproduce this by running the iozone benchmarking tool. I'm using a patched FUSE, but the result is the same with an unpatched one.
2015 Dec 14
1
[RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On 18/06/2014 06:04, Ming Lei wrote: > For virtio-blk, I don't think it is always better to take more queues, and > we need to leverage the things below on the host side: > > - host storage top performance, generally it reaches that with more > than 1 job with libaio (suppose it is N, so basically we can use N > iothreads per device in qemu to try to get top performance) > > -
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'
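Reflowed for readability, the quoted topology follows this pattern (the snippet is cut off in the third cell; the remaining cells are assumed to mirror the first two):

    <cpu mode='host-passthrough'>
      <topology sockets='8' cores='4' threads='1'/>
      <numa>
        <cell cpus='0-3' memory='62000000'/>
        <cell cpus='4-7' memory='62000000'/>
        <cell cpus='8-11' memory='62000000'/>
        <!-- assumed: five more 4-cpu cells of the same size, up to cpus='28-31' -->
      </numa>
    </cpu>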
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with Glusterfs 3.0.2, that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of all directories /mnt/data01 - data24) on each server. I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
2017 Apr 28
2
Re: Libvirtd freezes
On 04/27/2017 04:31 PM, Stefano Ricci wrote: > Here is the backtrace of the libvirt process just started [Just a side note, you shouldn't top post on technical lists. Gmail sucks at this.] > > https://pastebin.com/R66myzFp Looks like libvirtd is trying to spawn /usr/bin/qemu-system-x86_64 but it takes ages to init. In the debug logs you might see the actual command line that
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi Jens and Rusty, On Thu, Jun 26, 2014 at 8:04 PM, Ming Lei <ming.lei at canonical.com> wrote: > On Thu, Jun 26, 2014 at 5:41 PM, Ming Lei <ming.lei at canonical.com> wrote: >> Hi, >> >> These patches try to support multiple virtual queues (multi-vq) in one >> virtio-blk device, and map each virtual queue (vq) to a blk-mq >> hardware queue. >>
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks! I created a server with this XML file: <domain type='lxc'> <name>lxctest1</name> <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://centos.org/centos/6.9"/>
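For reference, virsh setvcpus can only move between the current and maximum counts declared in the domain XML; a generic sketch of that declaration with illustrative numbers (whether the LXC driver honors it is a separate question from this thread):

    <domain type='lxc'>
      <name>lxctest1</name>
      <!-- maximum of 4 vcpus, 2 active at boot; setvcpus may then
           select any value up to the maximum -->
      <vcpu placement='static' current='2'>4</vcpu>
    </domain>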
2010 Mar 04
1
[3.0.2] booster + unfsd failed
Hi list. I have been testing with glusterfs-3.0.2. A glusterfs mount works well, and unfsd on the glusterfs mount point works well too. When using booster, however, the unfsd realpath check fails, while the ls utility still works. I tried a 3.0.0-git head source build but the result was the same. My system is Ubuntu 9.10, using the unfsd source from the official gluster download site. Any comments appreciated!! - kpkim root at
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello, I have a cluster with AMD EPYC 7351 CPUs, two per node, in a performance 8-NUMA configuration. This is from the hypervisor: [root@hde10 ~]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 NUMA
2009 Jun 15
1
Windows app demonstrating speex playback?
I'm developing a small Linux/Speex-based audio streaming (VoIP-like) application, and am looking for a binary Windows app that can play Speex content (at least from a file, preferably from an IP data stream) out through the Windows sound system. This would be very helpful as a test tool. Any recommendations? I'd prefer a binary as I don't have a development environment available for
2011 Jan 12
1
Setting up 3.1
[Repost - last time this didn't seem to work] I've been running gluster for a couple of years, so I'm quite used to 3.0.x and earlier. I'm looking to upgrade to 3.1.1 for more stability (I'm getting frequent 'file has vanished' errors when rsyncing from 3.0.6) on a bog-standard 2-node dist/rep config. So far it's not going well. I'm running on Ubuntu Lucid x64