Displaying 20 results from an estimated 4000 matches similar to: "Re: Libvirt and Glusterfs"
2013 Jul 10 · 1 reply · Re: Libvirt and Glusterfs
On 07/09/2013 08:18 PM, Olivier Mauras wrote:
> On 2013-07-09 09:40, Vijay Bellur wrote:
>
>>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>>> It's all working well from the qemu side, but libvirt fails to start
>>> a domain with a gluster drive or attach a drive. I have exactly the
>>> same error as this person:
2013 Jul 09 · 0 replies · Re: Libvirt and Glusterfs
On 2013-07-09 09:40, Vijay Bellur wrote:
>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>> It's all working well from the qemu side, but libvirt fails to start
>> a domain with a gluster drive or attach a drive. I have exactly the
>> same error as this person:
>> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>> [1] I use qemu 1.5.1 with glusterfs
2013 Jul 07 · 0 replies · Libvirt and Glusterfs
Hi,

I'm trying to use qemu native glusterfs integration with libvirt. It's
all working well from the qemu side, but libvirt fails to start a domain
with a gluster drive or attach a drive. I have exactly the same error as
this person:
https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html [1]

I use qemu 1.5.1 with glusterfs 3.4 beta 4 and libvirt 1.0.6.

[root@bbox ~]#
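
For reference, a minimal sketch of the disk definition qemu's native
gluster driver expects, assuming a hypothetical volume 'vol0' on host
'gluster1' and a domain 'vm1' (none of these names come from the
thread). libvirt understands protocol='gluster' from 1.0.1 onward, so a
failure like the one linked above can point at a libvirt build without
gluster support rather than at the XML itself:

cat > gluster-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='vol0/vm1.img'>
    <host name='gluster1' port='24007'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# attaching is where the reported failure shows up
virsh attach-device vm1 gluster-disk.xml --live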
2012 Aug 06 · 2 replies · using RBD with libvirt 0.9.13
I'm having some trouble creating KVM domains with RBD block devices using
virsh. I've managed to get virsh to define the domain, but it gives an error
when trying to start the domain:
error: Failed to start domain test0
error: internal error process exited while connecting to monitor: char device redirected to /dev/pts/3
kvm: -drive
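
The excerpt stops at qemu's "-drive" line. As a hedged sketch (the pool
'rbd', image 'test0', and monitor 'mon0' are placeholders, and cephx
auth is left out), an RBD disk element generally looks like:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/test0'>
    <host name='mon0' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

A quick way to check that the local qemu was built with rbd support at all:

qemu-img --help | grep rbd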
2018 Feb 28 · 0 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur@gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.

> [root@stor1 ~]# df -h
> Filesystem              Size  Used  Avail Use% Mounted on
> /dev/sdb1                26T  1,1T   25T    4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01 · 0 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur@gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Then, to add the new peer's bricks, I ran the 'rebalance force'
> operation.
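
For concreteness, the expansion described above maps onto commands of
roughly this shape; the brick paths are assumptions, not taken from the
thread:

gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/glusterfs/vol0/brick1
gluster volume add-brick volumedisk1 stor3data:/mnt/glusterfs/vol1/brick1
# the "rebalance force" step that spreads existing data onto the new bricks
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force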
2018 Mar 01 · 0 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below is the output for both volumes:

[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in h:m:s
---------  -----------  -----------  -----------  -----------
2018 Feb 28 · 0 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]), so you may
be running into this.
The "shared-brick-count" values seem fine on stor1. Please send us the
output of grep -n "share" /var/lib/glusterd/vols/volumedisk1/* from the
other nodes so we can check whether they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
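
To collect that from every node in one pass, a small loop like the one
below works, assuming ssh access and the hostnames used in this thread:

for h in stor1data stor2data stor3data; do
  echo "== $h =="
  ssh "$h" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done
# per bug 1517260, a shared-brick-count above 1 on bricks that each sit
# on their own filesystem is what skews the df totals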
2018 Feb 28 · 2 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem              Size  Used  Avail Use% Mounted on
/dev/sdb1                26T  1,1T   25T    4% /mnt/glusterfs/vol0
/dev/sdc1                50T   16T   34T   33% /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T   97T    4% /volumedisk0
stor1data:/volumedisk1
2018 Feb 28 · 2 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Then, to add the new peer's bricks, I ran the 'rebalance force'
operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Feb 27 · 2 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and
is smaller than the aggregated capacity of all the bricks in the volume.
I checked that all the volume statuses are fine, all the glusterd
daemons are running, and there are no errors in the logs; however, df
shows a wrong total size.
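
As a sketch, those checks map onto commands like the following, using
'volumedisk0' (a volume name that appears later in the thread) as the
example:

gluster volume status volumedisk0
gluster volume info volumedisk0
pgrep -a glusterd              # all daemons running?
df -h /volumedisk0             # compare with the summed brick sizes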
My configuration for one volume:
2018 Mar 01 · 2 replies · df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below is the output for both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in h:m:s
---------  -----------  -----------  -----------  -----------  -----------  ------------
2012 Apr 12 · 1 reply · CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
2017 Dec 18 · 2 replies · Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
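
A minimal per-node sketch of such a rolling upgrade, assuming replica
volumes and that each node is fully healed before the next one is
touched; this compresses the documented procedure rather than replacing
it:

systemctl stop glusterd
pkill glusterfs; pkill glusterfsd            # stop brick processes too
dnf system-upgrade download --releasever=27
dnf system-upgrade reboot
# once the node is back on Fedora 27 / Gluster 3.12:
gluster volume heal vol0 info                # wait for zero pending entries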
2012 Jun 16 · 4 replies · Failing to start or create VM, cannot connect to hypervisor host
Greetings -
I shut down one of my CentOS 6.2 VMs for some offline maintenance and am
now unable to get it to restart. I am also unable to create and start a
new VM. The host system is CentOS 6.2, fully up to date. I have been
searching Google for two days and have not been successful in getting a
VM to start. I have restarted libvirtd, but did not want to shut down my
other two running VMs and
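
A hedged first-pass diagnostic for this situation on CentOS 6; the
domain name 'vm1' is a placeholder:

service libvirtd status
virsh -c qemu:///system list --all           # can we reach the hypervisor?
virsh start vm1                              # capture the exact error text
tail -n 50 /var/log/libvirt/qemu/vm1.log     # qemu's side of the failure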
2017 Dec 19 · 0 replies · Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit@pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2015 May 09 · 2 replies · Bug#784810: Bug#784810: Xen domU tries to access dom0 LVM volume group
On 09/05/2015 13:25, Ian Campbell wrote:
> On Sat, 2015-05-09 at 03:41 +0200, Romain Mourier wrote:
> [...]
>> xen-create-image --hostname=test0 --lvm=raid10 --fs=ext4
>> --bridge=br-lan --dhcp --dist=jessie
> [...]
>> root@hv0:~# xl create /etc/xen/test0.cfg && xl console test0
> What does /etc/xen/test0.cfg contain? I suspect it is reusing the dom0
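
For comparison, xen-create-image with --lvm=raid10 normally writes a
cfg whose disk stanza points at per-guest logical volumes, roughly like
this sketch (not the poster's actual file):

disk = [
    'phy:/dev/raid10/test0-disk,xvda2,w',
    'phy:/dev/raid10/test0-swap,xvda1,w',
]
# if the domU sees the whole dom0 VG, check that these lines reference
# individual LVs and not an underlying PV such as /dev/md0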
2013 May 21 · 2 replies · problem with "transform" and "get" functions
Hello, I'm having a problem using the "transform" and "get" functions. I'm
probably making a dumb mistake, and I need help!
I start by making a small simulated dataset. I save the names of the
variables in "var.names." Without getting into the details of it, I have to
create a custom function to perform some statistics. As part of this, I
need to sequentially
2011 Dec 09 · 3 replies · Xen I/O and network performance is only 20-30% of the physical machine?
First, sorry for my poor English~
Here is the test:

Virtualization performance comparison test

Test environment:

Physical machine:
  CPU: 8 cores
  Memory: 8G
  HDD: 147G

Xen virtual machine:
  CPU: 2 cores
  Memory: 4G
  Hard drive: 30G

VMware virtual machine:
  CPU: 2 cores
  Memory: 4G
  Hard drive: 30G

Optical-fibre disk array (SAN):
  Size: 7.7T
  Speed: 6G/sec

Test structure:

I/O performance test
Test methods
Test
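
For raw I/O numbers that the page cache cannot flatter, one simple,
reproducible method is a pair of direct-I/O dd runs like the sketch
below; the file size and block size are arbitrary choices:

dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct conv=fsync
dd if=/tmp/testfile of=/dev/null bs=1M iflag=direct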
2011 Apr 08 · 1 reply · Kickstart and lvm
I've written some %pre code to grab a few files off a logical volume, if
it exists, before the disk gets formatted, but I can't get it to work
correctly. Essentially:
%pre
...
lvm vgscan
lvm vgchange -a y
...
if [ -d /dev/vol0 ]; then
# do stuff
fi
lvm vgremove -f vol0
The problem is that /dev/vol0 does not exist after lvm vgchange. I added a
few debug statements (lvm pvdisplay, lvm
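
A plausible workaround (an assumption, not something confirmed in this
thread) is that udev never creates the /dev/vol0 symlinks inside the
installer environment, so create the nodes explicitly or test the
device-mapper path instead:

lvm vgscan
lvm vgchange -a y vol0
lvm vgmknodes                          # create the /dev/vol0/* nodes
if [ -e /dev/mapper/vol0-lvol1 ]; then
    :   # grab the files, then vgremove as before
fi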