Displaying 20 results from an estimated 800 matches similar to: "No module named cygvirtmod"
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
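For what it's worth, in the RHEL/CentOS 6 kickstart syntax --useexisting reuses an existing logical volume but reformats it, while --noformat reuses it without formatting, so a data-preserving /data line would presumably look like the hypothetical one below (the lvol4 name is an assumption; the original line is truncated):
# hypothetical completion of the truncated directive, for illustration only
logvol /data --vgname=vol0 --name=lvol4 --useexisting --noformat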
2017 Dec 18
2
Upgrading from Gluster 3.8 to 3.12
Hi,
I have a cluster of 10 servers all running Fedora 24 along with
Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
Gluster 3.12. I saw the documentation and did some testing but I
would like to run my plan through some (more?) educated minds.
The current setup is:
Volume Name: vol0
Distributed-Replicate
Number of Bricks: 2 x (2 + 1) = 6
Bricks:
Brick1:
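The generic upstream procedure for this kind of rolling upgrade is roughly the sketch below, one server at a time; the package-manager call, the systemd unit name and the heal check are assumptions based on a Fedora/systemd setup, not the poster's actual plan:
# sketch: repeat on each of the 10 servers, one at a time
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd        # stop any remaining brick/client processes
dnf upgrade 'glusterfs*'                 # pull in 3.12 (alongside the Fedora 27 upgrade)
systemctl start glusterd
gluster volume heal vol0 info            # wait until pending heals drop to zero before the next node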
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
2013 Jul 09
2
Re: Libvirt and Glusterfs
> Hi,
>
> I'm trying to use qemu native glusterfs integration with libvirt. It's
> all working well from the qemu side, but libvirt fails to start a domain
> with a gluster drive or attach a drive.
> I have exactly the same error as this person:
> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>
> I use qemu 1.5.1 with glusterfs 3.4 beta 4
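For reference, a gluster-backed disk in the libvirt domain XML normally looks something like the snippet below; the volume/image path and host name here are hypothetical placeholders, not values from the original report:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- 'vol0/images/vm1.img' and 'gluster-host' are made-up placeholders -->
  <source protocol='gluster' name='vol0/images/vm1.img'>
    <host name='gluster-host' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>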
2010 Jun 30
6
Xen 4 - Error 111 Refused Connection
Hi there.
I cannot start Xen; I'm getting this error:
System:
Linux netwarrior 2.6.31.13 #5 SMP Wed Jun 30 13:25:47 ART 2010 x86_64
AMD Athlon(tm) II X4 630 Processor AuthenticAMD
GNU/Linux
xend.log
le "/usr/lib64/python2.6/site-packages/xen/xend/xenstore/xstransact.py",
line 355, in Mkdir
complete(path, lambda t: t.mkdir(*args))
File
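Error 111 (connection refused) from xend/xenstore usually just means xenstored or xend is not running; a rough diagnostic sketch follows, with the caveat that init-script names and socket paths vary by distribution, so treat them as assumptions:
# check whether the Xen daemons are actually up (names/paths are distro-dependent)
ps ax | grep -E 'xenstored|xenconsoled|xend'
ls -l /var/run/xenstored/socket          # the socket xend tries to talk to
# (re)start the daemons, then retry
/etc/init.d/xend start                   # or /etc/init.d/xencommons start on newer packages
xm list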
2010 Jun 09
2
Is kvm ready for prime time?
Hi all,
I've started playing with kvm on CentOS 5.5, with not much success so far.
In a nutshell, I have the same problem as
http://lists.centos.org/pipermail/centos-virt/2010-April/001854.html
I followed the RHEL5 virtualization guide to set up a bridge interface br0,
and then used virt-install (rather than virt-manager - I like automation) to
set up the vm. It hangs at some point.
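For comparison, a virt-install invocation of the kind described, using the bridge and a serial console so any hang is visible on the terminal, might look like the sketch below; every name, path and URL in it is a made-up placeholder rather than the poster's actual command:
# sketch only; all values are placeholders
virt-install \
  --name testvm --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
  --network bridge:br0 \
  --location http://mirror.example.com/centos/5.5/os/x86_64/ \
  --nographics --extra-args "console=ttyS0 text"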
2013 Jul 10
1
Re: Libvirt and Glusterfs
On 07/09/2013 08:18 PM, Olivier Mauras wrote:
> On 2013-07-09 09:40, Vijay Bellur wrote:
>
>>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>>> It's all working well from the qemu side, but libvirt fails to start
>>> a domain with a gluster drive or attach a drive. I have exactly the
>>> same error as this person:
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files        size        scanned        failures        skipped        status        run time in h:m:s
------------------------------------------------------------------------------------------------------------------------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, to add the new peer with its bricks I then did the 'balance
> force' operation.
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, to add the new peer with its bricks I then did the 'balance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node        Rebalanced-files        size        scanned        failures        skipped        status        run time in h:m:s
------------------------------------------------------------------------------------------------------------------------------
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
2012 Jul 10
1
Issue with getCPUStats and getMemoryStats
Hi,
I am facing an issue with the calls getCPUStats and getMemoryStats. Please
find the error below.
AttributeError: 'module' object has no attribute
'VIR_NODE_CPU_STATS_ALL_CPUS'
>>> print con.getCPUStats(2, None, 0, 0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File
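That AttributeError simply means the installed libvirt Python binding is too old to expose the constant; with a recent enough libvirt-python the two calls look roughly like this sketch (not the poster's code, and the connection URI is left at the default):
# minimal sketch, assuming a libvirt-python new enough to expose these constants
import libvirt

conn = libvirt.openReadOnly(None)   # read-only connection to the default hypervisor

# aggregate CPU stats across all CPUs (VIR_NODE_CPU_STATS_ALL_CPUS == -1)
print(conn.getCPUStats(libvirt.VIR_NODE_CPU_STATS_ALL_CPUS, 0))

# node memory stats across all NUMA cells
print(conn.getMemoryStats(libvirt.VIR_NODE_MEMORY_STATS_ALL_CELLS, 0))

conn.close()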
2011 Apr 08
1
Kickstart and lvm
I've written some %pre code to grab a few files off a logical volume, if it
exists, before the disk gets formatted, but can't get it to work correctly.
Essentially:
%pre
...
lvm vgscan
lvm vgchange -a y
...
if [ -d /dev/vol0 ]; then
# do stuff
fi
lvm vgremove -f vol0
The problem is that /dev/vol0 does not exist after lvm vgchange. I added a
few debug statements (lvm pvdisplay, lvm
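One hedged guess is that the /dev/vol0 symlinks are simply never created in the minimal %pre environment, in which case asking LVM to create the nodes itself (or testing /dev/mapper instead) may help; the sketch below is an assumption, not a confirmed fix:
%pre
lvm vgscan
lvm vgchange -a y vol0
lvm vgmknodes                        # explicitly create the /dev/vol0/* nodes
udevadm settle 2>/dev/null || true   # harmless if udev is absent in the installer
if [ -d /dev/vol0 ] || ls /dev/mapper/vol0-* >/dev/null 2>&1; then
    :  # do stuff
fi
lvm vgremove -f vol0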
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
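A quick way to gather that from every node is to run the same grep on each brick host; the loop below assumes passwordless ssh and uses the host names mentioned elsewhere in this thread:
# sketch: collect the shared-brick-count lines from all three storage nodes
for h in stor1data stor2data stor3data; do
    echo "== $h =="
    ssh "$h" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done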
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all volumes is fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a bad total size.
My configuration for one volume:
2020 Jul 20
3
Performance issues since upgrading to 3.X to 4.X
Pretty close to what I have:
[global]
workgroup = PRIVATE
server string = %h server
log file = /var/log/samba/log.%m
max log size = 1000
log level = 0
####### Authentication #######
## stand alone everything open.
security = user
guest ok = yes
map to guest = bad password
## map ids from outside the domain to tdb files.
idmap config * : backend = tdb
idmap
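When chasing a 3.x-to-4.x slowdown with a share definition like this, the parameters usually compared first are the I/O-related ones below; this is a generic checklist sketch, not advice tied to the poster's workload:
# smb.conf settings commonly inspected when comparing Samba 3.x and 4.x performance (sketch)
# (some are global, some per-share)
   use sendfile = yes
   aio read size = 16384
   aio write size = 16384
   socket options = TCP_NODELAY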
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
2004 Apr 12
2
FW: cluster1 error
I am trying to use:
ocfs-support-1.0.10-1
ocfs-2.4.21-EL-smp-1.0.11-1
ocfs-tools-1.0.10-1
with RedHat AS 3.0, a 2-node cluster with shared SCSI: 2 Dell 1650s, dual
CPUs, PERC 3/DC cards chained to a PowerVault 220S.
I am using lvm, and here is my layout:
[root@cluster1 archive]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 5.1G 25G