Displaying 20 results from an estimated 27 matches for "vmstore".
2017 Nov 17
2
Help with reconnecting a faulty brick
...> file. The good copy will have non-zero trusted.afr* xattrs that blame
> the bad one and heal will happen from good to bad. If both bricks have
> attrs blaming the other, then the file is in split-brain.
Thanks.
So, say I have a file with this on the correct node
# file: mnt/bricks/vmstore/prod/bilbao_sys.qcow2
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.vmstore-client-0=0x00050f7e0000000200000000
trusted.afr.vmstore-client-1=0x000000000000000100000000
trusted.gfid=0xe86c24e5fc6b4fc6bf2b896f3cc8537d
And this on the bad one
# file: mnt/bri...
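For reference, xattr dumps like the ones above can be produced on each brick with getfattr (run as root on each node; the path is the one from this mail):

```shell
# Dump all extended attributes in hex for this brick's copy of the file.
# Compare the trusted.afr.* values across bricks: a copy with non-zero
# changelog counters blames the other; if both blame each other, the
# file is in split-brain.
getfattr -d -m . -e hex /mnt/bricks/vmstore/prod/bilbao_sys.qcow2
```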
2016 Nov 21
1
blockcommit and gluster network disk path
Hi,
I'm running into problems with blockcommit and gluster network disks -
wanted to check how to pass the path for network disks. How are the
protocol and host parameters specified?
For a backing volume chain as below, executing
virsh blockcommit fioo5
vmstore/912d9062-3881-479b-a6e5-7b074a252cb6/images/27b0cbcb-4dfd-4eeb-8ab0-8fda54a6d8a4/027a3b37-77d4-4fa9-8173-b1fedba1176c
--base
vmstore/912d9062-3881-479b-a6e5-7b074a252cb6/images/27b0cbcb-4dfd-4eeb-8ab0-8fda54a6d8a4/d4c23ec6-20ce-4a2f-9b32-ca91e65a114a
--top
vmstore/912d9062-3881-479b-a6e5-7b074a252c...
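One way to sidestep spelling out the gluster protocol/host in the path is to address the backing chain by index instead of by file name; a sketch, assuming the disk's target is vda and the indices come from the `<backingStore index='N'>` attributes in the live domain XML (both the target name and indices here are assumptions, not from the original mail):

```shell
# Commit the top image (vda[1]) into its backing file (vda[2]),
# identified by chain index rather than by network path:
virsh blockcommit fioo5 vda --base 'vda[2]' --top 'vda[1]' --wait --verbose
```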
2017 Nov 13
0
Help with reconnecting a faulty brick
On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>
> Could I just remove the content of the brick (including the .glusterfs
> directory) and reconnect ?
>
In fact, what would be the difference between reconnecting the brick
with a wiped FS, and using
gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
gluster volume heal vmstore full
As explained here:
http://lists.gluster.org/pipermail/gluster-users/2014-January/015533.html
?
Cheers,
Daniel
--
Logo FWS
*Daniel Berteaud*
FIREWALL-S...
2011 Oct 15
2
SELinux triggered during Libvirt snapshots
I recently began getting periodic emails from SEalert that SELinux is
preventing /usr/libexec/qemu-kvm "getattr" access to the directory where
I store all my virtual machines for KVM.
All VMs are stored under /vmstore, which is its own mount point, and
every file and folder under /vmstore currently has the correct context that
was set by doing the following:
semanage fcontext -a -t virt_image_t "/vmstore(/.*)?"
restorecon -R /vmstore
So far I've noticed that when taking snapshots and also...
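A quick sketch for verifying that the registered fcontext rule and the on-disk labels still agree (paths from the mail above):

```shell
# Show the registered rule, what the policy says the path should carry,
# and what the directory actually carries:
semanage fcontext -l | grep vmstore
matchpathcon /vmstore
ls -dZ /vmstore
# Dry-run relabel: report mismatches without changing anything.
restorecon -Rnv /vmstore
```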
2017 Nov 17
0
Help with reconnecting a faulty brick
...py will have non-zero trusted.afr* xattrs that blame
>> the bad one and heal will happen from good to bad. If both bricks have
>> attrs blaming the other, then the file is in split-brain.
> Thanks.
>
> So, say I have a file with this on the correct node
> # file: mnt/bricks/vmstore/prod/bilbao_sys.qcow2
> security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000
> trusted.afr.vmstore-client-0=0x00050f7e0000000200000000
> trusted.afr.vmstore-client-1=0x000000000000000100000000
> trusted.gfid=0xe86c24e5fc6b4fc6bf2b896f3cc8537d
>
> And this on...
2017 Nov 13
2
Help with reconnecting a faulty brick
Hi everyone.
I'm running a simple Gluster setup like this:
  * Replicate 2x1
  * Only 2 nodes, with one brick each
  * Nodes are CentOS 7.0, using GlusterFS 3.5.3 (yes, I know it's old,
I just can't upgrade right now)
No sharding or anything "fancy". This Gluster volume is used to host VM
images, and is used by both nodes (which are Gluster servers and
clients).
2017 Nov 15
2
Help with reconnecting a faulty brick
...eaud wrote:
>>
>> Could I just remove the content of the brick (including the
>> .glusterfs directory) and reconnect ?
>>
>
> In fact, what would be the difference between reconnecting the brick
> with a wiped FS, and using
>
> gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
> gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
> gluster volume heal vmstore full
>
> As explained here:
> http://lists.gluster.org/pipermail/gluster-users/2014-January/015533.html
>
> ?
No one can help?
Cheers,...
2017 Nov 16
0
Help with reconnecting a faulty brick
On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> On 15/11/2017 at 09:45, Ravishankar N wrote:
>> If it is only the brick that is faulty on the bad node, but
>> everything else is fine, like glusterd running, the node being a part
>> of the trusted storage pool etc., you could just kill the brick first
>> and do step-13 in "10.6.2. Replacing a Host Machine with
2017 Nov 16
2
Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote:
> If it is only the brick that is faulty on the bad node, but everything
> else is fine, like glusterd running, the node being a part of the
> trusted storage pool etc., you could just kill the brick first and do
> step-13 in "10.6.2. Replacing a Host Machine with the Same Hostname",
> (the mkdir of non-existent dir,
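For reference, the "step-13" being referred to is roughly the following trick, performed from a client mount after the faulty brick is killed and its directory recreated empty. This is a sketch of the RHGS "Replacing a Host Machine with the Same Hostname" procedure; the mount point /mnt/vmstore is an assumption:

```shell
# On a FUSE mount of the volume (assumed at /mnt/vmstore), mark the
# good copy as the heal source by touching metadata on the volume root:
mkdir /mnt/vmstore/nonexistent-dir    # any name that does not exist yet
rmdir /mnt/vmstore/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
setfattr -x trusted.non-existent-key /mnt/vmstore
# Then bring the emptied brick back and let self-heal repopulate it:
gluster volume start vmstore force
gluster volume heal vmstore full
```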
2017 Nov 15
0
Help with reconnecting a faulty brick
...e brick by restarting glusterd on that node. Read 10.5
and 10.6 sections in the doc to get a better understanding of replacing
bricks.
>> In fact, what would be the difference between reconnecting the brick
>> with a wiped FS, and using
>>
>> gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
>> gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
>> gluster volume heal vmstore full
>>
>> As explained here:
>> http://lists.gluster.org/pipermail/gluster-users/2014-January/015533.html
>>
>> ?...
2017 Nov 13
0
Prevent total volume size reduction
...ely appear bigger, up to the size of the smallest brick.
Now, I had a problem on my setup, long story short, an LVM bug has
forcibly unmounted the volumes on which my bricks are running, while
gluster was being used. The problem is that instead of having an 8TB file
system mounted on /mnt/bricks/vmstore, the server suddenly found an
empty /mnt/bricks/vmstore pointing at the / of this server (20GB).
After 3 hours during which Gluster complained about missing files on
node1 (but continuing to serve files from node2 transparently), it
decided to start healing from the correct node to this empty
/mnt/...
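One common mitigation for exactly this failure mode (a suggestion, not from the thread): put the brick in a subdirectory of the mount point. If the filesystem is ever unmounted, the brick path simply does not exist, so the brick process refuses to start instead of silently writing into an empty directory on the root filesystem:

```shell
# Brick lives one level below the mount point, so an unmounted
# /mnt/bricks/vmstore leaves no .../brick directory to start from:
mkdir -p /mnt/bricks/vmstore/brick
gluster volume create vmstore replica 2 \
    master1:/mnt/bricks/vmstore/brick master2:/mnt/bricks/vmstore/brick
```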
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:
[root at dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2: 192.168.50.2:/rhgs/brick1/vmstore
Brick3: 192.168.50.3:/rhgs/ssd/vmstore (arbiter)
Option...
2011 Oct 12
1
XML file format for snapshot-create
I've created a very basic snapshot XML file, to allow for a description of
the snapshot. However when running the virsh command, it doesn't like the
formatting.
# virsh snapshot-create proxy_0 /vmstore/proxy_0/proxy_0_ss.xml
error: XML description for failed to parse snapshot xml document is not well
formed or invalid
This are the XML file contents...
# cat proxy_0_ss.xml
<domainsnapshot>
<name></name>
<description>Before updating to CR repo</desciption>
</...
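The "not well formed" error above is consistent with the mismatched closing tag in the file: `<description>` is closed as `</desciption>`. A corrected sketch (the snapshot name is a made-up placeholder):

```xml
<domainsnapshot>
  <name>before-cr-update</name>
  <description>Before updating to CR repo</description>
</domainsnapshot>
```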
2017 Sep 14
5
Confusing lstat() performance
Hi,
I have a gluster 3.10 volume with a dir with ~1 million small files in
them, say mounted at /mnt/dir with FUSE, and I'm observing something weird:
When I list and stat them all using rsync, then the lstat() calls that
rsync does are incredibly fast (23 microseconds per call on average,
definitely faster than a network roundtrip between my 3-machine bricks
connected via Ethernet).
But
2011 Jul 28
0
Snapshot error "command savevm not found"
...boot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/vmstore/CentOS6-x86-001/CentOS6-x86-001_sys.qcow2'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</...
2011 Aug 02
1
Snapshot error "command savevm not found"
...boot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/vmstore/CentOS6-x86-001/CentOS6-x86-001_sys.qcow2'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</...
2011 Sep 22
2
Centos 6 First Install, gripes - cool things- tips/help
Finally got a new server the other day.
You know I had to try out centos 6 with this one.
dual quad cores, 24 gb ram (12 for each cpu) 6 working drives bays.
My first big surprise was the partition system with anaconda. It is a
lot different than the centos 5.x version.
I am sure it is a bug that the hot-spare options are present but stay
greyed out.
I think in the end I will
2013 Aug 21
2
Bug on PAM_Winbind ?
...password sufficient /lib/security/pam_smbpass.so
password required /lib/security/pam_winbind.so
session required /lib/security/pam_unix.so
*cat /etc/samba/smb.conf*
[Global]
available= yes
client signing= auto
server signing= auto
server string= Bla
Workgroup= DISNEY
netbios name= vmstore-4
realm= DISNEY.XXTEST.ASD-ABC.LOCALDOMAIN
password server= *
idmap backend= tdb
idmap uid= 5000-9999999
idmap gid= 5000-9999999
idmap config DISNEY : backend= rid
idmap config DISNEY : range= 10000000-19999999
security= ADS
name resolve order= wins host bcast lmhosts
client use spnego= yes
dns pro...
2011 Sep 25
8
Unable to get IP address in guest OS through DHCP
...IP address of the type 192.168.1.0/24.
The configuration file is:
kernel = "/mnt/dati/xen/kernel/vmlinuz-xen-3.0.4-domU"
memory = "512"
name = "gentoo-10.0-x86_64"
vif = ['bridge=xenbr0']
dhcp = "dhcp"
disk = ['file:/mnt/data/xen/vmstore/gentoo-10.0/gentoo-10.0.x86-64.img,xvda1,w',
'file:/mnt/data/xen/vmstore/gentoo-10.0/swap.img,xvda2,w']
root = "/dev/xvda1 ro"
vcpus = 2
extra = 'console=hvc0 xencons=tty'
ip = "off"
Thanks
--
Flavio
____________________________...
2012 Jun 21
1
echo 0 > /proc/sys/kernel/hung_task_timeout_secs and others error, Part II
...list the slotmap information, node1 is not there.
But the heartbeat information on disk is OK.
And there are a lot of "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" error messages in the log.
We format the device with 32 node using the command:
mkfs.ocfs2 -b 4k -C 1M -L target100 -T vmstore -N 32 /dev/sdb
So we had to delete the ocfs2 cluster, reboot the nodes, and rebuild the ocfs2 filesystem.
After all nodes join the cluster, we copy data again, and the "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" messages still appear.
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.006781] IN...