
Displaying 20 results from an estimated 700 matches similar to: "[ovirt-users] VM paused due unknown storage error"

2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik, Thanks for the info. Maybe the documentation should be updated to explain the different AFR versions; I know I was confused. Also, when looking at the changelogs from my three bricks before fixing:
Brick 1:
trusted.afr.virt_images-client-1=0x000002280000000000000000
trusted.afr.virt_images-client-3=0x000000000000000000000000
Brick 2:
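For reference, a minimal sketch of how those changelog xattrs are usually read directly on a brick (brick path from the volume info in this thread; the file name is a hypothetical placeholder):

# dump the AFR changelog xattrs for one file, in hex, on the brick itself
getfattr -d -m trusted.afr -e hex /data/virt_images/brick/vm-image.qcow2
# a non-zero trusted.afr.<volume>-client-N value means this brick is
# recording pending operations against brick N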
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey Henrik, Good to know that the issue got resolved. I will try to answer some of the questions you have.
- The time taken to heal the file depends on its size. That's why you were seeing some delay in getting everything back to normal in the heal info output.
- You did not hit the split-brain situation. In split-brain all the bricks will be blaming the other bricks. But in your case the
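For reference, a minimal sketch of the heal-status commands being discussed (volume name taken from this thread):

# files still pending heal, per brick
gluster volume heal virt_images info
# only files that are genuinely in split-brain
gluster volume heal virt_images info split-brain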
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben, I'll try and reply to you inline.

On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hey,
>
> Can you give us the volume info output for this volume?

# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks:
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik, Thanks for providing the required outputs. See my replies inline.

On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote:
> Hi Karthik and Ben,
>
> I'll try and reply to you inline.
>
> On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
> > Hey,
> >
> > Can you give us the
2010 Sep 24
2
grep contents of file on remote server
Hello, I am attempting to grep the contents of a key file I have SCP'd to a remote server. I am able to cat it:
[code]
[bluethundr at LBSD2:~]$:ssh root at sum1 cat /root/id_rsa.pub
root at lcent01.summitnjhome.com's password:
ssh-rsa
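A minimal sketch of running grep on the remote side instead of cat, assuming the same host and key file as above (the pattern is just an example):

# quote the pattern so the remote shell receives it intact
ssh root@sum1 "grep 'ssh-rsa' /root/id_rsa.pub"
# or keep the remote cat and filter locally
ssh root@sum1 cat /root/id_rsa.pub | grep 'ssh-rsa'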
2010 Dec 13
2
Deploying libvirt with live migration
I have two physical servers: Virt1 and Virt2. I'm setting up live migration with CentOS 5.5 between the two. I've done this by NFS mounting /etc/libvirt and /var/lib/libvirt/images on both servers. This is working well for me except for one thing: I see the same list of VMs on each server (as expected), but each server (Virt1 and Virt2) is able to start the same VM at the same time.
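A hedged sketch of the migration command itself, using the hostnames from this post and a hypothetical guest name; note that libvirt alone does not stop both hosts from starting the same VM, which is normally handled by a lock manager (e.g. sanlock/virtlockd) or cluster tooling:

# from Virt1: push a running guest to Virt2 over SSH
virsh migrate --live myguest qemu+ssh://virt2/system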
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
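A rough sketch of the shape of that procedure, not a definitive recipe; the exact xattr names and values to set depend on which copy you decide to keep, per the linked guide (paths follow the example above):

# 1. inspect the pending changelog xattrs on each copy
getfattr -d -m trusted.afr -e hex /gfs/brick-b/a
# 2. adjust the xattr on the copy you are discarding, e.g.
setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
# 3. trigger a heal so the chosen copy is propagated
gluster volume heal <volname>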
2020 Oct 02
0
Centos8: Glusterd do not start correctly when I startup or reboot all server together
Modifying the systemd glusterd.service unit did not resolve my problem. The solution is to mount the glusterfs volume with this line in /etc/fstab:
virt2:/gfsvol2 /virt-gfs glusterfs defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=20,x-systemd.requires=glusterd.service 0 0
Then run systemctl daemon-reload, and run this to mount the volume: systemctl restart virt\\x2dgfs.mount
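For reference, a small sketch of the systemd side of this; the escaped unit name can be derived rather than typed by hand (mount point as above):

systemctl daemon-reload                      # pick up the edited /etc/fstab
systemd-escape -p --suffix=mount /virt-gfs   # should print virt\x2dgfs.mount
systemctl restart 'virt\x2dgfs.mount'        # quote so the shell keeps the backslash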
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi, I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey, Can you give us the volume info output for this volume? Why are you not able to get the xattrs from the arbiter brick? It is done the same way as on the data bricks. The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in the getxattr outputs you have provided. Did you do a remove-brick and add-brick at any time? Otherwise it would usually be trusted.afr.virt_images-client-{0,1,2}.
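For reference, a minimal sketch of correlating those client-N xattrs with the bricks (volume name from this thread); the index normally follows the brick order in the volume info, and remove-brick/add-brick shifts it:

# list bricks in order; client-0 maps to Brick1, client-1 to Brick2, and so on,
# unless bricks were removed/added, in which case old indices are not reused
gluster volume info virt_images | grep -E '^Brick[0-9]'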
2020 Sep 28
1
Centos8: Glusterd do not start correctly when I startup or reboot all server together
I have installed and configured glusterfs in replica mode on two CentOS 8 servers in this manner:
dnf install centos-release-gluster -y
dnf install glusterfs-server glusterfs glusterfs-fuse -y
systemctl enable --now glusterd
gluster peer probe virt1
gluster peer status
sh creavolume.sh gfsvol1 301G /gfsvol1 xfs # NOTE: this is my shell script to create the fs on lvm
mkdir
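A hedged sketch of the usual next steps for a two-node replica volume; the brick directory and volume name below are assumptions, since the snippet cuts off at mkdir:

# on both nodes, create a brick directory on the new filesystem
mkdir -p /gfsvol1/brick1
# from one node, create and start a replica 2 volume across both servers
gluster volume create gfsvol2 replica 2 virt1:/gfsvol1/brick1 virt2:/gfsvol1/brick1
gluster volume start gfsvol2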
2017 Nov 15
1
unable to remove brick, pleas help
Hi, I am trying to remove a brick from a server which is no longer part of the gluster pool, but I keep running into errors for which I cannot find answers on google.
[root at virt2 ~]# gluster peer status
Number of Peers: 3
Hostname: srv1
Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb
State: Peer in Cluster (Disconnected)
Hostname: srv3
Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825
State: Peer in
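A hedged sketch of the commands usually involved when the brick's host is gone for good (volume and brick names below are hypothetical):

# drop the brick on the dead peer; "force" skips data migration
# ("replica 2" assumes a replica 3 volume being reduced; omit it for a plain distribute volume)
gluster volume remove-brick <volname> replica 2 srv1:/data/brick force
# once no volume references the peer any more, detach it
gluster peer detach srv1 force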
2015 Sep 16
0
Problem installing R on Debian 6
Dear Hartyun Khachatryan, thanks for your report. It seems not a lot of people are using the backport to squeeze, as I had forgotten to update the package index after doing the backport for R 3.2.2 and nobody (including me) noticed... The updated index is on its way to the CRAN mirrors; it will take some time to be synchronised, so please try again tomorrow. Johannes On Wednesday, 16.
2017 Dec 07
2
devirtualization with new-PM pipeline
Chandler et al, I have been playing with the new PM pipeline, being particularly interested in how it can handle devirtualization. Now, I discovered what I believe is a "regression" vs old PM on a rather simple one-translation-unit testcase. clang is able to devirtualize it with -O3 and fails to do so with -fexperimental-new-pass-manager added. It looks like a pipeline issue,
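For reference, a minimal sketch of the comparison being described (the testcase file name is hypothetical; -fexperimental-new-pass-manager was the opt-in flag for the new PM at that time):

# legacy pass manager: the indirect call gets devirtualized and inlined
clang++ -O3 -S -emit-llvm devirt.cpp -o old-pm.ll
# new pass manager: same input, the devirtualization is missed
clang++ -O3 -fexperimental-new-pass-manager -S -emit-llvm devirt.cpp -o new-pm.ll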
2007 Dec 31
1
Segmentation fault in dovecot-sieve-1.1.2 + dovecot-1.1.beta13
Hi, dovecot-sieve-1.1.2 + dovecot-1.1.beta13 segfaults with the following sieve filter:
---
require ["imapflags"];
if header :contains "subject" ["test"] {
  addflag "$testflag";
}
---
when a message with a subject containing "test" is delivered via dovecot lda. The fault backtrace is:
(gdb) run
Starting
2017 Dec 14
2
devirtualization with new-PM pipeline
Yes, this looks broken in the new PM. The DevirtSCCRepeatedPass::run method first scans the functions in the SCC to collect value handles for indirect calls, runs the CGSCC pass pipeline, and then checks if any of the call value handles now point to a direct call, in which case it runs the pipeline again (which should inline the devirtualized call). The problem is scanning the initial SCC for
2010 Jun 15
1
Solaris 10 Branded Zones & Exclusive IP Zones
In an effort to get a better understanding of Crossbow I decided to create some vnics for use between a couple of Solaris 10 branded zones. I was quite surprised that when I went to verify the setup within zonecfg I got the following error.
zonecfg:virt1> verify
Error: solaris10 zones do not currently support exclusive ip-type stacks
virt1: Brand-specific error
Is this a feature that is
2005 Aug 15
16
swig_up
Tracing down some things to add in validators and I've run across something that kinda bothers me... In order to implement validators you have to override the clone method. The directors seem to be set up to specifically handle this situation. However, whenever C++ calls back to the object's methods the swig_get_up function is returning false. It seems like swig_up
2010 Nov 16
5
ssh prompting for password
hello list I have a network mounted home directory shared between all hosts on my network:
[bluethundr at LCENT03:~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  140G  4.4G  128G   4% /
/dev/sda1                         99M   35M   60M  37% /boot
tmpfs                            1.6G     0  1.6G   0% /dev/shm
nas.summitnjhome.com:/mnt/nas
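A hedged sketch of the usual first checks when pubkey auth falls back to a password prompt on a shared NFS home (the actual cause here may well be something else; the target host is a placeholder):

# sshd is strict about these permissions on the shared home directory
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# with one NFS-mounted home, installing the key once covers every host
ssh-copy-id bluethundr@<target-host>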
2014 Mar 07
1
Fw: Default shell in Debian 6 of R is SH instead of BASH
Dear all, I am experiencing this problem with R in Debian 6. The configuration setting for the default shell in R on Debian 6 is in the file /usr/lib/R/etc/Makeconf (line 80, "SHELL = /bin/bash"), but on Debian 6 the default shell for R is always SH instead of BASH. It can be checked by just typing system("echo $BASH_VERSION") in R. The same command system("echo $BASH_VERSION") in R
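For reference, a minimal sketch of checking both places (paths as in the post; assumes Rscript is on the PATH):

# the SHELL configured for R's build tooling, from Makeconf
grep -n '^SHELL' /usr/lib/R/etc/Makeconf
# observe which shell system() actually invokes from inside R
Rscript -e 'system("echo running under: $0")'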