
Displaying 20 results from an estimated 100 matches similar to: "Geo-replication faulty"

2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
Hi, I have a Glusterfs (v3.11.2-1) geo replication master-slave setup between two sites. The idea is to provide an off-site backup for my storage. When I start the session, I get the following message: [2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage Then it starts syncing the data but it stops at the
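A quick way to test the hypothesis that the backend storage rejects extended attributes (a probe I am suggesting, not something from the thread; the path is a placeholder, point it at the slave brick's filesystem):

```shell
# Probe whether a filesystem accepts user extended attributes.
# Note: gluster itself uses trusted.* xattrs (root only), so a user.*
# probe is a weaker but convenient proxy for xattr support.
target=$(mktemp ./xattr-probe.XXXXXX)
if setfattr -n user.georep.probe -v ok "$target" 2>/dev/null &&
   [ "$(getfattr -n user.georep.probe --only-values "$target" 2>/dev/null)" = ok ]; then
    result="xattrs supported"
else
    result="xattrs NOT supported"
fi
echo "$result"
rm -f "$target"
```

If the probe fails on the slave brick, the geo-replication error above is expected regardless of the gluster configuration.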
2008 Jan 05
2
Multisim 10 under Wine doesn't work?
Hello. I'm trying to install Multisim 10 under Wine, but it fails with an "Unhandled MSI Error". Final error messages on the console are: renan at gothic:~/ewb_temp$ wine setup.exe fixme:volume:GetVolumePathNameW (L"I:\\ewb_temp\\support\\Resource_eng.dll", 0x717e28, 38), stub! fixme:volume:GetVolumePathNameW (L"I:\\ewb_temp\\support\\nipie.exe", 0x718970,
2017 Nov 13
0
Help with reconnecting a faulty brick
On 13/11/2017 at 10:04, Daniel Berteaud wrote: > > Could I just remove the content of the brick (including the .glusterfs > directory) and reconnect ? > In fact, what would be the difference between reconnecting the brick with a wiped FS, and using gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore gluster volume add-brick myvol replica 2
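For context, the remove/add sequence being asked about would look roughly like this on a 2-node replica volume (a sketch only; volume and brick names follow the quoted message, and this must be run against a live cluster):

```shell
# Drop the faulty brick, shrinking the replica count to 1...
gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore force
# ...then re-add it (empty) to return to replica 2 and trigger a full heal.
gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
gluster volume heal vmstore full
```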
2013 Apr 09
1
Faulty manual?
Hi, Look under examples: http://builder.virt-tools.org/artifacts/libvirt-virshcmdref/html/sect-attach-disk.html (root@h2)-(/)# virsh attach-disk vps_99 /dev/nbd2 vdb --address pci:0000.00.11.0 --persistent *error: command 'attach-disk' doesn't support option --address* (root@h2)-(/)# virsh --version 0.7.5 Is someone playing a trick on me? Regards, Daniele -------------- next
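virsh 0.7.5 predates the `--address` option shown in that manual page, so the manual is documenting a newer release. One workaround (my suggestion, not from the thread) is `virsh attach-device` with a disk XML fragment that carries the `<address>` element explicitly:

```shell
# Build a disk XML fragment matching the attach-disk call above; the
# device and slot values come from the quoted command.
cat > disk-vdb.xml <<'EOF'
<disk type='block' device='disk'>
  <source dev='/dev/nbd2'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
</disk>
EOF
# Then, against the running libvirt (not runnable here):
#   virsh attach-device vps_99 disk-vdb.xml
```

Whether the very old 0.7.5 honours the `<address>` element is itself uncertain; upgrading libvirt is the cleaner fix.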
2017 Sep 04
0
linux-4.13/drivers/gpu/drm/nouveau/nvkm/subdev/therm/fan.c:86: possible faulty logic ?
Hello there, [linux-4.13/drivers/gpu/drm/nouveau/nvkm/subdev/therm/fan.c:93]: (warning) Opposite inner 'if' condition leads to a dead code block. Source code is if (target != duty) { u16 bump_period = fan->bios.bump_period; u16 slow_down_period = fan->bios.slow_down_period; u64 delay; if (duty > target) delay = slow_down_period;
2013 Aug 31
2
Auto-blocking faulty login attempts
Dear group, How can I block login attempts to dovecot after 5 failed tries? -- Best regards, Jos Chrispijn --- Artificial intelligence is no match for natural stupidity
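Dovecot of that era does not ban clients on its own; a common approach (my suggestion, fail2ban is not mentioned in the thread) is a fail2ban jail watching the Dovecot log. Written to the working directory here for inspection; the real file would live under /etc/fail2ban/jail.d/:

```shell
# Ban an IP after 5 failed Dovecot logins via fail2ban.
# Sketch only: timings are arbitrary examples.
cat > dovecot.local <<'EOF'
[dovecot]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
EOF
```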
2009 Aug 26
2
faulty formatting of toLatex(sessionInfo())
Dear all I am writing an Sweave document and have encountered formatting issues with the "locale" part of toLatex(sessionInfo()). The fact that there are no spaces between the various "locale" variables means that LaTeX cannot easily find an appropriate place to break the lines, and some will get printed off screen. Below is the text output, and this .pdf document [1] shows the
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: > > > > On 13/11/2017 at 21:07, Daniel Berteaud wrote: >> >> On 13/11/2017 at 10:04, Daniel Berteaud wrote: >>> >>> Could I just remove the content of the brick (including the >>> .glusterfs directory) and reconnect ? >>> >> If it is only the brick that is faulty on the bad node,
2002 Nov 18
1
write access to shares on PDC faulty (samba 2.2.6)
Hi, I just configured samba (2.2.6) on a Linux box (2.4.18) to act as a PDC. I can bring W2K PCs into the domain, and I can read the data on exported shares. However, when I try to write data, the following occurs: 1. trying to create directories (AKA folders :-) from the W2K GUI using "New Folder": Error Message Box "Unable to create the folder 'New Folder'. Cannot create a
2017 Nov 17
0
Help with reconnecting a faulty brick
On 11/17/2017 03:41 PM, Daniel Berteaud wrote: > On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > >> On 11/16/2017 12:54 PM, Daniel Berteaud wrote: >>> Any way in this situation to check which file will be healed from >>> which brick before reconnecting ? Using some getfattr tricks ? >> Yes, there are afr
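The "getfattr tricks" alluded to are reading the AFR changelog attributes directly on each brick; a sketch with a hypothetical brick path (must be run as root on the gluster nodes, so it is not runnable here):

```shell
# Dump all extended attributes of a file's on-brick copy in hex.
getfattr -d -m . -e hex /mnt/bricks/vmstore/path/to/file
# Non-zero trusted.afr.<volume>-client-<N> values mark pending changes
# blamed on brick N, i.e. which copy would be healed from which.
```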
2017 Nov 16
0
Help with reconnecting a faulty brick
On 11/16/2017 12:54 PM, Daniel Berteaud wrote: > On 15/11/2017 at 09:45, Ravishankar N wrote: >> If it is only the brick that is faulty on the bad node, but >> everything else is fine, like glusterd running, the node being a part >> of the trusted storage pool etc., you could just kill the brick first >> and do step-13 in "10.6.2. Replacing a Host Machine with
2013 Mar 06
1
aov() and anova() making faulty F-tests
Dear useRs, I've just encountered a serious problem involving the F-test being carried out in aov() and anova(). In the provided example, aov() is not making the correct F-test for an hypothesis involving the expected mean square (EMS) of a factor divided by the EMS of another factor (i.e., instead of the error EMS). Here is the example: Expected Mean Square
2017 Nov 13
2
Help with reconnecting a faulty brick
Hi everyone. I'm running a simple Gluster setup like this: * Replicate 2x1 * Only 2 nodes, with one brick each * Nodes are CentOS 7.0, using GlusterFS 3.5.3 (yes, I know it's old, I just can't upgrade right now) No sharding or anything "fancy". This Gluster volume is used to host VM images, and is used by both nodes (which are gluster server and clients).
2017 Jun 09
0
substitution of two faulty servers
Go for replace brick. On Fri, 9 Jun 2017 at 19:29, Erekle Magradze <erekle.magradze at recogizer.de> wrote: > Hello, > > I have glusterfs 3.8.9, integrated with oVirt. > > glusterfs is running on 6 servers, I have one brick from each server for > oVirt virtdata volume (used for VM images) > > and 2 bricks from each servers (12 bricks in total) for data volume, >
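"Go for replace brick" expands to roughly the following, once the new server has joined the pool (a sketch with hypothetical host and path names; the volume name comes from the question below, and this must run against a live cluster):

```shell
gluster peer probe newserver1
# Swap the retiring server's brick for one on the new host; self-heal
# then repopulates the new brick from the surviving replicas.
gluster volume replace-brick virtdata oldserver1:/bricks/virtdata \
    newserver1:/bricks/virtdata commit force
gluster volume heal virtdata info   # monitor heal progress
```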
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote: > > On 13/11/2017 at 10:04, Daniel Berteaud wrote: >> >> Could I just remove the content of the brick (including the >> .glusterfs directory) and reconnect ? >> > > In fact, what would be the difference between reconnecting the brick > with a wiped FS, and using > > gluster volume remove-brick vmstore
2012 Aug 23
3
System will not boot - faulty fstab?
I believe that I made a boo boo recently when recovering some unused disk space. Without going into painfully embarrassing detail I need to delete an entry in fstab for a now non-existent logical volume. The system reports that there is a bad superblock for said logical volume. Mainly I expect because there isn't one anymore. How do I edit fstab so as to remove the mount request? For
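The usual route is to boot rescue media (or pass a rescue option at the boot prompt), mount the root filesystem, and comment out the stale line. Demonstrated here on a sample copy of fstab with a hypothetical LV name; on the real system the target would be /etc/fstab:

```shell
# Sample fstab with one stale logical-volume entry (names are made up).
cat > fstab.sample <<'EOF'
/dev/mapper/vg0-root /      ext4 defaults 1 1
/dev/mapper/vg0-old  /data  ext4 defaults 1 2
EOF
# Comment out the stale entry rather than deleting it, so it can be
# restored if needed. GNU sed's & recalls the matched text.
sed -i 's|^/dev/mapper/vg0-old|#&|' fstab.sample
grep old fstab.sample
```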
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'. As Steve mentions, the nonexistent reference in the logs looks like the culprit especially seeing that the ssh command trying to be run is printed on an earlier line with the incorrect remote path. I have followed the configuration steps as documented in
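The '/nonexistent/gsyncd' placeholder usually means the session never learned the slave's real gsyncd path. A commonly suggested fix (hedged: session names here are hypothetical, and the path matches typical RPM installs) is to set it explicitly:

```shell
# Point the geo-rep session at the slave's actual gsyncd binary,
# then restart the session.
gluster volume geo-replication mastervol slavehost::slavevol \
    config remote-gsyncd /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication mastervol slavehost::slavevol start
```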
2017 Jun 09
2
substitution of two faulty servers
Hello, I have glusterfs 3.8.9, integrated with oVirt. glusterfs is running on 6 servers, I have one brick from each server for oVirt virtdata volume (used for VM images) and 2 bricks from each servers (12 bricks in total) for data volume, which is used as a file storage (with different sizes). I would like to substitute 2 servers with new ones, due to the hardware reorganization. So for
2017 Nov 16
2
Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote: > If it is only the brick that is faulty on the bad node, but everything > else is fine, like glusterd running, the node being a part of the > trusted storage pool etc., you could just kill the brick first and do > step-13 in "10.6.2. Replacing a Host Machine with the Same Hostname", > (the mkdir of non-existent dir,
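The quoted step-13 procedure, sketched with hypothetical names (vmstore volume, /mnt/vmstore mount point; needs root on a live cluster, so it is not runnable here):

```shell
gluster volume status vmstore        # note the PID of the faulty brick
kill -15 "$BRICK_PID"                # stop only that brick process
# From a client mount, mark the volume root as needing heal:
mkdir /mnt/vmstore/nonexistent-dir && rmdir /mnt/vmstore/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
setfattr -x trusted.non-existent-key /mnt/vmstore
gluster volume start vmstore force   # bring the brick back up
gluster volume heal vmstore info     # watch the heal proceed
```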
2023 Oct 25
1
Replace faulty host
Hi all, I have a problem with one of our gluster clusters. This is the setup: Volume Name: gds-common Type: Distributed-Replicate Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6 Status: Started Snapshot Count: 26 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: urd-gds-031:/urd-gds/gds-common Brick2: urd-gds-032:/urd-gds/gds-common Brick3: urd-gds-030:/urd-gds/gds-common