similar to: Gluster performance with VM's

Displaying 20 results from an estimated 400 matches similar to: "Gluster performance with VM's"

2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody, please help me fix a problem. I have a distributed-replicated volume between two servers. On each server I have 2 RAID-10 arrays, which are replicated between the servers.
Brick gl1:/mnt/brick1/gm0 49153 0 Y 13910
Brick gl0:/mnt/brick0/gm0 N/A N/A N N/A
Brick gl0:/mnt/brick1/gm0 N/A
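A minimal sketch of bringing the offline bricks back up and letting self-heal resync them, assuming the volume is named gm0 as in the status output above:
    # restart only the brick processes that are down (online bricks are untouched)
    gluster volume start gm0 force
    # confirm all bricks now show Online = Y
    gluster volume status gm0
    # trigger and monitor the resync
    gluster volume heal gm0 full
    gluster volume heal gm0 info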
2008 May 16
1
NetBios name resolution from WINDOWS
Hi, I've been following a similar and current thread about name resolution from the Linux side. I have exactly the opposite problem (running Ubuntu 7.10 on a network with a mixture of Ubuntu 6.06, WinXP and WinVista boxes). The Windows boxes cannot see or mount this 7.10 box, yet I can mount WinPC shares on this 7.10 box. I was wondering about firewalls, but I can ping this 7.10 from the WinBox by IP address
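A hedged starting point for this kind of one-way visibility is to check that Samba is announcing a NetBIOS name and that the name-service ports are reachable; the names below are placeholders, not taken from the thread:
    # /etc/samba/smb.conf (fragment)
    [global]
        workgroup = WORKGROUP
        netbios name = UBUNTU710

    # nmbd must be running, and UDP 137/138 plus TCP 139/445 must not be firewalled
    nmblookup UBUNTU710           # does broadcast name resolution resolve the box?
    smbclient -L //UBUNTU710 -N   # list the shares the box actually exports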
2013 Mar 16
1
different size of nodes
hi All, There is a distributed cluster with 5 bricks:
gl0
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda4   5.5T  4.1T  1.5T   75%   /mnt/brick1
gl1
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda4   5.5T  4.3T  1.3T   78%   /mnt/brick1
gl2
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda4   5.5T  4.1T  1.4T   76%   /mnt/brick1
gl3
Filesystem  Size  Used
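If the uneven usage comes from files laid out before bricks were added or resized, a rebalance is the usual answer; a sketch assuming the volume is called gl-vol (the real name is not shown in the excerpt):
    # recompute the layout and migrate files so the bricks fill more evenly
    gluster volume rebalance gl-vol start
    gluster volume rebalance gl-vol status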
2008 Jun 26
1
gmirror+gjournal: unable to boot after crash
Hi, after one month with gmirror and gjournal running on a 7.0-RELEASE #p2 amd64 box (built from latest CVS source), it hung a couple of times under high disk load. Finally, after a crash while building some port, it won't boot, for no reason obvious to me. This is what I get with kern.geom.mirror.debug=2:
ata2-master: pio=PIO4 wdma=WDMA2 udma=UDMA133 cable=40 wire
ad4: 476940MB <SAMSUNG HD501LJ
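For a mirror that refuses to come up after a crash, a commonly suggested sequence from a fixit/live boot looks roughly like this; the second disk name is a guess based on the ad4 line above:
    gmirror status                  # which consumers does gm0 still know about?
    gmirror forget gm0              # drop components that are recorded but no longer usable
    gmirror insert gm0 /dev/ad6     # re-add the other disk and let it rebuild
    gmirror status                  # watch the synchronization progress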
2023 Aug 10
2
orphaned snapshots
I've never had such a situation and I don't recall someone sharing something similar. Most probably it's easier to remove the node from the TSP and re-add it. Of course, test the case in VMs just to validate that it's possible to add a node to a cluster with snapshots. I have a vague feeling that you will need to delete all snapshots. Best Regards, Strahil Nikolov On Thursday, August 10, 2023, 4:36
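A rough sketch of the remove-and-re-add approach described above, with gl-node2 as a placeholder peer name; as suggested, try it on throwaway VMs first:
    gluster snapshot list                 # find the orphaned snapshots
    gluster snapshot delete snap_name     # or: gluster snapshot delete all
    gluster peer detach gl-node2          # remove the node from the trusted storage pool
    gluster peer probe gl-node2           # re-add it
    gluster peer status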
2005 Jul 01
2
make error for zaptel
Hi, I'm running SuSE 9.3, fully updated using YOU, and I *have* rebooted the box (in the hope of sorting out the uname -r issue mentioned below). I'm using the Asterisk Doc Proj vol 1 to guide me through the initial setup. I have no special HW and intend to use Asterisk on an internal network just to get some experience. I have downloaded what I think I need and placed it in /usr/src (see
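Zaptel builds against the running kernel, so the usual first check is that the installed kernel sources actually match uname -r; a minimal sanity check, with the SuSE package name given as an assumption:
    uname -r                             # the version the module build will target
    ls /lib/modules/$(uname -r)/build    # must point at matching kernel sources/headers
    rpm -q kernel-source                 # assumed package name for the sources on SuSE 9.3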
2012 Apr 20
1
GEOM_PART: integrity check failed (mirror/gm0, MBR) on FreeBSD 8.3-RELEASE
I just did a source upgrade from 8.2 to 8.3. The system boots but has this warning:
GEOM_PART: integrity check failed (mirror/gm0, MBR)
Google points to issues with FreeBSD 9 and the need to migrate to GPT, but I wasn't expecting this with 8.3! Are there any quick fixes to eliminate this warning, or is it safe to ignore?
sudo gpart list:
Geom name: mirror/gm0
modified: false
state:
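One workaround that keeps coming up for this message is to relax the partition-table integrity check at boot; whether the tunable is present in 8.3 is an assumption, so treat this as a sketch and inspect the table first:
    # look at the slice layout on the mirror provider
    gpart show mirror/gm0

    # /boot/loader.conf -- skip the strict MBR geometry check (assumption: available in 8.3)
    kern.geom.part.check_integrity="0"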
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:

Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks!

> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
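For reference, converting an existing replica 2 volume to use an arbiter looks roughly like this on recent GlusterFS releases; the host and brick path are placeholders, and one arbiter brick is needed per replica pair:
    gluster volume add-brick VOLNAME replica 3 arbiter 1 arbiter-host:/bricks/arb/VOLNAME
    gluster volume info VOLNAME    # type should now show the arbiter count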
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users, Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit. Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
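The knob that governs the "is there room to place a new file here" decision at the distribute layer is cluster.min-free-disk; a quick sketch using the test volume from the excerpt:
    # reserve 20% (or an absolute size) on each brick before DHT stops placing new files there
    gluster volume set test cluster.min-free-disk 20%
    gluster volume info test    # reconfigured options are listed at the bottom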
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than
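When the contents of two replica bricks drift apart like this, the self-heal counters are usually the first thing to check; a sketch assuming the volume is named vmstore (the real name is not shown in the excerpt):
    gluster volume heal vmstore info               # files pending heal on each brick
    gluster volume heal vmstore info split-brain   # entries the replica cannot reconcile on its own
    gluster volume heal vmstore statistics heal-count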
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated servers for 16 qemu/kvm/libvirt virtual machines using image files stored in gluster and accessed via libgfapi. Eight of these disk images are standalone, while the other eight are qcow2 images which all share a single backing file. For the most part, this is all working very well. However, one of the gluster servers
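For context, libgfapi access means qemu opens the images through a gluster:// URI instead of a FUSE mount; a hypothetical example with placeholder host, volume and path names, assuming qemu was built with the gluster block driver:
    # inspect one of the qcow2 images directly over libgfapi (no FUSE mount needed)
    qemu-img info gluster://gluster-server1/vmstore/images/vm01.qcow2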
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume: gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume: gluster volume add-brick VOLUME rep 3
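The excerpt is cut off mid-command; for a disk swap like this, replace-brick is the other route worth knowing. A sketch using the old path from the message, with the new path as an assumption:
    # tell gluster the brick moved, then let self-heal repopulate the new disk
    gluster volume replace-brick VOLUME \
        server2:/gluster/VOLUME/brick0/brick \
        server2:/gluster/VOLUME/brick1/brick \
        commit force
    gluster volume heal VOLUME full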
2009 Feb 23
1
Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and distribute. Which method would be better:
server  sdb1
xen0    brick0
xen1    mirror0
xen2    brick1
xen3    mirror1
replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
distribute unify - block0 block1
or
server  sdb1    sdb2
xen0    brick0  mirror3
xen1    brick1  mirror0
xen2    brick2  mirror1
xen3    brick3  mirror2
replicate block0 -
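On current GlusterFS the first layout maps onto a single create command, where consecutive bricks in the list form the replica pairs; a sketch with placeholder export paths:
    # replica pairs: (xen0, xen1) and (xen2, xen3); distribute spans the two pairs
    gluster volume create unify replica 2 \
        xen0:/export/sdb1/brick0 xen1:/export/sdb1/mirror0 \
        xen2:/export/sdb1/brick1 xen3:/export/sdb1/mirror1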
2010 Mar 14
3
likelihood ratio test between glmer and glm
I am currently running a generalized linear mixed-effects model using glmer and I want to estimate how much of the variance is explained by my random factor. summary(glmer(cbind(female,male)~date+(1|dam),family=binomial,data=liz3))
Generalized linear mixed model fit by the Laplace approximation
Formula: cbind(female, male) ~ date + (1 | dam)
   Data: liz3
  AIC  BIC  logLik  deviance
241.3
2011 Mar 22
3
Urgent query about R!
Hi there, I am currently working on an R programming project and got stuck. I am supposed to generate the set of 1296 different combinations of 4 numbers, e.g. 1111, 1234, 2361 (containing only the digits 1 to 6), in matrix form. Here is what I have; it has not been working, as it keeps producing the same number in every row. The code: gl1<- gl(1296,1,length=1296, labels=1:1296,
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect a brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then update those attrs with stat commands. Example here: http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
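The "reset attrs and reconnect" recipe referenced above usually boils down to something like this on the old brick directory; a sketch only, with the path reused from the earlier message, and the linked post has the full stat-based variant:
    # on the server holding the old brick data
    setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
    setfattr -x trusted.gfid                /gluster/VOLUME/brick0/brick
    rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
    # then add the brick back and let self-heal re-create the metadata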
2007 Apr 04
1
sun x2100 gmirror problem
Hi, We're using gmirror on our sun fire x2100 and FreeBSD 6.1-p10. Some days ago I found this in the logs:
Apr 1 02:12:05 x2100 kernel: ad6: WARNING - WRITE_DMA48 UDMA ICRC error (retrying request) LBA=612960533
Apr 1 02:12:05 x2100 kernel: ad6: FAILURE - WRITE_DMA48 status=51<READY,DSC,ERROR> error=10<NID_NOT_FOUND> LBA=612960533
Apr 1 02:12:05 x2100 kernel: GEOM_MIRROR:
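If ad6 keeps throwing ICRC/DMA errors, the usual gmirror move is to drop the failing disk and rebuild onto a replacement; a sketch, assuming the mirror is named gm0:
    gmirror status gm0              # confirm which consumer is degraded
    gmirror remove gm0 ad6          # detach the failing disk
    # swap the hardware, then re-add the disk and let it resynchronize
    gmirror insert gm0 /dev/ad6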
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .gluster and attr already in
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive. The archive stores webpages collected by our spiders. The test setup consists of three data machines, each exporting a volume of about 3.7TB, and one nameserver machine. The file layout is such that each host has its own directory; for example, the GlusterFS website would be located in:
2012 Nov 27
1
Performance after failover
Hey, all. I'm currently trying out GlusterFS 3.3. I've got two servers and four clients, all on separate boxes. I've got a Distributed-Replicated volume with 4 bricks, two from each server, and I'm using the FUSE client. I was trying out failover, currently testing for reads. I was reading a big file, using iftop to see which server was actually being read from. I put up an