search for: rhgs

Displaying 20 results from an estimated 20 matches for "rhgs".

2017 Jul 31
1
Hot Tier
...as well. Will give it a try. Here is the volume info (no hot tier at this time)

~]# gluster v info home
Volume Name: home
Type: Disperse
Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: MMR01:/rhgs/b0/data
Brick2: MMR02:/rhgs/b0/data
Brick3: MMR03:/rhgs/b0/data
Brick4: MMR04:/rhgs/b0/data
Brick5: MMR05:/rhgs/b0/data
Brick6: MMR06:/rhgs/b0/data
Brick7: MMR07:/rhgs/b0/data
Brick8: MMR08:/rhgs/b0/data
Brick9: MMR09:/rhgs/b0/data
Brick10: MMR10:/rhgs/b0/data
Brick11: MMR11:/rhgs/b0/data
Brick12:...
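For reference, attaching a hot tier to a volume like this used gluster's tier commands. A minimal sketch, assuming two spare SSD bricks (the hostnames and brick paths below are hypothetical, not from the thread):

# attach a 2-brick replicated hot tier (paths are hypothetical)
gluster volume tier home attach replica 2 MMR01:/rhgs/ssd/hot MMR02:/rhgs/ssd/hot
# and to remove it again later:
gluster volume tier home detach start
gluster volume tier home detach commit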
2017 Aug 01
0
Hot Tier
...hot tier at this time)
>
> ~]# gluster v info home
>
> Volume Name: home
> Type: Disperse
> Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 4) = 12
> Transport-type: tcp
> Bricks:
> Brick1: MMR01:/rhgs/b0/data
> Brick2: MMR02:/rhgs/b0/data
> Brick3: MMR03:/rhgs/b0/data
> Brick4: MMR04:/rhgs/b0/data
> Brick5: MMR05:/rhgs/b0/data
> Brick6: MMR06:/rhgs/b0/data
> Brick7: MMR07:/rhgs/b0/data
> Brick8: MMR08:/rhgs/b0/data
> Brick9: MMR09:/rhgs/b0/data
> Brick10: MMR10:/rhgs/b...
2017 Jul 31
2
Hot Tier
Hi, Before you try turning off the perf translators, can you send us the following, so we can make sure that nothing else has gone wrong: the log files for tier (it would be better if you attach the other logs too), the version of gluster you are using, the client, and the output of:

gluster v info
gluster v get v1 performance.io-cache
gluster v get v1
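A sketch of collecting what is asked for here, assuming the default log directory (/var/log/glusterfs) and the volume name v1 used in the message:

gluster --version
gluster v info
gluster v get v1 performance.io-cache
# bundle the logs to attach to the reply; path assumes the default log dir
tar czf gluster-logs.tar.gz /var/log/glusterfs/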
2017 Aug 03
0
Hot Tier
...o home
> >
> > Volume Name: home
> > Type: Disperse
> > Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x (8 + 4) = 12
> > Transport-type: tcp
> > Bricks:
> > Brick1: MMR01:/rhgs/b0/data
> > Brick2: MMR02:/rhgs/b0/data
> > Brick3: MMR03:/rhgs/b0/data
> > Brick4: MMR04:/rhgs/b0/data
> > Brick5: MMR05:/rhgs/b0/data
> > Brick6: MMR06:/rhgs/b0/data
> > Brick7: MMR07:/rhgs/b0/data
> > Brick8: MMR08:/rhgs/b0/data
> > Brick9: MMR09:/...
2018 Feb 09
2
RHGS CTDB joining AD domain - AD computer objects
...e servers with samba/ctdb. Both have joined the domain successfully (the ctdb service was started and net ads join issued on both). A container was created by the AD admin and I specified this using createcomputer=OU when I issued the net ads command on both machines. As far as I can tell on the RHGS servers everything looks OK. However, the AD admin tells me only one computer object is showing in AD, and this is named as the netbios name (the same name used in both hosts' smb.conf). Does this sound correct, or should we be seeing one computer object per host? thanks David
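A hedged sketch of the join being described; the OU path and admin account below are hypothetical, not taken from the thread. Note that with CTDB clustering the nodes share one netbios name in smb.conf, so a single shared machine account is plausible:

# smb.conf on each node (values are illustrative assumptions):
#   clustering = yes
#   netbios name = GLUSTERSMB
net ads join createcomputer="Servers/Gluster" -U Administrator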
2018 Feb 10
0
RHGS CTDB joining AD domain - AD computer objects
> ...have joined the domain successfully (the ctdb service was started and
> net ads join issued on both).
>
> A container was created by the AD admin and I specified this using
> createcomputer=OU when I issued the net ads command on both machines.
>
> As far as I can tell on the RHGS servers everything looks OK. However
> the AD admin tells me only one computer object is showing in AD and
> this is named as the netbios name (the same name used in both hosts'
> smb.conf).
>
> Does this sound correct or should we be seeing one computer object per
> host?
> ...
2018 Jan 02
2
2018 - Plans and Expectations on Gluster Community
> ...but again, the documentation for it is pretty short; there isn't, for
> example, a config for a 2-node replication cluster with NFS-Ganesha, one of
> the most popular configurations for testing, so a person could learn from such
> an example how things work.
> 3. The commercial product (RHGS) is "stuck" in the 3.3 days, is there any
> chance it will move forward to any of the last stable versions?

That's not true; RHGS 3.3 is not GlusterFS 3.3, rather it's a rebase to glusterfs-3.8.4 plus plenty of cherry-picks from mainline. It's just how RHGS chooses to mai...
2018 Jan 02
0
2018 - Plans and Expectations on Gluster Community
...mation called gdeploy, but again, the documentation for it is pretty short; there isn't, for example, a config for a 2-node replication cluster with NFS-Ganesha, one of the most popular configurations for testing, so a person could learn from such an example how things work. 3. The commercial product (RHGS) is "stuck" in the 3.3 days, is there any chance it will move forward to any of the last stable versions? Thanks

On Tue, Jan 2, 2018 at 4:15 AM, Amar Tumballi <atumball at redhat.com> wrote:
> Hi All,
>
> First of all, happy new year 2018! Hope all of your wishes come t...
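As a rough illustration of the 2-node replicated setup being asked about, here is the plain gluster CLI version (not gdeploy; hostnames and brick paths are hypothetical, and the NFS-Ganesha HA pieces are omitted):

# from node1, with node2 reachable and glusterd running on both
gluster peer probe node2
# gluster will warn that replica 2 is split-brain prone; confirm or append 'force'
gluster volume create gv0 replica 2 node1:/bricks/b0 node2:/bricks/b0
gluster volume start gv0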
2018 Jan 02
3
2018 - Plans and Expectations on Gluster Community
Hi All, First of all, happy new year 2018! Hope all of your wishes come true this year, and hope you will have time to contribute to the Gluster Project this year too :-) As a contributor and one of the maintainers of the project, I would like to propose the plans below for the Gluster Project; please share your feedback and comments on them. - *Improved Automation to reduce the process burden*
2017 Sep 18
0
Confusing lstat() performance
...ers with no tuning except for quota being enabled:

[root at dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2: 192.168.50.2:/rhgs/brick1/vmstore
Brick3: 192.168.50.3:/rhgs/ssd/vmstore (arbiter)
Options Reconfigured:
features.quota-deem-statfs: on
nfs.disable: on
features.inode-quota: on
features.quota: on

And I ran the smallfile benchmark, created 80k 64KB files. After that I clear...
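A sketch of reproducing that benchmark run with the smallfile tool; the flags below are my recollection of smallfile's CLI and the mount path is hypothetical, so treat this as an assumption rather than the exact invocation used:

# 8 threads x 10000 files x 64 KB = the 80k 64KB files mentioned above
python smallfile_cli.py --operation create --threads 8 --files 10000 \
    --file-size 64 --top /mnt/vmstore/smf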
2018 Mar 21
1
Request For Opinions: what to do about the synthetic statvfs "tweak"?
...velopers: do you know of any scenario where we benefit from the f_bsize tweak?

Users:
- do you have any application that relies on f_bsize and benefits from its custom value?
- do you have any legacy application/stack where the f_frsize == f_bsize workaround is still needed (but GlusterFS / RHGS is being kept up to date, so a change in this regard would hit your setup)?

Thanks for your thoughts!

Regards,
Csaba

[1]: https://github.com/gluster/glusterfs/blob/v4.0.0/xlators/mount/fuse/src/fuse-bridge.c#L3177-L3189
[2]: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=11406
[3]: practically...
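For anyone wanting to see the two fields in question on their own mount, GNU stat can print both (a quick check, not from the original mail; the mount point is hypothetical):

# %s = f_bsize (preferred I/O size), %S = f_frsize (fundamental block size)
stat -f -c 'f_bsize=%s f_frsize=%S' /mnt/glustervol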
2017 Sep 14
5
Confusing lstat() performance
Hi, I have a gluster 3.10 volume with a dir with ~1 million small files in it, say mounted at /mnt/dir with FUSE, and I'm observing something weird: when I list and stat them all using rsync, the lstat() calls that rsync does are incredibly fast (23 microseconds per call on average, definitely faster than a network roundtrip between my 3 brick machines, which are connected via Ethernet). But
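One way to reproduce this kind of measurement (a sketch; depending on the libc, the stat family may show up as newfstatat rather than lstat, and the destination directory is hypothetical):

# -f follows rsync's child processes, -T prints time spent in each syscall
strace -f -T -e trace=lstat rsync -a --dry-run /mnt/dir/ /tmp/empty/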
2023 Oct 27
1
Replace faulty host
Hi Markus, It looks quite well documented, but please use https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts as 3.5 is the latest version for RHGS. If the OS disks are failing, I would have tried moving the data disks to the new machine and transferring the gluster files in /etc and /var/lib to the new node. Any reason to reuse the FQDN? For me it was always much simpler to remove the brick, remove the node from TSP, add the new node and then...
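The message is cut off, but one common route for this kind of swap is replace-brick; a hedged sketch using the volume from the original post (it assumes urd-gds-032 is the failed host, the new hostname urd-gds-033 is hypothetical, and this is not necessarily the exact remove/add sequence Strahil had in mind):

gluster peer probe urd-gds-033        # bring the replacement node into the TSP
gluster volume replace-brick gds-common \
    urd-gds-032:/urd-gds/gds-common urd-gds-033:/urd-gds/gds-common \
    commit force
gluster volume heal gds-common full   # resync data onto the new brick
gluster peer detach urd-gds-032       # drop the failed node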
2017 Aug 02
1
glusterd daemon - restart
Sorry, I meant Red Hat's Gluster Storage Server 3.2, which is the latest and greatest.

On Wed, Aug 2, 2017 at 9:28 AM, Kaushal M <kshlmster at gmail.com> wrote:
> On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor <markconnor64 at gmail.com> wrote:
> > Can the glusterd daemon be restarted on all storage nodes without causing
> > any disruption to data being served or the
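For context: glusterd is only the management daemon, while I/O is served by the per-brick glusterfsd processes, so restarting it does not by itself interrupt data access. On a systemd distribution the restart is simply (a sketch, assuming the stock unit name):

systemctl restart glusterd   # management plane only; glusterfsd brick processes keep serving I/O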
2017 Nov 08
1
BUG: After stop and start wrong port is advertised
Hi, This bug is hitting me hard on two different clients: on RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4. In one case I had 59 differences in a total of 203 bricks. I wrote a quick and dirty script to check all ports against the brick file and the running process.

#!/bin/bash
Host=`uname -n| awk -F"." '{print $1}'`
GlusterVol=`ps -eaf...
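The script above is truncated; a rough sketch of the same idea, comparing the port each brick advertises in glusterd's state files with what is actually reported and bound (assumes the default /var/lib/glusterd layout):

#!/bin/bash
# list the port recorded for every brick in glusterd's info files
for f in /var/lib/glusterd/vols/*/bricks/*; do
    echo "$f -> listen-port=$(awk -F= '/^listen-port=/{print $2}' "$f")"
done
gluster volume status        # compare with the ports reported here
ss -tlnp | grep glusterfsd   # and with the ports the brick processes really bind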
2017 Nov 08
0
BUG: After stop and start wrong port is advertised
We have a fix in the release-3.10 branch which is merged and should be available in the next 3.10 update.

On Wed, Nov 8, 2017 at 4:58 PM, Mike Hulsman <mike.hulsman at proxy.nl> wrote:
> Hi,
>
> This bug is hitting me hard on two different clients:
> on RHGS 3.3 and on glusterfs 3.10.2 on CentOS 7.4.
> In one case I had 59 differences in a total of 203 bricks.
>
> I wrote a quick and dirty script to check all ports against the brick file
> and the running process.
> #!/bin/bash
>
> Host=`uname -n| awk -F"." '{print $1...
2023 Oct 25
1
Replace faulty host
Hi all, I have a problem with one of our gluster clusters. This is the setup:

Volume Name: gds-common
Type: Distributed-Replicate
Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6
Status: Started
Snapshot Count: 26
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-031:/urd-gds/gds-common
Brick2: urd-gds-032:/urd-gds/gds-common
Brick3: urd-gds-030:/urd-gds/gds-common
2017 Oct 27
1
BUG: After stop and start wrong port is advertised
Hello Atin,

I just read it and am very happy you found the issue. We really hope this will be fixed in the next 3.10.7 version!

PS: Wow, nice, all that C code and those "goto out" statements (not always considered clean, but often the best way, I think). I can remember the days when I wrote kernel drivers myself in C :)

Regards,
Jo Goossens

-----Original message-----
From: Atin
2018 May 01
3
Finding performance bottlenecks
On 01/05/2018 02:27, Thing wrote:
> Hi,
>
> So is it KVM or VMware as the host(s)? I basically have the same setup,
> i.e. 3 x 1TB "raid1" nodes and VMs, but 1 Gb networking. I do notice that with
> VMware using NFS, disk was pretty slow (40% of a single disk), but this
> was over 1 Gb networking, which was clearly saturating. Hence I am moving
> to KVM to use glusterfs
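To confirm whether a 1 Gb link really is the bottleneck, a quick check with iperf3 (the hostname is hypothetical):

iperf3 -s                      # on the storage node
iperf3 -c storage-node -t 10   # on the hypervisor; ~940 Mbit/s means the link is saturated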
2017 Oct 27
3
BUG: After stop and start wrong port is advertised
We (finally) figured out the root cause, Jo! Patch https://review.gluster.org/#/c/18579 posted upstream for review.

On Thu, Sep 21, 2017 at 2:08 PM, Jo Goossens <jo.goossens at hosted-power.com> wrote:
> Hi,
>
> We use glusterfs 3.10.5 on Debian 9.
>
> When we stop or restart the service, e.g.: service glusterfs-server restart