similar to: Upgrade / Downgrade package glusterfs-server

Displaying 20 results from an estimated 20000 matches similar to: "Upgrade / Downgrade package glusterfs-server"

2012 Oct 11
0
samba performance downgrade with glusterfs backend
Hi folks, We found that Samba performance degrades a lot with a GlusterFS backend. Volume info as follows: Volume Name: vol1 Type: Distribute Status: Started Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: pana53:/data/ Options Reconfigured: auth.allow: 192.168.* features.quota: on nfs.disable: on Using dd (bs=1MB) or iozone (block=1MB) to test write performance, about 400MB/s. #dd
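The write test in the report above can be reproduced with a short script. This is a minimal sketch, not the poster's exact command: `TARGET` is an assumption (point it at a file on the GlusterFS or Samba mount you want to measure; it defaults to a local temp file so the script runs anywhere), and `conv=fsync` is added so the figure includes flushing to disk.

```shell
#!/bin/sh
# Minimal write-throughput check, as in the report above.
# TARGET is an assumption: set it to a file on the mount under test;
# by default it falls back to a local temp file.
TARGET="${TARGET:-$(mktemp /tmp/ddtest.XXXXXX)}"

# Write 100 MiB in 1 MiB blocks; dd prints the throughput summary on stderr.
dd if=/dev/zero of="$TARGET" bs=1M count=100 conv=fsync

rm -f "$TARGET"
```

Comparing the same invocation against a local ext4 path and the GlusterFS mount isolates how much of the gap is the filesystem backend rather than Samba itself.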
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Thanks Kaleb, any chance I can get the node working after the downgrade? Thanks On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > > You can still get them from > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ > > (I don't know how much longer they'll be there. I suggest you copy them > if you think
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
On 05/15/2018 08:08 AM, Davide Obbi wrote: > Thanks Kaleb, > > any chance i can make the node working after the downgrade? > thanks Without knowing what doesn't work, I'll go out on a limb and guess that it's an op-version problem. Shut down your 3.13 nodes, change their op-version to one of the valid 3.12 op-versions (e.g. 31203) and restart. Then the 3.12 nodes should
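The rollback suggested above can be sketched as a few commands. This is a guess at the procedure, not a confirmed recipe from the thread: it assumes glusterd keeps its op-version in `/var/lib/glusterd/glusterd.info` and uses 31203 as the target value, per the mail; the daemon must be stopped while the file is edited.

```shell
#!/bin/sh
# Sketch of the op-version rollback suggested above. INFO can be overridden
# for testing; on a real node it is glusterd's state file.
INFO="${INFO:-/var/lib/glusterd/glusterd.info}"

# systemctl stop glusterd     # stop the daemon before touching its state

# Pin the node to a valid 3.12 op-version (31203, per the mail above).
sed -i 's/^operating-version=.*/operating-version=31203/' "$INFO"
grep '^operating-version=' "$INFO"

# systemctl start glusterd    # restart; 3.12 peers should then accept the node
```

Repeat on each downgraded node before restarting the daemons.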
2018 May 15
0
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
You can still get them from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ (I don't know how much longer they'll be there. I suggest you copy them if you think you're going to need them in the future.) On 05/15/2018 04:58 AM, Davide Obbi wrote: > hi, > > i noticed that this repo for glusterfs 3.13 does not exists anymore at: > >
2018 May 15
2
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Hi, I noticed that the repo for glusterfs 3.13 does not exist anymore at: http://mirror.centos.org/centos/7/storage/x86_64/ I knew it was not going to be long-term supported; however, the downgrade to 3.12 breaks the server node. I believe the issue is with: [2018-05-15 08:54:39.981101] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management'
2017 Jun 06
0
Glusterfs 3.10.3 has been tagged
Apologies for the delay in sending mail about the builds. We have packages ready for all distributions, thanks to Kaleb and Niels. As per [1] (mostly[2]): * Packages for Fedora 24 and Fedora 27 are available at [3]. * Packages for Fedora 25 and Fedora 26 are built and queued in Fedora bodhi for testing. Once pushed to testing they will be available in the Updates-Testing repo. After a nominal testing
2017 May 31
1
Glusterfs 3.10.3 has been tagged
Glusterfs 3.10.3 has been tagged. Packages for the various distributions will be available in a few days, and with that a more formal release announcement will be made. - Tagged code: https://github.com/gluster/glusterfs/tree/v3.10.3 - Release notes: https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.3.md Thanks, Raghavendra Talur NOTE: Tracker bug for 3.10.3 will be
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi, I want to give an update on this. I also tested READ speed. It seems the sharded volume has a lower read speed than the striped volume. This machine has 24 cores with 64GB of RAM. I really don't think it's caused by an underpowered system. A stripe is a kind of shard, but with a fixed size based on the stripe value / file size. Hence, I would expect at least the same speed, or maybe a little slower. What I get is
2017 Jul 26
1
GlusterFS Fuse client hangs when copying large files
Hello, I'm having some weird problems while copying large files (> 1GB) to GlusterFS through a Fuse client. When the copy is done using the cp command everything is fine, but if I use a Java program, the GlusterFS Fuse client hangs. The kern.log shows timeouts for the Java program and the Fuse client. Has anyone experienced this behavior? Test environment: Ubuntu 16.04.2 LTS (GNU/Linux
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID:
2017 Dec 06
1
SAMBA VFS module for GlusterFS crashes
Dear Anoop, thank you very much for your detailed explanation. > I think you are hitting a bug[1] from vfs module for GlusterFS inside Samba during a realpath() > call. > > This regression got in when glfs_realpath() was modified in GlusterFS[2] to correctly handle memory > allocation and corresponding freeing of string arguments. And this particular change is present from >
2012 Oct 13
1
low samba performance with glusterfs backend
Hello folks, We tested Samba performance with local ext4 and GlusterFS backends, and the performance is very different. The Samba server has 4 1Gbps NICs bonded with mode 6; the backend storage is RAID0 with 12 SAS disks. A LUN is created over all the disks, formatted as an ext4 file system, and used as the GlusterFS brick. On the Samba server, using dd to test local ext4 and GlusterFS, the write bandwidths are 477MB/s and
2007 Jul 03
1
puppetversion and downgrade/upgrade of puppet
I wonder why this case fails: case puppetversion { '0.22.4': { # ok notice('ok, correct version') } default: { fail("wrong puppet version $puppetversion, downgrade to 0.22.4") } } gives me: protos:/usr/local/.aqadmin/home%(aqadmin)> facter puppetversion 0.22.4
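A likely culprit, guessed from the snippet above rather than confirmed in the thread: the case selector is the bare word `puppetversion`, not the variable `$puppetversion`, so the manifest compares the literal string "puppetversion" against '0.22.4', which never matches even though facter returns the right value. A minimal corrected sketch:

```puppet
# Sketch: note the $ sigil, so the fact's value is compared,
# not the literal word "puppetversion".
case $puppetversion {
  '0.22.4': { notice('ok, correct version') }
  default:  { fail("wrong puppet version ${puppetversion}, downgrade to 0.22.4") }
}
```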
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi Krutika, Have you been able to look at my profiles? Do you have any clue, idea or suggestion? Thanks, -Gencer From: Krutika Dhananjay [mailto:kdhananj at redhat.com] Sent: Friday, June 30, 2017 3:50 PM To: gencer at gencgiyen.com Cc: gluster-user <gluster-users at gluster.org> Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS Just noticed that the
2017 Sep 20
2
hostname
Hi, how do I change the hostname of gluster servers? If I modify the hostname1 in /etc/lib/glusterd/peers/uuid, the change is not saved... gluster pool list returns the IP address of the server and not the new hostname... Thank you
2017 Jan 26
2
samba rpm deps - with yum downgrade = kind of a mayhem
On 25/01/17 14:26, Jonathan Billings wrote: > On Wed, Jan 25, 2017 at 11:32:16AM +0000, lejeczek wrote: >> hi guys, gals >> >> do you see this: >> >> ~]$ yum downgrade samba >> Resolving Dependencies >> [...] >> --> Processing Dependency: libtevent.so.0(TEVENT_0.9.9) for package: >> samba-client-libs-4.4.4 >> >> ..and process
2017 Jan 26
0
samba rpm deps - with yum downgrade = kind of a mayhem
On Thu, Jan 26, 2017 at 11:44:06AM +0000, lejeczek wrote: > but CentOS should, right? > Being able to downgrade can save you a day or a few. I know, for it saved > mine several times, e.g. a buggy new samba update, and a downgrade fixed it, etc. > We absolutely should be able to downgrade every rpm, if this is just a > matter of opinion (I'd fail to see any other reason to not
2017 Jun 01
0
yum install <olderversion> does not downgrade
Use the 'downgrade' option. https://access.redhat.com/solutions/29617 On Thu, Jun 1, 2017 at 1:46 PM, Anand Buddhdev <anandb at ripe.net> wrote: > We're using ansible to configure our CentOS 6 servers, and we have a > task to install a specific version of a package: > > - name: install thrift2 > yum: name=ripencc-thrift2-{{ version }} > > In this ansible
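For the Ansible task quoted above, there is also a module-level route: the yum module's `allow_downgrade` option (available since Ansible 2.4) permits installing a version older than the one present. A sketch, reusing the package name and `version` variable from the mail (both are the poster's, not verified here):

```yaml
# Sketch only: allow_downgrade lets the yum module replace a newer
# installed package with the requested older version.
- name: install thrift2 (allow downgrade)
  yum:
    name: "ripencc-thrift2-{{ version }}"
    allow_downgrade: yes
```

Without it, the equivalent of `yum downgrade` has to be run out of band, as the Red Hat solution linked above describes.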
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]