similar to: Recommendations for busy static web server replacement

Displaying 20 results from an estimated 20000 matches similar to: "Recommendations for busy static web server replacement"

2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/28/2017 06:52 PM, Paolo Margara wrote: > >> Hi list, >> >> yesterday I noticed the following lines in the glustershd.log log file: >> >> [2017-06-28 11:53:05.000890] W [MSGID: 108034] >> [afr-self-heald.c:479:afr_shd_index_sweep] >>
2009 May 25
1
raid5 or raid6 level cluster
Hello, is there any way to create a raid6 or raid5 level glusterfs installation? From the docs I understood that I can do a raid1-based glusterfs installation or raid0 (striping data to all servers) and a raid10-based solution, but the raid10-based solution is not cost effective because it needs too many servers. Do you have a plan to keep one or two servers as parity for the whole glusterfs system
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > * put node in maintenance mode (ensure no clients are active) > * yum versionlock delete glusterfs* > * service glusterd stop > * yum update > * systemctl daemon-reload > * service glusterd start > * yum versionlock add glusterfs* > *
2017 Jun 28
3
afr-self-heald.c:479:afr_shd_index_sweep
Hi list, yesterday I noticed the following lines in the glustershd.log log file: [2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-iso-images-repo-replicate-0: unable to get index-dir on iso-images-repo-client-0 [2017-06-28 11:53:05.001146] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-vm-images-repo-replicate-0: unable to get index-dir
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md Definitely my fault, but I think it would be better to specify somewhere that restarting the service is not enough, simply because in many other cases, with other services, it is sufficient. Now I'm restarting every brick process (and waiting for
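A hedged sketch of how that per-node brick restart is commonly done (the volume name comes from the thread; killing glusterfsd stops every brick hosted on the node, so work on one node at a time):

  gluster volume status vm-images-repo          # note the brick PIDs running on this node
  pkill glusterfsd                              # stop the old brick processes on this node
  gluster volume start vm-images-repo force     # respawn bricks and shd with the upgraded binaries
  gluster volume heal vm-images-repo info       # confirm the self-heal daemon can read the index again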
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure: * put node in maintenance mode (ensure no clients are active) * yum versionlock delete glusterfs* * service glusterd stop * yum update * systemctl daemon-reload * service glusterd start * yum versionlock add glusterfs* * gluster volume heal vm-images-repo full * gluster volume heal vm-images-repo info on each server every time
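For reference, the same per-node procedure as a shell sketch; package, service and volume names are taken from the post above, the quoting of the glob is illustrative, and it assumes the yum versionlock plugin is installed:

  # run on one node at a time, with no active clients on it
  yum versionlock delete 'glusterfs*'      # unpin the gluster packages
  service glusterd stop
  yum update
  systemctl daemon-reload
  service glusterd start
  yum versionlock add 'glusterfs*'         # pin the new version again
  gluster volume heal vm-images-repo full
  gluster volume heal vm-images-repo info  # repeat on each server until heals complete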
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo, Which document did you follow for the upgrade? We can fix the documentation if there are any issues. On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > - put node in maintenance mode (ensure no clients are active)
2013 Dec 09
3
Gluster infrastructure question
Heyho guys, I've been running glusterfs for years in a small environment without big problems. Now I'm going to use glusterFS for a bigger cluster, but I have some questions :) Environment: * 4 servers * 20 x 2TB HDD each * RAID controller * RAID 10 * 4x bricks => replicated, distributed volume * Gluster 3.4 1) I'm wondering if I can
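For a layout like the one described (one RAID 10 backed brick per server, four servers), a distributed-replicated volume is typically created as in the hedged sketch below; hostnames and brick paths are illustrative, and with replica 2 consecutive bricks form the replica pairs:

  # srv1/srv2 form one replica pair, srv3/srv4 the other
  gluster volume create bigvol replica 2 \
      srv1:/bricks/brick1 srv2:/bricks/brick1 \
      srv3:/bricks/brick1 srv4:/bricks/brick1
  gluster volume start bigvol
  gluster volume info bigvol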
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide https://github.com/nixpanic/glusterdocs/blob/ > f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think it would be better to specify somewhere that > restarting the service is not enough, simply
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All, For our project, we bought 8 new Supermicro servers. Each server has a quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives. To start out, we only populated 2 x 2TB enterprise drives in each server and added all 8 peers, with their total of 16 drives as bricks, to our gluster pool as distributed replicated (2). The replica pairing worked as follows: 1.1 -> 2.1 1.2
2017 Jun 28
0
afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote: > Hi list, > > yesterday I noticed the following lines in the glustershd.log log file: > > [2017-06-28 11:53:05.000890] W [MSGID: 108034] > [afr-self-heald.c:479:afr_shd_index_sweep] > 0-iso-images-repo-replicate-0: unable to get index-dir on > iso-images-repo-client-0 > [2017-06-28 11:53:05.001146] W [MSGID: 108034] >
2012 Aug 24
1
Typical setup questions
All, I am curious what is typically used for file system replication and how you make sure that it is consistent, for example when using large 3TB+ SATA/NL-SAS drives. Is it typical to replicate three times to get protection similar to raid 6? Also, what is typically done to ensure that all replicas are in place and consistent? A cron job that stats or ls's the file system from a
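Instead of a cron job that stats or ls's the tree, gluster can report replica consistency itself; a hedged example, with the volume name made up:

  gluster volume heal myvol info              # files with pending heals, per brick
  gluster volume heal myvol info split-brain  # entries needing manual resolution
  gluster volume heal myvol full              # trigger a full self-heal crawl if needed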
2012 Oct 31
2
Best practices for creating bricks
Hello, I am working with several Dell PE720xd servers. I have 24 disks per server at my disposal, with a high-end RAID card with 1GB RAM and BBC. I will be building a distributed-replicated volume. Is it better for me to set up one or two large RAID0 arrays and use those as bricks, or should I make each hard drive a brick? This will be back-end storage for an image search engine with lots of small file
2012 Jun 14
4
RAID options for Gluster
I think this discussion probably came up here already, but I couldn't find much in the archives. Would you be able to comment on or correct whatever might look wrong? What options do people think are most adequate to use with Gluster in terms of the RAID underneath, with a good balance between cost, usable space and performance? I have thought about two main options with their pros and cons: No RAID (individual
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi, I'm using Gluster 3.3.0-1.el6.x86_64 on two storage nodes in replicated mode (fs1, fs2). Node specs: CentOS 6.2, Intel quad core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit network. I've mounted the data partition to web1, a dual quad 2.8GHz with 8GB RAM, using glusterfs (also tried NFS -> Gluster mount). We have 50Gb of
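A few volume options that are commonly tried for small-file and directory-listing workloads; these are assumptions for illustration, not settings from the post, and the values need testing against the real workload:

  gluster volume set datavol performance.cache-size 256MB
  gluster volume set datavol performance.io-thread-count 32
  gluster volume set datavol performance.stat-prefetch on
  gluster volume set datavol performance.quick-read on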
2013 Jun 11
1
cluster.min-free-disk working?
Hi, I have a system consisting of four bricks, using 3.3.2qa3. I used the command gluster volume set glusterKumiko cluster.min-free-disk 20% Two of the bricks were empty, and two were full to just under 80% when building the volume. Now, when syncing data (from a primary system) and using min-free-disk 20%, I thought new data would go to the two empty bricks, but gluster does not seem
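One detail worth noting: cluster.min-free-disk only influences where new files are placed once a brick crosses the threshold; it does not move existing data. A hedged sketch for setting, verifying and rebalancing (volume name from the post):

  gluster volume set glusterKumiko cluster.min-free-disk 20%
  gluster volume status glusterKumiko detail     # shows free disk per brick
  gluster volume rebalance glusterKumiko start   # only a rebalance moves existing data
  gluster volume rebalance glusterKumiko status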
2013 Dec 17
1
Project pre planning
Hello GlusterFS users, can anybody please give me their opinion on the following facts and questions: 4 storage servers with 16 SATA bays, connected by GigE: Q1: The volume will be set up as distributed-replicated. Maildir, FTP dir, htdocs, file store directory => as subdirectories in one big GlusterVolume, or each dir in its own GlusterVolume? Q2: Set up the bricks as a collection of
2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock > or sanlock)? Yes. Gluster has its own quorum; you can disable it, but that's just a recipe for disaster. > Also, on a two-node setup is it *guaranteed* that updates to one node will > take the whole volume offline? I think so, but I never took the chance, so who knows. > On the other hand, a 3-way
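The quorum mentioned in the reply is configured per volume; a hedged sketch for a replica 3 volume (the option names are standard gluster options, the volume name is made up). With quorum enabled on a two-node/replica 2 setup, losing one node does indeed stop writes to the whole volume, which is why replica 3 or an arbiter is usually recommended:

  gluster volume set vmstore cluster.quorum-type auto           # client-side quorum: majority of replicas
  gluster volume set vmstore cluster.server-quorum-type server  # server-side quorum among glusterd peers
  gluster volume set all cluster.server-quorum-ratio 51%        # cluster-wide ratio, set on 'all'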
2023 Mar 26
1
hardware issues and new server advice
Hi, sorry if I hijack this, but maybe it's helpful for other gluster users... > a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data. > I would choose LVM cache (NVMEs) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones)
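A hedged sketch of what "LVM cache (NVMEs) + HW RAID10" can look like on one brick node; device names, VG/LV names and sizes are invented for illustration:

  pvcreate /dev/nvme0n1
  vgextend vg_bricks /dev/nvme0n1        # vg_bricks already holds the RAID10 PV with LV 'brick1'
  lvcreate -L 1G   -n brick1_cmeta vg_bricks /dev/nvme0n1
  lvcreate -L 500G -n brick1_cache vg_bricks /dev/nvme0n1
  lvconvert --type cache-pool --poolmetadata vg_bricks/brick1_cmeta vg_bricks/brick1_cache
  lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1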