similar to: RAID options for Gluster

Displaying 20 results from an estimated 40000 matches similar to: "RAID options for Gluster"

2012 Jun 23
4
Can't run KVM Virtual Machines on a Gluster volume
I just built a 2-node (4-brick) Distributed-Replicated volume and everything mounts fine. Each node mounts using the GlusterFS client against its own hostname (mount -t glusterfs hostname:VOLUME /virtual-machines). When creating a new Virtual Machine using virt-manager it creates the file on the storage, but when trying to power it on, it doesn't work and gives back an error message. (See below. Yes, the folder has
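A common cause in setups like this was virt-manager defaulting the disk to cache=none, which opens the image with O_DIRECT; FUSE-mounted Gluster volumes of this era did not always support that. A hedged diagnostic (the test file name is made up):

    # Reproduce the cache=none behaviour by writing with O_DIRECT on the mount;
    # if this fails, try switching the VM's disk cache mode to writethrough.
    dd if=/dev/zero of=/virtual-machines/odirect-test.img bs=1M count=10 oflag=direct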
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi, I'm using Gluster 3.3.0-1.el6.x86_64 on two storage nodes in replicated mode (fs1, fs2). Node specs: CentOS 6.2, Intel quad core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit network. I've mounted the data partition to web1, a dual quad 2.8GHz with 8GB RAM, using glusterfs (also tried NFS -> Gluster mount). We have 50Gb of
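Replies to small-file threads like this usually start with the stock translator options. A sketch, assuming the volume is named data; values are illustrative, not tuned:

    gluster volume set data performance.cache-size 256MB      # io-cache for hot reads
    gluster volume set data performance.io-thread-count 16    # more server-side I/O threads
    gluster volume set data performance.stat-prefetch on      # cache stat() results during readdir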
2012 Aug 13
1
Problem with too many small files
I am not sure how it works in Gluster, but to mitigate the problem of listing a lot of small files, wouldn't it be suitable to keep a copy of the directory tree on every node? I think Isilon does that, and there is probably a lot to be learned from them, as it seems to be quite mature technology. It could also have another interesting thing added in the future: a local SSD to keep the file system metadata
2012 Jun 11
1
"mismatching layouts" flooding in the logs
The following is being appended to the gluster logs at around 100kB of logs per second, on all 10 gluster servers:

[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
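These INFO messages mean the directory layout ranges recorded on disk no longer match what DHT computed, typically after bricks were added or removed. The standard remedy is a fix-layout rebalance; a sketch, assuming the volume is named sites (inferred from the 0-sites-dht log prefix):

    gluster volume rebalance sites fix-layout start   # rewrite directory layouts only, no data movement
    gluster volume rebalance sites status             # watch progress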
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300kb each), the
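With the FUSE client, every entry in a listing triggers lookups on both replicas, so a common first step is raising the client-side cache timeouts on the mount. A sketch; the mount point and timeout values are illustrative:

    mount -t glusterfs -o attribute-timeout=60,entry-timeout=60 node1:/VOLNAME /mnt/gluster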
2012 Jun 01
3
Striped replicated volumes in Gluster 3.3.0
Hi all, I'm very happy to see the release of 3.3.0. One of the features I was waiting for is striped replicated volumes. We plan to store KVM images (from an OpenStack installation) on them. I read through the docs and found the following phrase: "In this release, configuration of this volume type is supported only for Map Reduce workloads." What does that mean exactly? Hopefully not,
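For reference, the 3.3 creation syntax looks like the sketch below; the four servers and brick paths are hypothetical. The quoted caveat presumably means the stripe+replica combination had only been qualified for MapReduce-style workloads at release time:

    gluster volume create vmimages stripe 2 replica 2 transport tcp \
        srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 srv4:/bricks/b1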
2012 Feb 14
4
Exorbitant cost to achieve redundancy??
I'm trying to justify a GlusterFS storage system for my technology development group, and I want to get some clarification on something that I can't seem to figure out architecture-wise. My storage system will be rather large: a significant fraction of a petabyte, and it will require scaling in size for at least one decade. From what I understand, GlusterFS achieves redundancy through
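The cost concern comes down to simple multiplication: replication multiplies raw capacity by the replica count, on top of whatever RAID overhead sits under the bricks. A worked example with illustrative numbers:

    # target: 1 PB usable
    # replica 2 on plain disks:          1 PB x 2          = 2.0 PB raw
    # replica 2 on RAID6 (10+2) bricks:  1 PB x 2 x 12/10  = 2.4 PB raw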
2012 Mar 15
2
Usage Case: just not getting the performance I was hoping for
All, for our project we bought 8 new Supermicro servers. Each server has a quad-core Intel CPU in a 2U chassis supporting 8 x 7200 RPM SATA drives. To start out, we only populated 2 x 2TB enterprise drives in each server, and added all 8 peers with their total of 16 drives as bricks to our gluster pool as distributed replicated (2). The replicas paired as follows: 1.1 -> 2.1 1.2
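That pairing falls out of brick ordering: in a replica-2 create command, each consecutive pair of bricks forms one replica set. A sketch with hypothetical names matching the 1.1 -> 2.1 pattern:

    # consecutive bricks pair up as replica sets (server1 drive 1 mirrors server2 drive 1, etc.)
    gluster volume create pool replica 2 \
        server1:/bricks/d1 server2:/bricks/d1 \
        server1:/bricks/d2 server2:/bricks/d2
    # ...continue the same pattern for the remaining peers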
2012 Feb 20
2
Replacing a node
I have two servers running gluster 3.1.2 hosting a single replica-2 volume (web images) on Ubuntu Lucid 64. I need to replace one of the nodes with a new server. What's the best approach to this? There's not much data, but I'd like to do it with no downtime if possible. Marcus
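On 3.1.x the usual no-downtime path is replace-brick, which migrates data while the volume stays online. A sketch; hostnames, the volume name, and the brick path are placeholders:

    gluster peer probe newserver
    gluster volume replace-brick images oldserver:/data/brick newserver:/data/brick start
    gluster volume replace-brick images oldserver:/data/brick newserver:/data/brick status
    gluster volume replace-brick images oldserver:/data/brick newserver:/data/brick commit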
2023 Mar 18
1
Hardware issues and new server advice
Hi, our current servers are suffering from a strange hardware issue that forces us to start over. In short, we have two servers with 15 disks at 6TB each, divided into three RAID5 arrays for three bricks per server at 22TB per brick. Each brick on one server is replicated to a brick on the second server. The hardware issue is that somewhere in the backplane random I/O errors happen when the system
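Before replacing hardware, one way to separate backplane trouble from failing disks is to compare the drives' own SMART verdicts with kernel-side errors; device names below are assumed:

    for d in /dev/sd[a-o]; do    # the 15 data disks
        smartctl -H $d           # health as reported by the drive itself
    done
    dmesg | grep -i 'i/o error'  # errors here despite healthy SMART point at the path, not the disks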
2009 Dec 02
7
Slightly OT: FakeRaid or Software Raid
I have had great luck with nvidia FakeRAID on RAID1, but I see there are preferences for software RAID. I have very little hands-on experience with full Linux software RAID, and that was about 14 years ago. I am trying to determine which to use on a rebuild in a "standard" CentOS/Xen environment. It seems to me that FakeRAID is/can be completely taken care of by dmraid in dom0, whereas with
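The usual recommendation in these threads is plain mdadm RAID1, which dom0 handles natively with no dmraid dependency. A minimal sketch; device names assumed:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --detail --scan >> /etc/mdadm.conf   # persist so the array assembles at boot
    cat /proc/mdstat                           # verify the mirror is syncing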
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, right out of the blue glusterfs is not working reliably any more; every now and then it stops working, telling me "Endpoint not connected" and writing core files: [root at tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My version: [root at tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
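To get something actionable out of those cores, the first step is a backtrace. A sketch; the debuginfo package name is an assumption for the EL packaging of glusterfs 3.2:

    yum install gdb glusterfs-debuginfo          # debug symbols make the trace readable
    gdb /usr/sbin/glusterfs core.15288 -ex bt -ex quit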
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
On 15/08/11 20:00, gluster-users-request at gluster.org wrote: > From: "Deyan Chepishev - SuperHosting.BG" <dchepishev at superhosting.bg> > Subject: [Gluster-users] cluster.min-free-disk separate for each brick
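For reference, the option under discussion is set per volume rather than per brick, which is what prompts the question; a sketch with a placeholder volume name:

    gluster volume set VOLNAME cluster.min-free-disk 10%   # one threshold applied to every brick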
2013 Oct 02
1
Shutting down a GlusterFS server.
Hi, I have a 2-node replica volume running with GlusterFS 3.3.2 on CentOS 6.4. I want to shut down one of the gluster servers for maintenance. Is there any best practice to be followed while turning off a server, in terms of services etc., or can I just shut down the server? Thanks & Regards, Bobby Jacob
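Common advice for a replica pair is to confirm there is nothing pending to heal, then stop the daemons before powering off. A sketch; service names follow the EL6 packaging, which may differ on other distributions:

    gluster volume heal VOLNAME info   # confirm no pending heal entries first
    service glusterd stop              # management daemon
    service glusterfsd stop            # brick processes (separate init script on EL6)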
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
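The two extremes sketch out like this (volume name and paths hypothetical); per-drive bricks isolate drive failures and parallelise I/O, while one aggregated brick keeps the volume definition short:

    # one brick per drive: the create command lists every drive on both servers
    gluster volume create vol replica 2 server1:/data1 server2:/data1 \
        server1:/data2 server2:/data2   # ...and so on through /data24
    # one aggregated brick per server (md/LVM RAID beneath a single filesystem)
    gluster volume create vol replica 2 server1:/brick server2:/brick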
2014 Oct 09
3
dovecot replication (active-active) - server specs
Hello, I have some questions about the new dovecot replication and the mdbox format. My company currently has 3 old dovecot 2.0.x fileservers/backends with ca. 120k mailboxes and ca. 6TB of data used. They are synchronised via drbd/corosync. Each fileserver/backend has ca. 40k mailboxes in Maildir format. Our MX server delivers ca. 30GB of new mail per day. Two IMAP proxy servers get the
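For context, dsync-based replication in dovecot 2.2 is enabled roughly along these lines; the hostname is a placeholder, and the partner side also needs a doveadm listener (omitted here):

    # dovecot.conf (sketch)
    mail_plugins = $mail_plugins notify replication
    service replicator {
      process_min_avail = 1
    }
    plugin {
      mail_replica = tcp:mail2.example.com   # the partner backend
    }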
2013 Dec 09
3
Gluster infrastructure question
Heyho guys, I've been running glusterfs for years in a small environment without big problems. Now I'm going to use GlusterFS for a bigger cluster, but I have some questions :) Environment: * 4 servers * 20 x 2TB HDD each * RAID controller * RAID 10 * 4 bricks => replicated, distributed volume * Gluster 3.4 1) I'm asking myself if I can
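Once such a volume exists, the brick ordering (and therefore which RAID10 arrays replicate each other) can be verified directly; the volume name is hypothetical:

    gluster volume info bigvol    # brick order shows which bricks pair as replicas
    gluster volume status bigvol  # per-brick process and port health (3.4 syntax)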
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as follows: the actual website files (php, html, css and so on) are on a dedicated non-glusterfs ext4 partition. However, the website accesses videos and especially image files in a gluster-mounted directory. The write performance of our backend gluster storage is not that important, since it only comes into play when someone uploads a video or image. However, the files
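For a mostly-read workload like serving images through the FUSE client, the io-cache translator settings are a reasonable starting point. A sketch; the volume name and values are illustrative:

    gluster volume set web performance.cache-size 512MB            # io-cache for frequently read files
    gluster volume set web performance.cache-refresh-timeout 60    # serve cached data longer before revalidating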
2008 Jul 24
13
Performance of disks
Hello, queries: 1 - What is the best RAID level (0, 1, 5, 10, 50) for a server running very, very many VM instances? 2 - And what configuration of stripe/element sizes (32, 64, 128, 256, etc. KB)? 3 - In the configuration of a VM, is there a difference in performance between an image file (disk:/) and an LVM partition (phy:/)? Any URL or docs to read about this theme? Mmm.. that is all for the moment. Thanks -- Victor
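On question 3, the two backends look like this in a Xen domU config; paths are hypothetical. phy: hands the guest a raw block device, while file: goes through the dom0 filesystem and loop layer, which usually costs some performance:

    disk = [ 'file:/var/lib/xen/images/vm1.img,xvda,w' ]   # file-backed image
    disk = [ 'phy:/dev/vg0/vm1,xvda,w' ]                   # LVM logical volume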
2017 Oct 09
3
Peer isolation while healing
Hi everyone, I've been using gluster for a few months now, on a simple 2-peer replicated infrastructure, 22TB each. One of the peers was offline for 10 hours last week (RAID resync after a disk crash), and while my gluster server was healing bricks, I could see some write errors on my gluster clients. I couldn't find a way to isolate my healing peer in the documentation or
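There is no single "isolate this peer" command; what threads like this usually converge on is disabling client-side self-heal so clients stop blocking on the recovering brick, leaving the work to the self-heal daemon. Option names are stock 3.x; a sketch with a placeholder volume name:

    gluster volume set VOLNAME cluster.data-self-heal off       # clients no longer heal file data inline
    gluster volume set VOLNAME cluster.entry-self-heal off      # ...or directory entries
    gluster volume set VOLNAME cluster.metadata-self-heal off
    gluster volume heal VOLNAME info                            # monitor what the daemon still has to do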