similar to: software raid performance

Displaying 20 results from an estimated 7000 matches similar to: "software raid performance"

2008 Dec 10
3
AFR healing problem after bringing one node back.
I've got a configuration which, in short, combines AFR and unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
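For reference, a minimal sketch of how AFR groups were typically stitched together under unify in volfiles of that era. The brick grouping and scheduler below are assumptions for illustration; the n*-brick* and afr-ns names follow the post and would be protocol/client volumes defined earlier in the client volfile:

    volume afr1
      type cluster/afr
      subvolumes n1-brick1 n2-brick1 n3-brick1
    end-volume

    volume afr2
      type cluster/afr
      subvolumes n1-brick2 n2-brick2 n3-brick2
    end-volume

    volume unify0
      type cluster/unify
      option namespace afr-ns   # the replicated namespace AFR from the post
      option scheduler rr       # assumption: simple round-robin placement
      subvolumes afr1 afr2
    end-volume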
2010 Jan 03
2
Where is the log file of GlusterFS 3.0?
I cannot find the log file of GlusterFS 3.0! In the past I installed GlusterFS 2.0.6 without problems, and the server and client log files were placed in /var/log/glusterfs/... But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1 client), I start the GlusterFS servers and the client, and typing *df -H* on the client gives: "Transport endpoint is not connected". I want to track down the bug, but I cannot find
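If the 3.0 client is started by hand, one way to see what is going on is to point it at an explicit log file and raise the log level; a hedged sketch, where the volfile path, log path and mount point are illustrative:

    glusterfs --volfile=/etc/glusterfs/glusterfs.vol \
        --log-file=/var/log/glusterfs/client.log \
        --log-level=DEBUG \
        /mnt/glusterfs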
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert at host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client  1.3.8-0pre2  GlusterFS fuse client
ii  glusterfs-server  1.3.8-0pre2  GlusterFS fuse server
ii  libglusterfs0     1.3.8-0pre2  GlusterFS libraries and translator modules
I have 2 hosts set up to use AFR with
2008 Dec 18
3
Feedback and Questions on afr+unify
Hi, I just installed and configured a couple of machines with glusterfs (1.4.0-rc3). It seems to work great. Thanks for the amazing software! I've been looking for something like this for years. I have some feedback and questions. My configuration is a bit complicated: I have two machines, each with two disks, and each disk has two partitions that I wanted to use (i.e. 8
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi, I'm using Gluster 3.3.0-1.el6.x86_64 on two storage nodes in replicated mode (fs1, fs2). Node specs: CentOS 6.2, Intel quad-core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB SATA 7200rpm (RAID1 for the OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit network. I've mounted the data partition on web1, a dual quad-core 2.8GHz with 8GB RAM, using glusterfs (I also tried an NFS -> Gluster mount). We have 50GB of
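A few tuning knobs often suggested for small-file and directory-listing workloads on 3.3; a hedged sketch only, with illustrative values that should be benchmarked against the real workload:

    gluster volume set <vol> performance.cache-size 256MB
    gluster volume set <vol> performance.io-thread-count 32
    gluster volume set <vol> performance.cache-refresh-timeout 10
    gluster volume info <vol>    # confirm what is actually set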
2008 Dec 16
4
GlusterFS process uses a lot of memory
Hello! I am trying to use GlusterFS + OpenVZ, but the glusterfs process's memory usage increases by about 2MB every minute. How can I fix this? P.S. Sorry about my bad English. Cluster information: 1) 3 nodes (server-client), conf:
##############
# local data #
##############
volume vz
  type storage/posix
  option directory /home/local
end-volume
volume vz-locks
  type features/posix-locks
  subvolumes vz
end-volume
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
yes this disables
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is missing. Still I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
2004 May 21
3
rsync hangs in cron (not SSH-problem)
This is the case:
- mounted the Inetpub windows webserver on /mnt/web1, /mnt/web2, etc.
- rsync this to a local dir:
  rsync -av --delete /mnt/web1 /mass/kuurne/day
  rsync -av --delete /mnt/web2 /mass/kuurne/day
  etc.
- when logged in, everything works (I do see some errors about non-existent files, but rsync doesn't stop). When I use this command in cron:
  00 01 * * * rsync -av --delete
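One common way to keep a nightly rsync from piling up when it hangs is to run it under flock with an I/O timeout and capture its output; a hedged sketch, with illustrative lock and log file names and the paths from the post:

    00 01 * * * flock -n /var/lock/web-rsync.lock rsync -av --delete --timeout=600 /mnt/web1 /mass/kuurne/day >> /var/log/web-rsync.log 2>&1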
2008 Nov 09
3
Still having problems with trivial self-heal
Hi! I have a trivial problem with self-healing. Maybe somebody will be able to tell me what I am doing wrong, and why the files do not heal as I expect. Configuration: Servers: two nodes A, B
---------
volume posix
  type storage/posix
  option directory /ext3/glusterfs13/brick
end-volume
volume brick
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume
volume server
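With the old client-side AFR (1.3/1.4), self-heal was commonly triggered by accessing files through the mount point after a node came back; a hedged sketch, with an illustrative mount path:

    # walk the tree and read one byte of each file to force a heal check
    find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null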
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi guys, I'm using CentOS 6.5 as a guest on RHEV, with RHCS for a clustered web environment. The environment: web1.example.com, web2.example.com. When the cluster establishes quorum, web1 is rebooted (fenced) by web2; then when web2 comes back up, web2 is rebooted by web1. Does anybody know how to solve this "fence loop"? master_wins="1" does not work properly, and neither does qdisk. Below is the cluster.conf, I
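One commonly suggested way to break a two-node fence race is a startup fencing delay; a hedged sketch using the RHEL 6 ccs tool, where the host name follows the post, the delay value is illustrative, and the exact flags depend on the cluster setup:

    ccs -h web1.example.com --setfencedaemon post_join_delay=60
    ccs -h web1.example.com --sync --activate   # push and activate the new config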
2008 Nov 04
1
fuse_setlk_cbk error
I'm building a two-node cluster to run vserver systems on. I've set up glusterfs with this config:
# node a
volume data-posix
  type storage/posix
  option directory /export/cluster
end-volume
volume data1
  type features/posix-locks
  subvolumes data-posix
end-volume
volume data2
  type protocol/client
  option transport-type tcp/client
  option remote-host
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and do an ls I get:
[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x   3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root  4096 Jun 21 19:29 ..
drwxr-xr-x   3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root  4096
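Gluster's built-in NFS server only speaks NFSv3, so a common first check is to force v3 (and nolock) on the client side; a hedged sketch with an illustrative server and volume name:

    mount -t nfs -o vers=3,nolock,tcp server:/gv0 /mnt/share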
2008 Dec 01
2
Error while copying/moving file
An HTML attachment was scrubbed... URL: http://zresearch.com/pipermail/gluster-users/attachments/20081201/151a90cd/attachment.htm
2017 Oct 19
3
gluster tiering errors
All, I am new to gluster and have some questions/concerns about some tiering errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
2008 Dec 14
1
Is that iozone result normal?
5 server nodes and 1 client node are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
                                                random  random    bkwd  record  stride
      KB  reclen   write rewrite    read  reread   read   write    read rewrite    read  fwrite frewrite   fread freread
 8388608      32   10559    9792   62435   62260
 8388608     512   63012   63409   63409   63138
It seems 32k write/rewrite performance is very
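As a back-of-envelope reference (not from the post), the wire ceiling of the single gigabit link is the upper bound any of these figures could reach:

    # 1 Gbit/s expressed in MB/s, before protocol overhead
    echo $(( 1000000000 / 8 / 1000000 )) MB/s   # = 125 MB/s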
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey all, in a two-node glusterfs setup with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Any way to allow the secondary node to keep functioning and then replicate what changed back to the first (primary) node when it's online again? Or should I just go for a third node to allow for this? Also, how safe is it to set the following to none?
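A hedged sketch of the two options discussed in this thread; gv01 is the volume from the subject line, node3 and the brick path are illustrative, and the exact arbiter syntax depends on the Gluster version:

    # Option 1 (split-brain prone): relax quorum so a lone node can serve
    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none

    # Option 2 (generally preferred): add a third, arbiter-only brick so quorum can stay on
    gluster peer probe node3
    gluster volume add-brick gv01 replica 3 arbiter 1 node3:/bricks/gv01/brick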
2009 May 28
2
Glusterfs 2.0 hangs on high load
Hello! After upgrading to version 2.0 (now using 2.0.1), I'm experiencing problems with glusterfs stability. I'm running a 2-node setup with client-side AFR, and glusterfsd is also running on the same servers. From time to time glusterfs just hangs; I can reproduce this by running the iozone benchmarking tool. I'm using a patched FUSE, but the result is the same with unpatched.
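A generic (not Gluster-specific) way to see where a hung client is stuck is to grab a backtrace of all its threads; a hedged sketch assuming gdb is installed and the client process is named glusterfs:

    gdb -batch -ex "thread apply all bt" -p $(pidof glusterfs) > /tmp/glusterfs-hang-bt.txt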
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
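Related checks and tuning on a 3.10 tiered volume; a hedged sketch, with illustrative watermark values and <vol> as in the post:

    gluster volume tier <vol> status
    gluster volume set <vol> cluster.watermark-hi 90
    gluster volume set <vol> cluster.watermark-low 75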
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive. The archive stores webpages collected by our spiders. The test setup consists of three data machines, each exporting a volume of about 3.7TB, and one nameserver machine. The file layout is such that each host has its own directory; for example, the GlusterFS website would be located in: