similar to: ZFS Uptime/Availability?

Displaying 20 results from an estimated 5000 matches similar to: "ZFS Uptime/Availability?"

2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives, connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware-managed RAID, ZFS-managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
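For the ZFS-managed-RAID case, a minimal sketch of one candidate layout, a pool of six two-way mirrors built from the 12 drives; the c#t#d# device names are placeholders, and this is only one of the configurations being compared, not the thread's conclusion.

    # Sketch only: six 2-way mirrors from the 12 SAS drives (device names hypothetical).
    zpool create tank \
      mirror c2t0d0 c2t1d0 \
      mirror c2t2d0 c2t3d0 \
      mirror c2t4d0 c2t5d0 \
      mirror c2t6d0 c2t7d0 \
      mirror c2t8d0 c2t9d0 \
      mirror c2t10d0 c2t11d0
    zpool status tank    # verify the layout before running write benchmarks
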
2018 Mar 06
2
Fixing a rejected peer
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > I'm tempted to repeat - down things, copy the checksum the "good" ones agree on, start things; but given that this has turned into a balloon-squeezing exercise, I want to make sure I'm not doing this the wrong way. > > Yes, that's the way. Copy
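A hedged sketch of the "down things, copy, start things" sequence being confirmed here; /var/lib/glusterd is the usual default location, and good-peer and <vol> are placeholders.

    # On the rejected peer -- sketch only, hostname and volume name are placeholders.
    systemctl stop glusterd
    # Copy the volume definition the "good" peers agree on (includes the cksum file).
    rsync -a good-peer:/var/lib/glusterd/vols/<vol>/ /var/lib/glusterd/vols/<vol>/
    systemctl start glusterd
    gluster peer status    # the peer should no longer show as Rejected
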
2010 Feb 16
2
High Performance and Availability
Hello everyone, I am currently running Dovecot as a high performance solution to a particular kind of problem. My userbase is small, but it murders email servers. The volume is moderate, but message retention requirements are stringent, to put it nicely. Many users receive a high volume of email traffic, but want to keep every message, and *search* them. This produces mail accounts up to
2003 Jan 28
2
rsync 2.5.6 fails on Tru64 v5.0 with rsync://<hostname>/
I've just compiled the 2.5.6 release on Tru64 V5.0A (configure detects alphaev67-dec-osf5.0; the gcc release is 3.1.1). rsync fails using the rsync://<hostname>/ syntax. > lct@goliath(32) [rsync-2.5.6]$ ./rsync rsync://stitch/ > rsync: getaddrinfo: stitch 873: servname not supported for ai_socktype > rsync error: error in socket IO (code 10) at clientserver.c(83) Is there anyone else
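The "servname not supported for ai_socktype" failure comes from getaddrinfo() rejecting the service/port lookup, so two quick diagnostic checks, assuming a standard Tru64 setup, would be:

    grep -w rsync /etc/services       # is an rsync/873 service entry defined?
    ./rsync rsync://stitch:873/       # retry with the port spelled out in the URL
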
2007 Jun 16
1
audio stitching in php
I am looking at writing a program using PHP so that after X number of days (e.g. 60 days) a Speex file is created that contains all the Speex files in a set folder, with a bit of text-to-speech between the files for ID reasons; the Speex files in the folder are deleted once done. Based on what I have read (correct me if I'm wrong): I need to convert the Speex files to WAV files (executable program), then I can
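A hedged command-line sketch of the decode-and-join step (filenames are hypothetical); speexdec turns .spx into WAV and sox concatenates the WAVs, and PHP could drive both via exec().

    # Decode each Speex file to WAV (filenames are placeholders).
    speexdec clip1.spx clip1.wav
    speexdec announce1.spx announce1.wav   # the text-to-speech "id" segment
    speexdec clip2.spx clip2.wav

    # Concatenate in order into a single output file.
    sox clip1.wav announce1.wav clip2.wav combined.wav
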
2018 Mar 05
0
tiering
Hi, There isn't a way to replace the failing tier brick through a single command, as we don't have support for replace, remove, or add brick with tier. Once you bring the brick online (volume start force), the data in the brick will be rebuilt by the self heal daemon (done because it's a replicated tier). But adding a brick will still not work. Else if you use the force option, it will work as
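A short sketch of the bring-it-online step described above, using standard gluster commands; <vol> is a placeholder.

    # Restart the volume processes so the offline brick comes back up.
    gluster volume start <vol> force

    # Watch the self heal daemon rebuild the brick (replicated hot tier).
    gluster volume heal <vol> info
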
2017 Oct 22
1
gluster tiering errors
There are several messages "no space left on device". I would check first that free disk space is available for the volume. On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote: > Herb, > What are the high and low watermarks for the tier set at ? > > # gluster volume get <vol> cluster.watermark-hi > > # gluster volume get
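That free-space check can be done directly on each brick host; the path below is a placeholder for whichever brick backs the tier.

    # Check free space and free inodes on the tier brick filesystem (path is a placeholder).
    df -h /path/to/tier-brick
    df -i /path/to/tier-brick
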
2018 Mar 04
1
tiering
Hi, Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with a 3 ssd tier where one ssd is bad. Status of volume: labgreenbin Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Hot Bricks: Brick labgfs81:/gfs/p1-tier/mount 49156 0 Y 4217 Brick
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test to see system behavior *if quota is turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
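The quota check and the quota-off test being requested can be done with the standard quota sub-commands; <vol> is a placeholder.

    # Inspect current quota limits and usage.
    gluster volume quota <vol> list

    # Temporarily disable quota to see whether tier migrations still fail.
    gluster volume quota <vol> disable
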
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
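For reference, checking and, if needed, adjusting the watermarks uses the same volume options quoted above; the value shown is only an example.

    gluster volume get <vol> cluster.watermark-hi
    gluster volume get <vol> cluster.watermark-low

    # Example only: lower the high watermark so promotions stop sooner.
    gluster volume set <vol> cluster.watermark-hi 80
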
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or to work-around the following issue? Thanks! Bug 1549714 - On sharded tiered volume, only first shard of new file goes on hot tier. https://bugzilla.redhat.com/show_bug.cgi?id=1549714 On sharded tiered volume, only first shard of new file goes on hot tier. On a sharded tiered volume, only the first shard of a new file goes on the hot tier, the rest
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info, I grabbed that from a file not realizing it was out of date. Here's a current configuration showing the active hot tier: [root at pod-sjc1-gluster1 ~]# gluster volume info Volume Name: gv0 Type: Tier Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196 Status: Started Snapshot Count: 13 Number of Bricks: 8 Transport-type: tcp Hot
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.. >> What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi Option Value ------ ----- cluster.watermark-hi 90 # gluster volume get <vol> cluster.watermark-low Option
2018 Mar 07
0
Fixing a rejected peer
Please run 'gluster v get all cluster.max-op-version' and whatever value it returns should be used to bump up the cluster.op-version (gluster v set all cluster.op-version <value>). With that, if you restart the rejected peer, I believe the problem should go away; if it doesn't, I'd need to investigate further once you can pass along the glusterd and cmd_history log files and
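The sequence described above, spelled out as commands; <value> is whatever max-op-version reports, and the restart happens on the rejected peer.

    gluster v get all cluster.max-op-version        # read the highest supported op-version
    gluster v set all cluster.op-version <value>    # bump the cluster to that value
    systemctl restart glusterd                      # on the rejected peer
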
2018 Mar 05
0
Why files go to hot tier and cold tier at the same time
Hi, The actual data will be in the hot tier only till demotion. The file that you see on the cold tier is just a linkto file of the file on the hot tier. These linkto files are necessary for the internal working of the tier. On Mon, Mar 5, 2018 at 1:16 PM, Sherin George <allmyforums at outlook.in> wrote: > Hi Guys > > Got a quick question regarding hot tier and cold tier. > I
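One way to confirm that a cold-tier entry is only a linkto file is to inspect it on the cold brick; the path is a placeholder, and the exact xattr names can vary by version.

    # Linkto files appear as zero-byte entries with mode ---------T on the brick.
    ls -l /cold-brick/path/to/file

    # Dump the trusted.* xattrs; the dht linkto xattr records where the data really lives.
    getfattr -d -m . -e text /cold-brick/path/to/file
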
2017 Oct 19
3
gluster tiering errors
All, I am new to gluster and have some questions/concerns about some tiering errors that I see in the log files. OS: CentOs 7.3.1611 Gluster version: 3.10.5 Samba version: 4.6.2 I see the following (scrubbed): Node 1 /var/log/glusterfs/tier/<vol>/tierd.log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
2017 Aug 03
0
Hot Tier
Hi, We will look into the "failed to get index" error. It shouldn't affect normal operation. Do let us know if you face any other issues. Regards, Hari. On 02-Aug-2017 11:55 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote: Hello I reattached the hot tier to a new empty EC volume and started to copy data to the volume. Good news is I can see files now on SSD
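For reference, the reattach step uses the tier attach sub-command; volume name, replica count, and brick paths are placeholders.

    # Attach an SSD hot tier (replica 2) to the EC volume -- placeholders throughout.
    gluster volume tier <vol> attach replica 2 \
        server1:/ssd/brick1 server2:/ssd/brick1
    gluster volume tier <vol> status
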
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
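One heavily hedged way to probe that theory: the tier's heat database is a sqlite file kept under the brick's .glusterfs directory (the exact filename varies by version), and sqlite3 can show how it is configured for concurrent writers.

    # Path is hypothetical -- locate the tier database under the hot brick first.
    DB=$(ls /path/to/hot-brick/.glusterfs/*.db 2>/dev/null | head -1)

    sqlite3 "$DB" ".tables"                 # confirm it is the gfdb heat database
    sqlite3 "$DB" "PRAGMA journal_mode;"    # WAL vs. rollback journaling affects writer blocking
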
2018 Jan 09
2
Blocking IO when hot tier promotion daemon runs
I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3 bricks per server distributed replicated volume. I'm seeing IO get blocked across all client FUSE threads for 10 to 15 seconds while the promotion daemon runs. I see the 'glustertierpro' thread jump to 99% CPU usage on both boxes when these delays occur and they happen every 25 minutes (my
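The 25-minute cadence points at the tier promote/demote frequency settings; these are standard tier volume options, gv0 is the volume name from this thread, and the new value is only an example.

    gluster volume get gv0 cluster.tier-promote-frequency
    gluster volume get gv0 cluster.tier-demote-frequency

    # Example only: promote less often to shrink the blocking windows.
    gluster volume set gv0 cluster.tier-promote-frequency 3600
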
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on tiered volume. This is very attractive > feature that could