2005 Dec 21
1
System Reliability Metrics
I need to calculate some metrics such as Mean Time Between Failures
(MTBF), etc. (see http://www.cs.sandia.gov/~jrstear/ras for a more
complete list). I have observations like
   start                end                  state
1  2005-11-11 09:05:00  2005-11-11 12:20:00  Scheduled Downtime
2  2005-11-12 13:42:00  2005-11-12 14:45:00  Unscheduled Downtime...
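A minimal sketch of the requested calculation, in Python rather than the R the list covers: MTBF is the mean up interval between successive outages, MTTR the mean outage length (treating scheduled downtime as failure time here is an assumption):

    from datetime import datetime

    # The two observations quoted above; scheduled downtime counted as failure.
    fmt = "%Y-%m-%d %H:%M:%S"
    outages = [
        ("2005-11-11 09:05:00", "2005-11-11 12:20:00"),
        ("2005-11-12 13:42:00", "2005-11-12 14:45:00"),
    ]
    events = [(datetime.strptime(s, fmt), datetime.strptime(e, fmt))
              for s, e in outages]

    # MTBF: mean operating time between outages (end of one to start of next).
    ups = [(events[i + 1][0] - events[i][1]).total_seconds() / 3600
           for i in range(len(events) - 1)]
    mtbf = sum(ups) / len(ups)

    # MTTR: mean outage duration.
    mttr = sum((e - s).total_seconds() / 3600 for s, e in events) / len(events)
    print(f"MTBF ~ {mtbf:.1f} h, MTTR ~ {mttr:.1f} h")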
2009 Apr 16
2
MTBF of Ext3 and Partition Size
Hi All,
On several of my servers I seem to have a high rate of server crashes due to
file system errors, so I have some questions related to this:
Is there any Mean Time Between Failures (MTBF) data for the ext3
file system?
Does increased partition size cause a higher risk of the partition being
corrupted? If so, is there any data on the ratio between partition size and
the likelihood of failure?
Does ext3 on hardware RAID (10) increase the possibility of file system
corruption?
Doe...
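On the partition-size question, the first-order model is that corruption risk grows roughly linearly with the number of blocks: P(at least one bad block) = 1 - (1 - p)^n. A back-of-envelope sketch in Python (the per-block probability is purely assumed):

    # Corruption risk vs partition size (per-block probability assumed).
    # P(at least one bad block) = 1 - (1 - p)**n ~= n*p for small p,
    # so first-order risk scales linearly with partition size.
    p_block = 1e-12         # assumed per-block annual corruption probability
    block_size = 4096       # bytes per ext3 block

    for size_gb in (100, 500, 2000):
        n_blocks = size_gb * 1e9 / block_size
        p_any = 1 - (1 - p_block) ** n_blocks
        print(f"{size_gb:5d} GB: P(>=1 bad block) ~ {p_any:.2e}")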
2007 Jul 31
1
MTBF Reliability calculations
I'm working on a project involving reliability values (known failure
rates) for a system of approximately 700 components in a fixed
configuration.
I'm looking to compute a "parts-count" MTBF (mean time between failures)
for the system.
(See also MIL-HDBK-217)
Is there anything in R that can help me with this?
Thanks,
Eric Jennings
QA Technical Assistant
Crane Electronics --Redmond
10301 Willows Road
P.O. Box 97005
Redmond, WA 98073
PH 425.895.5039
e-mail eric.jennings@i...
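For reference, the parts-count method in MIL-HDBK-217 sums per-component failure rates for a series system and inverts the total; a minimal sketch in Python rather than R (component data invented for illustration):

    # Parts-count MTBF (component data invented for illustration).
    # Series system: lambda_sys = sum(n_i * lambda_i), MTBF = 1/lambda_sys.
    parts = {
        # part type: (quantity, failures per 1e6 hours)
        "resistor":  (400, 0.002),
        "capacitor": (250, 0.010),
        "ic":        (45,  0.050),
        "connector": (5,   0.300),
    }
    lam = sum(q * l for q, l in parts.values())     # failures per 1e6 h
    print(f"lambda = {lam:.2f}/1e6 h, MTBF = {1e6 / lam:,.0f} h")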
2006 Sep 19
4
Disk Layout for New Storage Server
...is to create a raidz2 pool for each shelf (i.e. 5TB per shelf of usable storage) and stripe across the shelves. This would let me lose up to two drives per shelf and still be operational. My only concern with this is if a shelf fails (SCSI card failure, etc) the whole system is down, however the MTBF for a SCSI card is WAAAAY higher than the MTBF of a hard drive...
FWIW, we do have a backup strategy onto a SpectraLogic tape loader, so losing the whole array, while bad, won't put us out of business, though I'd prefer it didn't happen :)
Obviously there are dozens of...
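The shelf-versus-drive trade-off can be roughed out with series-system arithmetic: failure rates (1/MTBF) add, so expected events per year is the sum over parts. A sketch with assumed MTBF figures (neither number is from the thread):

    # Series-system arithmetic with assumed MTBF figures (not from the thread).
    mtbf_scsi_card = 2_000_000          # hours, assumed
    mtbf_drive = 500_000                # hours, assumed
    shelves, drives_per_shelf = 6, 12   # assumed layout

    card_failures_per_year = shelves / mtbf_scsi_card * 8766
    drive_failures_per_year = shelves * drives_per_shelf / mtbf_drive * 8766
    print(f"expected card failures/yr:  {card_failures_per_year:.3f}")
    print(f"expected drive failures/yr: {drive_failures_per_year:.3f}")
    # Drives fail far more often in aggregate, but raidz2 absorbs those;
    # a single card failure takes down a whole shelf at once.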
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to
create a raid 10 device by installing the system, copying the md modules
onto a floppy, and loading the raid10 module during the install.
Now the problem is that I can't get it to show up in anaconda. It
detects the other arrays (raid0 and raid1) fine, but the raid10 array
won't show up. Looking through the logs
2008 Feb 05
4
Enterprise-class monitoring system for CentOS and Win2k3 server
...Linux and Windows servers? Here are my requirements:
SNMP trap collection, ability to import custom MIBs
up/down monitoring of ports and daemons
Server health monitors (CPU, Disk, Memory, etc)
SLA reporting with nice graphs
Pager/Email/SMS alerts with groups, filters and escalations
Built-in MTBF and MTTR reporting
Robust parent-child relationships between monitors or probes. For
example, the system must be smart enough to know that if 25 URLs have
gone down all at once, they belong to an apache process that has
died. I don't want 25 alerts, I want *one* alert telling me that the...
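The parent-child requirement amounts to alert rollup over a dependency graph: walk each down monitor's ancestry and alert only on the highest failed node. A minimal sketch in Python (monitor names hypothetical, not any particular product's API):

    # Alert rollup over a parent-child dependency graph (names hypothetical).
    parents = {f"url-{i}": "apache-frontend" for i in range(25)}
    parents["apache-frontend"] = None          # top-level monitor

    down = set(parents)                        # everything just went down

    def root_causes(down, parents):
        """Alert only on down monitors whose parent (if any) is still up."""
        return {m for m in down if parents[m] is None or parents[m] not in down}

    print(root_causes(down, parents))          # {'apache-frontend'}: 1 alert, not 26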
2006 Mar 30
1
disk drive sparing - questions not answers
...lty ZFS
elements with ZFS pre-allocated 'spares'.
Many hardware RAID controllers support the concept of 'spares'. Some allow
the spare to be powered down, so that the spare does not fail just as the
drive it is intended to replace fails! Whoops - the wonders of MTBF
and the statistical likelihood that identical drives, with identical
runtimes and MTBF specs, operated in an identical runtime environment, are
likely to fail within a short timeframe of one another.
[0] Unfortunately, the "supply" of knowledgeable and talented admins in
various technica...
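The clustering effect the poster describes falls out of any wear-out failure model: under a constant-rate (exponential) model identical drives fail at widely scattered times, but with a Weibull shape parameter above 1 the failure times bunch near the characteristic life. A small simulation sketch (all parameters assumed):

    import random

    # Failure-time spread for 8 identical drives (all parameters assumed).
    random.seed(1)
    scale = 60_000      # characteristic life in hours, assumed

    for shape, label in [(1.0, "constant rate (k=1)"), (4.0, "wear-out (k=4)")]:
        t = sorted(random.weibullvariate(scale, shape) for _ in range(8))
        print(f"{label}: failures span {t[-1] - t[0]:,.0f} h")
    # With k=4 all eight failures land near the scale life, so a powered-up
    # spare with the same runtime is likely to die in the same window.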
2016 Oct 28
2
Re: Disk near failure
...h 5 year warranty, 3rd group, 240GiB ~ 100€
The "Corsair Force LE" with 3 year warranty, 5th group, 240GiB ~ 70€
>From the user standpoint, the difference between the Samsung SSD 850 Evo
and the Corsair Neutron XTi is not that big.
Samsung: TLC 3D flash, 75 TBW @ 250 GiB, 1.5M hours MTBF, 512 MB cache
Corsair: MLC 2D flash, 160 TBW @ 240 GiB, ?? MTBF, no RAM cache
Either Corsair a) did not test for MTBF, b) does not want to show the
MTBF, or c) is not really satisfied with it and thus hides it. *shrugs*
http://www.anandtech.com/show/9799/best-ssds
http://www.anandtech.com/show/1...
2010 Apr 27
42
Performance drop during scrub?
Hi all
I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool.
How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at
2004 Jul 22
1
RAID/SCSI/IDE/SATA and a TE405P (or T100P) card. Should I expect problems?
...---- Original Message -----
From: "Dana Nowell" <DanaNowell@cornerstonesoftware.com>
To: <asterisk-users@lists.digium.com>
Sent: Thursday, July 22, 2004 11:57 AM
Subject: Re: [Asterisk-Users] RAID affecting X100P performance...
> In an article on IDE vs. SCSI I read that MTBF numbers for IDE were
> frequently calculated at 8 hours on, 16 hours off per day (assumes desktop
> usage) but SCSI drives were calculated at 24 hrs on per day. So even though
> the MTBF numbers look the same ... The main reason is, reportedly, better
> quality controller parts and motor...
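That duty-cycle caveat is easy to quantify: the same power-on-hours MTBF implies roughly three times the annual failure probability when the drive runs 24x7 instead of 8 hours a day. A back-of-envelope sketch in Python (the 600k-hour rating is assumed for illustration):

    import math

    # Same rated MTBF, two duty cycles (the 600k-hour rating is assumed).
    mtbf_poh = 600_000           # rated MTBF in power-on hours
    hours_per_year = 8766

    for duty, label in [(8 / 24, "8 h/day rating basis (IDE)"),
                        (1.0, "24x7 operation")]:
        afr = 1 - math.exp(-hours_per_year * duty / mtbf_poh)
        print(f"{label}: AFR ~ {afr:.2%}")
    # Identical ratings, but 24x7 use accrues power-on hours ~3x faster,
    # so the desktop-rated drive sees ~3x the annual failure probability.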
2009 Dec 08
1
Seagate announces enterprise SSD
FYI,
Seagate has announced a new enterprise SSD. The specs appear
to be competitive:
+ 2.5" form factor
+ 5 year warranty
+ power loss protection
+ 0.44% annual failure rate (AFR) (2M hours MTBF, IMHO too low :-)
+ UER 1e-16 (new), 1e-15 (5 years)
+ 30,000/25,000 4 KB read IOPS (peak/aligned zero offset)
+ 30,000/10,500 4 KB write IOPS (peak/aligned zero offset)
http://www.seagate.com/www/en-us/products/servers/pulsar/pulsar/
http://storageeffect.media.seagate.com/2009/12/storage-effec...
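Seagate's two headline reliability numbers are mutually consistent under the standard exponential model, AFR = 1 - exp(-8766/MTBF); a quick check in Python:

    import math

    # Check the quoted spec: 2M-hour MTBF vs 0.44% AFR.
    mtbf = 2_000_000                    # hours, from the announcement
    afr = 1 - math.exp(-8766 / mtbf)    # 8766 h in an average year
    print(f"AFR = {afr:.2%}")           # -> 0.44%, matching the spec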
2007 Oct 17
1
Asterisk on USB Flash?
Size/Speed/write cycles have gone way up, price has gone way down. More
common than CompactFlash and no need for an adapter. So is it feasible to
run an Asterisk server on something like this? With an endurance rating of
1 million write cycles coupled with dynamic wear management on a 4 GB USB
drive, lifetime is a non-issue. Just wondering how well it works, if it works.
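A back-of-envelope lifetime estimate supports the poster's conclusion, assuming ideal wear leveling and an assumed daily write volume (Asterisk's real write load would need measuring):

    # Flash lifetime with ideal wear leveling (write volume is assumed).
    capacity_gb = 4             # from the post
    cycles = 1_000_000          # per-cell endurance, as claimed in the post
    writes_gb_per_day = 1       # assumption: modest logging/voicemail load

    years = capacity_gb * cycles / writes_gb_per_day / 365
    print(f"~{years:,.0f} years of writes")   # endurance is not the limit
    # Write amplification and imperfect leveling cut this down, but even
    # at 1% efficiency the stick outlives the server around it.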
2005 Jul 08
1
Re: Hot swap CPU -- shared memory (1 NUMA/UPA) v. clustered (4 MCH)
...ead, because it's much better to have a number of
point-to-point devices with direct pins to a PHY to a wide ASIC
than a wide bus that is shared by all devices.
And these 10K RPM SATA models are rolling off the _exact_same_
fab lines as their SCSI equivalents, with the same vibration specs
and MTBF numbers. They are not "commodity" [S]ATA drives, whose
fab lines many 7,200 RPM SCSI drives even now share (along with
the same 0.4M-hour MTBF as commodity [S]ATA).
--
Bryan J. Smith mailto:b.j.smith at ieee.org
2017 Sep 08
2
GlusterFS as virtual machine storage
I would prefer the behavior were different from the current I/O stopping.
The argument I heard for the long 42-second timeout was that MTBF on a
server was high, and that the client reconnection operation was *costly*.
Those were arguments to *not* change the ping timeout value down from 42
seconds. I think it was mentioned that low ping-timeout settings could lead
to high CPU loads with many clients trying to reconnect if a short time...
2009 Aug 06
10
RAID[56] status
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
pretty much there. Current code is at:
http://git.infradead.org/users/dwmw2/btrfs-raid56.git
http://git.infradead.org/users/dwmw2/btrfs-progs-raid56.git
We have recovery working, as well as both full-stripe writes
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
...rations; the SAS model offers one quarter
the buffer (16MB vs 64MB on the SATA model), the same rotational speed, and
costs 10% more than its enterprise SATA twin. (They also offer a Barracuda
XT SATA drive; it's roughly 20% less expensive than the Constellation drive,
but rated at 60% the MTBF of the others and a predicted rate of
nonrecoverable errors an order of magnitude higher.)
Assuming I'm going to be using three 8-drive RAIDz2 configurations, and
further assuming this server will be used for backing up home directories
(lots of small writes/reads), how much benefit will...
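On the nonrecoverable-error point: the quantity that matters during a resilver is the chance of hitting at least one unreadable sector while reading every surviving drive, 1-(1-UER)^bits. A Python sketch with generic spec-sheet rates (drive size and rates assumed, not Seagate's exact figures):

    import math

    # Probability of >=1 unrecoverable read error while resilvering one failed
    # drive in an 8-drive raidz2 (drive size assumed; rates are spec-sheet style).
    drive_tb = 2
    bits_per_drive = drive_tb * 8e12     # bits read from each surviving drive
    surviving = 7

    for uer, label in [(1e-15, "enterprise, UER 1e-15"),
                       (1e-14, "desktop, UER 1e-14")]:
        p = -math.expm1(surviving * bits_per_drive * math.log1p(-uer))
        print(f"{label}: P(URE during resilver) ~ {p:.0%}")
    # raidz2 keeps a second parity to absorb a URE after one drive loss, which
    # is exactly why the order-of-magnitude error-rate gap deserves attention.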
2016 Oct 28
4
Disk near failure
On Fri, October 28, 2016 2:42 am, Alessandro Baggi wrote:
> Il 27/10/2016 19:38, Yamaban ha scritto:
>> For my personal use I would replace that Drive asap.
>> - There is no warranty for it anymore (time since purchase)
>> - You can't buy it new anymore (discontinued)
>> - There are more reliable drives available.
>>
>> I'd go for a Samsung Evo 850, that
2005 Jun 03
3
bad blocks showing up
I am getting a few bad sector messages in my /var/log/messages.
I have read that an "fsck -c -c /dev/hda" may be what I need.
Before I go doing such things I am looking for confirmation that
that is what I should do. Can anyone please comment on how to
tell Linux not to use bad sectors on my disk? This is straight IDE,
no RAID or anything else at this point; /dev/hda is all.
Thanks,
jerry
2006 Nov 03
27
# devices in raidz.
For s10u2, the documentation recommends 3 to 9 devices in raidz. What is the
basis for this recommendation? I assume it is performance and not failure
resilience, but I am just guessing... [I know, the recommendation was intended
for people who know their RAID cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time