Is it possible to see my HDD's total time in use, so I can switch to a new one before it gets too many hours old?
-- This message posted from opensolaris.org
On Jan 21, 2011, at 22:36, Tobias Lauridsen wrote:

> Is it possible to see my HDD's total time in use, so I can switch to a new one before it gets too many hours old?

Any hard drive can die at any time. If such a feature exists, it wouldn't be wise to rely on it. I've had a drive die in an array at work after about 1.5 years of service, and when the replacement came in, it was actually dead as well. So we had to call up and get a /third/ drive and send back the first two.

That's why we have mirroring/RAID and backups. You do have backups, both at work and at home, correct?
Yes, an HDD can die at any time, but the older it is, the bigger the chance it will die. When you buy a new car, the chance it will break down is smaller than if your car has many years on the road. And yes, backups all the way ;-)

I know I have seen the total hours in a S.M.A.R.T. tool, but how do I get that in Solaris?
-- This message posted from opensolaris.org
On Jan 21, 2011, at 7:36 PM, Tobias Lauridsen wrote:

> Is it possible to see my HDD's total time in use, so I can switch to a new one before it gets too many hours old?

In theory, yes. In practice, I've never seen a disk properly report this data on a consistent basis :-( Perhaps some of the more modern disks do a better job?

Look for the power on hours (POH) attribute of SMART.
http://en.wikipedia.org/wiki/S.M.A.R.T.
 -- richard
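For reference, one way to read the POH attribute is with smartctl from smartmontools, which is a third-party package on Solaris/OpenSolaris rather than part of the base system. A minimal sketch, assuming the drives answer ATA SMART queries through the "-d sat" translation and sit under the usual /dev/rdsk paths; adjust the glob and the -d option for your controller:

  #!/bin/sh
  # Walk the visible disks and print the Power_On_Hours attribute.
  # SMART attribute 9 is Power_On_Hours on most ATA drives; drives
  # that do not support SMART simply print nothing here.
  for disk in /dev/rdsk/c*t*d*s0; do
      echo "== $disk =="
      smartctl -A -d sat "$disk" 2>/dev/null | grep -i Power_On_Hours
  done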
Richard Elling wrote:
> On Jan 21, 2011, at 7:36 PM, Tobias Lauridsen wrote:
>
>> Is it possible to see my HDD's total time in use, so I can switch to a new one before it gets too many hours old?
>
> In theory, yes. In practice, I've never seen a disk properly report this data on a
> consistent basis :-( Perhaps some of the more modern disks do a better job?
>
> Look for the power on hours (POH) attribute of SMART.
> http://en.wikipedia.org/wiki/S.M.A.R.T.

If you're looking for stats to give an indication of likely wear, and thus increasing probability of failure, POH is probably not very useful by itself (or even at all). Things like Head Flying Hours and Load Cycle Count are probably more indicative, although not necessarily maintained by all drives.

Of course, data which gives an indication of actual (rather than likely) wear is even more important as an indicator of impending failure, such as the various error and retry counts.

--
Andrew Gabriel
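Along the same lines, a sketch (not a tool from this thread) that pulls the wear- and error-related attributes Andrew mentions for a single drive. The attribute names vary by vendor, not every drive maintains all of them, and the device path below is just a placeholder:

  #!/bin/sh
  # Show the SMART attributes that track mechanical wear and media
  # errors for one drive; missing lines mean the drive does not
  # maintain that attribute.
  DISK=${1:-/dev/rdsk/c0t0d0s0}   # example device path, adjust to taste
  smartctl -A -d sat "$DISK" | egrep -i \
      'Head_Flying_Hours|Load_Cycle_Count|Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|CRC_Error_Count'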
> If you're looking for stats to give an indication of likely wear, and
> thus increasing probability of failure, POH is probably not very useful
> by itself (or even at all). Things like Head Flying Hours and Load Cycle
> Count are probably more indicative, although not necessarily maintained
> by all drives.
>
> Of course, data which gives an indication of actual (rather than likely)
> wear is even more important as an indicator of impending failure, such
> as the various error and retry counts.

I cannot but agree. iostat will show better info, and a script like http://karlsbakk.net/iostat-overview.sh can give you a pretty decent overview of which drives should be replaced. This will show you drives with errors reported. In my experience, a drive can last a long time, but may die early as well.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
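Roy's script is not reproduced in this thread, but on Solaris a rough equivalent of its error-count check can be had from iostat -En, which prints cumulative soft/hard/transport error counters per device. A small sketch; the field positions assume the standard iostat -En layout, so verify against your own output:

  #!/bin/sh
  # List devices whose cumulative error counters are non-zero.
  # iostat -En prints a per-device header line such as:
  #   c0t0d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
  iostat -En | nawk '
      /Soft Errors:/ {
          dev  = $1
          soft = $4; hard = $7; trans = $10
          if (soft + hard + trans > 0)
              printf "%s: soft=%s hard=%s transport=%s\n", dev, soft, hard, trans
      }'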
On 1/23/11 10:30 AM, Roy Sigurd Karlsbakk wrote:
>> If you're looking for stats to give an indication of likely wear, and
>> thus increasing probability of failure, POH is probably not very useful
>> by itself (or even at all). Things like Head Flying Hours and Load Cycle
>> Count are probably more indicative, although not necessarily maintained
>> by all drives.
>>
>> Of course, data which gives an indication of actual (rather than likely)
>> wear is even more important as an indicator of impending failure, such
>> as the various error and retry counts.
>
> I cannot but agree. iostat will show better info, and a script like
> http://karlsbakk.net/iostat-overview.sh can give you a pretty decent
> overview of which drives should be replaced. This will show you drives
> with errors reported. In my experience, a drive can last a long time,
> but may die early as well.

But Google and CMU found that there was an increase in failures as POH increased, and that the "bathtub curve" was a myth perpetuated by drive manufacturers (who, of course, know that it is not true, since they have certain "big picture" statistical data that the rest of us don't have). I believe the CMU data showed that, effectively, after the third year you are better off doing proactive drive refreshes rather than waiting for failures. YMMV. And I would add that the environments CMU tested (large HPC installations) might be more "coddled" than many of the environments people keep their disks in.

http://www.cs.cmu.edu/~bianca/fast07.pdf

  In year 4 and year 5 (which are still within the nominal lifetime of
  these disks), the actual replacement rates are 7-10 times higher than
  the failure rates we expected based on datasheet MTTF.
  ...
  Observation 5: Contrary to common and proposed models, hard drive
  replacement rates do not enter steady state after the first year of
  operation. Instead, replacement rates seem to steadily increase over
  time.

  Observation 6: Early onset of wear-out seems to have a much stronger
  impact on lifecycle replacement rates than infant mortality, as
  experienced by end customers, even when considering only the first
  three or five years of a system's lifetime. We therefore recommend
  that wear-out be incorporated into new standards for disk drive
  reliability. The new standard suggested by IDEMA does not take
  wear-out into account [5, 33].

http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en/us/papers/disk_failures.pdf

Also, Google mentioned that they could not find a good statistical correlation for any SMART data fields to serve as predictors of failure.
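If you want to act on that "refresh after year three" observation, a hypothetical helper (not anything from the thread or the papers) could combine the POH readout shown earlier with a simple threshold. Note that some drives report attribute 9 in minutes or in a vendor-specific string format, so sanity-check the raw values before trusting the output:

  #!/bin/sh
  # Hypothetical sketch: flag drives whose SMART power-on hours
  # (attribute 9) are past roughly three years of continuous use.
  # Assumes smartmontools and a plain numeric raw value for attribute 9
  # (values like "12345h+32m" are not handled here).
  THRESHOLD_HOURS=26280   # ~3 years x 24 h x 365 days
  for disk in /dev/rdsk/c*t*d*s0; do
      poh=`smartctl -A -d sat "$disk" 2>/dev/null | nawk '$1 == 9 { print $NF }'`
      [ -z "$poh" ] && continue
      if [ "$poh" -gt "$THRESHOLD_HOURS" ]; then
          echo "$disk: $poh power-on hours, consider a proactive replacement"
      fi
  done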