Hi list,

on my workstation I have an md RAID mirror for / on md1. Its members are two Corsair Force GT 120GB (MLC) SSDs, now about 5 years old. Today I checked the SSDs' SMART status and got:

ID# ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate      0x000f  100   100   050    Pre-fail Always  -           0/4754882
  5 Retired_Block_Count      0x0033  100   100   003    Pre-fail Always  -           0
  9 Power_On_Hours_and_Msec  0x0032  000   000   000    Old_age  Always  -           17337h+11m+24.440s
 12 Power_Cycle_Count        0x0032  099   099   000    Old_age  Always  -           1965
171 Program_Fail_Count       0x0032  000   000   000    Old_age  Always  -           0
172 Erase_Fail_Count         0x0032  000   000   000    Old_age  Always  -           0
174 Unexpect_Power_Loss_Ct   0x0030  000   000   000    Old_age  Offline -           780
177 Wear_Range_Delta         0x0000  000   000   000    Old_age  Offline -           3
181 Program_Fail_Count       0x0032  000   000   000    Old_age  Always  -           0
182 Erase_Fail_Count         0x0032  000   000   000    Old_age  Always  -           0
187 Reported_Uncorrect       0x0032  100   100   000    Old_age  Always  -           0
194 Temperature_Celsius      0x0022  033   042   000    Old_age  Always  -           33 (Min/Max 15/42)
195 ECC_Uncorr_Error_Count   0x001c  120   120   000    Old_age  Offline -           0/4754882
196 Reallocated_Event_Count  0x0033  100   100   003    Pre-fail Always  -           0
201 Unc_Soft_Read_Err_Rate   0x001c  120   120   000    Old_age  Offline -           0/4754882
204 Soft_ECC_Correct_Rate    0x001c  120   120   000    Old_age  Offline -           0/4754882
230 Life_Curve_Status        0x0013  100   100   000    Pre-fail Always  -           100
231 SSD_Life_Left            0x0013  100   100   010    Pre-fail Always  -           0
233 SandForce_Internal       0x0000  000   000   000    Old_age  Offline -           6585
234 SandForce_Internal       0x0032  000   000   000    Old_age  Always  -           6885
241 Lifetime_Writes_GiB      0x0032  000   000   000    Old_age  Always  -           6885
242 Lifetime_Reads_GiB       0x0032  000   000   000    Old_age  Always  -           6244

The second SSD shows very similar values. SSD_Life_Left has been at 0 on both drives for about a year.
Today these disks are working without problems:

# hdparm -tT /dev/md1

/dev/md1:
 Timing cached reads:   26322 MB in  2.00 seconds = 13181.10 MB/sec
 Timing buffered disk reads: 1048 MB in  3.00 seconds = 349.00 MB/sec

# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   26604 MB in  2.00 seconds = 13322.82 MB/sec
 Timing buffered disk reads: 1140 MB in  3.00 seconds = 379.87 MB/sec

# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   26258 MB in  2.00 seconds = 13148.38 MB/sec
 Timing buffered disk reads: 1140 MB in  3.00 seconds = 379.70 MB/sec

# dd if=/dev/zero of=file count=2000000
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 2.36335 s, 433 MB/s

Are my SSDs failing?

Thanks in advance.
On 10/21/2016 2:03 AM, Alessandro Baggi wrote:
>
> Are my SSDs failing?

SSDs wear out based on writes per block. They distribute those writes across the flash (wear leveling), but once each block has been written X number of times, it is no longer reliable.

Your drives appear to still be working perfectly, but they are beyond their design life. Sooner or later, if you continue the amount of writes you've been doing, you'll get back errors or bad data.

I would plan on replacing those drives sooner rather than later. 5 years was a good run.

-- 
john r pierce, recycling bits in santa cruz
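To put "written X number of times" in rough numbers: a back-of-the-envelope sketch in shell, using attributes 241 (Lifetime_Writes_GiB) and 9 (Power_On_Hours) from the SMART output earlier in the thread. The 120 GiB capacity figure is the nominal drive size, so treat the result as an approximation only.

```shell
#!/bin/bash
# Values copied from the SMART output earlier in this thread
WRITES_GIB=6885     # attribute 241, Lifetime_Writes_GiB
HOURS=17337         # attribute 9, Power_On_Hours
CAPACITY_GIB=120    # nominal drive size (approximation)

# Average write volume per day of power-on time
GIB_PER_DAY=$(( WRITES_GIB * 24 / HOURS ))
# How many times the whole drive has been written end to end
DRIVE_WRITES=$(( WRITES_GIB / CAPACITY_GIB ))

echo "~${GIB_PER_DAY} GiB written per day, ~${DRIVE_WRITES} full drive writes"
```

That comes out to roughly 9 GiB/day and about 57 full-drive writes over 5 years, well below the few-thousand program/erase cycles consumer MLC flash is typically rated for (before accounting for write amplification).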
Hello Alessandro,

On Fri, 2016-10-21 at 11:03 +0200, Alessandro Baggi wrote:
> ID# ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate      0x000f  100   100   050    Pre-fail Always  -           0/4754882

smartctl -A only shows a total error count for my disks, but I suppose this means 0 errors on 4754882 reads. Note that "Pre-fail" does not indicate that your disk is about to fail; it is an indication of the type of issue that causes this particular class of errors.

>   5 Retired_Block_Count      0x0033  100   100   003    Pre-fail Always  -           0

No retired blocks, that seems alright...

> Are my SSDs failing?

The easiest way to test for disk errors is by issuing

  smartctl -l xerror /dev/sda

If the output contains "No Errors Logged", your disks are fine. It is quite easy to put this in a (daily) cron job that greps the output of smartctl for that string and, if it does not find a match, sends a mail warning you about those disk errors:

#!/bin/bash

SMARTCTL=/usr/sbin/smartctl
GREP=/bin/grep
DEVICES='sda sdb'
HOST=$(hostname)
TO='a@example.com'
CC='b@example.com'

for d in $DEVICES ; do
    if ! $SMARTCTL -l xerror /dev/$d | $GREP -q 'No Errors Logged' ; then
        # ERRORS FOUND
        $SMARTCTL -x /dev/$d | mail -c $CC -s "$HOST /dev/$d SMART errors" $TO
    fi
done

Regards,
Leonard.

-- 
mount -t life -o ro /dev/dna /genetic/research
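One way to wire the script above up as a daily check - a sketch only; the install path, file name, and schedule here are my own choices, not from Leonard's mail - is a cron.d entry:

```shell
# /etc/cron.d/smart-check  (hypothetical path; script assumed installed
# as /usr/local/sbin/smart-mail-check, mode 0755)
30 6 * * * root /usr/local/sbin/smart-mail-check
```

Dropping the script into /etc/cron.daily/ instead works too and needs no crontab line at all.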
John R Pierce wrote:
> On 10/21/2016 2:03 AM, Alessandro Baggi wrote:
>>
>> Are my SSDs failing?
>
> SSDs wear out based on writes per block. They distribute those
> writes, but once each block has been written X number of times, they are
> no longer reliable.
>
> They appear to still be working perfectly, but they are beyond their
> design life. Sooner or later, if you continue the amount of writes
> you've been doing, you'll get back errors or bad data.
>
> I would plan on replacing those drives sooner rather than later. 5
> years was a good run.

1. Especially if they're consumer grade.
2. And that's a fairly early large (for SSD) drive.
3. We've got a RAID appliance that takes actual SCSI that's still running, though we're now in the process of replacing these 10-year-old RAIDs....
4. SATA is a *lot* cheaper for *much* larger capacity drives...

     mark
hi there.

The new update for links in EPEL takes it from 2.8-2 to 2.13-1, but yum pulls in 21 X Windows dependencies that weren't required before. I'd rather not install them - it's a headless server. Was this intentional?

===================================================================================
 Package                   Arch     Version                  Repository      Size
===================================================================================
Updating:
 links                     x86_64   1:2.13-1.el7             epel           2.8 M
Installing for dependencies:
 cairo                     x86_64   1.14.2-1.el7             base           711 k
 fontconfig                x86_64   2.10.95-7.el7            base           228 k
 fontpackages-filesystem   noarch   1.44-8.el7               base           9.9 k
 graphite2                 x86_64   1.3.6-1.el7_2            updates        112 k
 harfbuzz                  x86_64   0.9.36-1.el7             base           156 k
 libXdamage                x86_64   1.1.4-4.1.el7            base            20 k
 libXext                   x86_64   1.3.3-3.el7              base            39 k
 libXfixes                 x86_64   5.0.1-2.1.el7            base            18 k
 libXft                    x86_64   2.3.2-2.el7              base            58 k
 libXrender                x86_64   0.9.8-2.1.el7            base            25 k
 libXxf86vm                x86_64   1.1.3-2.1.el7            base            17 k
 libevent                  x86_64   2.0.21-4.el7             base           214 k
 librsvg2                  x86_64   2.39.0-1.el7             base           123 k
 libthai                   x86_64   0.1.14-9.el7             base           187 k
 libxshmfence              x86_64   1.2-1.el7                base           7.2 k
 mesa-libEGL               x86_64   10.6.5-3.20150824.el7    base            74 k
 mesa-libGL                x86_64   10.6.5-3.20150824.el7    base           184 k
 mesa-libgbm               x86_64   10.6.5-3.20150824.el7    base            40 k
 mesa-libglapi             x86_64   10.6.5-3.20150824.el7    base            39 k
 pango                     x86_64   1.36.8-2.el7             base           287 k
 pixman                    x86_64   0.32.6-3.el7             base           254 k

Transaction Summary
===================================================================================
Install  21 Dependent packages
Upgrade   1 Package

Cheers!
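In the meantime, if you want to stay on links 2.8 rather than pull in the X libraries, a sketch of the usual yum workarounds (whether EPEL will eventually ship a text-only build again is not something the thread answers):

```shell
# One-off: update everything except the new links build
yum update --exclude=links

# Permanent: add an exclude to the [epel] section (yum-config-manager
# comes from the yum-utils package)
yum-config-manager --save --setopt=epel.exclude='links*'
```

The permanent exclude can be dropped later by removing the exclude= line from /etc/yum.repos.d/epel.repo.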