I've got an old RAID that I attached to a box. LSI card, and the RAID has
12 drives, for a total RAID size of 9.1TB, I think. I started shred
/dev/sda the Friday before last... and it's still running. Is this
reasonable for it to be taking this long...?

     mark
On Wed, 31 May 2017, m.roth at 5-cent.us wrote:

> I've got an old RAID that I attached to a box. LSI card, and the
> RAID has 12 drives, for a total RAID size of 9.1TB, I think. I
> started shred /dev/sda the Friday before last... and it's still
> running. Is this reasonable for it to be taking this long...?

Unless you specified non-default options, shred overwrites each file
three times -- and writing 27 TB to an old RAID array will be extremely
slow. Also, shred has a built-in PRNG, and I'm not really sure how
speedy it is.

Still, 12 days seems like a really long time...

-- 
Paul Heinlein <> heinlein at madboa.com <> http://www.madboa.com/
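[Editorial note: three default passes over 9.1 TB is roughly 27 TB of
writes; even at a sustained 100 MB/s that is about three days of pure
write time, so an old array running slower could plausibly take weeks.
A minimal sketch of cutting shred down to a single pass with progress
output -- demonstrated on a scratch file here; on the real hardware the
target would be the whole device (/dev/sdX is a placeholder):]

```shell
# Single-pass shred with progress: -n 1 does one pass of random data
# instead of the default three; -v reports progress as it goes.
# Exercised on a scratch file so this sketch is safe to run; the real
# invocation would be: shred -v -n 1 /dev/sdX
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=8 status=none
shred -v -n 1 "$tmpfile"
rm -f "$tmpfile"
```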
On Wed, May 31, 2017 10:39 am, Paul Heinlein wrote:

> On Wed, 31 May 2017, m.roth at 5-cent.us wrote:
>
>> I've got an old RAID that I attached to a box. LSI card, and the
>> RAID has 12 drives, for a total RAID size of 9.1TB, I think. I
>> started shred /dev/sda the Friday before last... and it's still
>> running. Is this reasonable for it to be taking this long...?
>
> Unless you specified non-default options, shred overwrites each file
> three times

With modern drives (read: larger than 100GB), writing over each track
once with anything is sufficient. Overwriting multiple times with
different patterns mattered awfully long ago, when tracks had noticeable
width and distinct edges (drives were smaller than 1 GB then). A newly
recorded track is usually slightly shifted with respect to the old one,
so a narrow stripe of the old record was left uncovered on one side and
could be distinguished with much more sensitive equipment. Those times
are long gone; one can clean drives one at a time just by overwriting
the whole device using dd (mind the bs size so as not to impede speed).

Better though: physically destroy the platters; it may take less of
_your_ time to do that.

Just my $0.02

Valeri

> -- and writing 27 TB to an old RAID array will be
> extremely slow. Also, shred has a builtin PRNG, and I'm not really
> sure how speedy it is.
>
> Still, 12 days seems like a really long time...
>
> --
> Paul Heinlein <> heinlein at madboa.com <> http://www.madboa.com/
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> https://lists.centos.org/mailman/listinfo/centos
>

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
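[Editorial note: the single-pass dd wipe Valeri describes looks like
`dd if=/dev/zero of=/dev/sdX bs=1M status=progress`; /dev/sdX is a
placeholder, so the sketch below runs the same idea against a scratch
file and then verifies the result reads back as all zeros:]

```shell
# One zeroing pass, as described above. A large bs (1M here) keeps the
# drive streaming; dd's default 512-byte blocks would impede throughput.
target=$(mktemp)                       # stand-in for the real /dev/sdX
dd if=/dev/zero of="$target" bs=1M count=4 status=none
# Verify: stripping NUL bytes should leave nothing behind.
[ "$(tr -d '\0' < "$target" | wc -c)" -eq 0 ] && echo "wiped clean"
rm -f "$target"
```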
On 5/31/2017 8:04 AM, m.roth at 5-cent.us wrote:

> I've got an old RAID that I attached to a box. LSI card, and the RAID has
> 12 drives, for a total RAID size of 9.1TB, I think. I started shred
> /dev/sda the Friday before last... and it's still running. Is this
> reasonable for it to be taking this long...?

not at all surprising, as that raid sounds like it's built with older,
slower drives.

I would discombobulate the raid, turn it into 12 discrete drives, and
use dd if=/dev/zero of=/dev/sdX bs=65536 on each drive, running these
concurrently...

...unless that volume has data that requires military-level destruction,
whereupon the proper method is to run the drives through a grinder so
they are metal filings. the old DoD multipass erasure specification is
long obsolete and was never that great.

-- 
john r pierce, recycling bits in santa cruz
John R Pierce wrote:

> On 5/31/2017 8:04 AM, m.roth at 5-cent.us wrote:
>> I've got an old RAID that I attached to a box. LSI card, and the RAID
>> has 12 drives, for a total RAID size of 9.1TB, I think. I started shred
>> /dev/sda the Friday before last... and it's still running. Is this
>> reasonable for it to be taking this long...?
>
> not at all surprising, as that raid sounds like its built with older
> slower drives.

It's maybe from '09 or '10. I *think* they're 1TB (which would make
sense, given the size of what I remember of the RAID).

> I would discombobulate the raid, turn it into 12 discrete drives, and use

Well, shred's already been running for this long...
<snip>

> unless that volume has data that requires military level destruction,
> where upon the proper method is to run the drives through a grinder so
> they are metal filings. the old DoD multipass erasure specification
> is long obsolete and was never that great.

If I had realized it would run this long, I would have used DBAN.... For
single drives, I do, and choose DoD 5220.22-M (seven passes), which is
*way* overkill these days... but I sign my name to a certificate that
gets stuck on the outside of the server, meaning I, personally, am
responsible for the sanitization of the drive(s). And I work for a US
federal contractor. [1][2]

     mark

1. I do not speak for my employer, the US federal government agency I
work at, nor, as my late wife put it, the view out my window (if I had
a window).
2. I'm with the government, and I'm here to help you. (Actually,
civilian sector, so yes, I am.)
On 05/31/2017 08:04 AM, m.roth at 5-cent.us wrote:

> I've got an old RAID that I attached to a box. LSI card, and the RAID has
> 12 drives, for a total RAID size of 9.1TB, I think. I started shred
> /dev/sda the Friday before last... and it's still running. Is this
> reasonable for it to be taking this long...?

Was the system booting from /dev/sda, or were you running any
binaries/libraries from sda? Often you'll be able to shred the device
you boot from, but you won't get a prompt back when it's done.
Gordon Messmer wrote:

> On 05/31/2017 08:04 AM, m.roth at 5-cent.us wrote:
>> I've got an old RAID that I attached to a box. LSI card, and the RAID
>> has 12 drives, for a total RAID size of 9.1TB, I think. I started shred
>> /dev/sda the Friday before last... and it's still running. Is this
>> reasonable for it to be taking this long...?
>
> Was the system booting from /dev/sda, or were you running any
> binaries/libraries from sda? Often you'll be able to shred the device
> you boot from, but you won't get a prompt back when it's done.

No, the h/w RAID showed up as sda when I booted; / showed up on sdb.

     mark