Displaying 20 results from an estimated 4000 matches similar to: "Hard drives being renamed"
2016 May 25
0
Hard drives being renamed
>> I've run into this with ZFS on Linux. The 'blkid' is useful to identify the
>> target device and then add that to your fstab. I don't use device names
>> at all anymore, too ambiguous (depending on the circumstance) in my
>> opinion.
Right. And there are other ways to identify disks unequivocally. Under CentOS, for example, I find the following
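A minimal sketch of the blkid/fstab approach described above, assuming an XFS filesystem on /dev/sdb1 (the UUID, device and mount point are illustrative, not taken from the thread):
# print the filesystem UUID of the target device (run as root)
blkid /dev/sdb1
#  -> /dev/sdb1: UUID="d3b07384-d9a0-4c6e-9b2e-1f0a2b3c4d5e" TYPE="xfs"
# then mount it by UUID in /etc/fstab, so a rename of /dev/sdb no longer matters:
#  UUID=d3b07384-d9a0-4c6e-9b2e-1f0a2b3c4d5e  /data  xfs  defaults  0 2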
2019 Feb 28
0
What files to edit when changing the sdX of hard drives?
> No, I dislike UUIDs. I dislike, strongly, lots of extra typing that
> doesn't really get me anything. MAYBE, if you're in a Google or Amazon
> datacenter, with 500,000 physical servers (I phone interviewed with them
> 10 years ago)... but short of that? Nope.
You can (perhaps should...) use the World Wide Name, which is a
manufacturer ID unique to each disk. Contrary to the
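A sketch of how the World Wide Name can be read on a stock install (device names are examples; the WWN column needs a reasonably recent util-linux):
# print the WWN next to the kernel name, serial and model
lsblk -o NAME,WWN,SERIAL,MODEL
# udev also exposes stable wwn-* symlinks that survive re-enumeration
ls -l /dev/disk/by-id/ | grep wwn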
2019 Feb 28
3
What files to edit when changing the sdX of hard drives?
Phelps, Matthew wrote:
> On Thu, Feb 28, 2019 at 11:52 AM mark <m.roth at 5-cent.us> wrote:
>> Nicolas Kovacs wrote:
>>> On 28/02/2019 at 04:12, Jobst Schmalenbach wrote:
>>>> I want to lock in the SDA/SDB/SDC for my drives
>>>
>>> In short : use UUIDs or labels instead of hardcoding /dev/sdX.
>>>
>>>
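For the label alternative mentioned above, a sketch (label text, devices and the fstab line are illustrative):
# give an ext4 filesystem a human-readable label...
e2label /dev/sda1 boot1
# ...or, for an unmounted XFS filesystem:
xfs_admin -L data1 /dev/sdb1
# then refer to it by label in /etc/fstab:
#  LABEL=data1  /data  xfs  defaults  0 2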
2012 May 28
1
Disk geometry problem.
Hi all.
I have a CentOS server:
CentOS release 5.7 (Final)
2.6.18-274.3.1.el5 x86_64
I have two SSD disks attached:
smartctl -i /dev/sdc
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce
Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Device Model: INTEL SSDSA2CW120G3
Serial Number: CVPR13010957120LGN
Firmware
2020 Nov 12
1
ssacli start rebuild?
in large raids, I label my disks with the last 4 or 6 digits of the drive
serial number (or for SAS disks, the WWN). this is visible via smartctl,
and I record it with the zpool documentation I keep on each server
(typically a text file on a cloud drive). zpools don't actually care
WHAT slot a given pool member is in, you can shut the box down, shuffle all
the disks, boot back up and
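A sketch of that bookkeeping (the pool name and device are placeholders):
# record the serial number / WWN that is also printed on the drive label
smartctl -i /dev/sda | egrep 'Serial Number|LU WWN'
# importing the pool via the stable by-id names keeps the notes meaningful
# even when the sdX ordering changes between boots
zpool import -d /dev/disk/by-id tank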
2016 May 09
1
Internal RAID controllers question
On 05/08/2016 06:20 PM, John R Pierce wrote:
> there are really only two choices today, Adaptec and Avago (formerly
> LSI, they also control the former Areca product line).
I don't believe that is correct. LSI acquired 3ware, and Avago acquired
LSI. So, Avago owns the 3ware and LSI technology, but Adaptec and Areca
are still competitors.
> Whoops, Avago is now Broadcom, a
2017 Apr 29
0
SCSI drives and Centos 7
On 4/29/2017 12:42 PM, Gregory P. Ennis wrote:
> what does `lspci` have to say about this raid card ?
>
> John,
>
> Thanks for the prompt :
>
> lspci demonstrates :
>
> 02:01.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID (rev 01)
> 02:02.0 SCSI storage controller: Adaptec AIC-7902B U320 (rev 10)
> 02:02.1 SCSI storage controller: Adaptec AIC-7902B U320
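As a small follow-on to the lspci suggestion (not from the original thread), the -k switch also shows which kernel module has claimed each controller:
lspci -k
# the 'Kernel driver in use:' line under each controller names the module
# handling it, which helps confirm which driver owns the RAID card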
2002 Feb 01
1
Sampling from a database
I use RODBC and RpgSQL quite a lot to access files stored in another
machine under PostgreSQL. Since I am now using files which do not fit
into R's memory, I would like to take random samples. What I would
like is to issue a query such as
SELECT * FROM file WHERE runif > 0.9
with "runif" being a uniformly distributed random number, generated on
the fly; but I cannot
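A sketch of the server-side sampling being asked for, using PostgreSQL's built-in random() (database and table names are placeholders); the same query string could be sent from R through RODBC's sqlQuery():
# random() yields a uniform value per row, so this returns roughly 10% of the table
psql -d mydb -c "SELECT * FROM mytable WHERE random() < 0.1;"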
2016 Jul 15
1
NPIV storage pools do not map to same LUN units across hosts.
Link: http://wiki.libvirt.org/page/NPIV_in_libvirt
Topic: Virtual machine configuration change to use vHBA LUN
There is a NPIV storage pool defined on two hosts and pool contains a
total of 8 volumes, allocated from a storage device.
Source:
# virsh vol-list poolvhba0
Name Path
------------------------------------------------------------------------------
unit:0:0:0
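One way to compare what each host actually sees in that pool (pool name taken from the thread; --details is a standard virsh switch):
# show name, path, type, capacity and allocation for every volume in the NPIV pool
virsh vol-list poolvhba0 --details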
2017 Apr 30
2
SCSI drives and Centos 7
-----Original Message-----
From: John R Pierce <pierce at hogranch.com>
Reply-to: CentOS mailing list <centos at centos.org>
To: centos at centos.org
Subject: Re: [CentOS] SCSI drives and Centos 7
Date: Sat, 29 Apr 2017 13:31:11 -0700
On 4/29/2017 12:42 PM, Gregory P. Ennis wrote:
> what does `lspci` have to say about this raid card ?
>
> John,
>
> Thanks for the prompt
2016 Oct 27
0
Disk near failure
On 24/10/2016 14:05, Leonard den Ottolander wrote:
> Hi,
>
> On Mon, 2016-10-24 at 12:07 +0200, Alessandro Baggi wrote:
>> === START OF READ SMART DATA SECTION ===
>> SMART Error Log not supported
>
> I reckon there's a <snip> between those lines. The line right after the
> first should read something like:
>
> SMART overall-health self-assessment
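For reference, that overall-health line comes from the SMART health check, e.g. (device name is an example):
smartctl -H /dev/sda
#  === START OF READ SMART DATA SECTION ===
#  SMART overall-health self-assessment test result: PASSED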
2006 Mar 08
1
Want to fit random intercept in logistic regression (testing lmer and glmmML)
Greetings. Here is sample code, with some comments. It shows how I
can simulate data and estimate glm with binomial family when there is
no individual level random error, but when I add random error into the
linear predictor, I have a difficult time getting reasonable estimates
of the model parameters or the variance component.
There are no clusters here, just individual level responses, so
2016 May 24
0
Hard drives being renamed
On 5/24/2016 2:08 PM, Pat Haley wrote:
>
> We are running Centos 6.7 - 2.6.32-573.22.1.el6.x86_64 on a Quanta
> Cirrascale, up to date with patches. We have had a couple of instances
> in which the hard drives have become renamed after reboot (e.g. drive
> sda is renamed to sdc after reboot). One time this occurred when we
> rebooted following the installation of a 10GB NIC
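A sketch of one way to track which physical drive ended up where: the by-id symlinks are derived from the drive serial/WWN, so they stay stable even when the sdX enumeration order changes after a hardware change or reboot.
# map stable ids (serial/WWN based) to the current sdX names
ls -l /dev/disk/by-id/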
2016 Oct 28
0
Disk near failure
On 27/10/2016 19:38, Yamaban wrote:
> On Thu, 27 Oct 2016 11:25, Alessandro Baggi wrote:
>> On 24/10/2016 14:05, Leonard den Ottolander wrote:
>>> On Mon, 2016-10-24 at 12:07 +0200, Alessandro Baggi wrote:
>>> > === START OF READ SMART DATA SECTION ===
>>> > SMART Error Log not supported
>>>
>>> I reckon there's a
2016 May 25
0
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
LSI/Avago's web pages don't have any downloads for the SAS2308, so I think I'm out of luck wrt MegaRAID.
Bounced the node, confirmed MPT Firmware 15.10.09.00-IT.
HP Driver is v 15.10.04.00.
Both are the latest from HP.
Unsure why, but the module itself reports version 20.100.00.00:
[root at r1k1 sys] # cat module/mpt2sas/version
20.100.00.00
On 2016-05-25, 3:20 PM, "centos-bounces at
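If useful, the version compiled into the module file on disk can be compared with what the loaded module reports (a sketch; both are standard modinfo/sysfs locations):
# version of the mpt2sas module file shipped with the kernel/driver package
modinfo -F version mpt2sas
# version of the module currently loaded in the running kernel
cat /sys/module/mpt2sas/version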
2014 Jan 27
2
smartctl: is my disc dying?
I've got a 1Tb USB disc that appears to be dying - eg it took about 10 days
(!) to run 'badblocks -nsv /dev/sdc' and it only did less than 2% in that
time. Read access became _really_ slow.
So there's definitely something amiss and I've got it offline.
There's no drama about the content as I have other backups and I'm resigned
to junking the thing, but I'm curious
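Worth noting for anyone repeating this: -n selects badblocks' non-destructive read-write mode, which is much slower than the default read-only scan; a read-only pass is usually enough to confirm a failing disk (device name is an example):
# read-only surface scan with progress and verbose error reporting
badblocks -sv /dev/sdc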
2015 Feb 08
0
Intermittent problem, likely disk IO related - mptscsih: ioc0: attempting task abort!
> -----Original Message-----
> From: Jason Pyeron
> Sent: Saturday, February 07, 2015 22:54
>
> NOTE: this is happening on Centos 6 x86_64,
> 2.6.32-504.3.3.el6.x86_64 not Centos 5
>
> Dell PowerEdge 2970, Seagate SATA drive, non-raid.
>
> I have this server which has been dying randomly, with no logs.
Here is a console picture.
http://i.imgur.com/ZYHlB82.jpg
2015 Jul 21
1
Stickers for people in the EU
hi
Up for grabs, a few packs of CentOS stickers ( but only to the EU )
These already have Royal Mail EU stamps, so I can't send them elsewhere.
There are 20 packs in all, each pack has
4 x 1" by 1" CentOS logo sticker
4 x 2.5" by 2.5" CentOS logo sticker
2 x 2" round CentOS sticker
Want one ? email me at kbsingh at centos.org and give me your address, I'll
aim to
2018 Mar 25
0
rsync to my external eSATA HD is crashing/freezing my system...
On 19/03/18 14:01, Morgan Read wrote:
> Hello list
>
> I've been running the following command, first in fc20 and then now
> (since the beginning of March) in fc26:
> now=$(date +"%Y%m%d-%H%M"); sudo rsync -ahuAESX -vi /home/
> /run/media/readlegal/Backup/home >
> /run/media/readlegal/Backup/rsync-changes_$now
>
> Since the move to fc26, this
2014 Jan 27
1
UC smartctl: is my disc dying?
I've seen similar cases where a USB drive appears to fail but the SMART
reports success. The most recent was a 500 GB disk which had internally
a Seagate Barracuda SATA drive. It appeared to work well until I sent it
a largish (7GB) tarball. As well as SMART I ran a surface check and
exercise, all passed. The tar kept failing.
I can't test further, the disk has been broken up for