search for: sdq

Displaying 18 results from an estimated 18 matches for "sdq".

2010 May 27
2
Multipathing with Sun 7310
...n" sdd: checker msg is "readsector0 checker reports path is down" sdh: checker msg is "readsector0 checker reports path is down" sdl: checker msg is "readsector0 checker reports path is down" sdp: checker msg is "readsector0 checker reports path is down" sdq: checker msg is "readsector0 checker reports path is down" sdr: checker msg is "readsector0 checker reports path is down" sds: checker msg is "readsector0 checker reports path is down" sdt: checker msg is "readsector0 checker reports path is down" sdu: checke...
2010 May 28
2
permanently add md device
...All Currently I'm setting up a 5.4 server and trying to create a 3rd raid device. When I run: $mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq the device file "md2" is created and the raid is being configured, but somehow /dev/md2 is flushed when I reboot the system; same story if I create the file by mknod or MAKEDEV. Does anyone know a way to solve this issue and permanently add md2 to devices? Thanks, Wessel
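The excerpt above asks how to make /dev/md2 survive a reboot. A common fix on CentOS 5.x is to record the array in /etc/mdadm.conf so it is assembled at boot, rather than relying on the device node itself persisting. A minimal sketch (the md2 device name is from the original post; run as root, and verify the generated ARRAY line before committing it):

```shell
# Append the array definition (UUID-based) so the boot scripts
# can reassemble it; mknod/MAKEDEV alone does not persist an array.
mdadm --detail --scan | grep '/dev/md2' >> /etc/mdadm.conf

# Sanity-check: the new ARRAY line should reference md2 and a UUID.
tail -n 1 /etc/mdadm.conf
```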
2012 Sep 05
3
BTRFS thinks device is busy [kernel 3.5.3]
...: 'firstpool' uuid: 517e8cfa-4275-4589-8da4-6a46ad613daa Total devices 13 FS bytes used 242.82GB devid 3 size 931.51GB used 90.28GB path /dev/sdg devid 14 size 931.51GB used 91.33GB path /dev/sdr devid 13 size 931.51GB used 90.50GB path /dev/sdq devid 12 size 931.51GB used 90.50GB path /dev/sdp devid 11 size 931.51GB used 90.50GB path /dev/sdo devid 10 size 931.51GB used 90.50GB path /dev/sdn devid 9 size 931.51GB used 90.50GB path /dev/sdm devid 8 size 931.51GB used 90.50GB path /dev/sdl...
2017 Jan 03
2
Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
...vered to normal mode Dec 15 01:57:53 test.example.com multipathd: 360000970000196801239533037303434: remaining active paths: 1 Dec 15 01:57:53 test.example.com kernel: sd 1:0:2:20: [sdeu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK [root at test log]# multipath -ll |grep -i fail |- 1:0:0:15 sdq 65:0 failed ready running - 3:0:0:15 sdai 66:32 failed ready running We are using default multipath.conf HBA driver version 8.07.00.26.06.8-k HBA model QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter OS: CentOS 64-bit/2.6.32-642.6.2.el6.x86_64 Hardware:Intel/HP ProLi...
2010 Sep 17
1
multipath troubleshoot
...sdl 8:176 1 [active][ready] XXX....... 7/20 5:0:0:49154 sdm 8:192 1 [active][ready] XXX....... 7/20 5:0:0:3 sdn 8:208 1 [active][ready] XXX....... 7/20 5:0:0:16387 sdo 8:224 1 [active][ready] XXX....... 7/20 5:0:0:32771 sdp 8:240 1 [active][ready] XXX....... 7/20 5:0:0:49155 sdq 65:0 1 [active][ready] XXX....... 7/20 5:0:0:4 sdr 65:16 1 [active][ready] XXX....... 7/20 5:0:0:16388 sds 65:32 1 [active][ready] XXX....... 7/20 5:0:0:32772 sdt 65:48 1 [active][ready] XXX....... 7/20 5:0:0:49156 sdu 65:64 1 [active][ready] XXX....... 7/20 5:0:0:5 sdv...
2011 Nov 22
1
Recovering data from old corrupted file system
...t got corrupted ages ago (as I recall, one of the drives stopped responding, causing btrfs to panic). I am hoping to recover some of the data. For what it's worth, here is the dmesg output from trying to mount the file system on a 3.0 kernel: device label Media devid 6 transid 816153 /dev/sdq device label Media devid 7 transid 816153 /dev/sdl device label Media devid 11 transid 816153 /dev/sdj device label Media devid 9 transid 816153 /dev/sdk device label Media devid 10 transid 816153 /dev/sdi device label Media devid 3 transid 816152 /dev/sdh device label Media devid 4 transid 816152...
2020 Sep 09
0
Btrfs RAID-10 performance
...| FW: Miloslav> 24.16.0-0082 Miloslav> -- Array information -- Miloslav> -- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Miloslav> Path | CacheCade |InProgress Miloslav> c0u0 | RAID-0 | 838G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sdq | None |None Miloslav> c0u1 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sda | None |None Miloslav> c0u2 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sdb | None |None Miloslav> c0u3 | RAID-0 | 558G |...
2020 Sep 09
4
Btrfs RAID-10 performance
....0-0082 > > Miloslav> -- Array information -- > Miloslav> -- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS > Miloslav> Path | CacheCade |InProgress > Miloslav> c0u0 | RAID-0 | 838G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sdq | None |None > Miloslav> c0u1 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sda | None |None > Miloslav> c0u2 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sdb | None |None > Miloslav> c0u...
2017 Jan 05
0
Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
> On Jan 3, 2017, at 2:59 PM, lakhera2017 <plakhera at salesforce.com> wrote: > > |- 1:0:0:15 sdq 65:0 failed ready running > - 3:0:0:15 sdai 66:32 failed ready running Does the same SAN target fail each time? What brand/model/firmware SAN switch is between initiator and target? Does the HBA show any SCSI aborts?
2020 Sep 07
4
Btrfs RAID-10 performance
...| BBU | Firmware c0 | AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C | Good | FW: 24.16.0-0082 -- Array information -- -- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Path | CacheCade |InProgress c0u0 | RAID-0 | 838G | 256 KB | RA,WB | Enabled | Optimal | /dev/sdq | None |None c0u1 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | /dev/sda | None |None c0u2 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | /dev/sdb | None |None c0u3 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | /dev/sdc | None |N...
2006 Apr 13
2
One model won''t work like the others, generating weird error
...'TRI','CAP','BUF','MCM','WMS','PHI','BOW','PAD','STL','POR','LAW'] nc_teams = ['SDQ','CSP','MIL','CAJ','HAG','BUZ','DTR','COL','SPO','SEA','MAD','CRE'] result = Team.find(:...
2020 Sep 09
0
Btrfs RAID-10 performance
....0-0082 > > Miloslav> -- Array information -- > Miloslav> -- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS > Miloslav> Path | CacheCade |InProgress > Miloslav> c0u0 | RAID-0 | 838G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sdq | None |None > Miloslav> c0u1 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sda | None |None > Miloslav> c0u2 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | > Miloslav> /dev/sdb | None |None > Miloslav> c0u...
2010 Jun 03
2
Tracking down hangs
...s off [ 40.870756] sd 15:0:0:0: [sdp] Mode Sense: 00 3a 00 00 [ 40.870789] sd 15:0:0:0: [sdp] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [ 40.870792] sdp: unknown partition table [ 40.889217] sd 15:0:0:0: [sdp] Attached SCSI disk [ 40.889290] sd 16:0:0:0: [sdq] 976773168 512-byte hardware sectors (500108 MB) [ 40.889310] sd 16:0:0:0: [sdq] Write Protect is off [ 40.889312] sd 16:0:0:0: [sdq] Mode Sense: 00 3a 00 00 [ 40.889346] sd 16:0:0:0: [sdq] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [ 40.889402] sd 16:0:0:0:...
2020 Sep 09
0
Btrfs RAID-10 performance
...W: Miloslav> 24.16.0-0082 >> Miloslav> -- Array information -- Miloslav> -- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Miloslav> Path | CacheCade |InProgress Miloslav> c0u0 | RAID-0 | 838G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sdq | None |None Miloslav> c0u1 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sda | None |None Miloslav> c0u2 | RAID-0 | 558G | 256 KB | RA,WB | Enabled | Optimal | Miloslav> /dev/sdb | None |None Miloslav> c0u3 | RAID-0 | 558G |...
2013 Mar 28
1
question about replacing a drive in raid10
...: total=11.00GB, used=8.51GB # btrfs filesystem show Label: 'firstpool' uuid: 517e8cfa-4275-4589-8da4-6a46ad613daa Total devices 15 FS bytes used 2.58TB devid 13 size 931.51GB used 381.58GB path /dev/sdp devid 14 size 931.51GB used 381.58GB path /dev/sdq devid 12 size 931.51GB used 381.58GB path /dev/sdo devid 11 size 931.51GB used 381.58GB path /dev/sdn devid 10 size 931.51GB used 381.58GB path /dev/sdm devid 9 size 931.51GB used 381.58GB path /dev/sdl devid 8 size 931.51GB used 380.58GB path /de...
2013 Jan 03
33
Option LABEL
Hello, linux-btrfs, please delete the option "-L" (for labelling) in "mkfs.btrfs"; in some configurations it doesn't work as expected. My usual way: mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd ... One call for some devices. When I add the option "-L mylabel", each device gets the same label, and therefore some other programs
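In btrfs a label belongs to the filesystem rather than to a single device, which is why every member device of the excerpt's multi-device filesystem reports the same label. If the "-L" behavior at mkfs time is unwanted, one workaround (assuming a reasonably recent btrfs-progs) is to set or change the label after creation; the mount point and label here are hypothetical:

```shell
# Set the filesystem label on an already-mounted btrfs filesystem.
btrfs filesystem label /mnt/pool mylabel

# Read it back to confirm.
btrfs filesystem label /mnt/pool
```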
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...2 blocksize=4k filename=/dev/sdb filename=/dev/sdc filename=/dev/sdd filename=/dev/sde filename=/dev/sdf filename=/dev/sdg filename=/dev/sdh filename=/dev/sdi filename=/dev/sdj filename=/dev/sdk filename=/dev/sdl filename=/dev/sdm filename=/dev/sdn filename=/dev/sdo filename=/dev/sdp filename=/dev/sdq filename=/dev/sdr filename=/dev/sds filename=/dev/sdt filename=/dev/sdu filename=/dev/sdv filename=/dev/sdw filename=/dev/sdx filename=/dev/sdy filename=/dev/sdz filename=/dev/sdaa filename=/dev/sdab filename=/dev/sdac filename=/dev/sdad filename=/dev/sdae filename=/dev/sdaf filename=/dev/sdag Gue...