search for: sdj1

Displaying 17 results from an estimated 17 matches for "sdj1".

2012 Feb 26
0
"device delete" kills contents
Hello, linux-btrfs, I've (once again) tried "add" and "delete". First, with 3 devices (partitions): mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1 Mounted (to /mnt/btr), filled with about 100 GByte of data. Then btrfs device add /dev/sdj1 /mnt/btr results in # show Label: none uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770 Total devices 4 FS bytes used 100.44GB devid 3 size 68.37GB used 44.95GB path /dev/sdm1 devid 2 size 136.73GB used 43.95GB path /dev/sdl1 devid 1 size 16.96GB used 16.96GB path /dev/sdk1 devid 4 si...
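For reference, a minimal sketch of the add/balance/delete sequence the post describes, using the device and mount-point names from the post; with -d raid0 a device's data must be relocated before it can safely go away, and the exact subcommand spellings vary between btrfs-progs releases:
  # btrfs device add /dev/sdj1 /mnt/btr        # grow the filesystem onto the new partition
  # btrfs filesystem balance /mnt/btr          # spread the existing raid0 data across all devices
  # btrfs device delete /dev/sdk1 /mnt/btr     # relocates sdk1's chunks, then drops the device
  # btrfs filesystem show                      # check the devid list and per-device usage afterwards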
2003 Jun 12
2
How can I read superblock(ext2, ext3)'s information?
Hello, I'd like to read superblock's information on Redhat 7.3, but I don't know how to do it. For example, Input : "/dev/sdj2" Output : ext2_super_block struct's s_wtime (I saw it at "/usr/include/ext2fs/ext2_fs.h") Input : "/dev/sdj1" Output : ext3_super_block struct's s_wtime (I saw it at "/usr/include/linux/ext3_fs.h") Please give me an answer...
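A note on the question above: the s_wtime ("last write time") field can be read without custom code, because e2fsprogs already decodes the superblock. A minimal sketch, using the device names from the post:
  # tune2fs -l /dev/sdj1 | grep -i 'write time'    # "Last write time:" is s_wtime, decoded
  # dumpe2fs -h /dev/sdj2 | grep -i 'write time'   # same field, read from the superblock header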
2007 Oct 25
0
group descriptors corrupted on ext3 filesystem
I am unable to mount an ext3 filesystem on RHEL AS 2.1. This is not the boot or root filesystem. When I try to mount the file system, I get the following error: mount: wrong fs type, bad option, bad superblock on /dev/sdj1, or too many mounted file systems When I try to run e2fsck -vvfy /dev/sdj1 I get the following error: Group descriptors look bad... trying backup blocks... Segmentation fault When I try running e2fsck -b 294912 -vvfy /dev/sdj1 (providing backup superblock location ..which I tried severa...
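A hedged sketch of the usual way to locate backup superblocks for a case like this one; the block number below is only what mke2fs typically prints for a 4 KiB block size, and mke2fs -n only gives the right locations if it is run with the same options/defaults the filesystem was created with:
  # mke2fs -n /dev/sdj1                     # -n = dry run: prints "Superblock backups stored on blocks: ..."
  # e2fsck -b 32768 -B 4096 -fy /dev/sdj1   # retry fsck against one of the reported backups
If e2fsck itself segfaults, as in the post, trying a newer e2fsprogs build before trusting any repair is usually the safer first step.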
2015 Aug 25
0
CentOS 6.6 - reshape of RAID 6 is stuck
Hello, I have a CentOS 6.6 server with 13 disks in a RAID 6. Some weeks ago, I upgraded it to 17 disks, two of them configured as spares. The reshape worked normally at first, but at 69% it stopped. md2 : active raid6 sdj1[0] sdg1[18](S) sdh1[2] sdi1[5] sdm1[15] sds1[12] sdr1[14] sdk1[9] sdo1[6] sdn1[13] sdl1[8] sdd1[20] sdf1[19] sdq1[16] sdb1[10] sde1[17](S) sdc1[21] 19533803520 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU] [=============>.......] reshape = 69.0% (13478...
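A hedged sketch of the first things usually checked when a reshape appears to hang (md device name from the post; the sysfs knobs exist on EL6 kernels, but whether they are relevant here depends on what the logs show):
  # mdadm --detail /dev/md2                          # reshape position, failed/spare members
  # cat /sys/block/md2/md/sync_action                # should still read "reshape"
  # cat /sys/block/md2/md/sync_max                   # a finite value here freezes progress; "max" lets it continue
  # echo 8192 > /sys/block/md2/md/stripe_cache_size  # a larger stripe cache often speeds up raid6 reshapes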
2009 Sep 24
4
mdadm size issues
...Start End Blocks Id System /dev/sda1 1 243201 1953512001 83 Linux .... I go about creating the array as follows # mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 mdadm: layout defaults to left-symmetric mdadm: chunk size defaults to 64K mdadm: size set to 1953511936K Continue creating array? As you can see mdadm sets the size to 1.9T. Looking around there was this limitation on older versions of mdadm if they are the 32 bit version. I...
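One point worth separating out for the post above: mdadm's "size set to 1953511936K" line reports the space used from each member, not the size of the finished array, so the array size is best confirmed after creation. A minimal sketch (metadata 0.90, the old default, caps each component at 2 TiB, while 1.x does not):
  # mdadm --detail /dev/md3 | grep -E 'Array Size|Used Dev Size'
  # mdadm --create /dev/md3 --metadata=1.2 --level=6 --raid-devices=10 \
        /dev/sd[a-f]1 /dev/sd[i-l]1        # explicit 1.x metadata avoids the 0.90 per-device cap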
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
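Purely to make the two layouts under discussion concrete (device names below are hypothetical, and this is a configuration sketch, not a performance claim):
  # mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]1       # default near-2 layout: 5 mirrored pairs, striped -> one brick
  # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # first of five separate RAID1 pairs -> five bricks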
2007 Aug 23
1
Transport endpoint not connected after crash of one node
...ocfs2 ppsdb102 /dev/sdc1 ocfs2 ppsdb102 /dev/sdd1 ocfs2 ppsdb102 /dev/sde1 ocfs2 ppsdb102 /dev/sdf1 ocfs2 ppsdb102 /dev/sdg1 ocfs2 ppsdb102 /dev/sdh1 ocfs2 ppsdb102 /dev/sdi1 ocfs2 ppsdb102 /dev/sdj1 ocfs2 ppsdb102 /dev/sdk1 ocfs2 ppsdb102 /dev/sdl1 ocfs2 ppsdb102, ppsdb101 /dev/sdm1 ocfs2 ppsdb102 /dev/sdn1 ocfs2 ppsdb102 /dev/sdo1 ocfs2 ppsdb102 /dev/sdp1 ocfs2 ppsdb102, ppsdb101 /dev/sdq1 o...
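The listing above looks like mounted.ocfs2 output; a hedged sketch of re-checking which nodes hold each volume and whether the cluster stack is still up after the crash (node names ppsdb101/ppsdb102 are from the post):
  # mounted.ocfs2 -f              # per-device list of nodes that currently mount each ocfs2 volume
  # service o2cb status           # confirm the o2cb stack and heartbeat are online on the surviving node
  # cat /etc/ocfs2/cluster.conf   # both nodes must be listed with the correct IPs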
2011 Feb 10
0
(o2net, 6301, 0):o2net_connect_expired:1664 ERROR: no connection established with node 1 after 60.0 seconds, giving up and returning errors.
...o2cb 3A88FC23DBC34233BB175B056BC744A1 cluster /dev/sdf1 ocfs2 o2cb A183305755964CF7B8D1A7307FAA0171 cluster /dev/sdg1 ocfs2 o2cb F5E5D05E10D5416690D8EBA6A6C56D64 cluster /dev/sdh1 ocfs2 o2cb 3A88FC23DBC34233BB175B056BC744A1 cluster /dev/sdi1 ocfs2 o2cb A183305755964CF7B8D1A7307FAA0171 cluster /dev/sdj1 ocfs2 o2cb F5E5D05E10D5416690D8EBA6A6C56D64 cluster /dev/sdk1 ocfs2 o2cb 3A88FC23DBC34233BB175B056BC744A1 cluster /dev/sdl1 ocfs2 o2cb A183305755964CF7B8D1A7307FAA0171 cluster /dev/sdm1 ocfs2 o2cb F5E5D05E10D5416690D8EBA6A6C56D64 cluster /dev/mapper/san_lun2p1 ocfs2 o2cb F5E5D05E10D5416690D8EBA6A6C...
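A hedged sketch of the usual connectivity checks for this error; it assumes the default o2net TCP port 7777, whereas the real port is whatever /etc/ocfs2/cluster.conf specifies, and <node1-ip> is only a placeholder:
  # service o2cb status                        # cluster stack online on this node?
  # grep -A3 'node:' /etc/ocfs2/cluster.conf   # confirm node numbers, IPs and port
  # telnet <node1-ip> 7777                     # can this node reach the other node's o2net port?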
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
...sdf1 VG vg_opt lvm2 [1.95 TB / 0 free] PV /dev/sda2 VG VolGroup00 lvm2 [39.88 GB / 0 free] PV /dev/sdg1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdh1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdi1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdj1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdk1 VG bak-rdc lvm2 [1.47 TB / 0 free] PV /dev/sdl1 VG bak-rdc lvm2 [1.47 TB / 0 free] PV /dev/sdm1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdn1 VG bak-rdc lvm2 [1.95 TB / 0 free] PV /dev/sdo1 VG...
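The listing above is pvscan-style output; for reference, the same information can be pulled in a more compact, script-friendly form:
  # pvs -o pv_name,vg_name,pv_size,pv_free   # one row per PV: device, VG, size, free space
  # vgs bak-rdc                              # summary for the VG most of these PVs belong to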
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
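A hedged sketch of how that counter is usually inspected (array name from the post); a growing mismatch_cnt during the initial resync is generally not meaningful on its own, and the number that matters comes from a scrub run after the build completes:
  # cat /sys/block/md11/md/sync_action            # "resync" while the array is still building
  # cat /sys/block/md11/md/mismatch_cnt           # the counter in question
  # echo check > /sys/block/md11/md/sync_action   # after the resync finishes, run a scrub, then re-read mismatch_cnt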
2012 Jul 10
1
Problem with RAID on 6.3
...s, 63 sectors/track, 242251 cylinders Units = cylinders of 16128 * 512 = 8257536 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdj1 1 242252 1953514583+ ee GPT % dd if=/dev/sdj count=1 2> /dev/null | hexdump 0000000 0000 0000 0000 0000 0000 0000 0000 0000 * 00001c0 0002 ffee ffff 0001 0000 88af e8e0 0000 00001d0 0000 0000 0000 0000 0000 0000 0000 0000 * 00001f0 0000 0000 0000 0000 0000...
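One detail worth noting for the output above: the single "ee GPT" entry is only the protective MBR, and the fdisk shipped with EL6 cannot read GPT labels, so the real partition table has to be inspected with a GPT-aware tool. A minimal sketch, assuming parted and/or gptfdisk are installed:
  # parted /dev/sdj unit s print   # prints the actual GPT partitions, in sectors
  # gdisk -l /dev/sdj              # read-only GPT listing, plus MBR/GPT sanity checks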
2006 Dec 23
3
How to start installing a Quad-Devel-Station?
...2 swap 256 MByte /dev/sdf1 /Chroot-3.0 7000 MByte # Debian Woody / OldStable /dev/sdf2 swap 256 MByte /dev/sdg1 /Chroot-2.2 7000 MByte # Debian Potato /dev/sdg2 swap 256 MByte /dev/sdh1 /Chroot-2.1 7000 MByte # Debian Slink /dev/sdh2 swap 256 MByte /dev/sdi1 /usr/src 9100 MByte /dev/sdj1 /backups 9100 MByte ----8<------------------------------------------------------------------ Since GRUB does not work on this Mainboard, I must use LILO. The system is configured to boot the "Master-System" and then starting chroots using /etc/inittab: ----8<---------------------...
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
...the md devices is generating these errors. system is running centos 5.3 64bit: # uname -a Linux xenmaster.dimension-x.local 2.6.18-128.1.6.el5xen #1 SMP Wed Apr 1 09:53:14 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md1 : active raid5 sdk1[2] sdj1[4] sdi1[3] sdh1[0] sdg1[1] 976783616 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU] md0 : active raid5 sdf1[3] sde1[1] sdd1[4](S) sdc1[0] sdb1[2] 2197715712 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> here is what dmesg has to say: (some of...
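A hedged sketch of the usual first checks when an md member starts logging errors after an update; the truncated excerpt does not say which disk is involved, so /dev/sdk below is only a placeholder:
  # mdadm --detail /dev/md1        # look for members marked faulty, spare or rebuilding
  # smartctl -a /dev/sdk           # SMART health and error log of the suspect member (placeholder device)
  # dmesg | grep -i 'sd[g-k]'      # correlate the kernel I/O errors with a specific device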
2011 Jan 18
6
BUG while writing to USB btrfs filesystem
...Root fs is on /dev/sdi1, and /dev/sdj2 is the card reader which was the target of the untar. [29571.448889] sd 11:0:0:0: [sdj] Assuming drive cache: write through [29571.451153] sdj: unknown partition table [29572.047390] sd 11:0:0:0: [sdj] Assuming drive cache: write through [29572.048149] sdj: sdj1 sdj2 [29602.642175] device fsid 5648f85292725e76-72190c37da5211b6 devid 1 transid 7 /dev/sdj2 [29602.643256] btrfs: use spread ssd allocation scheme [36852.550219] sd 11:0:0:0: [sdj] Assuming drive cache: write through [36852.552958] sdj: unknown partition table [36852.571770] ------------[ cut he...
2010 Jan 08
7
SAN help
My CentOS 5.4 box has a single HBA card with 2 ports connected to my storage. Two LUNs are assigned to my HBA card. Under /dev, instead of seeing 4 devices I can see 12 devices, from sdb to sdm. I am using the QLogic driver that is built into the OS. Has anyone seen this kind of situation? Paras
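Seeing each LUN show up several times under /dev (one sdX per path) normally means the paths should be aggregated with dm-multipath rather than used directly. A hedged sketch for CentOS 5, with the package and service names as shipped there; /etc/multipath.conf usually needs its default blacklist relaxed first:
  # yum install device-mapper-multipath
  # chkconfig multipathd on
  # service multipathd start
  # multipath -ll                  # each LUN appears once, with its sdb..sdm paths grouped under it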
2011 Dec 31
1
problem with missing bricks
Gluster-user folks, I'm trying to use gluster in a way that may be considered an unusual use case for gluster. Feel free to let me know if you think what I'm doing is dumb. It just feels very comfortable doing this with gluster. I have been using gluster in other, more orthodox configurations, for several years. I have a single system with 45 inexpensive SATA drives - it's a
2002 Mar 02
4
ext3 on Linux software RAID1
.../dev/sdg1 raid-disk 4 device /dev/sdh1 raid-disk 5 device /dev/sdi1 raid-disk 6 device /dev/sdj1 raid-disk 7 device /dev/sdk1 raid-disk 8 device /dev/sdl1 raid-disk 9 device /dev/sdm1 raid-disk...
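The excerpt above is an old raidtools-style /etc/raidtab; a hedged sketch of the mdadm equivalent for the thread's actual topic, ext3 on software RAID1 (device names hypothetical, mirror shown with two members for brevity):
  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
  # mkfs.ext3 /dev/md0
  # mdadm --detail --scan >> /etc/mdadm.conf   # persist the array so it assembles at boot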