Hi,
I am trying to create a 10-drive RAID6 array. The OS is CentOS 5.3 (64-bit).
All 10 drives are 2TB in size.
Devices sd{a,b,c,d,e,f} are on my motherboard.
Devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below).
# lspci
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express
to SATA RAID Controller
The controller is configured to present the drives as JBOD.
All drives have the same partition size:
# fdisk -l
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      243201  1953512001   83  Linux
....
I go about creating the array as follows:
# mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda1
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1 /dev/sdk1
/dev/sdl1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: size set to 1953511936K
Continue creating array?
As you can see, mdadm sets the size to 1.9T. Looking around, I found that older
32-bit builds of mdadm had a size limitation, but I am using the most up-to-date
64-bit version from CentOS.
If I go ahead and answer yes to create the array, mdadm reports the following (I
zeroed the drives before creating the array):
# mdadm --detail /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Thu Sep 24 23:48:32 2009
     Raid Level : raid6
     Array Size : 15628095488 (14904.11 GiB 16003.17 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 10
  Total Devices : 10
Preferred Minor : 3
    Persistence : Superblock is persistent
    Update Time : Thu Sep 24 23:48:32 2009
          State : clean, resyncing
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 64K
 Rebuild Status : 0% complete
           UUID : cfa565db:4d0e07ca:b4d87a59:6e96cb06
         Events : 0.1
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1
       8       8      161        8      active sync   /dev/sdk1
       9       8      177        9      active sync   /dev/sdl1
Anyone got any ideas?
Nathan
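For reference, a couple of commands commonly used to watch the initial resync on the array created above (nothing beyond /dev/md3 from the original post is assumed):
# cat /proc/mdstat
# mdadm --detail /dev/md3 | grep -E 'Array Size|Rebuild Status'
The first shows the live resync progress and speed; the second pulls the array-size and rebuild-status lines out of the --detail output shown above.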
Nathan Norton wrote:
> Hi,
>
> I am trying to create a 10-drive RAID6 array. The OS is CentOS 5.3 (64-bit).

That's way too big for a single raid stripe set. Rebuilding that when a drive fails will take days, maybe even weeks. A volume that big (8 * 2TB == 16TB) will need to be partitioned as GPT rather than MBR, and a 16TB file system will be unwieldy when something goes sideways and it needs an fsck or similar.

Your output *looks* like it's built and busy striping. I'd wait about 12 hours and check the status again; whatever fraction is done, take (1 / that fraction) times the number of hours so far, and that's how long it will take to complete. E.g. if it's 5% done after 12 hours, it will take (1 / 0.05) * 12 hours = 240 hours, i.e. 10 days, to complete.
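A back-of-the-envelope check of that estimate, purely illustrative and assuming the array really is 5% done after 12 hours:
# awk 'BEGIN { done = 0.05; hours = 12; printf "%.0f hours (~%.1f days)\n", hours / done, hours / done / 24 }'
240 hours (~10.0 days)
And if you do end up partitioning the 16TB md device rather than putting a filesystem straight on it, one way to write the GPT label would be parted (MBR/msdos labels top out at 2TiB with 512-byte sectors):
# parted /dev/md3 mklabel gpt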
Nathan Norton wrote:
> Hi,
>
> I am trying to create a 10-drive RAID6 array. The OS is CentOS 5.3 (64-bit).
>
> All 10 drives are 2TB in size.
> ....
> mdadm: size set to 1953511936K
> Continue creating array?
>
> As you can see, mdadm sets the size to 1.9T. Looking around, I found that older
> 32-bit builds of mdadm had a size limitation, but I am using the most up-to-date
> 64-bit version from CentOS.

A "2TB" drive is approximately 2,000,000,000,000 bytes. That is about 1.8TB if you measure a terabyte as 1024^4, the way software does:

2 000 000 000 000 / 1024   = 1 953 125 000 kilobytes
2 000 000 000 000 / 1024^2 = 1 907 348.63 megabytes
2 000 000 000 000 / 1024^3 = 1 862.645 gigabytes
2 000 000 000 000 / 1024^4 = 1.819 terabytes
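The same conversion with the exact byte count from the fdisk output above, checked with bc (purely for reference, assuming bc is installed):
# echo '2000398934016 / 1024' | bc
1953514584
# echo 'scale=3; 2000398934016 / 1024^3' | bc
1863.016
which lines up with the 1863.01 GiB that mdadm reports as the used size per device; the small remaining difference is the partition offset and mdadm rounding down to its chunk size.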
On Thu, Sep 24, 2009 at 03:48:35PM +1000, Nathan Norton wrote:
> I am trying to create a 10-drive RAID6 array. The OS is CentOS 5.3 (64-bit).
>
> All 10 drives are 2TB in size.
> Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
> mdadm: size set to 1953511936K
> Continue creating array?
>
> As you can see, mdadm sets the size to 1.9T. Looking around there

But later...

> # mdadm --detail /dev/md3
> /dev/md3:
>         Version : 0.90
>   Creation Time : Thu Sep 24 23:48:32 2009
>      Raid Level : raid6
>      Array Size : 15628095488 (14904.11 GiB 16003.17 GB)

That's a little bit bigger than 2T :-) Indeed, it looks about right. Assuming all disks are the same size, then to a first approximation you would have this much space in your raid6:

8 * 2000398934016 / 1024 = 15628116672 KiB

mdadm reports 15628095488; I think the difference falls within expected limits.

> Anyone got any ideas?

The "create" line may have reported a small value (it reported the size of each disk), but the final array looks nice and big.

--
rgds
Stephen
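That approximation is easy to reproduce with bc, using the byte count from fdisk (just a sanity check, not anything mdadm itself prints):
# echo '8 * 2000398934016 / 1024' | bc
15628116672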
On Thursday 24 September 2009, Nathan Norton wrote:
...
> All 10 drives are 2TB in size.
...
> # mdadm --create --verbose /dev/md3 --level=6 --raid-devices=10 /dev/sda1
> /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdi1 /dev/sdj1
> /dev/sdk1 /dev/sdl1
> mdadm: layout defaults to left-symmetric
> mdadm: chunk size defaults to 64K
> mdadm: size set to 1953511936K
> Continue creating array?
>
> As you can see, mdadm sets the size to 1.9T

1953511936K (KiB) is equal to 2 TB (2 * 10^12 bytes); no mystery here.

...
> # mdadm --detail /dev/md3
> /dev/md3:
>         Version : 0.90
>   Creation Time : Thu Sep 24 23:48:32 2009
>      Raid Level : raid6
>      Array Size : 15628095488 (14904.11 GiB 16003.17 GB)
>   Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)

16003.17 GB (10^9) is exactly what you should get: (10 drives - 2 for parity) times 2 TB.

/Peter

...
> Anyone got any ideas?
>
> Nathan
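The same figure in decimal GB, checked with bc against the exact drive size from fdisk (again just a sanity check):
# echo '8 * 2000398934016 / 10^9' | bc
16003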