Displaying 20 results from an estimated 10000 matches similar to: "2TB limit, weird mounting issues on reboot"
2008 Oct 15
2
formatting large volume
I just got a new server with a Dell MD-1000 SAS unit and six 750-gigabyte
drives, now initializing in RAID 10, which will give me just
about 2 terabytes.
I vaguely recall reading that fdisk isn't suitable for partitioning a volume
this size and wonder if I shouldn't be using parted instead. I am also wondering if I
should use lvm or just mkfs to create the filesystem. Anyone have
suggestions before I
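For reference, a minimal sketch of one common GPT-plus-LVM approach (the device name /dev/sdb and the volume names are placeholders, not taken from the post):

    parted /dev/sdb mklabel gpt                # GPT labels handle >2TB; MS-DOS labels do not
    parted /dev/sdb mkpart primary 0% 100%     # one partition spanning the whole array
    pvcreate /dev/sdb1                         # turn it into an LVM physical volume
    vgcreate vg_data /dev/sdb1
    lvcreate -l 100%FREE -n lv_data vg_data    # LVM makes later growth easier
    mkfs.ext3 /dev/vg_data/lv_data             # or plain mkfs.ext3 /dev/sdb1 without LVM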
2009 Jun 26
2
2TB partition limitation on X86_64 version??
We have a Dell server with the CentOS 5.3 x86_64 version on it. This server also has a couple of MD1000s connected to it. We configured one MD1000 as a single hardware volume of 2990GB.
I tried to use "fdisk" to partition this 2990GB volume, but "fdisk" can only see 2000GB. Does a 64-bit O.S. still have a 2TB limitation on the filesystem?
Is there another tool that can partition a disk size larger
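The 2TB ceiling here is the MS-DOS (MBR) partition table that fdisk writes, not the 64-bit OS. A sketch of the usual parted workaround, assuming the volume appears as /dev/sdb (a placeholder):

    parted /dev/sdb mklabel gpt                # replace the MBR label with GPT
    parted /dev/sdb mkpart primary 0% 100%
    parted /dev/sdb unit GB print              # should now report the full 2990GB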
2011 Oct 12
1
raid on large disks?
What's the right way to set up >2TB partitions for raid1 autoassembly?
 I don't need to boot from this but I'd like it to come up and mount
automatically at boot.
-- 
  Les Mikesell
    lesmikesell at gmail.com
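A sketch of one common recipe, assuming two member disks /dev/sdb and /dev/sdc (placeholder names): GPT-label the members, create the array, and record it so it autoassembles at boot.

    for d in /dev/sdb /dev/sdc; do
        parted $d mklabel gpt                  # GPT, since the partitions exceed 2TB
        parted $d mkpart primary 0% 100%
        parted $d set 1 raid on                # flag the partition for RAID use
    done
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --detail --scan >> /etc/mdadm.conf   # lets the initscripts autoassemble it
    mkfs.ext4 /dev/md0
    echo '/dev/md0 /data ext4 defaults 0 2' >> /etc/fstab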
2007 Feb 05
1
LVM on large partitions greater than 2TB
I want to create one large LVM volume on the 2.5TB device.  I seem to be
able to create an LVM physical volume on the whole device, but I've read
that it's better to create a single large partition for LVM as the existence
of the partition informs other apps that the disk is in use, which prevents
accidental corruption of the LVM volume.
Creating a large partition of this size under CentOS
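A sketch of the single-partition variant being described, assuming the device is /dev/sdb (a placeholder); the lvm flag is what advertises the disk as in use:

    parted /dev/sdb mklabel gpt                # MBR cannot describe a 2.5TB partition
    parted /dev/sdb mkpart primary 0% 100%
    parted /dev/sdb set 1 lvm on               # mark the partition as an LVM PV
    pvcreate /dev/sdb1
    vgcreate vg_big /dev/sdb1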
2015 Feb 12
2
test, and H/W
Hi, folks,
   This is a test post; to make it of interest, here's an issue that those of
you without brand-new hardware might want to be aware of. We've got a few
Dell PE R415s. A 2TB backup drive on one was getting full, so I went to
replace it with a 3TB drive (a WD Red, not that it matters.) We got the
system in '11.
I built the drive - GPT label, one 3TB partition - on an R320 from '12.
2010 May 24
2
Mounting LVM disk
List Readers -
 
I have a Dell server that uses the PERC 6i controller and had 5 1TB
disks installed (1 for the OS and the other 4 in a RAID 0 for a large storage
pool). The owner of the server wanted me to swap out the 1TB disks for
2TB disks - easy enough, I thought, but I ran into some issues trying to
clone the OS disk to the new 2TB disk, so I just did a re-install. So
basically we now have 5 2TB
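If the old data disks carried LVM metadata, the usual recovery sequence is rescan, activate, mount; a sketch (the volume-group and LV names are placeholders):

    pvscan                                 # find physical volumes on the attached disks
    vgscan                                 # find the volume groups they belong to
    vgchange -ay VolGroup01                # activate the old data volume group
    lvs                                    # list its logical volumes
    mount /dev/VolGroup01/LogVol00 /mnt/data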
2009 Sep 24
4
mdadm size issues
Hi,
I am trying to create a 10-drive RAID6 array. The OS is CentOS 5.3 (64-bit)
All 10 drives are 2T in size.
devices sd{a,b,c,d,e,f} are on my motherboard
devices sd{i,j,k,l} are on a PCI Express Areca card (relevant lspci info below)
#lspci 
06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller
The controller is set to JBOD the drives. 
All
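For a 10 x 2TB RAID6 the finished array should come out around (10-2) x 2TB = 16TB. A sketch of the creation command, using the whole-disk names from the post:

    mdadm --create /dev/md0 --level=6 --raid-devices=10 \
          /dev/sd{a,b,c,d,e,f,i,j,k,l}
    cat /proc/mdstat                       # verify the size while it resyncs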
2006 Oct 15
1
Proper partition/LVM growth after RAID migration
Hi
This topic is perhaps not for this list, but I'm running
CentOS 4.4 and it seems that a lot of people here use 3ware and RAID
volumes.
I did a RAID migration on a 3ware 9590SE-12, so that an exported disk
grew from 700GB to 1400GB. The exported disk is managed by LVM. The
problem is that I don't really know what to do now to let LVM and
my logical volume make use of
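The usual answer is to resize each layer from the bottom up; a hedged sketch, assuming a reasonably current LVM2 and that the PV sits directly on the exported disk (a PV inside a partition needs the partition grown first; all names are placeholders):

    pvresize /dev/sda                      # let the PV take in the new 700GB
    lvextend -l +100%FREE /dev/vg0/lv0     # grow the logical volume into the free space
    ext2online /dev/vg0/lv0                # grow mounted ext3 on el4; newer systems use resize2fs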
2010 Feb 08
7
Can I use direct attached storage as a shared filesystem in Xen
I have a quad-core server on which I want to run 4 virtual servers. On this
server I have a half-terabyte RAID 1 that I have split between the 4 guests
for their OS installs. I also have 10 terabytes of internal RAID 5 storage
running on a 3ware 9690a card. I want to share this storage between the
servers without partitioning it. Is this possible?
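For what it's worth, a block device can be handed to several guests, but concurrently mounting an ordinary filesystem (ext3, XFS) from more than one domU will corrupt it; sharing without partitioning needs a cluster filesystem (GFS2, OCFS2) or an NFS export from one guest. A sketch of the guest-config line, with placeholder paths; the 'w!' mode tells xend to permit the shared writable mapping:

    # in each domU's config file
    disk = [ 'phy:/dev/vg0/shared,xvdb,w!' ]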
2010 Dec 18
1
Xapian index size 475GB = 170 million documents (URLs)
Xapians,
I maintain two indexes for my search engines, each of
approximately the same size. I would like to share this
knowledge with you, since many of you have never seen a Xapian index of
this size. And of course you can search the index yourself at
- http://myhealthcare.com/
- http://find1friend.com/
I need to add 2 x 100 million more documents to each index, and I hope it
will
2009 Nov 09
1
max file size
Hello,
does anybody know the maximum file size (terabytes?) when using rsync
with the options --checksum and/or --inplace?
What file sizes have been tested in reality? Does anyone have experience using
rsync (with --checksum and/or --inplace) on big files of several or dozens
of terabytes?
Thanks a lot, Heinz-Josef Claes
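For reference, a typical invocation for very large single files looks like the sketch below (paths and host are placeholders); --inplace rewrites the destination file directly instead of building a temporary copy, and --checksum forces a full read of both sides, which is slow on multi-terabyte files:

    rsync -av --inplace --partial --progress /data/huge.img backup:/data/
    rsync -av --checksum /data/huge.img backup:/data/    # verification pass, reads both copies in full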
2004 May 13
1
2 terabyte filesystem limitation on linux client
Hi all.
I have recently introduced two 5.5TB XFS filesystems to our storage
backend.  I export the filesystems via Samba 3.0.3 on Fedora Core 2.
Linux clients that mount the share show only 2TB available.  Windows
clients show the full capacity.  Before I put these filesystems into
production I'd like to find out if the reported filesystem size is going
to cause a problem.  Is SMB actually
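One usual suspect is the client side rather than the server: the legacy smbfs client does its size arithmetic in 32 bits, while the newer cifs client reports large shares correctly. A sketch of the comparison (server, share, and user are placeholders):

    mount -t smbfs //server/share /mnt/share -o username=user   # old client, truncated sizes
    mount -t cifs  //server/share /mnt/share -o username=user   # cifs client, full 5.5TB visible
    df -h /mnt/share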
2007 Nov 26
3
DomU 2TB vbd limit?
Hi everyone,
Just a quick question:
Is there a limit on the vbd size for PV guest domains? I've found only
rumors on the Internet about a 2TB limit, but from my experience with
xen-3.0.3 / linux-kernel-2.6.18-5 (installed from Debian etch packages)
I can say that an LVM volume of 3.7TB, created and formatted in Dom0, is
seen as only 1.85TB, of course reporting a corrupted XFS filesystem
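A quick way to pin down where the truncation happens is to compare the raw device size in Dom0 and in the guest before blaming the filesystem; a diagnostic sketch (device names are placeholders):

    # in Dom0
    blockdev --getsize64 /dev/vg0/guest_disk
    # in the DomU
    blockdev --getsize64 /dev/xvda
    cat /sys/block/xvda/size               # size in 512-byte sectors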
2018 Jul 11
3
[PATCH v35 1/5] mm: support to get hints of free page blocks
On Tue, Jul 10, 2018 at 6:24 PM Wei Wang <wei.w.wang at intel.com> wrote:
>
> We only get addresses of the "MAX_ORDER-1" blocks into the array. The
> max size of the array that could be allocated by kmalloc is
> KMALLOC_MAX_SIZE (i.e. 4MB on x86). With that max array, we could load
> "4MB / sizeof(u64)" addresses of "MAX_ORDER-1" blocks, that is,
2005 Jun 13
5
formatting a 3 terabyte partition
Hi.
I'm hitting a wall each time I try to format a 3-terabyte partition. I'm
able to create the partition using parted, but whenever I try to create a
3-terabyte XFS, JFS, or ext3 filesystem, the mounted filesystem created
is only 1 terabyte. I tried CentOS 4.0 x86 and x86_64, but I always hit a
1-terabyte limit. Please help.
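The 1-terabyte result matches a known failure mode of that era: without large-block-device support (CONFIG_LBD on 32-bit x86), the kernel counts 512-byte sectors in 32 bits, which tops out at 2 TiB, and a roughly 3 TiB partition wraps around to roughly 1 TiB. Checking that arithmetic:

    echo $(( 2**32 * 512 ))                # 2199023255552 bytes = 2 TiB, the 32-bit wrap point
    echo $(( 3 * 2**40 - 2**32 * 512 ))    # 1099511627776 bytes = 1 TiB left after the wrap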
2010 Mar 30
1
NFS freeze when transmitting big files
Hi,
I have one old PC, a PIII with 128 MB RAM, that I use as a spare
file server. I have a headless CentOS 5 install on it, and a 2 terabyte 
external USB harddisk. The machine is in my basement (because it's quite 
loud), and I'm using what's called "CPL" here (Courant Porteur), which 
is basically Ethernet over 220V power lines. It's much slower than 
normally
2009 Jan 27
6
More than 2TB RAID...
Hi,
I just received a new server (HP DL180G5) with 12x 1TB HDs and I bumped into fdisk's 2TB limit...
Since this is an entry-level server, I can't use the classic HP bootable utilities to create smaller volumes and can only create one big RAID6.
I found out that using parted, labelling the disk GPT, and creating the partitions would do the trick.
But what about grub?  I read that it does not support
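Grub legacy (the bootloader of that era) indeed cannot read GPT labels, so where the controller allows it, the usual layout keeps /boot and the OS on a small MBR-labelled volume and gives only the big data array a GPT label; a sketch with placeholder device names:

    # /dev/sda: small MBR volume, installed normally, grub lives here
    # /dev/sdb: the big RAID6, GPT-labelled, never touched by grub
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%
    mkfs.ext3 /dev/sdb1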
2008 Feb 08
3
Disk partitions and LVM limits
Hi,
I've got a DAS Dell MD1000 with a bunch of SATA drives in a RAID 5 configuration
with a total space of 5.4TB. This box is attached to a CentOS 5 system (kernel
2.6.18-53.1.6.el5).
Any idea how to make this space usable?
Is there a limit on how big a partition can be? What is the workaround?
Is there a limit on how big a filesystem can be?
I've tried to partition it, but no matter how big
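One way to sidestep the partition-size question entirely is to put LVM straight on the unpartitioned device (a GPT label plus one partition, as in the other threads here, also works). A sketch, assuming the array shows up as /dev/sdb:

    pvcreate /dev/sdb                      # LVM physical volumes have no 2TB limit
    vgcreate vg_md1000 /dev/sdb
    lvcreate -l 100%FREE -n data vg_md1000
    mkfs.xfs /dev/vg_md1000/data           # xfs or ext3; 5.4TB is within both limits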
2004 Jan 16
1
Any (known) scaling issues?
I'm considering using rsync in our data center but I'm worried about whether
it will scale to the numbers and sizes we deal with. We would be moving up
to a terabyte in a typical sync, consisting of about a million files. Our
data mover machines are Red Hat Linux Advanced Server 2.1 and all the sources
and destinations are NFS mounts. The data is stored on big NFS file servers.
The
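One caveat worth knowing: with both ends NFS-mounted, all file data crosses the network anyway and rsync's delta algorithm buys nothing, since it would have to read both copies over NFS. Running rsync against the file server directly keeps the reads local; a sketch with placeholder hosts and paths:

    # both sides NFS-mounted on the data mover: works, but every byte crosses the wire
    rsync -a /nfs/src/ /nfs/dst/
    # better: let the file server do the reading, so only deltas cross the wire
    rsync -a /local/src/ fileserver:/export/dst/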