Displaying 14 results from an estimated 14 matches for "40tb".
2011 Apr 13
1
Expanding RAID 10 array, WAS: 40TB File System Recommendations
On 4/13/11, Rudi Ahlers <Rudi at softdux.com> wrote:
>> to expand the array :)
>
> I haven't had problems doing it this way yet.
I finally figured out my mistake creating the raid devices and got a
working RAID 0 on two RAID 1 arrays. But I wasn't able to add another
RAID 1 component to the array; it failed with the error:
mdadm: add new device failed for /dev/md/mdr1_3 as 2:
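For context, the layout being described is roughly the following; the device names are placeholders, and the final grow command assumes a newer mdadm/kernel than the CentOS 5-era tools in the thread:
  # two RAID 1 pairs, then a RAID 0 stripe across them
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
  # widening a RAID 0 needs a reshape rather than a plain --add of a spare;
  # recent mdadm accepts the combined grow form below (an assumption here,
  # /dev/md3 being a hypothetical third RAID 1)
  mdadm --grow /dev/md0 --raid-devices=3 --add /dev/md3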
2011 Apr 12
17
40TB File System Recommendations
Hello All
I have a brand spanking new 40TB hardware RAID 6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems, as we are going
to use it for backups. Other factors are performance and reliability.
CentOS 5.6
array is /dev/sdb
So here is what...
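For a single filesystem of that size, one hedged sketch, assuming XFS support is available on the box (on CentOS 5 the xfsprogs package and kernel module may need to be installed separately), looks like:
  # one XFS filesystem across the whole array, mounted for backups
  mkfs.xfs -L backup /dev/sdb
  mkdir -p /backup
  mount -o inode64,noatime LABEL=backup /backup
  echo "LABEL=backup /backup xfs inode64,noatime 0 0" >> /etc/fstab
ext3 on EL5 cannot cover 40TB in one filesystem, which is why threads like this one tend to end up at XFS or at splitting the array.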
2006 Feb 13
1
filesystem size limit: samba on a 64bit 2.6 kernel vs. win2k/winxp clients?
...to 32bit windows 2003
and windows-xp by googling.
The closest I could get to was that cifs filesystem size correlates to ntfs
filesystem size and therefore there would be a 64TB limit. Are there additional
limits within the software stack that I should be aware of?
We are considering exporting a 40TB xfs filesystem striped over 20 2TB luns on a
fibre channel SAN via LVM2 on SLES9-SP3, and will be falling back to 16TB
filesystems if necessary. Anything below that would be cumbersome because of the
number of mountpoints.
Can the 32bit windows clients keep up with that size as long as the files o...
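The LVM2 layer described above would look roughly like this; the LUN names are placeholders and the stripe count/size are only illustrative:
  pvcreate /dev/mapper/lun01 /dev/mapper/lun02      # repeat for all 20 LUNs
  vgcreate vg_export /dev/mapper/lun01 /dev/mapper/lun02   # all 20 again
  # -i 20 stripes extents across all 20 PVs, -I 256 sets a 256KB stripe size;
  # use -l 100%FREE (or an explicit extent count on older LVM2)
  lvcreate -n lv_export -i 20 -I 256 -l 100%FREE vg_export
  mkfs.xfs /dev/vg_export/lv_export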
2013 Aug 28
0
Quotas on Lustre File system
Hello,
I am a newbie to Lustre File system.
In our data centre we have 2 Lustre file systems (a smaller 40TB and a
larger 200TB).
We use our Lustre file systems to perform I/O for life sciences and
bioinformatics applications.
The vendor has decided to mount home directories on the smaller Lustre
file system (40TB) and has also installed the bioinformatics applications
on this smaller Lustre FS.
The larger Lustre FS...
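If the aim is to keep home directories and applications from filling the 40TB filesystem, per-user quotas are queried and set from a client with lfs; the user name, limits and mount point below are only examples, and quota enforcement has to be enabled on the servers first (the exact procedure depends on the Lustre version):
  lfs quota -u alice /mnt/lustre_small        # show current usage and limits
  # 200GB soft / 220GB hard block limits plus inode limits; newer lfs accepts
  # size suffixes, older versions want the block limits in kilobytes
  lfs setquota -u alice -b 200G -B 220G -i 1000000 -I 1100000 /mnt/lustre_small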
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi,
I am trying to get the most out of my 40TB xfs file system and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things).
The problem is that I have to export the filesystem via NFS and I cannot seem to get this working with the current version of nfs-utils (1.2.3).
The expor...
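One workaround that comes up for this combination is pinning an fsid on the export so the NFS file handles do not depend on the (now 64-bit) root inode number; the paths and client network below are placeholders, not the thread's exact configuration:
  # /etc/fstab
  /dev/sdb1   /export/data   xfs   inode64,noatime   0 0
  # /etc/exports
  /export/data   192.168.0.0/24(rw,no_subtree_check,fsid=1001)
  # then re-export
  exportfs -ra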
2009 Nov 06
4
2 TB limit on a samba share
I'm running a server with CentOS 3 from which I have set up an smbfs share
to a Buffalo LinkStation. The LS has 4 drives configured with RAID 5. Each
disk has 1 TB capacity, so the resulting drive is approximately 2.7 TB.
When doing a df, the result shows 2 TB and no used blocks. Is there
some setting I can change so that CentOS sees and uses all 2.7 TB, or
does CentOS 3 not support this?
Steve
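The usual suspect for this symptom is the old smbfs client rather than the LinkStation itself. A hedged sketch of the two things normally tried (share name, user and mount point are placeholders):
  # smbfs with large-filesystem support, where the CentOS 3 kernel allows it
  mount -t smbfs -o username=steve,lfs //linkstation/share /mnt/ls
  # or, from a 2.6-kernel client, the newer cifs client
  mount -t cifs -o username=steve //linkstation/share /mnt/ls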
2023 Mar 23
1
hardware issues and new server advice
...Interesting, we have a similar workload: hundreds of millions of
images, small files, and especially on weekends with high traffic the
load+iowait is really heavy. Or if a hdd fails, or during a raid
check.
our hardware:
10x 10TB hdds -> 5x raid1, each raid1 is a brick, replicate 3 setup.
About 40TB of data.
Well, the bricks are bigger than recommended... Sooner or later we
will have to migrate that stuff, and use nvme for that, either 3.5TB
or bigger ones. Those should be faster... *fingerscrossed*
regards,
Hubert
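For reference, a layout like that (several RAID 1 bricks per server, replica 3) is assembled roughly as below; hostnames and brick paths are placeholders, and the remaining brick triplets follow the same pattern:
  gluster volume create images replica 3 \
      srv1:/bricks/r1/brick srv2:/bricks/r1/brick srv3:/bricks/r1/brick \
      srv1:/bricks/r2/brick srv2:/bricks/r2/brick srv3:/bricks/r2/brick
  gluster volume start images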
2007 Apr 23
5
Re: [nfs-discuss] Multi-tera, small-file filesystems
...07, at 6:44 AM, Yaniv Aknin wrote:
> Hello,
>
> I'd like to plan a storage solution for a system currently in
> production.
>
> The system's storage is based on code which writes many files to
> the file system, with overall storage needs currently around 40TB
> and expected to reach hundreds of TBs. The average file size of the
> system is ~100K, which translates to ~500 million files today, and
> billions of files in the future. This storage is accessed over NFS
> by a rack of 40 Linux blades, and is mostly read-only (99% of the
> ...
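As a rough cross-check of those figures: 40 TB at ~100 KB per file is about 4x10^13 bytes / 1x10^5 bytes = 4x10^8, i.e. on the order of 400-500 million files, so the quoted numbers are self-consistent before any growth into the hundreds of TBs.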
2013 Dec 12
3
Is Gluster the wrong solution for us?
...(1MB-100MB). For the most part, these files are write once, read several times. Our initial store is 80TB, but we expect to go to roughly 320TB fairly quickly. After that, we expect to be adding another 80TB every few months. We are using some COTS servers which we add in pairs; each server has 40TB of usable storage. We intend to keep two copies of each file. We currently run 4TB bricks
In our somewhat limited test environment, GlusterFS seemed to work well. And, our initial introduction of GlusterFS into our production environment went well. We had our initial 2 server (80TB) cluster ab...
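Growing a replica-2 distributed volume in server pairs, as described, normally looks like the following; the volume name, hosts and brick paths are placeholders:
  gluster volume add-brick bigvol srv9:/bricks/b1/brick srv10:/bricks/b1/brick
  gluster volume rebalance bigvol start   # spread existing files onto the new bricks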
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
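For readers skimming the proposal, the kind of interface being discussed maps onto zpool commands along these lines (device names are illustrative):
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0
  zpool add tank spare c0t4d0            # add another shared spare later
  zpool replace tank c0t1d0 c0t3d0       # swap a faulted disk for a spare by hand
  zpool status tank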
2023 Mar 21
1
hardware issues and new server advice
Excerpts from Strahil Nikolov's message of 2023-03-21 00:27:58 +0000:
> Generally, the recommended approach is to have 4TB disks and no more
> than 10-12 per HW RAID.
what kind of raid configuration and brick size do you recommend here?
> Of course , it's not always possible but a
> resync of a failed 14 TB drive will take eons.
right, that is my concern too.
but with raid
2023 Mar 24
2
hardware issues and new server advice
...Interesting, we have a similar workload: hundreds of millions of
images, small files, and especially on weekends with high traffic the
load+iowait is really heavy. Or if a hdd fails, or during a raid
check.
our hardware:
10x 10TB hdds -> 5x raid1, each raid1 is a brick, replicate 3 setup.
About 40TB of data.
Well, the bricks are bigger than recommended... Sooner or later we
will have to migrate that stuff, and use nvme for that, either 3.5TB
or bigger ones. Those should be faster... *fingerscrossed*
regards,
Hubert
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to windows servers using comstar and fibrechannel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
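Attaching the SSDs as dedicated log and cache devices is a two-liner; the sketch below assumes the pool is called tank and uses placeholder device names, and whether it helps depends on how synchronous the write load actually is:
  zpool add tank log mirror c2t0d0 c2t1d0   # mirrored ZIL (slog) devices
  zpool add tank cache c2t2d0 c2t3d0        # L2ARC cache devices
  zpool iostat -v tank 5                    # watch how the new vdevs are used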
2012 Jul 02
14
HP Proliant DL360 G7
Hello,
Has anyone out there been able to qualify the Proliant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous generation HP servers) would be greatly appreciated.
Thanks in advance!
-Anh