Displaying 20 results from an estimated 44 matches for "12tb".
2020 Sep 25
6
Question regarding cent OS 7.8.2003 compatibility with large SAS disks
Hello,
I have a blade server with SAS HDD's of 12TB in total.
3 HDD's of 4TB each.
Is it possible to install Cent OS 7.8.2003 on 12TB of disk space?
I will be installing Cent OS on the bare metal HW.
I referred to https://wiki.centos.org/About/Product
but I'm slightly confused by the 'maximum file size' row for the ext4 FS.
Thanks & Regard...
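For reference, the 'maximum file size' row on that page refers to a single file (16TiB on ext4 with 4KiB blocks), not to total disk capacity, so 3 x 4TB is well within range. The practical wrinkle is that disks over 2TiB need a GPT label, which the CentOS 7 installer creates automatically; done by hand it would look roughly like this (device name hypothetical):
  parted /dev/sda --script mklabel gpt
  parted /dev/sda --script mkpart primary xfs 1MiB 100%
  mkfs.xfs /dev/sda1     # XFS is the CentOS 7 default; ext4 also handles a 4TB partition fine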
2020 Sep 25
1
Question regarding cent OS 7.8.2003 compatibility with large SAS disks
...lade servers after a gap of a decade, hence I
was in doubt :-)
The Cloud computing era has wiped my knowledge about server HW & OS
compatibility :-/
Regards,
Amey.
>
> On 9/24/20 10:25 PM, Amey Abhyankar wrote:
> > Hello,
> >
> > I have a blade server with SAS HDD's of 12TB in total.
> > 3 HDD's of 4TB each.
> >
> > Is it possible to install Cent OS 7.8.2003 on 12TB disk space?
> > I will be installing Cent OS on the bare metal HW.
> >
> > I referred = https://wiki.centos.org/About/Product
> > But slightly confused with t...
2008 Oct 26
1
Looking for configuration suggestions
...s, 2x JetStor and a Dell MD3000.
The Coraid and JetStor are network connected via ATA over Ethernet &
iSCSI. The disks are also being upgraded to 1TB in the process. I do
plan to use unify to bring most if not all the storage together.
My first question: since these units are over 12TB after being
RAIDed, should I divide them up into smaller bricks or just use the
entire 12TB+ as one brick?
What other things should I consider?
Thanks for the help!
-- Matt
2008 Jun 10
3
ZFS space map causing slow performance
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of time in an spa_sync() will be spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of sm...
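A rough sketch of confirming where the sync time goes on Solaris, assuming the pool is named 'tank' (a hypothetical name):
  # count calls into the space map code while the pool is syncing
  dtrace -n 'fbt::space_map_*:entry { @[probefunc] = count(); }'
  # watch per-vdev activity at the same time
  zpool iostat -v tank 5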
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
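ZFS dedup does in fact work at the block (record) level, so two files only share space where their blocks are bit-identical; merely similar files rarely dedup at all. A quick way to inspect the ratio and the dedup table, using the pool name 'data' from the post:
  zpool get dedupratio data   # overall ratio for the pool
  zdb -DD data                # histogram of the deduplication table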
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
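The ZFS equivalent of RAID 10 is a pool built from several two-way mirrors; ZFS stripes writes across the mirror vdevs automatically, so there is no separate stripe step. A minimal sketch with hypothetical device names matching the 1/5, 2/6, 3/7, 4/8 pairing above:
  zpool create tank \
      mirror c0t1d0 c0t5d0 \
      mirror c0t2d0 c0t6d0 \
      mirror c0t3d0 c0t7d0 \
      mirror c0t4d0 c0t8d0
  zpool status tank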
2020 Sep 25
0
Question regarding cent OS 7.8.2003 compatibility with large SAS disks
I have done it numerous times.
On 9/24/20 10:25 PM, Amey Abhyankar wrote:
> Hello,
>
> I have a blade server with SAS HDD's of 12TB in total.
> 3 HDD's of 4TB each.
>
> Is it possible to install Cent OS 7.8.2003 on 12TB disk space?
> I will be installing Cent OS on the bare metal HW.
>
> I referred = https://wiki.centos.org/About/Product
> But slightly confused with the 'maximum file size' row...
2010 Mar 25
3
RAID 5 setup?
Can anyone provide a tutorial or advice on how to configure a software RAID 5 from the command-line (since I did not install Gnome)?
I have 8 x 1.5TB drives.
-Jason
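A minimal command-line sketch using mdadm, assuming the eight drives show up as /dev/sdb through /dev/sdi (hypothetical names):
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
  cat /proc/mdstat                          # watch the initial build
  mkfs -t ext4 /dev/md0                     # or ext3/xfs, whichever you prefer
  mdadm --detail --scan >> /etc/mdadm.conf  # so the array assembles at boot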
2012 Feb 01
2
Doubts about dsync, mdbox, SIS
...sers 145M 113M 33M 78% /usr/local/atmail/users
Very little of this is compressed (zlib plugin enabled during Christmas).
I'm surprised that the destination server is so large; I was expecting zlib,
mdbox and SIS to compress it down to much less than what we're seeing
(12TB -> 5TB):
$ df -h /srv/mailbackup
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mailbackupvg-mailbackuplv
5.7T 4.8T 882G 85% /srv/mailbackup
Lots and lots of the attachment storage is duplicated into identical files
instead of being hard linked.
Whe...
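Dovecot's single-instance attachment storage only applies when mail_attachment_dir is set on the destination side; a minimal sketch of the relevant dovecot.conf settings, with hypothetical paths:
  mail_attachment_dir = /srv/mailbackup/attachments  # shared store; this is what enables SIS
  mail_attachment_min_size = 128k                    # only externalize larger MIME parts
  mail_attachment_hash = %{sha1}                     # identical parts hash to a single file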
2013 Oct 09
1
mdraid strange surprises...
Hey,
I installed 2 new data servers with a big (12TB) RAID6 mdraid.
I formatted the whole arrays with bad block checks.
One server is moderately used (NFS on one md), while the other is not.
One week later, after the raid-check from cron, I get on both servers
a few block_mismatch... 1976162368 on the used one and a tiny bit less
on the other...?...
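A sketch of reading and acting on that counter through the md sysfs interface, assuming the array is /dev/md0 (a hypothetical name):
  cat /sys/block/md0/md/mismatch_cnt            # sectors found inconsistent by the last check
  echo check  > /sys/block/md0/md/sync_action   # re-run a read-only scrub
  echo repair > /sys/block/md0/md/sync_action   # rewrite parity for mismatched stripes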
2023 Mar 14
1
How to configure?
Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3 arbiter 1. Using Debian packages from Gluster
9.x latest repository.
Seems 192GB RAM is not enough to handle 30 data bricks + 15 arbiters, and
I often had to reload glusterfsd because glusterfs processes got killed
by the OOM killer.
On top of that, performance has been quite...
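Not a fix, but a sketch of how the memory pressure can be inspected before changing the layout, assuming a volume named 'vol0' (hypothetical):
  gluster volume status vol0 mem     # per-brick memory usage
  dmesg | grep -i 'out of memory'    # confirm which glusterfsd instances the OOM killer hit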
2012 Apr 20
1
Upgrading from V3.0 production system to V3.3
...6 -> rbrick6
svr1:vol7 <-> srv2:vol7 -> rbrick7
svr1:vol8 <-> srv2:vol8 -> rbrick8
Then distributed across all 8 replicated bricks
rbrick1+rbrick2+rbrick3+rbrick4+rbrick5+rbrick6+rbrick7+rbrick8 -> dhtvolume
(16TB dht glusterfs volume)
I'm currently using over 12TB of storage and can foresee a time when I will run
out of space. I want to upgrade our storage system to Gluster V3.3 and upgrade
the hard drives to 4TB models. I purchased a new server (same base hardware as the
existing ones), but outfitted with a new controller (that supports 4TB drives) and
with 8 x 4...
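In 3.3 the same replicate-then-distribute layout is expressed directly on the command line, with each replica pair listed adjacently; a rough sketch with hypothetical brick paths:
  gluster volume create dhtvolume replica 2 \
      svr1:/bricks/vol1 srv2:/bricks/vol1 \
      svr1:/bricks/vol2 srv2:/bricks/vol2   # ...and so on through vol8, one pair per rbrick
  gluster volume start dhtvolume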
2023 Mar 15
1
How to configure?
...On Tue, Mar 14, 2023 at 16:44, Diego Zuccato<diego.zuccato at unibo.it> wrote: Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3 arbiter 1. Using Debian packages from Gluster
9.x latest repository.
Seems 192G RAM are not enough to handle 30 data bricks + 15 arbiters and
I often had to reload glusterfsd because glusterfs processed got killed
for OOM.
On top of that, performance have been quite...
2012 Jun 12
4
rsync takes long pauses in xfer ?
Hey folks,
I did some googling on this but did not come up with much. I'm using
rsnapshot which uses rsync, and I notice some pretty long pauses in
the xfers, as you can see on this graph from "munin". The machine in
question right at the moment is doing nothing but rsyncing
(rsnapshotting) some 12T of NAS storage to local disk, so there is
nothing else going on at all.
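A sketch of what could be checked during one of those pauses to see whether rsync is walking the file list or waiting on I/O (the PID lookup is only illustrative):
  iostat -x 5     # is the local disk or the NAS side the bottleneck?
  strace -p "$(pidof rsync | awk '{print $1}')" -e trace=read,write,lstat,getdents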
2023 Oct 27
1
State of the gluster project
...entries that are not healing after a couple months of (not really heavy)
use, directories that can't be removed because not all files have been
deleted from all the bricks, and files or directories that become
inaccessible for no apparent reason.
Given that I currently have 3 nodes with 30 12TB disks each in replica 3
arbiter 1, it's become a major showstopper: I can't stop production, back up
everything and restart from scratch every 3-4 months. And there are no
tools helping, just log digging :( Even at version 9.6 it seems it's not
really "production ready"... More l...
2008 Jun 27
2
Help needed. Samba 3.2.0rc2 - IDMAP - Windows 2008 Server - ADS Integration - Winbind
...pany as this is required for some other projects. But we want to
keep our nice 5+ Samba servers providing fast 50TB+ of storage.
So we have to find a way to nicely integrate the storage with the newly
installed ADS. Therefore I installed a test lab consisting of 2 Debian
Etch storage servers, each with 12TB of LVM-based storage attached. Also we
have 2 MS 2008 Server SP1 as PDC and BDC. Further we have some Windows
XP 32 and 64 bit clients as workstations for testing.
Now we set up everything and decided to use Samba 3.2.0, as it solves some
bugs related to W2k8 server. So I built Debian packag...
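A minimal smb.conf sketch for joining such a storage server to the new ADS with winbind; the realm and workgroup names are placeholders and the exact idmap options vary between 3.x releases:
  [global]
     security = ads
     realm = EXAMPLE.LOCAL
     workgroup = EXAMPLE
     idmap uid = 10000-200000
     idmap gid = 10000-200000
     winbind use default domain = yes
     template shell = /bin/bash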
2023 Mar 15
1
How to configure?
...> <diego.zuccato at unibo.it> wrote:
> Hello all.
>
> Our Gluster 9.6 cluster is showing increasing problems.
> Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
> thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
> configured in replica 3 arbiter 1. Using Debian packages from Gluster
> 9.x latest repository.
>
> Seems 192G RAM are not enough to handle 30 data bricks + 15 arbiters
> and
> I often had to reload glusterfsd because glusterfs processed got killed
>...
2012 Apr 05
1
Better to use a single large storage server or multiple smaller for mdbox?
I'm trying to improve the setup of our Dovecot/Exim mail servers to
handle the increasingly huge accounts (everybody treats it like
infinitely growing storage, like Gmail, and stores everything forever in
their email accounts) by changing from Maildir to mdbox, and to take
advantage of offloading older emails to alternative networked storage
nodes.
The question now is whether having a
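A sketch of the alt-storage side of that, with hypothetical paths and user; mdbox supports an ALT location that older mail can be pushed to with doveadm:
  mail_location = mdbox:~/mdbox:ALT=/mnt/archive/%u/mdbox
  # later, move messages older than ~6 months to the alt storage:
  doveadm altmove -u user@example.com savedbefore 180d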
2006 Apr 09
0
CentOS 4 and multi TB storage solutions
...es
to storage servers (all with 3ware cards offering RAID5 arrays) that
collectively serve 10TB of images. The data grows by 2TB every year.
This has been working so far, but the current implementation is facing
growing pains, so I'm looking at more scalable options: something that
will offer 12TB of storage and room to grow by adding more storage as
needed.
Currently the Apple Xserve RAID 7TB offerings connected to a QLogic
SANbox 5200 fibre switch look very appealing. Especially considering
the Xserve RAIDs are listed in Red Hat's RHEL HCL and Google searches
have revealed successfu...
2012 Sep 21
0
Bricks topology
...the following ok for a small cluster that will serve hundreds of virtual
machines?
- two DELL R515 with 12 2TB (or 3TB) SATA disks
- RAID 1 between every 2 disks on each server
- one brick for every RAID1
- distributed replicated cluster between the bricks.
This should give me a big 12TB cluster.
Is RAID10 better than many distributed bricks? With bricks, should I have
more flexibility in case of disk expansion?
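A rough sketch of the distributed-replicated variant (hypothetical host and brick names), one brick per RAID1 pair, six per server, pairing the same brick path on both servers:
  gluster volume create vmstore replica 2 \
      r515-1:/bricks/b1 r515-2:/bricks/b1 \
      r515-1:/bricks/b2 r515-2:/bricks/b2   # ...and so on through b6
  gluster volume start vmstore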