search for: 15tb

Displaying 20 results from an estimated 21 matches for "15tb".

2004 Jan 25
0
RE: SAMBA 15TB Volume?
...fice 212.416.0740 FAX Thomas_Massano@StorageTek.com INFORMATION made POWERFUL -----Original Message----- From: Green, Paul [mailto:Paul.Green@stratus.com] Sent: Saturday, January 24, 2004 7:11 PM To: Green, Paul; Massano, Thomas Cc: 'Samba Questions (samba@samba.org)' Subject: RE: SAMBA 15TB Volume? And the reason you never got a response from samba@samba.org is that you have your mailer configured to send HTML mail, which is immediately rejected by the mail server. So no one saw your question. Or my answer. Until now. PG -----Original Message----- From: Green, Paul Sent: Saturd...
2004 Jan 25
0
RE: SAMBA 15TB Volume?
...il, which is immediately rejected by the mail server. So no one saw your question. Or my answer. Until now. PG -----Original Message----- From: Green, Paul Sent: Saturday, January 24, 2004 7:09 PM To: 'Massano, Thomas' Cc: 'Samba Questions (samba@samba.org)' Subject: RE: SAMBA 15TB Volume? Tom, The OS (Solaris in this case) handles the disk, or virtual disks, or Veritas Disks, or SAN disks, or whatever. Samba only deals with individual files. As long as you build Samba with largefile support, Samba can deal with any file the OS can throw at it. As far as you are concerned, Sam...
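For context on the largefile point above, on 32-bit platforms it comes down to the LFS compile flags; a minimal sketch (the explicit flags are illustrative, since Samba's configure script normally detects this on its own):

    # Show the flags this platform wants for 64-bit file offsets, if any
    getconf LFS_CFLAGS

    # Hypothetical build with large file support forced on (illustrative only;
    # a reasonably modern Samba detects and enables this automatically)
    CPPFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
    make && make install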
2019 Nov 09
3
Fwd: LARGE folder/filesystem shared to Win 2k.
Hi all. Got a strange question for you (it's for an industrial setting.) I have a large folder (15TB) on a *buntu server that I'd like to have available to a Windows 2000 Server VM machine (don't ask.) The host filesystem is ext4, but Samba shouldn't care about that. What I'm wondering is will Windows be able to access/utilize all the space on that, or is it going to start erro...
2009 Nov 09
4
GUID Partition Tables and Ext3 Partition Size
Hello, Does CentOS 5.4 support large ( > 2 TB) external storage devices using GPT (GUID Partition Tables) while the main OS resides on smaller hard disks using MBR? In this scenario, what is the largest possible size of an ext3 partition (and filesystem) that can be created on the storage array under CentOS 5.4? Thanks, Manish
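For reference, the usual recipe for a >2TB array on that vintage of CentOS looks roughly like this (the device name is a placeholder and parted syntax shifts slightly between versions); the commonly cited ext3 ceiling on RHEL/CentOS 5 is around 8TB, so the release notes are worth checking before going larger:

    # Put a GPT label on the large external array; the OS disks can stay MBR
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 0% 100%

    # One big ext3 filesystem on the new partition
    mkfs.ext3 /dev/sdb1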
2006 Mar 18
1
ext3 - max filesystem size
Hi all, I am working with a pc cluster, running redhat el 4 on opteron cpus. We have several bigger RAID systems locally attached to the fileservers; now I would like to create a big striped filesystem of around 15TB. ext3 unfortunately only supports filesystem sizes up to 8TB; do you have an idea if/when this limit will be increased? I already found some discussions on LKML about it. Which FS would be a good alternative? AFAIK xfs is not supported by redhat el 4 ... thanks for any hint, cheers alex
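A rough sketch of the striped-volume approach being asked about, with placeholder devices and stripe parameters; xfsprogs would have to come from outside the stock el4 channels, as noted above:

    # Stripe the locally attached RAID LUNs together with LVM
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd
    lvcreate -i 3 -I 256 -L 15T -n scratch bigvg   # -i = stripes, -I = stripe size (KB)

    # XFS avoids the 8TB ext3 ceiling of that era
    mkfs.xfs /dev/bigvg/scratch
    mount /dev/bigvg/scratch /data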
2012 Apr 16
1
Rebuilding corrupt databases from .DB files.
We've had some catastrophic filesystem failures that have left us with corrupted databases with empty files and no backup for about 15TB of our data. Recreating the 15TB from source data backups is possible but will take a very, very long time. I'm hoping that, given all of the .DB files are still intact, there may be some way to extract their contents and rebuild the other tables. This is one of our 800 databases that has bee...
2010 Aug 30
5
pool died during scrub
...pool first, which is hard to do if it's not imported. My zdb skills are lacking - zdb -l gets you about so far and that's it. (where the heck are the other options to zdb even written down, besides in the code?) OK, so this isn't the end of the world, but it's 15TB of data I'd really rather not have to re-copy across a 100Mbit line. It really more concerns me that ZFS would do this in the first place - it's not supposed to corrupt itself!!
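For anyone in the same spot, the inspection and recovery commands being hinted at are roughly these (a sketch only; the pool name and device are placeholders, and the -F recovery import exists only on newer builds):

    # Dump the on-disk labels of a device to see what the pool thinks it is
    zdb -l /dev/dsk/c0t1d0s0

    # Examine an exported or unimportable pool without importing it
    zdb -e tank

    # Attempt a rewind-style recovery import, on builds that support it
    zpool import -F tank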
2010 Jun 26
1
--recursive and -H
...hard links. Yet it still starts transferring before reading the entire file list. Don't get me wrong, this is excellent, as there are tens of millions of files and it would take days and tons of memory to do it without the --recursive option. I'm just curious whether the large (15TB) backup I am currently running is actually preserving hard links, or if I'm going to see in a few days that it is not actually preserving them? Thanks, -Rob -- Rob Thompson, Systems Analyst Enterprise Applications Computing & Information Technology Wayne State University phone: 313-577-56...
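For anyone wondering the same thing, the invocation in question is roughly the following; incremental recursion only kicks in when both ends run rsync 3.x, so checking versions before trusting a multi-day run is cheap insurance (host and paths are placeholders):

    # Confirm both sides are rsync 3.x
    rsync --version
    ssh backuphost rsync --version

    # Archive mode plus hard-link preservation; -a already implies --recursive
    rsync -aH --stats /big/tree/ backuphost:/backup/tree/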
2018 Apr 03
0
Is the size of bricks limiting the size of files I can store?
...ze of the >> individual bricks. >> > > Thanks a lot for that definitive answer. Is there a way to manage this? > Can you shard just those files, making them replicated in the process? > +Krutika, xlator/shard maintainer for the answer. > I just can't have users see 15TB free and fail copying a 15GB file. They > will show me the bill they paid for those "disks" and flay me. > > -andreas > > > -- > "economics is a pseudoscience; the astrology of our time" > Kim Stanley Robinson > ________________________________________...
2018 Apr 02
0
Is the size of bricks limiting the size of files I can store?
...cated in the > process? I manage this by using a thin pool and thin LVM, adding new drives to the LVM across all gluster nodes to expand the user space. My thinking on this is a RAID 10, with the RAID 0 in the LVM and the RAID 1 handled by gluster replica 2+ :-) > I just can't have users see 15TB free and fail copying a 15GB file. > They > will show me the bill they paid for those "disks" and flay me. > > -andreas > > -- > "economics is a pseudoscience; the astrology of our time" > Kim Stanley Robinson > _____________________________________...
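A sketch of the thin-pool layout described above, with placeholder names and sizes; the same steps would be repeated on each gluster node so the bricks grow in step:

    # One thin pool per node; more drives join the VG later
    pvcreate /dev/sdb /dev/sdc
    vgcreate gvg /dev/sdb /dev/sdc
    lvcreate -L 14T --thinpool gpool gvg

    # Thin volume backing the brick filesystem, mounted at /bricks/brick1
    lvcreate -V 15T --thin -n brick1 gvg/gpool
    mkfs.xfs /dev/gvg/brick1

    # Growing later: add a drive, extend pool and volume, grow the filesystem
    vgextend gvg /dev/sdd
    lvextend -L +4T gvg/gpool
    lvextend -L +4T gvg/brick1
    xfs_growfs /bricks/brick1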
2013 Nov 25
2
Zlib plugin - when does it make sense?
Hi, I run a small IMAP server for a dozen guys in the office, serving about 55GB of Maildir. I recently became aware of the Zlib plugin ( http://wiki2.dovecot.org/Plugins/Zlib ) and wondered: 1. Given that there is about zero CPU load on my IMAP server, is enabling the plugin a no-brainer or are there other things (except CPU load) to consider? 2. For enabling the plugin, I suppose you
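For reference, turning the plugin on is only a few lines of configuration; a sketch along the lines of the wiki page linked above (file paths and values are illustrative):

    # e.g. /etc/dovecot/conf.d/10-mail.conf
    mail_plugins = $mail_plugins zlib

    # e.g. /etc/dovecot/conf.d/90-plugin.conf
    plugin {
      zlib_save = gz        # compress newly saved mails with gzip
      zlib_save_level = 6   # 1..9, trading CPU for disk
    }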
2018 Apr 13
0
Is the size of bricks limiting the size of files I can store?
...me and then move it back into the volume. 2. copy the existing file into a temporary file on the same volume and rename the file back to its original name. -Krutika >>> >> +Krutika, xlator/shard maintainer for the answer. >> >> >> I just can't have users see 15TB free and fail copying a 15GB file. They >>> will show me the bill they paid for those "disks" and flay me. >>> >> > Any input on that Krutika? > > /andreas > > -- > "economics is a pseudoscience; the astrology of our time" > Kim Stanl...
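In shell terms, option 2 above is just a copy and rename done on a client mount of the volume (paths are placeholders); the rewritten copy then picks up the volume's current shard settings:

    # Run on the gluster mount, never directly on a brick
    cp --preserve=all /mnt/gv0/big.img /mnt/gv0/big.img.tmp
    mv /mnt/gv0/big.img.tmp /mnt/gv0/big.img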
2008 Mar 31
3
Samba Restrictions
Hi, I'm hoping you can give me some advice. I work for a Financial Institute and we are very interested in implementing Samba as a file server running on AIX 5.3. Before we can think about implementing this, we need to know if Samba has any limitations on the number of folders, files and shares. The current file storage system is running on Windows 2003 server and has somewhere in the region of
2019 Jan 29
4
Dovecot and FTS experiment
Hello, I'm trying to experiment with Dovecot and a Solr server. I have >30k email addresses that I want to index to speed up searching and save IOPS on the mail servers. For now I'm doing some experiments and testing how it works. I'm thinking about adding one additional server with Solr and configuring all mail servers to use that server. I have some questions. 1. I have
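The per-mail-server half of such a setup is a short plugin stanza; a sketch with a placeholder Solr URL (the Solr core itself still needs to be created from Dovecot's schema):

    # e.g. /etc/dovecot/conf.d/10-mail.conf
    mail_plugins = $mail_plugins fts fts_solr

    # e.g. /etc/dovecot/conf.d/90-plugin.conf
    plugin {
      fts = solr
      fts_solr = url=http://solr.example.net:8983/solr/dovecot/
    }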
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel-panic while switching a mounted volume into read-only mode: The system is attached to a Symmetrix, all zfs-io goes through Powerpath: I ran some io-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a Panic: WARNING: /pci at
2010 Apr 13
6
12-15 TB RAID storage recommendations
Hello listmates, I would like to build a 12-15 TB RAID 5 data server to run under CentOS. Any recommendations as far as hardware, configuration, etc? Thanks. Boris.
2009 Nov 22
9
Resilver/scrub times?
Hi all! I've decided to take the "big jump" and build a ZFS home filer (although it might also do "other work" like caching DNS, mail, usenet, bittorrent and so forth). YAY! I wonder if anyone can shed some light on how long a pool scrub would take on a fairly decent rig. These are the specs as-ordered: Asus P5Q-EM mainboard Core2 Quad 2.83 GHZ 8GB DDR2/80 OS: 2 x
2013 Jul 05
4
What FileSystems for large stores and very very large stores?
I was learning about the different FSes that exist. I was working on systems where ReiserFS was the star, but since there is no longer support from its creator, there are other options to consider. I want to ask about a couple of FS options. EXT4 is amazing for one node, but for more it's another story. I have heard about GFS2 and GlusterFS and read the docs and official materials from RH on
2012 Feb 26
1
"Structure needs cleaning" error
Hi, We have recently upgraded our gluster to 3.2.5 and have encountered the following error. Gluster seems somehow confused about one of the files it should be serving up, specifically /projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png If I go to that directory and simply do an ls *.png I get ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning (along with a listing
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
On 15/08/11 20:00, gluster-users-request at gluster.org wrote: > Message: 1 > Date: Sun, 14 Aug 2011 23:24:46 +0300 > From: "Deyan Chepishev - SuperHosting.BG"<dchepishev at superhosting.bg> > Subject: [Gluster-users] cluster.min-free-disk separate for each > brick > To: gluster-users at gluster.org > Message-ID:<4E482F0E.3030604 at superhosting.bg>