search for: 10tb

Displaying 20 results from an estimated 75 matches for "10tb".

2008 May 02
4
ext3 filesystems larger than 8TB
Greetings. I am trying to create a 10TB (approx) ext3 filesystem. I am able to successfully create the partition using parted, but when I try to use mkfs.ext3, I get an error stating there is an 8TB limit for ext3 filesystems. I looked at the specs for 5 on the "upstream" vendor's website, and they indicate that the...
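For context, the 8TB ceiling was typical of older e2fsprogs/kernel combinations; with 4KiB blocks and a newer e2fsprogs, ext3 can address up to 16TiB. A minimal sketch, assuming the 10TB device is /dev/sdb (hypothetical name):

    # GPT label is required for partitions above 2TiB
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 100%
    # force 4KiB blocks; at that block size ext3 tops out around 16TiB
    mkfs.ext3 -b 4096 /dev/sdb1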
2011 Oct 28
2
How can we horizontally scale Dovecot across multiple servers?
Hi, How can we horizontally scale Dovecot across multiple servers? Do we need to install independent instances of Dovecot on each server? We are planning to use a NAS/SAN device using ZFS or EFS for email storage. Each logical unit will be 10TB, and as the number of users increases we plan to add further 10TB units. In this case, how can we manage the email storage on multiple volumes from Dovecot? The configuration of our existing system is: Dovecot 1.0.15 / Maildirs Postfix 2.5.5 Debian 5.0.9 (Lenny) MySQL 5.0....
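One common pattern for spreading Maildirs over several storage units is to let the userdb return a per-user mail location, so each account is pinned to one volume. A rough sketch, with made-up mount points; the per-user "mail" field is how the userdb can override the default mail_location, but the exact setup depends on the Dovecot version and userdb backend:

    # mount each 10TB unit separately (hypothetical NAS exports)
    mount -t nfs nas1:/vol_mail01 /srv/mail/vol01
    mount -t nfs nas2:/vol_mail02 /srv/mail/vol02
    # the MySQL userdb can then return, per user, something like
    #   mail = maildir:/srv/mail/vol01/%d/%n/Maildir
    # so new accounts are provisioned onto whichever volume has free space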
2004 Aug 10
3
rsync to a destination > 8TB problem...
I am trying to use a large (10TB) reiserfs filesystem as an rsync target. The filesystem is on top of lvm2 (pretty sure this doesn't matter, but just in case.) I get the following error when trying to sync a modest set of files to that 10TB target (just syncing /etc for now): rsync: writefd_unbuffered failed to write 4...
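Since the size of the target filesystem should not normally matter to rsync itself, a reasonable first step is to rule out version and invocation issues; a minimal reproduction, with a hypothetical mount point:

    rsync --version                           # confirm the rsync version on both ends
    rsync -av --stats /etc/ /mnt/bigfs/etc/   # the small test sync described above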
2005 May 25
2
Volume Size
Okay, I must be a total Nerf-herder, but I am trying to find info on how big a volume Samba can actually share - e.g. if I have a 10TB volume, can I share it via SMB (assuming everything else is ok)? Specifically for Samba 3.0.10... No matter what and where I search, I can't find a damn thing! Can anyone here shed some light? Thanks, Daniel.
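As far as I know, Samba itself does not cap the size of a share in this range; the practical limits come from the underlying filesystem and the SMB client. A minimal share definition, with a hypothetical path, plus the stock config sanity check:

    # hypothetical share stanza for smb.conf:
    #   [bigshare]
    #       path = /srv/bigvolume
    #       read only = no
    testparm -s    # have Samba parse and dump the effective configuration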
2013 Oct 23
2
Information about Option min-free-disk
2013 Jul 09
1
tips/best practices for gluster rdma?
...timeout=2,acl,_netdev 0 0 where holyscratch is an RRDNS entry for all the IPoIB interfaces for fetching the volfile (something that, it seems, just like peering, MUST be tcp?) but, again, when running just dumb, dumb, dumb tests (160 threads of dd over 8 nodes w/ each thread writing 64GB, so a 10TB throughput test), I'm seeing all the traffic on the IPoIB interface for both RDMA and TCP transports... when I really shouldn't be seeing ANY tcp traffic, aside from volfile fetches/management on the IPoIB interface when using RDMA as a transport... right? As a result, from early tests (the b...
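For reference, the transport is chosen at volume-creation time, and management/volfile traffic stays on TCP regardless; a rough sketch with hypothetical host and brick names (the mount transport option is as I recall it documented in mount.glusterfs):

    # data transport over RDMA only; peering/volfile fetches remain TCP
    gluster volume create scratchvol transport rdma \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
    gluster volume start scratchvol
    mount -t glusterfs -o transport=rdma holyscratch:/scratchvol /mnt/scratch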
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
...do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario > seems faster. Just out of curiosity... It should be, since the bricks are smaller. But given you're using a replica 3 I don't understand why you're also using RAID1: for each 10T of user-facing capacity you're keeping 60TB of d...
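The two layouts being compared, expressed as mdadm invocations (device names are hypothetical; the two variants are alternatives, not meant to coexist):

    # one 10-disk RAID10 -> a single large brick
    mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]
    # versus five 2-disk RAID1 sets -> five smaller bricks
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
    # ...and so on up to /dev/md5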
2011 May 12
2
Xapian support for huge data sets?
Hello, I'm currently using another open source search engine/indexer and am having performance issues, which brought me to learn about Xapian. We have approximately 350 million docs/10TB data that doubles every 3 years. The data mostly consists of Oracle DB records, webpage-ish files (HTML/XML, etc.) and office-type docs (doc, pdf, etc.). There are anywhere from 2 to 4 dozen users on the system at any one time. The indexing server has upwards of 28GB memory, but even then, it get...
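At that scale one common approach is to index into several smaller Xapian databases and merge or compact them offline; xapian-compact is the stock tool for that. Paths below are hypothetical:

    # build shards independently, then merge them into one compacted database
    xapian-compact /srv/index/shard1 /srv/index/shard2 /srv/index/merged
    # (queries can also be run across several databases without merging)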
2011 Dec 28
3
Btrfs: blocked for more than 120 seconds, made worse by 3.2 rc7
...00GB of writes. With rc7 it happens after far fewer writes, probably 10GB or so, but only on machine 1 for the time being. Machine 2 has not crashed yet after 200GB of writes and I am still testing that. Machine 1: btrfs on a 6TB sparse file, mounted as loop, on an xfs filesystem that lies on a 10TB md raid5. Mount options compress=zlib,compress-force. Machine 2: btrfs over md raid 5 (4x2TB) = 5.5TB filesystem. Mount options compress=zlib,compress-force. Pastebins: machine1: 3.2rc7 http://pastebin.com/u583G7jK 3.2rc6 http://pastebin.com/L12TDaXa machine2: 3.2rc6 http://pastebin.com/khD0wGXx...
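Machine 1's stack, roughly reconstructed as commands (file and mount paths are guesses):

    # sparse 6TB backing file on the XFS filesystem that sits on the 10TB md RAID5
    truncate -s 6T /xfs/btrfs-backing.img
    losetup /dev/loop0 /xfs/btrfs-backing.img
    mkfs.btrfs /dev/loop0
    mount -o compress=zlib,compress-force /dev/loop0 /mnt/btrfs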
2011 Oct 05
5
too many files open
Good morning Btrfs list, I have been loading a btrfs file system via a script rsyncing data files from an nfs-mounted directory. The script runs well, but after several days (moving about 10TB) rsync reports that it is sending the file list but stops moving data because btrfs balks, saying too many files open. A simple umount/mount fixes the problem. What am I flushing when I remount that would affect this, and is there a way to do this without a remount? Once again thanks for any...
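"Too many files open" usually points at a file-descriptor limit somewhere in the stack rather than at the data already copied, so before reaching for a remount it may be worth checking the limits; for example:

    cat /proc/sys/fs/file-nr    # system-wide: allocated, unused, maximum handles
    ulimit -n                   # per-process soft limit in the current shell
    ulimit -n 65536             # raise it for the rsync run (value is an example)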
2010 Sep 25
4
dedup testing?
...ing with dedup with OI? On opensolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before then, has anyone tested this? Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum be presented intelligibly. It is...
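A minimal way to reproduce the delete-a-deduped-dataset scenario on a throwaway pool (disk names are hypothetical):

    zpool create testpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs create -o dedup=on testpool/deduped
    # ...fill it with duplicate-heavy data, then:
    zfs destroy testpool/deduped     # the step reported to hang for hours or days
    zdb -DD testpool                 # dump dedup table (DDT) statistics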
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
Hello there, as Strahil suggested a separate thread might be better. Current state: - servers with 10TB hdds - 2 hdds form a sw raid1 - each raid1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3: gls3:/gluster/md3/w...
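For reference, the described 5 x 3 layout corresponds to a volume-create call along these lines; only the first two replica triplets are shown, and brick paths beyond md3 are assumptions:

    gluster volume create workdata replica 3 \
        gls1:/gluster/md3/workdata gls2:/gluster/md3/workdata gls3:/gluster/md3/workdata \
        gls1:/gluster/md4/workdata gls2:/gluster/md4/workdata gls3:/gluster/md4/workdata
    # ...plus three more triplets for the full 5 x 3 = 15 bricks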
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a Netapp filer. Both the T2000 and the Netapp have two ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format`: 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
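On Solaris the round-robin decision is made by scsi_vhci's load-balancing policy; roughly, it can be inspected like this (output details vary by release, and the conf line is quoted from memory):

    mpathadm list lu            # list multipathed logical units and path counts
    mpathadm show lu /dev/rdsk/c4t60A98000433469764E4A413571444B63d0s2
    # default policy in /kernel/drv/scsi_vhci.conf:
    #   load-balance="round-robin";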
2023 Mar 23
1
hardware issues and new server advice
...> or checks happen the server load sometimes increases to cause issues. Interesting, we have a similar workload: hundreds of millions of images, small files, and especially on weekends with high traffic the load+iowait is really heavy. Or if a hdd fails, or during a raid check. Our hardware: 10x 10TB hdds -> 5x raid1, each raid1 is a brick, replica 3 setup. About 40TB of data. Well, the bricks are bigger than recommended... Sooner or later we will have to migrate that stuff and use nvme for that, either 3.5TB or bigger ones. Those should be faster... *fingers crossed* regards, Hubert
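If the periodic raid check is what drives the load, md's check/resync bandwidth can be throttled system-wide (the numbers below are only examples, in KiB/s):

    echo 50000 > /proc/sys/dev/raid/speed_limit_max
    echo 1000  > /proc/sys/dev/raid/speed_limit_min
    cat /proc/mdstat            # watch the check/resync progress and speed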
2011 Aug 10
1
fsck hangs in Pass 0a
Hello list, I have a ~10TB ocfs2 filesystem in an 8-node cluster. This sits on a logical volume (I know LVM is not cluster-aware, but I make sure no one touches the LV while the cluster is running). The LV consists of 5x2TB multipath devices. I recently had errors like this on some nodes: OCFS2: ERROR (device dm-7): ocfs2_c...
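For reference, an offline check of an ocfs2 volume is normally run from a single node with the filesystem unmounted everywhere; a sketch with a hypothetical LV path:

    mounted.ocfs2 -f /dev/vg_ocfs/lv_data   # confirm no node still has it mounted
    fsck.ocfs2 -fy /dev/vg_ocfs/lv_data     # forced check, answering yes to repairs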
2014 May 07
1
directory permissions not set until all files copied
...way, the corruption will not affect the other volumes automatically. At the same time, I want to treat these as one unified filesystem, so I effectively concatenate them via aufs. Incidentally, this also allows me to more easily address strange behavior because I can juggle smaller chunks. If I had one 10TB fs and something was going wonky, I would have to have an additional 10TB of capacity to be able to offload it, but as 5 2TB filesystems, I can juggle 2TB at a time. It just so happens that I recently replaced my disk controller and I noticed an unexpected side effect which was not immediately im...
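The concatenation described maps to an aufs mount roughly like this (mount points are hypothetical; create=mfs spreads new files onto the branch with the most free space):

    mount -t aufs \
          -o br=/mnt/d1=rw:/mnt/d2=rw:/mnt/d3=rw:/mnt/d4=rw:/mnt/d5=rw,create=mfs \
          none /mnt/pool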
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo. The problem first started a couple of weeks ago with some corruption on two filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie. I thought that my problems had something to do with de-duplication in 134, so I went about the process of creating new filesystems and copying over the "good"...
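When an interrupted destroy leaves a pool that panics the box on import, the usual (hedged) first steps are a dry run and then a rewind/recovery import; the pool name below is hypothetical:

    zpool import            # list importable pools without actually importing
    zpool import -nF tank   # dry run: report whether a rewind import would succeed
    zpool import -F tank    # recovery import, discarding the last few transactions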
2017 Oct 11
1
gluster volume + lvm : recommendation or neccessity ?
After some extra reading about LVM snapshots & Gluster, I think I can conclude it may be a bad idea to use it on big storage bricks. I understood that the maximum LVM metadata size, used to store the snapshot data, is about 16GB. So if I have a brick with a volume of around 10TB (for example), daily snapshots, and ~100GB of files changing, the LVM snapshot is useless. LVM snapshots don't seem to be a good idea with very big LVM partitions. Did I miss something? Hard to find clear documentation on the subject. ++ Quentin On 11/10/2017 at 09:07, Ric Wheeler...
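Worth noting: GlusterFS's own volume snapshots require bricks on thinly provisioned LVs, where the classic-snapshot metadata ceiling does not apply in the same way; a sketch of a thin-pool brick (names and sizes are hypothetical):

    lvcreate -L 10T --thinpool brickpool vg_bricks        # thin pool for the brick
    lvcreate -V 10T --thin -n brick1 vg_bricks/brickpool  # thin LV used as the brick
    lvcreate -s -n brick1_snap vg_bricks/brick1           # thin snapshot: space only for changed blocks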
2016 May 04
3
Unicast or Multicast?
...Server 3 Cores|3GB-RAM|75GB-SDHD|10TB Traffic Tor-Exit|Icecast-Stream|Torrent-Stream...
2023 Mar 24
2
hardware issues and new server advice
...> or checks happen the server load sometimes increases to cause issues. Interesting, we have a similar workload: hundreds of millions of images, small files, and especially on weekends with high traffic the load+iowait is really heavy. Or if a hdd fails, or during a raid check. Our hardware: 10x 10TB hdds -> 5x raid1, each raid1 is a brick, replica 3 setup. About 40TB of data. Well, the bricks are bigger than recommended... Sooner or later we will have to migrate that stuff and use nvme for that, either 3.5TB or bigger ones. Those should be faster... *fingers crossed* regards, Hubert...
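When the move to NVMe happens, it can be done one brick at a time with replace-brick, letting self-heal repopulate the new disk from the other two replicas (volume name and paths below are hypothetical):

    gluster volume replace-brick workdata \
        gls1:/gluster/md3/workdata gls1:/gluster/nvme0/workdata commit force
    gluster volume heal workdata info summary   # watch the new brick catch up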