search for: 100gb

Displaying 20 results from an estimated 235 matches for "100gb".

2008 Dec 20
2
General question about ZFS and RAIDZ
Hello to the forum, with my general question about ZFS and RAIDZ I want to know the following: must all hard disks in the storage pool have the same capacity, or is it possible to use hard disks with different capacities? Many thanks for the answers. Best regards JueDan -- This message posted from opensolaris.org
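For context, a minimal sketch of the capacity rule (device names are hypothetical): a raidz vdev can be built from disks of different sizes, but each member only contributes as much capacity as the smallest disk in the vdev.

    # hypothetical disk names; three mixed-size disks in one raidz vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    # capacity is limited by the smallest member; zpool list shows the result
    zpool list tank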
2013 Oct 23
2
Information about Option min-free-disk
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object? Say I have a pool of 3 100GB drives dynamic striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used? Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2...
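A minimal sketch of the setup described above (device names are hypothetical): a dynamically striped pool of three 100GB disks with a 200GB reservation set on the pool's root dataset.

    # hypothetical disk names; dynamic stripe of three 100GB disks (~300GB of raw space)
    zpool create tank c0t0d0 c0t1d0 c0t2d0
    # set a 200GB reservation on the pool's root dataset, as in the question
    zfs set reservation=200G tank
    zfs get reservation,available tank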
2007 Jan 08
1
Extremely poor rsync performance on very large files (near 100GB and larger)
I've been playing with rsync and very large files approaching and surpassing 100GB, and have found that rsync has very poor performance on these very large files, and the performance appears to degrade the larger the file gets. The problem only appears to happen when the file is being "updated", that is, when it already exists on the receiving side. For...
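A hedged sketch of options often tried in this situation (paths and host are placeholders): --inplace updates the existing destination file directly instead of rebuilding a temporary copy, and -W/--whole-file skips the rolling-checksum delta algorithm entirely, which can help on fast local networks.

    # placeholders for source file and destination host
    rsync -av --inplace --partial /data/bigfile.img backuphost:/backups/
    # alternatively, send the whole file and skip the delta algorithm
    rsync -av -W /data/bigfile.img backuphost:/backups/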
2017 Nov 09
2
GlusterFS healing questions
Hi, We ran a test on GlusterFS 3.12.1 with erasure coded volumes 8+2 with 10 bricks (default config, tested with 100gb, 200gb, 400gb brick sizes, 10gbit nics) 1. Tests show that healing takes about double the time on healing 200gb vs 100, and a bit under double on 400gb vs 200gb brick sizes. Is this expected behaviour? In light of this, 6.4 tb brick sizes would take ~377 hours to heal. 100gb brick heal: 18 hou...
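For reference, a hedged sketch of how a volume with that layout is typically created (hostnames and brick paths are hypothetical): 10 bricks in an 8+2 dispersed (erasure coded) configuration, with heal progress checked afterwards.

    # hypothetical servers/paths; 10 bricks with redundancy 2 => 8+2 erasure coding
    gluster volume create testvol disperse 10 redundancy 2 \
        server{1..10}:/bricks/brick1/testvol
    gluster volume start testvol
    # after replacing or wiping a brick, watch healing progress
    gluster volume heal testvol info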
2003 Mar 21
2
100GB incremental backups
We've recently migrated our entire University including faculty and staff from Novell to Samba. There are typically 700+ clients connected to the samba server at any given time, and thus far there are about 400GB of clients' files on the server. Basically every Microsoft Windows user-generated file (Word, Excel, whatever) of the entire University gets stored on my Samba server. Obviously
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf, answers follow inline... On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: > Hi, > > We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10 > bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit > nics) > > 1. > Tests show that healing takes about double the time on healing 200gb vs > 100, and abit under the double on 400gb vs 200gb bricksizes. Is this > expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377 > hours...
2017 Nov 09
2
GlusterFS healing questions
...; wrote: > Hi Rolf, > > answers follow inline... > > On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: >> >> Hi, >> >> We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10 >> bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit >> nics) >> >> 1. >> Tests show that healing takes about double the time on healing 200gb vs >> 100, and abit under the double on 400gb vs 200gb bricksizes. Is this >> expected behaviour? In light of this would make 6,4 tb bricksiz...
2012 Sep 11
7
Issue with large directory content
...g::repository_mount}/${sonatype_work_dir}": ensure => directory, owner => $nexus_user, group => $nexus_group, mode => 0755, recurse => false, backup => false, } Today I added some 100GB of artifacts to a subdirectory of "${codebase_ng::repository_mount}/${sonatype_work_dir}". Now the result is that Puppet seems to run "forever". If I uncomment this code, Puppet finishes in 15 seconds. So I presume Puppet is doing some recursive scanning of this director...
2015 Jul 17
4
clone a disk
Hello, I have a machine A with 2 disks (1 and 2) running Debian Jessie: on disk 1 are the system, the boot and the swap; on disk 2 are different partitions like /home, /opt, etc. I have a machine B with 1 disk running kali-linux and *100G free*. Can I clone disk 1 of machine A onto the 100G free space on machine B with rsync? If it is possible, how to do that? Many thanks TG
2017 Nov 09
0
GlusterFS healing questions
...swers follow inline... >> >>> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote: >>> >>> Hi, >>> >>> We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10 >>> bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit >>> nics) >>> >>> 1. >>> Tests show that healing takes about double the time on healing 200gb vs >>> 100, and abit under the double on 400gb vs 200gb bricksizes. Is this >>> expected behaviour? In light of this w...
2008 Aug 13
2
Help setting up external drive via Firewire
I got a WD 1TB My Book with eSATA/USB/Firewire400 connectivity to back up data on a client CentOS 5.1 machine. USB 2.0 works fine out of the box but is rather slow; Nautilus predicts about 1+ hour to fully back up just one day's worth of data, or about 100GB. So I was hoping Firewire would be faster, which is why we got the version with all 3 interfaces to experiment with first. Following the suggestions given to another user here http://www.centos.org/modules/newbb/viewtopic.php?topic_id=15767&forum=37 I updated the system's kernel to the C...
2010 Nov 08
6
XEN large scale cluster
Dear All, We are preparing about 400 servers to deploy a large scale XEN cluster; I am looking for the best solution for that. These are some things I need to resolve: 1) The 400 servers use raid-1, and each server will have more than 100GB of free disk space; I want to combine all of this for the cluster's storage, so we need to install a distributed file system; 2) How to deploy high availability of the XEN servers? 3) How to assign the resources pool for each customer? 4) We provide virtualization servi...
2016 Jul 13
2
questions regarding 40G Samba Server
Hi, we expect growth in bigger file transfers (files of n*10 GB up to multiple 100GB) on a regular basis from a smaller number of users, combined with the "normal" office use with word files, mail profiles etc. from about 500 users. We plan a new Fileserver/Storage Head with 40Gb NICs on the client and storage side. What would, with regard to samba, be of more importance: multiple...
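A hedged sketch of smb.conf knobs that usually come up in this kind of sizing discussion (whether they help here depends on the Samba version and workload; none of this is from the original thread):

    [global]
        use sendfile = yes
        # async I/O thresholds; defaults vary by Samba release
        aio read size = 1
        aio write size = 1
        # SMB3 multichannel, relevant when clients have several fast NICs
        server multi channel support = yes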
2015 Jul 17
0
clone a disk
Hi TG, You can keep an up-to-date copy of the files/folders/pipes/etc. in the 100GB space using rsync, but not a true clone of the partition. To get a true clone of the boot partition, you'd need to boot from a rescue CD, mount the other machine's 100GB space and dd the boot partition device to a file on the 100GB space. You'd also probably want to get the Master Boo...
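A hedged sketch of the approach outlined in that reply (device names and mount points are placeholders): image the boot disk from a rescue environment, and keep the file tree in sync with rsync.

    # run from a rescue CD, with the target's free space mounted at /mnt/spare
    dd if=/dev/sda of=/mnt/spare/machineA-disk1.img bs=4M conv=noerror,sync
    # ongoing file-level copy of the data partitions
    rsync -aAXH --numeric-ids /home/ /mnt/spare/machineA-home/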
2013 Jul 18
2
Archiving mail
...ail from different users (I got about 50 users) on some read-only media but mount that media in the users' mail dirs. That way I will have less to back up after I back up that old mail and store it safely away. I can't convince my users to really clean up their mailboxes, so I back up more than 100GB of mail while the total backup is a bit more than 300GB. Writing this I realise I could give each user a folder oldmail and symlink that to a read-only oldmail folder. Would this work? I will have to find out how my backup software can ignore the oldmail folder. Thanks for any suggestions, Koenr...
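A hedged sketch of the symlink idea from that message (user name and paths are hypothetical): move the old mail to read-only storage, link it back into the maildir, and exclude it from the regular backup run.

    # hypothetical user and paths
    mv /home/alice/Maildir/.oldmail /archive/oldmail/alice
    ln -s /archive/oldmail/alice /home/alice/Maildir/.oldmail
    # exclude the archived folder from the nightly backup
    rsync -a --exclude='.oldmail/' /home/ backuphost:/backups/home/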
2018 Mar 21
2
dovecot-uidlist is not up-to-date
...ut not at this scale. Now the question is if there's any way to tell dovecot to rebuild dovecot-uidlist files using actual Maildir data. I don't want to remove dovecot-uidlist files as this triggers the whole mailbox being re-downloaded by the imap client. With some accounts having over 100Gb of mail this is too much of a hassle. I just need dovecot itself to fix its data. Thanks, Fil -- Dmitry Filonov Network Analyst 300 Longwood Ave. Enders-1262.2 Boston, MA 02115 617-919-4702
2009 Aug 06
5
rsync speedup - how ?
Hello, I'm using rsync to sync large virtual machine files from one ESX server to another. rsync is running inside the so-called "esx console", which is basically a specially crafted linux vm with some restrictions. The speed is "reasonable", but I guess it's not the optimum - at least I don't know where the bottleneck is. I'm not using ssh as transport but run rsync in
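A hedged sketch of what a non-ssh transfer of a big VM file can look like (host, module and paths are placeholders): rsync talking to an rsync daemon, with -W to skip the delta algorithm when the network is faster than the checksumming.

    # hypothetical datastore path and rsync daemon module
    rsync -av --progress -W /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk \
        rsync://backuphost/esxbackup/vm1/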
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test. Create a pool, striped across 4 physical disks from a storage array. Write a 100GB file to the filesystem (dd from /dev/zero out to the file). Run I/O against that file, doing 100% random writes with an 8K block size. zpool iostat shows the following... capacity operations bandwidth pool used avail read write read write ---------- ----- --...
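A hedged sketch of the test setup as described (pool and device names are hypothetical):

    # four-disk dynamic stripe
    zpool create testpool c2t0d0 c2t1d0 c2t2d0 c2t3d0
    # ~100GB file written from /dev/zero
    dd if=/dev/zero of=/testpool/bigfile bs=1M count=102400
    # per-vdev statistics every 5 seconds while the random-write load runs
    zpool iostat -v testpool 5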
2018 Mar 22
2
dovecot-uidlist is not up-to-date
...ote: >> Now the question is if there's any way to tell dovecot to rebuild >dovecot-uidlist files using actual Maildir data. I don't want to remove >dovecot-uidlist files as this triggers the whole mailbox being >re-downloaded by the imap client. With some accounts having over 100Gb >of mail this is too much of a hassle. I just need dovecot itself to fix >it's data. > > > doveadm index -A '*' > >if that doesn't work then perhaps > > doveadm force-resync -A '*' > >of course you can use -u <user> for a specific user...
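For completeness, the same doveadm commands scoped to a single account rather than all users with -A (the user name here is a placeholder):

    doveadm index -u someuser@example.org '*'
    doveadm force-resync -u someuser@example.org INBOX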