search for: 600gb

Displaying 20 results from an estimated 50 matches for "600gb".

2017 Nov 02
1
low end file server with h/w RAID - recommendations
...ives, and yes, 10k or 15k speeds. > > those are internally 2.5" disks in a 3.5" frame. you can't spin a 3.5" > disk much faster than 7200 rpm without it coming apart. > Sorry, that's incorrect. I have, sitting here in front of me, a Dell-branded Seagate Cheetah, 600GB (it's a few years old) 15k 3.5" drive. mark
2008 Jan 10
10
ZFS versus VxFS as file system inside Netbackup 6.0 DSSU
Hello experts, We have a large implementation of Symantec Netbackup 6.0 with disk staging. Today, the customer is using VxFS as file system inside Netbackup 6.0 DSSU (disk staging). The customer would like to know if it is best to use ZFS or VxFS as file system inside Netbackup disk staging in order to get the best performance possible. Could you provide some information regarding this
2004 Aug 06
3
I declare ices stable
...Fri, Jul 20, 2001 at 11:27:16AM +0500, Asif M. Baloch wrote: > Hi guys, > > Icecast resembles shoutcast, that's for sure and it's good that it's open > source. But, it has too many probs. I ran it on a dual p3 800 with a T3 comm We are running icecast on linux 2.2.19 serving more than 600gb each month and it has been rock stable so far. Right now our uptime is around 3 months. > link but it just starts acting weird when the users reach 12. Also, the > system just blows up. All the CPU is consumed. The platform is FreeBSD 4.1. > Any ideas how to solve it? I think there wh...
2015 Feb 03
2
Very slow disk I/O
On 2/2/2015 8:52 PM, Jatin Davey wrote: > So , You dont think that any configuration changes like increasing the > number of volumes or anything else will help in reducing the I/O wait > time ? not by much. it might reduce the overhead if you use LVM volumes for virtual disks instead of using files, but if you're doing too much disk IO, there's not much that helps other
2006 Dec 12
1
ZFS Storage Pool advice
...t be better to have a storage pool per LUN or combine the 3 LUNS as one big disk under ZFS and create 1 huge ZFS storage pool. Example: LUN1 200gb ZFS Storage Pool "pooldata1" LUN2 200gb ZFS Storage Pool "pooldata2" LUN3 200gb ZFS Storage Pool "pooldata3" or LUN 600gb ZFS Storage Pool "alldata" -- This message posted from opensolaris.org
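The two layouts the poster is weighing can be sketched as follows. The device names (c1t0d0 etc.) are placeholders, not taken from the thread:

```shell
# Option A: one ZFS pool per 200 GB LUN (hypothetical device names)
zpool create pooldata1 c1t0d0
zpool create pooldata2 c1t1d0
zpool create pooldata3 c1t2d0

# Option B: a single 600 GB pool striped across all three LUNs
zpool create alldata c1t0d0 c1t1d0 c1t2d0
```

With a single pool, ZFS can balance space and I/O across all three LUNs and any dataset can grow into the full 600 GB; separate pools keep failure domains isolated but strand free space.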
2004 Nov 23
3
files missing
I have a 1 TB ext3 filesystem mounted via iscsi on a redhat 9 system w/ kernel version - 2.4.20-30.9. I'm not sure when it happened, but today there appears to be about 7,000 files (600GB) missing. The output from df implies that the files are still there. It shows 861 GB utilized. But du shows only 300 GB of data. I'm sure that there are no processes holding onto deleted files because I have unmounted/mounted the filesystem several times, synced, etc. Here's an excerpt...
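A classic cause of this df/du mismatch is deleted files still held open by a process: du no longer counts them, while df still does. A minimal sketch of the effect (paths are illustrative; the poster's remount/sync steps would normally rule this out):

```shell
# Show that du stops counting a file once it is unlinked,
# even while a process still holds an open descriptor on it.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1024 count=10240 2>/dev/null  # ~10 MB file
tail -f "$tmp/big" >/dev/null &   # keep a file descriptor open
holder=$!
rm "$tmp/big"                     # unlink while still open
du -sk "$tmp"                     # du now reports (near) zero for the dir
kill "$holder"
rmdir "$tmp"
```

On a live system, `lsof +L1` lists such deleted-but-open files, which is a quick first check before suspecting filesystem corruption.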
2012 Sep 04
1
suggestion for filesystem or general performance optimization
...ted by iscsi, it is a sun storage with sas harddisk. All suggestions so far: migrate to ext4 and good luck :) I read a couple of filesystem comparisons and ext4 looks like the best option, but what else could I do or expect? Locking? Limits ... blocksizes, more RAM (4GB installed), we have about 600GB of user data. so not really much... Thanks for any suggestion or hint . Regards . Götz -- Götz Reinicke IT-Koordinator Filmakademie Baden-Württemberg GmbH
2015 Feb 03
2
Very slow disk I/O
...vey <jashokda at cisco.com> wrote: > > I will test and get the I/O speed results with the following and see what > works best with the given workload: > > Create 5 volumes each with 150 GB in size for the 5 VMs that I will be > running on the server > Create 1 volume with 600GB in size for the 5 VMs that I will be running on > the server > Try with LVM volumes instead of files > > Will test and compare the I/O responsiveness in all cases and go with the > one which is acceptable. Unless you put each VM on its own physical disk or raid1 mirror you aren'...
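The LVM-volumes-instead-of-files option mentioned in the thread can be sketched roughly as follows; the volume-group name vg0 is an assumption, and the sizes come from the poster's plan:

```shell
# One 150 GB logical volume per VM (5 VMs), instead of disk-image files
for n in 1 2 3 4 5; do
    lvcreate -L 150G -n vm${n}_disk vg0
done

# ...or a single 600 GB logical volume shared by all five VMs
lvcreate -L 600G -n vms_disk vg0
```

Handing the guest a logical volume gives it a block device directly and skips the host filesystem layer, which is the modest overhead reduction the reply refers to.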
2012 Jul 09
2
Storage Resource RAID & Disk type
Are there any best practice recommendations on RAID & disk type for the shared storage resource for a pool? I'm adding shelves to a HP MSA2000 G2, 24 drives total, minus hot spare(s). I can use 600GB SAS drives (15k rpm), 1TB or 2TB Midline SAS drives (7200rpm). I need about 3TB usable for roughly 30 VM guests, 100GB each, web servers so I/O needs are nominal. I'm also going to need to get 4TB usable space for a SQL Server data volume out of those 24 drives, unrelated to the VM boot...
2008 Aug 02
4
checksum errors after online'ing device
...r completed pretty quickly as the filesystem was read-mostly (ftp, http server) Nevertheless during the first hour of operation after onlining we recognized numerous checksum errors on the formerly offlined device. We decided to scrub the pool and after several hours we got about 3500 errors in 600GB of data. I always thought that ZFS would sync the mirror immediately after bringing the device online not requiring a scrub. Am I wrong? Both servers and clients run s10u5 with the latest patches but we saw the same behaviour with OpenSolaris clients Any hints? Thomas -----------------------...
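The sequence described above can be sketched with hypothetical pool and device names (the thread does not give them):

```shell
zpool online tank c1t1d0   # bring the offlined half of the mirror back
zpool status tank          # watch resilver progress and checksum counters
zpool scrub tank           # walk all data and verify every block checksum
zpool status -v tank       # after the scrub, list any files with errors
```

The expectation in the post is reasonable: onlining triggers a resilver of the blocks written while the device was out, but only a scrub re-verifies everything, which is why the errors surfaced then.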
2012 Apr 18
1
ACLs behaving differently on Samba 4 / Ubuntu 12.04 / Bind 9.8.1 between ZFS and EXT4 file systems
...it-cloned yesterday. I've imported a zpool created on another ubuntu system with the same version of zfs-linux (RC-8) http://zfsonlinux.org/ The zpool is working perfectly well; responsive, no errors reported, scrubbed. Samba can see the zpool as part of the greater file system and share the 600GB or so spread across the various zfs file systems on it via cifs. I've been through all the tests mentioned on the Samba 4 HOWTO and they return successful results. I'm sharing only via smb.conf - not using native ZFS CIFS commands. The problem: When I alter file permissions via CIFS from...
2012 Nov 22
19
ZFS Appliance as a general-purpose server question
A customer is looking to replace or augment their Sun Thumper with a ZFS appliance like 7320. However, the Thumper was used not only as a protocol storage server (home dirs, files, backups over NFS/CIFS/Rsync), but also as a general-purpose server with unpredictably-big-data programs running directly on it (such as corporate databases, Alfresco for intellectual document storage, etc.) in order to
2003 Nov 13
2
Disappointing Performance Using 9i RAC with OCFS on Linux
...3 (about 240Mb). No swapping for raw and ocfs. All above runs with following parameters: DB: Oracle 9iR2 (9.2.0.3) TPCC Kit: 1000 Warehouse Users: 100 Kernel : e.25 (RHAS2.1) as you can see, ext3 and ext2 are 1/4th to 1/5th of raw/ocfs this is on a big box, 1000 warehouses, that's like what, 500-600gb of data. the really really short answer is this : Every benchmark we ever do has been on raw, because 1- filesystems don't scale 2- filesystems don't give the same raw throughput even on single node we would be using filesystems if those were faster, trust me. look on oracle metalink fo...
2015 Feb 03
0
Very slow disk I/O
...caching via having > more memory). > > > > Thanks John I will test and get the I/O speed results with the following and see what works best with the given workload: Create 5 volumes each with 150 GB in size for the 5 VMs that I will be running on the server Create 1 volume with 600GB in size for the 5 VMs that I will be running on the server Try with LVM volumes instead of files Will test and compare the I/O responsiveness in all cases and go with the one which is acceptable. Appreciate your responses in this regard. Thanks again.. Regards, Jatin
2015 Feb 03
0
Very slow disk I/O
...Davey <jashokda at cisco.com> wrote: > > I will test and get the I/O speed results with the following and see > what works best with the given workload: > > Create 5 volumes each with 150 GB in size for the 5 VMs that I will be > running on the server Create 1 volume with 600GB in size for the 5 VMs > that I will be running on the server Try with LVM volumes instead of > files > > Will test and compare the I/O responsiveness in all cases and go with > the one which is acceptable. Unless you put each VM on its own physical disk or raid1 mirror you aren...
2017 Nov 02
2
low end file server with h/w RAID - recommendations
John R Pierce wrote: > On 11/2/2017 9:21 AM, hw wrote: >> Richard Zimmerman wrote: >>> hw wrote: >>>> Next question: you want RAID, how much storage do you need? Will 4 >>>> or 8 3.5" drives be enough (DO NOT GET crappy 2.5" drives - they're >>>> *much* more expensive than the 3.5" drives, and smaller disk space.
2018 Sep 17
0
4.8.5 + TimeMachine = Disk identity changed on every connect, cannot backup
I configured Samba 4.8.5 on Debian with vfs_fruit as a TimeMachine destination and while it detects it and does the initial backup to some extent (300GB out of 600GB), TimeMachine then fails with a message about the disk identity having changed. Options are “don’t backup” and “backup anyway”. When using “backup anyway”, the backup creates a secondary sparse image and starts from scratch, and won’t even touch the existing sparse image. While I understand that t...
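A Time Machine share via vfs_fruit typically looks roughly like the fragment below (share name and path are illustrative; the `fruit:time machine` option requires Samba 4.8 or later, which matches the poster's version):

```
[timemachine]
    path = /srv/timemachine
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    read only = no
```

The "disk identity changed" symptom is often related to how macOS fingerprints the share between connects, so a stable server name and share configuration matter as much as the vfs options themselves.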
2010 Sep 06
1
dovecot Digest, Vol 89, Issue 25
...surprised their clients don't break). For info sake: we have Dovecot running on a pair of dedicated dual E5530 (quad-core X86 1300MHZ FSB cpu) servers (RH cluster+GFS2) with 24Gb RAM apiece and ~2TB of 8Gb/s san-attached storage available. There are only ~250 accounts but there's around 600Gb in the maildir folder areas and another 300Gb in a separate dedicated inbox area - and I believe there's another 1TB of stuff in client local folders we're trying to pull back to the servers in order to simplify backups. Even with this kind of horsepower available, over-large folders ca...
2003 Oct 21
1
Is anyone replicating .5TB or higher?
Greetings! I've heard about using rsync to replicate data across the WAN, but need to know if anyone is using it on a large scale. I have a client who is contemplating consolidating Windows file/print servers into a Linux partition on an iSeries. The show stopper is whether rsync (or any replication product) can and will replicate a) at the file level, and b) a database approaching .6TB
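File-level WAN replication with rsync is usually sketched along these lines; the host and path names are hypothetical:

```shell
# Incremental, file-level replication over SSH: after the initial copy,
# only the changed portions of changed files cross the WAN.
rsync -az --delete --partial \
    /export/data/ backup-host:/export/data-replica/
```

This answers point a) directly. For point b), a ~0.6TB live database is the harder case: file-level tools can copy a database file while it is being written, so the database is normally quiesced or snapshotted before each replication pass.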