Displaying 20 results from an estimated 20 matches for "72gb".
2006 Aug 19
1
Tape backup question.
Hello,
I want to back up my PE2800 to tape but I'm not sure how to do this.
My PE2800 has a RAID5 setup with two 72GB SCSI disks and a Seagate DAT-2 72GB
internal tape streamer, and I'm running CentOS 4.3 with the latest yum
updates.
I'm not sure whether this tape streamer is recognised properly by the OS,
because it doesn't show up alongside my DVD drive and floppy.
I also received with the server a copy of Galaxy E...
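A first step for a question like this is to confirm the kernel sees the drive, then talk to it directly with mt and tar. A minimal sketch, assuming the streamer comes up as /dev/st0 (the device name is an assumption; check dmesg for the real one):

    # Does the SCSI layer see the tape drive at all?
    cat /proc/scsi/scsi

    # Query drive status (needs the st module loaded; mt comes from mt-st)
    mt -f /dev/st0 status

    # Write a small archive to tape, rewind, and list it back to verify
    tar -cvf /dev/st0 /etc
    mt -f /dev/st0 rewind
    tar -tvf /dev/st0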
2006 Mar 09
2
howto mount a scsi tape drive?
Hi,
after installing CentOS 4.2 I've noticed that my internal Seagate SCSI DAT
72GB tape drive hasn't been recognised: it doesn't appear in /media or show
up in GNOME when I put in a tape.
Do I need to edit fstab first or load additional modules in the kernel
during startup?
regards, Geert
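A note on why nothing shows up in /media: tape drives are character devices, not mountable filesystems, so neither fstab nor GNOME will ever list them. A minimal sketch of how to find and talk to the drive, again assuming it lands on /dev/st0:

    # Load the SCSI tape module if it isn't loaded already
    modprobe st

    # Find the device node the kernel assigned
    dmesg | grep -i tape
    ls -l /dev/st* /dev/nst*

    # No mounting involved; access the drive directly
    mt -f /dev/st0 status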
2002 Apr 30
2
RAID-5/LVM/ext3
Hello:
We are trying to configure one machine (Compaq ProLiant ML760) with 8 disks
(72GB each) using RAID. We are thinking of using ext3 and LVM on Red Hat
7.2 to manage a single 500GB filesystem. This filesystem has to store
nearly 5,000,000 files.
Is this possible? Could I resize the filesystem/volume to 1TB? What are the
ext3 and LVM limits? Has someone tested similar e...
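On the resize part of the question: with ext3 on LVM the usual sequence is to grow the logical volume first, then the filesystem. A rough sketch, with made-up names vg0 and lv_data, assuming an offline ext3 resize (the safe route with tools of that era):

    # Add a new disk to the volume group
    pvcreate /dev/sdi1
    vgextend vg0 /dev/sdi1

    # Grow the logical volume by 500GB
    lvextend -L +500G /dev/vg0/lv_data

    # Grow ext3 offline: unmount, check, resize, remount
    umount /data
    e2fsck -f /dev/vg0/lv_data
    resize2fs /dev/vg0/lv_data
    mount /data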
2006 Sep 21
1
Software versus hardware RAID performance.
...CentOS4 to Fedora Core 5 (keeping everything
updated for GNUradio to run on CentOS 4 became more of a job than it should
have) for our pulsar data processing machine (it has a GNUradio Universal
Software Radio Peripheral (USRP) attached via USB 2.0), which has a Dell PERC
4e/Di RAID controller, two 72GB 10k U320 SCSI drives, and four 146GB 10k U320
SCSI drives. Well, during installation of a sound card (so we can actually
listen to the dedispersed, phase-aligned, and detected pulsar chirp audio),
something happened to the RAID firmware, so I put the PERC in SCSI-only mode
and installed anyway...
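When comparing the two RAID modes it helps to benchmark both the same way, with a test size well past RAM so the page cache doesn't flatter either setup. A crude sketch, mount point assumed:

    # Sequential write, forced to disk before dd reports a rate
    dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=8192 conv=fdatasync

    # Sequential read (unmount/remount first so the file isn't cached)
    dd if=/mnt/raid/testfile of=/dev/null bs=1M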
2003 May 22
3
Tuning system response degradation under heavy ext3/2 activity.
Hello.
I'm looking for assistance or pointers for the following problem.
OS: RHAS2.1 enterprise16
HW: HP ProLiant, 2 CPUs, 6GB RAM, internal RAID1 + RAID5 (4 x 10K 72GB)
When we run any kind of process (especially tar for some reason) that
creates heavy disk activity the machine becomes Very Slow, (e.g. takes
30-45 seconds to get a reply from ls at the console, or a minute to log
in.)
I experimented with data=journal, and that made the box more responsive
to ot...
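For anyone repeating the data=journal experiment: the journaling mode is set per mount, typically in /etc/fstab. A sketch (device and mount point are placeholders):

    # /etc/fstab - route all data through the journal, not just metadata
    /dev/cciss/c0d1p1  /data  ext3  defaults,data=journal  1 2

Counter-intuitively, data=journal can make a box feel more responsive under heavy writes, as the poster found, because data hits the journal sequentially; the cost is that everything is written twice.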
2007 Nov 13
4
Need advice on storage
Hi all, I have a CentOS 4.5 server running on a workstation mainboard (PCI slots only). We now have one 200GB IDE disk dedicated to e-mail server storage. We use CommuniGate Pro and the server has 45 Outlook clients with the MAPI connector (all mailboxes on the server). When a user opens Outlook, a refresh of the local cache is performed for his data. There is a big "Public"
2006 Apr 02
2
raid setup
Hi,
I have 2 identical xSeries 346 servers, each with 2 identical IBM 72GB SCSI drives. What I
did was install the CentOS 4.2 server CD on the first IBM and set the HDDs to
RAID1, with RAID0 for swap. Then I took the 2nd HDD from the 1st
server, swapped it with the 1st HDD in the 2nd server, and rebuilt the RAIDs. The
1st server rebuilt the array fine. My problem is the Seco...
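If the install used Linux software RAID, the rebuild after a disk swap is driven by mdadm. A minimal sketch, assuming the mirror is md0 and the freshly inserted disk is sdb (both names assumed):

    # Copy the partition layout from the surviving disk to the new one
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # Re-add the partition to the degraded mirror and watch the resync
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat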
2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. I got an
email from the cron thing...
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md10
WARNING: mismatch_cnt is not 0 on /dev/md11
ok, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a
Dell 2850 or similar dual single-core 3GHz server.
these two mds in turn make up a striped LVM volume group
dmesg shows....
md: syncing RAID array md10
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth...
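For anyone else getting this warning: the counter comes from the weekly 'check' pass, and both the check and the repair are driven through sysfs. A sketch for md10 (same for md11):

    # Current mismatch count from the last check
    cat /sys/block/md10/md/mismatch_cnt

    # Rewrite inconsistent blocks (on RAID1 the kernel picks one copy)
    echo repair > /sys/block/md10/md/sync_action

    # Re-run the check; the count should come back 0
    echo check > /sys/block/md10/md/sync_action
    cat /sys/block/md10/md/mismatch_cnt

Note that mismatches on a RAID1 holding swap can be harmless, since the kernel may abandon pages mid-write.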
2012 Oct 23
0
76Gb to 146Gb [Resolved]
Hello all,
I would like to thank you all for your kind replies and
feedback regarding migrating from a smaller hdd to a
bigger one (namely from 72GB to 146GB).
I finally found a painless way of doing this.
Since I believe that this is still an off-topic post, if
anyone is interested in the solution I've adopted for this,
let me know by replying to me privately.
If, however, you don't mind me posting here, let me know.
Many many than...
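Since the solution stayed off-list, here is not the poster's method but one common generic approach: raw-copy the small disk onto the big one, then grow the last partition and filesystem (device names assumed, and the copy destroys whatever is on the target):

    # Clone the old 72GB disk onto the new 146GB disk
    dd if=/dev/sda of=/dev/sdb bs=1M

    # After enlarging the last partition with fdisk/parted:
    e2fsck -f /dev/sdb2
    resize2fs /dev/sdb2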
2018 Mar 08
0
fuse vs libgfapi LIO performance comparison: how to run tests?
Dear support, I need to export a gluster volume with LIO for a
virtualization system. At the moment I have a very basic test
configuration: 2x HP 380 G7 (2 x Intel X5670 (six cores @ 2.93GHz), 72GB
RAM, HD RAID10 6x SAS 10krpm, LAN Intel X540-T2 10GbE) directly
interconnected. Gluster configuration is replica 2. OS is Fedora 27
For my tests I used dd and I found strange results. Apparently the
volume mounted locally and exported with LIO is faster than the
same volume exported directl...
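One classic explanation for "locally mounted is faster" dd results is the page cache: writes to a local mount can complete into RAM while the LIO path forces them onto the wire. A sketch that takes the cache out of the comparison (paths assumed):

    # Uncached write test
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096 oflag=direct

    # Uncached read test
    dd if=/mnt/gluster/testfile of=/dev/null bs=1M iflag=direct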
2006 Oct 18
0
Is there a way to expand a formatted ocfs2 partition without losing the data on it?
Hello
I have the feeling this may not be the right forum for the following
question, but I'd like to try it here anyway:
This is the case:
I had a RAID5 array of 3x72GB HDDs on shared external disk drives (HP MSA500),
totalling about 145.6GB of space.
I needed to increase the available space, so I added a 4th 72GB HDD
and, using HP ACU, expanded my existing RAID5 array; I now have about 218.5GB
available.
But this is of course not reflected in my p...
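Growing the array is only the first layer; the partition and the ocfs2 filesystem each need growing too. A rough sketch, assuming the volume is /dev/sda1 and that your ocfs2-tools version supports resizing (check this before relying on it):

    # Grow the partition: delete and recreate it with the same start sector
    fdisk /dev/sda

    # Check the filesystem, then grow ocfs2 to fill the partition
    fsck.ocfs2 /dev/sda1
    tunefs.ocfs2 -S /dev/sda1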
2006 Apr 24
1
SCSI install to IDE install - help
...we must transfer the installation to some
other disks. SCSI disks are very difficult to find here and must be ordered
(and take about 45 days to arrive at the dealer).
Can I clone the SCSI disk to an IDE disk with a clone utility such as Acronis
TrueImage and then rebuild the boot loader?
The SCSI disk is 72GB, with the bootloader, one swap partition, and a second
partition mounted as /.
The customer will keep exactly the same server; we will only change the
storage media.
Are there other things to consider?
Thanks,
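A raw clone plus a bootloader reinstall is a workable plan even without TrueImage. A sketch with dd, assuming the SCSI disk is /dev/sda and the IDE disk /dev/hda (names assumed):

    # Raw copy SCSI -> IDE; the target must be at least as large
    dd if=/dev/sda of=/dev/hda bs=1M

    # Reinstall GRUB on the IDE disk, e.g. from rescue mode
    grub-install --root-directory=/mnt/sysimage /dev/hda

Remember to update /etc/fstab and grub.conf from sda to hda device names afterwards.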
2007 Sep 06
0
Zfs with storedge 6130
...w on I only
want JBOD!
Works even better when I export each of the 14 disks in my array as a single
raid0 and then create the zpool :)
#zpool create -f vol0 c2t1d12 c2t1d11 c2t1d10 c2t1d9 c2t1d8 c2t1d7 c2t1d6
c2t1d5 c2t1d4 c2t1d3 c2t1d2 c2t1d1 c2t1d0 spare c2t1d13
>
>> The storedge shelf has 14 FC 72gb disks attached to a solaris snv_68.
>>
>> I was thinking that since I can't export all the disks un-raided out to the
>> solaris system that I would instead:
>>
>> (on the 6130)
>> Create 3 raid5 volumes of 200gb each using the "Sun_ZFS" pool (128k seg...
2006 Dec 07
3
Good value for /proc/sys/vm/min_free_kbytes
Hi,
what would be a good value for
/proc/sys/vm/min_free_kbytes
on a dual-CPU EM64T system with 8 GB of memory? Kernel is
2.6.9-42.0.3.ELsmp. I know that the default might be a bit low.
Unfortunately, the documentation is a bit weak in this area.
We are experiencing responsiveness problems (and higher than expected
load) when the system is under combined memory+network+disk-IO stress.
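For reference, the value is readable and writable at runtime and can be persisted with sysctl; the number below is purely illustrative, not a recommendation:

    # Current reserve, in kB
    cat /proc/sys/vm/min_free_kbytes

    # Try a larger reserve at runtime
    echo 131072 > /proc/sys/vm/min_free_kbytes

    # Make it survive reboots
    echo "vm.min_free_kbytes = 131072" >> /etc/sysctl.conf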
2011 Nov 29
8
megaraid/PERC
I've got two drives from a now-dead server; they were RAIDed, a mirror,
I'd assume. I need to see if there's anything on them I need to transfer
to the replacement, so I just shoved them into another Dell server, with a
PERC 5 controller - I think that's what the dead one had. I fired up
MegaRAID storage manager... but can't see any way to tell it to recreate
that RAID. Anyone
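On LSI-based PERCs, an array built on another controller usually shows up as a "foreign configuration" that should be imported, not recreated (recreating can initialize the disks). A sketch with the MegaCli command-line tool, assuming adapter 0:

    # Did the controller find a foreign config on the inserted drives?
    MegaCli -CfgForeign -Scan -a0

    # Preview what would be imported, then import it
    MegaCli -CfgForeign -Preview -a0
    MegaCli -CfgForeign -Import -a0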
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77 minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of
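For context, zfs diff takes two snapshots of the same dataset, or one snapshot compared against the live filesystem. Invocation for a test like the one described, dataset names assumed:

    # Changes between two snapshots
    zfs diff tank/data@monday tank/data@tuesday

    # Or against the current state of the dataset
    zfs diff tank/data@monday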
2010 Aug 12
6
NTFS is more resilient than ext3? Or is it a hardware issue?
Hi guys,
I don't mean to incite debate or something, just want to share
experience and a little curiosity.
A long time ago, we had an old MS W2K (NTFS) file server where, since
no admin was available to manage it, the server would get powered off
when the office closed and auto powered on again in the morning. That
went on for years and it was fine ^^
Recently, I set up a CentOS 5.5 file
2010 Apr 12
7
Slightly OT: which hardware for CentOS file server (Samba, 2 TB storage, 50 users)?
Hi,
The language lab from the local university has contacted me. They'd like
to have a low-cost file server for storing all their language video
files. They have a mix of Windows, Mac OS X and even Linux clients,
roughly 50 machines. The files are quite big, and they calculated a
total amount of 2 TB of storage.
I'm not very proficient with hardware, meaning either I'm dealing
2012 Oct 25
46
[RFC] New attempt to a better "btrfs fi df"
Hi all,
this is a new attempt to improve the output of the command "btrfs fi df".
The previous attempt received a good reception, but there was no
general consensus about the wording.
Moreover, I still didn't understand how btrfs was using the disks.
My first attempt was to develop a new command which shows how the
disks
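For readers who haven't seen it, the command under discussion reports space per allocation class (Data, Metadata, System) rather than per device, which is part of what makes the output hard to read. Basic invocation, mount point assumed:

    # Per-class allocation and usage on a mounted btrfs
    btrfs filesystem df /mnt

    # "fi df" is just the accepted shorthand
    btrfs fi df /mnt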
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool 488K 20.0T 0 0 0 0
xpool 488K 20.0T 0 0 0 0
xpool
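The figures above look like per-second zpool iostat samples; with an NFS workload the slog (the DDRdrive X1s) absorbs the synchronous writes while the main vdevs flush roughly once per transaction group, which is what produces the bursts. To watch it per vdev (pool name taken from the post):

    # Per-second pool I/O, broken down by vdev
    zpool iostat -v xpool 1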