Displaying 20 results from an estimated 32 matches for "nearlin".
2017 Nov 03
2
low end file server with h/w RAID - recommendations
...use it's so expensive that you can only use it
for the limited amounts of data that actually benefit from, or require,
the advantage in performance. For this application, it makes perfect
sense.
> 3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large capacity bulk 'nearline' storage which is typically sequentially written once
Why would you write them only once? Where are you storing your data when you
do that?
2015 Feb 03
3
Very slow disk I/O
...lse to make the
> I/O a bit faster than what it currently does. I cannot go in for using
> SSDs due to budget constraints. Need to make the best use of the SATA
> disks that i have currently.
so, you have 2x2 ST91000640NS
http://www.seagate.com/products/enterprise-servers-storage/nearline-storage/enterprise-capacity-2-5-hdd/
those are "Nearline" disks, 7200RPM. They are intended for bulk
secondary storage, archives, backups and such.
you said you have a number of virtual machines all attempting to access
this raid10 at once? I'm not surprised that it is slow. Yo...
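A quick way to confirm that diagnosis is to measure the array's random-read IOPS directly. A minimal fio sketch, assuming fio is installed and with /dev/md0 standing in for the actual RAID10 device (a placeholder, not from the thread):

  # 4k random reads, read-only, so safe against a live device
  fio --name=randread --filename=/dev/md0 --direct=1 --rw=randread \
      --bs=4k --iodepth=16 --numjobs=4 --runtime=60 --group_reporting

Four nearline 7200RPM disks in RAID10 will typically report a few hundred IOPS at best here, which is why several VMs hammering it concurrently feels slow.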
2017 Nov 03
0
low end file server with h/w RAID - recommendations
...e IOPS.
>
> That's not for storage because it's so expensive that you can only use it
> for the limited amounts of data that actually benefit from, or require,
> the advantage in performance. For this application, it makes perfect
> sense.
>
online high performance storage, vs nearline/archival storage. the first
needs high IOPS and high concurrency. the 2nd needs high capacity,
fast sequential speeds but little or no random access. two
completely different requirements. both are 'storage'.
>> 3.5" SATA drives spinning at 5400 and 7200 rpm are the...
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List!
I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck?
Also, if
2012 Sep 19
2
self-encrypting drives
what's the state of support for self-encrypting drives in CentOS 6?
these are becoming increasingly common on both laptops and for
enterprise storage (particularly nearline), with features like
instant-erase via key destruction.
--
john r pierce N 37, W 122
santa cruz ca mid-left coast
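For context, the instant-erase mentioned there is driven from userspace; a hedged sketch using hdparm's ATA security commands (the device name is a placeholder, and OPAL-style SEDs would use sedutil-cli instead):

  # WARNING: both commands destroy all data on the drive
  hdparm --user-master u --security-set-pass tmppass /dev/sdX
  hdparm --user-master u --security-erase tmppass /dev/sdX

On a self-encrypting drive the erase completes in seconds, since only the internal key is destroyed rather than the platters overwritten.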
2015 Feb 03
0
Very slow disk I/O
...I/O a bit faster than what it currently does. I cannot go in for
>> using SSDs due to budget constraints. Need to make the best use of
>> the SATA disks that i have currently.
>
> so, you have 2x2 ST91000640NS
> http://www.seagate.com/products/enterprise-servers-storage/nearline-storage/enterprise-capacity-2-5-hdd/
>
>
> those are "Nearline" disks, 7200RPM. They are intended for bulk
> secondary storage, archives, backups and such.
>
> you said you have a number of virtual machines all attempting to
> access this raid10 at once? I'm...
2008 Feb 25
2
Panic when ZFS pool goes down?
Is it still the case that there is a kernel panic if the device(s) with the ZFS pool die?
I was thinking to attach some cheap SATA disks to a system to use for nearline storage. Then I could use ZFS send/recv on the local system (without ssh) to keep backups of the stuff in the main pool. The main pool is replicated across 2 arrays on the SAN and we have multipathing and it's quite robust.
However I don't want my mail server going DOWN if the...
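The local send/recv the poster describes needs no ssh at all; a minimal sketch with placeholder pool and dataset names:

  # snapshot the main pool, then replicate locally to the nearline pool
  zfs snapshot tank/mail@backup1
  zfs send tank/mail@backup1 | zfs receive -F nearline/mail
  # later runs send only the delta between snapshots
  zfs snapshot tank/mail@backup2
  zfs send -i tank/mail@backup1 tank/mail@backup2 | zfs receive nearline/mail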
2008 Jan 25
2
Capturing credentials for imap sync
Hi List
All the imap sync apps I could find require the username/password
credentials to be known before a sync occurs. I have Dovecot using ldap
acting as a nearline backup mail server to MS Exchange. Every hour
imapsync syncs mail between Exchange and Dovecot. This all works fine
because the users' credentials are known, but when new users are added I
would like the process to work seamlessly, like this:
The user is added to the Active Directory. The mail cli...
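Dovecot's master-user feature is the usual way around this: the sync job authenticates with one master password instead of each user's own. A hedged sketch (hosts are placeholders, and imapsync's admin-auth flags vary somewhat by version):

  # Dovecot side logs in as 'user*masteruser' via master-user auth;
  # --authuser1 performs admin authentication against Exchange
  imapsync --host1 exchange.example.com --user1 jdoe \
           --authuser1 admin --password1 adminpass \
           --host2 dovecot.example.com --user2 'jdoe*masteruser' \
           --password2 masterpass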
2013 May 04
4
Scrub CPU usage ...
...ave
been completely transitioned to btrfs for nearly a month with the only
exception being a backup mirror drive formatted jfs that gets mounted
and updated hourly via cron'd rsync and then immediately unmounted until
the next update. I am using btrfs raid 1 across five 500GB Seagate
nearline drives and btrfs single on a Seagate 4TB backup drive. I am
absolutely delighted with how this system is working. This is my
primary day in and day out system. I have to date hard crashed this
system at least three times without sustaining any apparent damage to
the filesystems. Perhaps I...
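For reference, the scrub in the subject line is started and watched like this (the mount point is a placeholder):

  btrfs scrub start /mnt/data      # runs in the background, reading every block
  btrfs scrub status /mnt/data     # shows progress and any checksum errors

On raid1 a scrub also repairs bad copies from the good mirror, which is presumably why it is worth the CPU it burns.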
2007 Sep 06
0
Zfs with storedge 6130
...al...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a zfs based solution using storedge 6130 and
>> I'm chasing my own tail trying to decide how best to architect this. The
>> storage space is going to be used for database dumps/backups (nearline
>> storage). What is killing me is that I must mix hardware raid and zfs...
>
> Why should that be killing you? ZFS works fine with RAID arrays.
What kills me is the fact that I have a choice and it was hard to decide on
which one was going to be at the top of the totem pole. From...
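One common resolution of that totem-pole question is to export several small hardware-RAID LUNs from the array and let ZFS mirror across them, keeping ZFS on top. A hedged sketch with placeholder Solaris device names:

  # each cXtYdZ is a LUN presented by the 6130's hardware RAID
  zpool create dumps mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
  zfs create dumps/oracle

That way ZFS keeps end-to-end checksums and self-healing even though the array still handles the low-level redundancy.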
2011 May 20
2
scsi3 persistent reservations in cluster storage fencing
...s on a
SAS chain. I don't have any suitable hardware for running any tests or
evaluations yet.
general idea: 2 centos servers each with 8 port external SAS cards (2
x4), cascaded through a SAS box-o-disks with a whole bunch of SAS dual
ported drives, to implement high availability large nearline
storage. all storage configured as JBOD, using linux md raid or
lvm mirroring. drives should only be accessible by the currently active
server, with heartbeat managing the fencing.
here's a hardware block diagram
http://freescruz.com/shared-sas.jpg
This is about the only details I...
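The SCSI-3 persistent reservations in question can be exercised by hand with sg_persist from sg3_utils before committing to hardware; a hedged sketch with placeholder device and key:

  sg_persist --in --read-keys /dev/sdX                        # list registered keys
  sg_persist --out --register --param-sark=0xbeef /dev/sdX    # register our key
  sg_persist --out --reserve --prout-type=5 --param-rk=0xbeef /dev/sdX
  # prout-type 5 = write exclusive, registrants only

In a running cluster a fencing agent such as fence_scsi issues these same commands on the active node's behalf.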
2011 May 10
5
Tuning disk failure detection?
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4 mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110610
And errors for the drive were
2017 Nov 02
11
low end file server with h/w RAID - recommendations
Richard Zimmerman wrote:
> hw wrote:
>> Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough? (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and smaller disk space. For the price of a 1TB 2.5", I can get at least a 4TB WD Red.)
>
> I will second Mark's comments here. Yes,
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next generation filesystem without limits I am wondering if the real world is ready for this kind of incredible technology ...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks.
I want to
2017 Nov 02
0
low end file server with h/w RAID - recommendations
...r large arrays of SSDs, as they don't even come in 3.5". With 2.5"
you can pack more disks per U (24-25 2.5" per 2U face, vs 12 3.5" max
per 2U)... more disks == more IOPS.
3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large
capacity bulk 'nearline' storage which is typically sequentially written
once
--
john r pierce, recycling bits in santa cruz
2017 Nov 02
2
low end file server with h/w RAID - recommendations
...s, as they don't even come in 3.5". With 2.5"
> you can pack more disks per U (24-25 2.5" per 2U face, vs 12 3.5" max
> per 2U)... more disks == more IOPS.
>
> 3.5" SATA drives spinning at 5400 and 7200 rpm are the choice for large
> capacity bulk 'nearline' storage which is typically sequentially written
> once
>
We have a fair number of SAS 3.5" drives, and yes, 10k or 15k speeds.
mark
2009 Nov 02
2
How do I protect my zfs pools?
...At home I am in the process of moving all my data from a Linux NFS
server to OpenSolaris. It's something I'd been meaning to do ever
since I heard Jeff Bonwick talk about ZFS at LISA '07.
My plan was:
- mirrored zfs pool on an OpenSolaris host, exported via NFS/Samba
- nearline amanda backups on the same host, but to an external eSATA
mirrored zfs pool
- archival amanda backups on my old Linux host
That plan would involve three zfs pools:
- root pool on OpenSolaris host
- mirrored storage pool on OpenSolaris host
- mirrored external pool
How do I protect these zfs pool...
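The two data pools in that plan map onto straightforward commands; a minimal sketch with placeholder Solaris device names:

  zpool create tank mirror c4t0d0 c4t1d0      # mirrored storage pool, NFS/Samba
  zpool create backup mirror c5t0d0 c5t1d0    # external eSATA mirror for amanda
  zpool scrub tank                            # periodic scrubs catch silent corruption

Regular scrubs plus two independent mirrors cover most single-failure scenarios; the amanda archives on the old Linux host cover the rest.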
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big
RAID). What I'm considering is, rather than chopping it up into 14TB or
16TB filesystems, of using xfs for really big filesystems. The question
that's come up is: what's the state of xfs on CentOS 6? I've seen a number
of older threads describing problems with it - has that mostly been resolved?
How does
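For what it's worth, creating and later growing a very large xfs filesystem is routine; a hedged sketch with a placeholder device (CentOS 6 ships xfsprogs, though mkfs defaults have shifted between releases):

  mkfs.xfs -L bigraid /dev/sdb1
  mount -o inode64 /dev/sdb1 /srv/big    # inode64 matters on multi-TB filesystems
  xfs_growfs /srv/big                    # grows online after the LUN is enlarged

The inode64 option lets inodes live beyond the first TB of the device; on CentOS 6's 2.6.32 kernel it is not the default.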
2012 Oct 02
2
new "large" fileserver config questions
Hi all,
I was recently charged with configuring a new fairly large (24x3TB
disks) fileserver for my group. I think I know mostly what I want to do
with it, but I did have two questions, at least one of which is directly
related to CentOS.
1) The controller node has two 90GB SSDs that I plan to use as a
bootable RAID1 system disk. What is the preferred method for laying
out the RAID array? I
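For the RAID1 system-disk question, the common md layout is one mirrored partition pair per role; a hedged sketch with placeholder partitions (metadata 1.0 keeps the superblock at the end so legacy GRUB can boot either member):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/sda1 /dev/sdb1     # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/sda2 /dev/sdb2                    # LVM PV for / and swap

The CentOS 6 installer can build the same layout interactively, which avoids hand-running mdadm at all.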
2018 Apr 09
0
JBOD / ZFS / Flash backed
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash
cache. You have a choice of flash module sizes to include on the card.
The card supports RAID modes as well as JBOD.
I do not know if Gluster can make use of battery-backed flash-based cache
when the disks are presented by the RAID card in JBOD. The hardware
vendor asked "Do you know if Gluster makes use of