similar to: Coraid Ether Drives

Displaying 20 results from an estimated 1000 matches similar to: "Coraid Ether Drives"

2005 Nov 08
0
Re: ATA-over-Ethernet vs. iSCSI -- CORAID is NOT SAN, also check multi-target SAS
From: Bryan J. Smith [mailto:thebs413 at earthlink.net] > > CORAID will _refuse_ to allow anything to access the volume after one > system mounts it. It is not multi-targetable. SCSI-2, iSCSI and > FC/FC-AL are. AoE is not. As I understand it, Coraid will allow multiple machines to mount a volume, it just doesn't handle the synchronization. So you can have more than one
2009 Jan 21
0
Anyone using zfs over coraid aoe?
Hello, Is anyone using ZFS over Coraid AoE? I was thinking about creating a bunch of single-disk lblades and then mirroring or raidz'ing them using ZFS. Does anyone have any experiences they would like to share? -- This message posted from opensolaris.org
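A minimal sketch of the idea, assuming each lblade shows up on the host as an ordinary block device (the /dev/etherd/e0.N names below follow the Linux aoe driver's convention and are only illustrative; the Solaris driver names devices differently):
    # build one raidz vdev out of three single-disk lblades (hypothetical names)
    zpool create tank raidz /dev/etherd/e0.0 /dev/etherd/e0.1 /dev/etherd/e0.2
    zpool status tank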
2006 Jun 22
0
ocfs2 on coraid?
Anyone tried to use OCFS2 on a Coraid? So far I've found that I had to crank the heartbeat threshold up. Anything else I should know about? I'm running the OCFS2 version that comes with kernel 2.6.16.7 (is there something more recent I should use?) and ocfs2-tools and ocfs2console, both v1.1.5 backported to Debian stable. Oh, and a Coraid SR1520 with ~6T of space in a single
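For reference, a sketch of where that threshold usually gets raised, assuming the stock o2cb init scripts; the value shown is only an example, not a recommendation:
    # /etc/default/o2cb on Debian (or /etc/sysconfig/o2cb on EL systems)
    O2CB_HEARTBEAT_THRESHOLD=61
    # unmount OCFS2 volumes first, then restart the cluster stack
    /etc/init.d/o2cb restart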
2005 Nov 07
2
ATA-over-Ethernet vs. iSCSI
Nick, What are you planning on running over the shared connection? Database, email, file shares? How many users? How much data? What is your I/O profile? I've worked with 'enterprise' storage most of my career, either as a consumer, adviser or provider - I can't comment on AoE other than to suggest you look at what the business & technical goals are, how they solve it and what
2010 Sep 20
5
XCP ethernet jumbo frames????
Hi, I have an XCP 0.5 box running a few DomUs. Some of the DomUs' data is stored on (clustered LVM volumes on) a Coraid EtherDrive box, which is connected via an Ethernet cable to a dedicated card of the XCP box, for which in the network config I've specified MTU=9344. xe network-param-list uuid=.... gives: MTU ( RW): 9344. On the DomUs, if I leave the MTU at 1500 (default
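A sketch of the relevant commands, assuming a standard XCP/xe setup; the UUID, interface name and MTU value are placeholders:
    # set jumbo frames on the storage network object
    xe network-param-set uuid=<network-uuid> MTU=9000
    # inside a Linux DomU, match the MTU on the interface facing that network
    ip link set dev eth0 mtu 9000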
2005 Sep 14
3
CentOS + GFS + EtherDrive
I am considering building a pair of storage servers that will be using CentOS and GFS to share the storage from a Coraid (SATA+RAID) EtherDrive shelf. Has anyone else tried such a setup? Is GFS stable enough to use in a production environment? There is a build of GFS 6.1 at http://rpm.karan.org/el4/csgfs/. Has anyone used this? Is it stable? Will I run into any problems if I use CentOS 3.5
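For anyone finding this later, the rough shape of creating such a filesystem with GFS 6.1, assuming a working cluster stack (ccsd/cman/fenced) is already configured; the cluster name, filesystem name, journal count and device are placeholders:
    # two journals for a two-node cluster, DLM locking
    gfs_mkfs -p lock_dlm -t mycluster:storage01 -j 2 /dev/etherd/e0.0
    mount -t gfs /dev/etherd/e0.0 /mnt/storage01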
2007 Apr 29
3
Building Modules Against Xen Sources
I'm currently trying to build modules against the kernels created with Xen 3.0.5rc4. This used to not be such a problem, as Xen created a kernel directory and then built in it. Plain Jane, nothing fancy. I've noticed that somewhere since I did this (which was as recent as 3.0.4-1) the kernel build now does things a bit differently. Apparently there is some sort of
2005 Sep 23
2
17G File size limit?
Hi everyone, This is a strange problem I have been having. I'm not sure where the problem is, so I figured I'd start here. I was having problems with Bacula stopping at 17 GB Volume sizes, so I decided to try to just dd a 50 GB file. Sure enough, once the file hit 17 GB, dd stopped and spit out an error: (pandora bacula)# dd if=/dev/zero of=bigfile bs=1M count=50000 File size
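One quick check worth running here: ext2/ext3 with a 1 KiB block size caps individual files at 16 GiB, which is almost exactly where this stops. The device name below is a placeholder:
    # a block size of 1024 would explain a ~17 GB per-file limit
    tune2fs -l /dev/sdXN | grep 'Block size'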
2006 Jul 28
3
Private Interconnect and self fencing
I have an OCFS2 filesystem on a Coraid AoE device. It mounts fine, but with heavy I/O the server self-fences, claiming a write timeout: (16,2):o2hb_write_timeout:164 ERROR: Heartbeat write timeout to device etherd/e0.1p1 after 12000 milliseconds (16,2):o2hb_stop_all_regions:1789 ERROR: stopping heartbeat on all active regions. Kernel panic - not syncing: ocfs2 is very sorry to be fencing this
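The 12000 ms in the log lines up with the usual o2cb tunable, roughly write-timeout = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2000 ms; a sketch of the arithmetic and of widening the window (the value shown is only illustrative):
    # 12000 ms  =>  a threshold of 7; raising it widens the window, e.g.:
    O2CB_HEARTBEAT_THRESHOLD=31    # (31 - 1) * 2 s = 60 s before fencing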
2014 Nov 05
1
Performance issue
Hi, For a few days I have noticed very high load on my mail server (CentOS 6.6 64-bit, 8 GB RAM, 2 x CPU 3.00GHz). I am using Dovecot + Postfix + Roundcube + Nginx. I have about 10000 users. The spool is on network-attached storage (Coraid). The file system is ext4 (mounted with noatime). The problem appears almost every morning (while load is normal during the afternoon). I suspect that this can be related to some
2008 Oct 26
1
Looking for configuration suggestions
I am in the process of completely overhauling the storage setup here and plan to go to glusterfs. The storage involved is 8x Coraid units, 2x JetStor and a Dell MD3000. The Coraid and JetStor are network connected via ATA over Ethernet & iSCSI. The disks are also being upgraded to 1TB in the process. I do plan to use unify to bring most if not all the storage together. My first
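From memory, the rough shape of a cluster/unify volume spec in the old glusterfs volfile syntax; all the names are placeholders and the exact options should be checked against the docs for the glusterfs version in use:
    volume unify0
      type cluster/unify
      option namespace ns0          # unify needs a separate namespace volume
      option scheduler rr           # round-robin file placement
      subvolumes coraid1 coraid2 jetstor1
    end-volume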
2014 Dec 01
4
best file system ?
On 2014-12-01 at 18:19, Alessio Cecchi wrote: > > On 01/12/2014 17:24, absolutely_free at libero.it wrote: >> Hi, >> I'm going to set up a new storage for our email users (about 10k). >> It's a network attached storage (Coraid). >> In your opinion, what is the best file system for mail server >> (pop3/imap/webmail) purpose? >> Thank you
2014 Dec 01
10
best file system ?
Hi, I'm going to set up new storage for our email users (about 10k). It's network-attached storage (Coraid). In your opinion, what is the best file system for a mail server (POP3/IMAP/webmail)? Thank you
2005 Jul 22
10
AoE (ATA over Ethernet) troubles on xen 2.0.6
I understand that all work is going into xen3, but I wanted to note that aoe (drivers/block/aoe) is giving me trouble on xen 2.0.6 (so we can keep an eye on xen3). Specifically, I can neither see nor export AoE devices. As a quick background on AoE, it is not IP (not routable, etc.), but works with broadcasts and packets to MAC addresses (see http://www.coraid.com). (for anyone who
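For context, the usual way to bring AoE targets up on a plain Linux host, assuming the aoetools package is installed; the devices that appear depend entirely on what the shelf exports:
    modprobe aoe        # load the AoE initiator driver
    aoe-discover        # trigger discovery broadcasts
    aoe-stat            # list the targets that answered
    ls /dev/etherd/     # block devices show up as e<shelf>.<slot>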
2014 Dec 12
2
Duplicate messages
Hi, I just moved the mail spool to a different network storage. Now, several users are complaining about duplicate messages that are fetched by their clients (Outlook, Microsoft Outlook). What is the reason? This is my conf: # dovecot -n # 2.0.9: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-71.el6.x86_64 x86_64 CentOS release 6.6 (Final) auth_mechanisms = plain login digest-md5 cram-md5
2007 Apr 18
2
[Bridge] aoe/vblade on "localhost"
Hello! I'm trying to use a network technology on a single host, which it wasn't designed for. To give a short overview of what I'm talking about: AoE is just like a "networked block device" (just like nbd/enbd), but without TCP/IP. The AoE kernel driver is the "client end" (think of it like an iSCSI initiator), and an EtherBlade storage appliance is the "server end"
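Roughly what the two ends look like, assuming the vblade userland target and the aoe kernel driver; the backing file, shelf/slot numbers and interface are placeholders:
    # "server end": export a file as AoE shelf 0, slot 0 on eth0
    dd if=/dev/zero of=/tmp/aoe-backing.img bs=1M count=1024
    vbladed 0 0 eth0 /tmp/aoe-backing.img
    # "client end": load the initiator and look for the exported device
    modprobe aoe
    aoe-discover
    ls /dev/etherd/e0.0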
2014 Dec 12
1
R: Re: Duplicate messages
Hi Steffen, with rsync. Thank you >----Original message---- >From: skdovecot at smail.inf.fh-brs.de >Date: 12/12/2014 9.44 >To: "absolutely_free at libero.it"<absolutely_free at libero.it> >Cc: <dovecot at dovecot.org> >Subject: Re: Duplicate messages > >-----BEGIN PGP SIGNED MESSAGE----- >Hash: SHA1 > >On Fri, 12 Dec 2014, absolutely_free at
2015 Apr 24
2
Performance
Hi, at the moment I have this environment: CentOS, nginx + php-fpm, Dovecot (with Maildir format), Postfix, Roundcube, MySQL backend, about 10000 mail users, dual-core Intel(R) Pentium(R) D CPU 3.00GHz, 8 GB RAM, network storage device (Coraid), ext4 file system. I have no performance issues now, but I need to move to a different server: FreeBSD 10.1-RELEASE, nginx + php-fpm, Dovecot, Postfix, Roundcube, dual
2014 Dec 12
1
R: Re: R: Re: Duplicate messages
Sorry, I haven't shut the users out. I simply copied data between two folders. >----Original message---- >From: absolutely_free at libero.it >Date: 12/12/2014 13.14 >To: <dovecot at dovecot.org> >Subject: R: Re: R: Re: Duplicate messages > >Hi, > >I mounted both network storages on this server. >After that, I used: > ># nice -n 19 rsync -av --progress
2006 Jun 19
4
Looking for tips about Physical Migration on XEN
Hi people. I'm new to Xen and I'm looking into how to do a physical migration on Xen. I know that there are a lot of choices (that is the first problem). My environment is simple: 2 physical servers, each one running one instance of Xen. Each host has 2 gigabit cards: one to talk to the world, the other to talk between themselves. I want to run every VM on both hosts, so that if one fails