Displaying 20 results from an estimated 10000 matches similar to: "ATA-over-Ethernet v's iSCSI"
2005 Sep 14
3
CentOS + GFS + EtherDrive
I am considering building a pair of storage servers that will be using
CentOS and GFS to share the storage from a Coraid (SATA+Raid) EtherDrive
shelf. Has anyone else tried such a setup?
Is GFS stable enough to use in a production environment?
There is a build of GFS 6.1 at http://rpm.karan.org/el4/csgfs/. Has anyone
used this? Is it stable?
Will I run into any problems if I use CentOS 3.5
2006 Jun 19
4
Looking for tips about Physical Migration on XEN
Hi people.
I'm new to Xen and I'm looking into how to do a physical migration on Xen. I
know that there are a lot of choices (that is the first problem).
My environment is simple:
2 physical servers, each one running one instance of Xen. Each host has 2
gigabit cards: one to talk to the world, the other to talk between
themselves.
I want to run every VM on both hosts, if one fail
2007 Apr 18
2
[Bridge] aoe/vblade on "localhost"
hello !
I am trying to use a network technology on a single host, which it wasn't designed for.
To give a short overview of what I'm talking about:
AoE is essentially a "networked block device" (much like nbd/enbd) - but without TCP/IP.
The AoE kernel driver is the "client end" (think of it like an iSCSI initiator) - and an EtherDrive storage appliance is the "server end"
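For reference, a minimal single-host sketch of this setup, assuming vblade and aoetools are installed; the shelf/slot numbers, backing file, and interface name are placeholders, and (as this thread's subject suggests) raw AoE frames may not traverse `lo` without a bridge or dummy interface:

```shell
modprobe aoe                                      # AoE initiator (client) driver
dd if=/dev/zero of=/tmp/aoe0.img bs=1M count=64   # backing file for the export
vblade 0 1 lo /tmp/aoe0.img &                     # export as shelf 0, slot 1 on lo
aoe-discover                                      # ask the driver to rescan for targets
ls /dev/etherd/                                   # target should appear as e0.1
```

A common workaround when `lo` drops the Ethernet frames is to bind both vblade and the aoe driver to a dummy or bridge interface instead.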
2007 Dec 14
3
Expandable network storage
I want to thank everyone who has provided insight into my thread about
clustering MySql. I kind of just sat back and watched it develop. I
learned a lot from it all.
I have been reading all of the documentation on clustering provided by
Centos/Red Hat, and find I travel in circles. I read one chapter and
answer a self-imposed question but I end up asking myself another.
What I really want to
2009 Jan 27
20
Xen SAN Questions
Hello Everyone,
I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions.
Basically, what we have here are two servers that will each have a RAID-5 array of 5 x 320GB SATA drives. I want to have these as usable file systems on both servers (as they will both be
2010 Jul 05
21
Aoe or iScsi???
Hi people...
Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13
pvops...
As a storage system, we use AoE devices...
So, we installed the VMs on an AoE partition... The "NAS" server is an Intel
based baremetal box with SATA hard discs...
However, sometimes I feel that the VMs are slow...
Also, all VMs have the GPLPV drivers installed...
So, I am thinking about
2006 Jun 07
14
HA Xen on 2 servers!! No NFS, special hardware, DRBD or iSCSI...
I've been brainstorming...
I want to create a 2-node HA active/active cluster (In other words I want to run a handful of
DomUs on one node and a handful on another). In the event of a failure I want all DomUs to fail
over to the other node and start working immediately. I want absolutely no
single-points-of-failure. I want to do it with free software and no special hardware. I want
2010 Sep 20
5
XCP ethernet jumbo frames????
Hi,
I have an XCP 0.5 box running a few DomUs. Some of the DomUs' data is
stored on (clustered LVM volumes on) a Coraid EtherDrive box which is
connected via an Ethernet cable to a dedicated card of the XCP box, for
which in the network config I've specified MTU=9344: xe
network-param-list uuid=.... gives:
MTU ( RW): 9344
on the DomUs if I leave the MTU at 1500 (default
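As a hedged sketch of the usual jumbo-frame checklist (UUIDs, interface names, and addresses below are placeholders): set the MTU on the host network object, raise it inside the guest to match, then verify with an unfragmented ping. 8972 is 9000 minus the 28 bytes of IP and ICMP headers:

```shell
xe network-param-set uuid=<network-uuid> MTU=9000    # host-side network object
xe network-param-list uuid=<network-uuid> | grep MTU # confirm the setting took
ip link set dev eth0 mtu 9000                        # run inside the domU
ping -M do -s 8972 <storage-ip>                      # must pass without fragmentation
```

If the ping fails with "message too long", some hop (NIC, bridge, or switch port) is still at a smaller MTU.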
2008 Nov 20
27
lenny amd64 and xen.
I've installed Debian Lenny amd64; it is frozen now.
I've installed the kernel with Xen support but it doesn't start.
It says "you need to load kernel first", but I've installed all the packages
concerning Xen, including the packages related to the kernel.
Perhaps Lenny doesn't support Xen anymore?
Any solution?
2005 Jun 22
11
Opteron Mobo Suggestions
I've been planning to build a dual Opteron server for awhile. I'd like
to get people's suggestions on a suitable motherboard.
I've looked at the Tyan K8SE (S2892) and K8SRE (S2891) but would like to
find more Linux-specific experiences with these boards.
Some features I expect are at least 4 SATA (SATA-300?) ports, serial
console support in the BIOS, USB 2.0 and IEEE-1394
2006 May 23
7
Load Balancing
Hi,
We are starting a new project, and are trying to decide the best way to
proceed. We want to setup a LAMP configuration using Centos, something
we have been doing in the past with great success.
The question is load balancing. We anticipate the potential for the
system to receive 500,000 requests/day within the next year. We want
to plan for that extra load now as we start the
2006 Nov 20
7
ISCSI SAN suggestion
Sorry for the off-topic question but I need advice on buying an iSCSI
SAN for 4-6 servers running CentOS 4.4. The main purpose of the SAN
is to store email accounts (that will be accessed by imap - dovecot)
and other documents. Minimal redundancy is required (e.g. dual power
supplies, battery backed write cache or mirrored controllers) and
price for a 2 TB configuration should be under $ 10000.
2010 Oct 08
74
Performance issues with iSCSI under Linux
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a dedicated switch, 11x pairs of 15K HDDs for the pool). We're connecting a single Linux
2005 Jul 22
10
AOE (Ata over ethernet) troubles on xen 2.0.6
I understand that all work is going into Xen 3, but I wanted to note
that AoE (drivers/block/aoe) is giving me trouble on Xen 2.0.6 (so we
can keep an eye on it for Xen 3).
Specifically, I can't see or export AoE devices. As quick background
on AoE: it is not IP (not routable, etc.), but works with broadcasts and
packets to MAC addresses (see http://www.coraid.com).
(for anyone who
2007 Feb 13
4
Live Migration... shortest path
In a prior message I documented my woes in getting an NFS_ROOT Xen
setup going. I haven't resolved those yet, and want to try a different
tack:
If the group were to recommend a path of least resistance
to showing migration/live migration, which configuration would it be?
Joe.
2006 Jul 28
3
Private Interconnect and self fencing
I have an OCFS2 filesystem on a coraid AOE device.
It mounts fine, but with heavy I/O the server self fences claiming a
write timeout:
(16,2):o2hb_write_timeout:164 ERROR: Heartbeat write timeout to device
etherd/e0.1p1 after 12000 milliseconds
(16,2):o2hb_stop_all_regions:1789 ERROR: stopping heartbeat on all
active regions.
Kernel panic - not syncing: ocfs2 is very sorry to be fencing this
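One commonly cited mitigation, sketched here under the assumption of a stock o2cb init setup (the file path and value are illustrative), is to raise the o2cb heartbeat dead threshold so that slow AoE writes stop tripping the 12000 ms timeout. The timeout is roughly (threshold - 1) * 2000 ms, so a threshold of 31 allows about 60 seconds:

```shell
# /etc/sysconfig/o2cb (config fragment; restart the o2cb service after editing)
O2CB_ENABLED=true
O2CB_HEARTBEAT_THRESHOLD=31   # ~60s before self-fencing, up from the 12s seen here
```

This only papers over the latency; if the AoE path stalls for longer than the threshold, the node will still fence itself by design.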
2005 Nov 18
1
ZFS ATA over Ethernet and OpenSolaris
On Coraid's (the inventor of AoE) website, they claim their drivers will also work on Solaris.
So my question is: will there be any technical limitations regarding the implementation of ZFS over AoE?
If it is not yet possible, the developers of OpenSolaris might consider this option as a natural next step, I guess. What do you people think?
With cheap disk attachments over Ethernet, and with ZFS, I think
2007 Apr 29
3
Building Modules Against Xen Sources
I'm currently trying to build modules against the kernels created
with Xen 3.0.5rc4.
This used to not be such a problem, as Xen created a kernel directory
and then built in it. Plain Jane, nothing fancy.
I've noticed that somewhere since I last did this (which was as recent as
3.0.4-1) the kernel build now does things a bit differently.
Apparently there is some sort of
2005 Nov 08
0
Re: ATA-over-Ethernet v's iSCSI -- CORAID is NOT SAN , also check multi-target SAS
From: Bryan J. Smith [mailto:thebs413 at earthlink.net]
>
> CORAID will _refuse_ to allow anything to access the volume after one
> system mounts it. It is not multi-targetable. SCSI-2, iSCSI and
> FC/FC-AL are. AoE is not.
As I understand it, Coraid will allow multiple machines to mount a
volume, it just doesn't handle the synchronization. So you can have
more than one
2009 Jun 18
12
Best way to use iSCSI in domU
Hello,
We need to use iSCSI in some of our domUs. At the moment, iSCSI is not
for the system filesystem, but for a data filesystem.
I am wondering what is the best way to use it. Is it better to
configure it in dom0 and then attach the device to the domU? Or is it
better to configure it directly in the domU?
I am thinking that if we configure it in the dom0, then we can't share
that iscsi
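For the dom0-side variant being weighed here, a minimal sketch with open-iscsi; the portal address, target IQN, domU name, and device nodes are all placeholders:

```shell
iscsiadm -m discovery -t sendtargets -p 192.168.0.10            # find targets on the portal
iscsiadm -m node -T iqn.2009-06.example:store0 -p 192.168.0.10 --login
xm block-attach mydomu phy:/dev/sdb xvdb w                      # hand the LUN to the guest
```

The trade-off raised in the question stands either way: a device logged in from dom0 can safely be attached writable to one guest at a time, unless a cluster filesystem sits on top.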