Hi list,

I'd briefly like to get your ideas on this. I understand this is a subjective
matter, but I'd like some pointers anyway.

I'm planning to build an HA Xen solution based on DRBD and Pacemaker, mainly
serving as a disaster recovery solution for my customers. It will sync their
files and mail daily (rsync, imapsync), and in case of disaster they can
access this information by means of web-based applications.

So I'm basically looking for a budget-friendly rack server with good Linux
support (I will probably be using SLES 11). Any advice?

thx!

b.
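[For readers who want a concrete picture of the daily sync described above, a
minimal sketch of the cron side might look like the following. The hostnames,
paths and credential files are placeholders for illustration, not details from
the actual setup.]

    # /etc/cron.d/dr-sync -- illustrative sketch only; adjust hosts and paths
    # Pull customer files into the DR DomU once a day (delta transfer over SSH)
    30 2 * * *  root  rsync -az --delete customer1.example.com:/srv/data/ /srv/dr/customer1/

    # Mirror IMAP mailboxes with imapsync (passwords kept in files, not on the command line)
    45 2 * * *  root  imapsync --host1 mail.customer1.example.com --user1 backup --passfile1 /etc/dr/pw1 \
                               --host2 localhost --user2 backup --passfile2 /etc/dr/pw2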
On Thu, Feb 23, 2012 at 4:28 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> Hi list,
>
> would briefly like to get your ideas on this. I understand this is a
> subjective matter, but I'd like some pointers anyway.
>
> I'm planning to make a HA Xen solution based on DRBD and Pacemaker, mainly
> serving as a disaster recovery solution for my customers.

have you tried drbd with xen?

> It will daily sync their files and mails (rsync, imapsync) and in case of
> disaster they can access this info by means of web based applications.
>
> So I'm basically looking for a budget friendly rack server with good Linux
> support (will probably be using SLES11).

I'd actually suggest you buy a third one, with lots of internal disks, and
install whatever OS you're familiar with to act as block storage server.
Example: openindiana + nappit + iscsi, SLES + aoe, etc.

--
Fajar
On 02/23/12 10:41, Fajar A. Nugraha wrote:
> On Thu, Feb 23, 2012 at 4:28 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>> Hi list,
>>
>> would briefly like to get your ideas on this. I understand this is a
>> subjective matter, but I'd like some pointers anyway.
>>
>> I'm planning to make a HA Xen solution based on DRBD and Pacemaker, mainly
>> serving as a disaster recovery solution for my customers.
>
> have you tried drbd with xen?

yes, that's what I'm saying

>> It will daily sync their files and mails (rsync, imapsync) and in case of
>> disaster they can access this info by means of web based applications.
>>
>> So I'm basically looking for a budget friendly rack server with good Linux
>> support (will probably be using SLES11).
>
> I'd actually suggest you buy a third one, with lots of internal disks,
> and install whatever OS you're familiar with to act as block storage
> server. Example: openindiana + nappit + iscsi, SLES + aoe, etc.

That would be a single point of failure, rendering the "frontend" cluster
kinda useless, except for load balancing. I plan to go for dual primary -
cost is an issue, so limitation to two servers is key,

thx;

B.
On 02/23/12 10:41, Fajar A. Nugraha wrote:
> On Thu, Feb 23, 2012 at 4:28 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>> Hi list,
>>
>> would briefly like to get your ideas on this. I understand this is a
>> subjective matter, but I'd like some pointers anyway.
>>
>> I'm planning to make a HA Xen solution based on DRBD and Pacemaker, mainly
>> serving as a disaster recovery solution for my customers.
>
> have you tried drbd with xen?
>
>> It will daily sync their files and mails (rsync, imapsync) and in case of
>> disaster they can access this info by means of web based applications.
>>
>> So I'm basically looking for a budget friendly rack server with good Linux
>> support (will probably be using SLES11).
>
> I'd actually suggest you buy a third one, with lots of internal disks,
> and install whatever OS you're familiar with to act as block storage
> server. Example: openindiana + nappit + iscsi, SLES + aoe, etc.

Also, it might be that both nodes will be geographically separated in the
future, so a third one is less convenient,

B.
On Thu, Feb 23, 2012 at 10:06 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> On 02/23/12 10:41, Fajar A. Nugraha wrote:
>>
>> On Thu, Feb 23, 2012 at 4:28 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>>> I'm planning to make a HA Xen solution based on DRBD and Pacemaker,
>>> mainly serving as a disaster recovery solution for my customers.
>>
>> have you tried drbd with xen?
>
> yes, that's what I'm saying

My point is, the last time I tried it, drbd+ocfs2 introduced a huge
performance penalty, complexity, and possible data loss. But then again, it
was an active-active setup with no external heartbeat, relying on ocfs2 to
reboot the nodes in a split-brain scenario. If you HAVE tested it, then it's
great. As usual, whatever solution you choose, testing is important.

>>> It will daily sync their files and mails (rsync, imapsync) and in case of
>>> disaster they can access this info by means of web based applications.
>>>
>>> So I'm basically looking for a budget friendly rack server with good Linux
>>> support (will probably be using SLES11).
>>
>> I'd actually suggest you buy a third one, with lots of internal disks,
>> and install whatever OS you're familiar with to act as block storage
>> server. Example: openindiana + nappit + iscsi, SLES + aoe, etc.
>
> That would be a single point of failure, rendering the "frontend" cluster
> kinda useless, except for load balancing. I plan to go for dual primary -
> cost is an issue, so limitation to two servers is key,

"dual primary" and "active-active" are similar, but can be different.

An active-active drbd setup requires protocol C (sync), which (among other
things) decreases performance but allows live migration.

An active-standby setup can use async replication, which should be much
better performance-wise. If each node is acting as active for its own domUs
while acting as standby for the domUs on the other node, that can be
considered dual primary.

If your definition of dual primary is what I mentioned above, then yes, drbd
would be more appropriate. However, if you have live migration as a
requirement, then IMHO a third storage server is much better.

--
Fajar
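[To make the protocol distinction above concrete, a minimal DRBD resource
definition for a dual-primary pair might look roughly like this (DRBD 8.3-era
syntax; the resource name, device paths and addresses are placeholders, not
details from either poster's setup).]

    # /etc/drbd.d/r0.res -- illustrative sketch only
    resource r0 {
        protocol C;                # synchronous: every write waits for the peer,
                                   # required for dual primary / live migration.
                                   # Protocol A would give async replication for
                                   # an active-standby pair instead.
        net {
            allow-two-primaries;   # only for the dual-primary case
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/vg0/xen-disk;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/vg0/xen-disk;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }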
On Thu, Feb 23, 2012 at 10:06 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>> It will daily sync their files and mails (rsync, imapsync) and in case of
>> disaster they can access this info by means of web based applications.

... and I'm kinda confused by this one. Why would you need rsync?

You can set up drbd to replicate all changes automatically at the
block-device level, either sync or async (small delay, but MUCH faster
compared to a daily manual sync).

--
Fajar
On 02/23/12 16:25, Fajar A. Nugraha wrote:
> My point is, last time I tried drbd+ocfs2 introduce huge performance
> penalty, complexity, and possible data loss. But then again, it was an
> active-active setup with no external heartbeat, relying on ocfs2 to
> reboot the nodes on split-brain scenario. If you HAVE tested it, then
> it's great. As usual, whatever solution you choose, testing is important.

The Xen + DRBD dual-primary clusters I use are not file/image based, so there
is no real need for ocfs2 and the added complexity of it. There is little
risk of split-brain as everything is controlled by Pacemaker. No worries
there, performance is good.

> "dual primary" and "active-active" is similar, but can be different.
> An active-active drbd setup requires protocol C (sync), which (among
> others) decrease performance but allow live migration.

yes, that is what I'm using. The performance is very acceptable. Remember,
this offers web services. The available bandwidth and number of simultaneous
users will probably never hit the DRBD performance limits.

> An active-standby setup can use async replication, which should be
> much better performance-wise. If each node is acting as active for
> their own domUs while acting as standby for domUs on the other node,
> that can be considered dual primary.

disallowing live migration, not preferable.

> If your definition of dual primary is what I mentioned above, then
> yes, drbd would be more appropriate. However if you have live
> migration as requirement, then IMHO a third storage server is much
> better.

that's relative - as mentioned, this introduces a SPOF.

Also, it is way more expensive; the initial question pointed to a cost/budget
friendly proposition.

The LSI 2008-based Supermicro servers (like in
http://www.servethehome.com/supermicro-x8si6-f-motherboard-review-including-onboard-lsi-sas-2008-controller/)
seem interesting,

B.
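[As an illustration of the Pacemaker control mentioned here, a dual-primary
DRBD resource plus a migratable domU is typically expressed along these lines
in the SLES 11 crm shell. The resource, domU and file names are invented for
the example, not taken from the poster's configuration.]

    # crm configure -- illustrative sketch only
    primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"

    # master-max=2 promotes the resource on both nodes (dual primary)
    ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="2" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"

    # a Xen domU that is allowed to live-migrate between the two nodes
    primitive p_domU_web ocf:heartbeat:Xen \
        params xmfile="/etc/xen/vm/web" \
        meta allow-migrate="true" \
        op monitor interval="30s"

    order o_drbd_before_domU inf: ms_drbd_r0:promote p_domU_web:start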
On 02/23/12 16:28, Fajar A. Nugraha wrote:
> On Thu, Feb 23, 2012 at 10:06 PM, Bart Coninckx
> <bart.coninckx@telenet.be> wrote:
>>> It will daily sync their files and mails (rsync, imapsync) and in case of
>>> disaster they can access this info by means of web based applications.
>
> ... and I'm kinda confused with this one. Why would you need rsync?

rsync is the most suitable tool to delta-sync my customers' data into the
individual Xen DomUs.

> You can setup drbd to replicate all changes automatically on
> block-device level, either sync or async (small delay, but MUCH faster
> compared to daily manual sync).

rsync is not meant to update the data on both nodes; that is done by DRBD.

B.
I have used DRBD for Xen block devices before, without dual primary and no
cluster fs. It worked very well for me.

I tend to like Supermicro-based systems for budget builds. Their IPMI
management features are excellent. Aberdeen is a system builder that uses
Supermicro chassis and boards: http://www.aberdeeninc.com

-- Thaddeus

----- Reply message -----
From: "Bart Coninckx" <bart.coninckx@telenet.be>
To: "Fajar A. Nugraha" <list@fajar.net>
Cc: <xen-users@lists.xen.org>
Subject: [Xen-users] Server purchase pointers
Date: Thu, Feb 23, 2012 9:40 am

[...]
Meh, I've always been a little suspicious of the network block device stuff.
It means that a network problem can be a lot bigger than it otherwise would
be. On the other hand, I know plenty of people using them, and they seem to
work ok. I myself bought a very small (two server) xen vps company from a
friend that uses drbd in an active/passive configuration, and I haven't had
trouble with his two servers in a year.

All my other stuff, though, is on local disk, which works pretty well. I've
got spares for everything, so worst case I drive down to the data center and
swap drives from one box to another. There is downtime, but it's simple. So
yeah, uh, I guess I can't help you too much with that part. But cheap
hardware? that's what I am..

If you are assembling yourself but aren't really into assembling hardware,
stick with the barebones. Supermicro calls it the "super server": it comes
with the chassis, motherboard, fans, etc. all wired in, and the heatsinks and
rack rails in a box. You pop in your own ram, cpu and disks, screw on the
heatsinks and you are ready to go. Make sure you use an ESD wrist strap and
don't do it over carpet.

For hardware, I like Supermicro, and right now I think the quad-core 56xx
CPUs in a dual-socket configuration with either 96 or 144GiB ram is the best
deal; 8GiB modules can be had for $65 for no-name (Transcend) and $85 for
Kingston. (If you want 3 modules per channel for the 144G boxes, you need
dual-rank ram, which is like $90 per 8GiB module.)

Tyan barebones are also excellent. I prefer Supermicro mostly because their
chassis change less often, so I can often use an ancient 'scratch and dent'
chassis and put in a new psu, motherboard and backplane, and save a few
hundred bucks, but unless you are set up for this sort of thing, you are
probably best off just buying the barebones, and in that case, Tyan is as
good as Supermicro.

I'm a big fan of Kingston for cheap ram. It's really, really cheap; it
usually works, their configurator tool is pretty good, and when it breaks,
the warranty is excellent; back when I was using used stuff, I'd buy broken
systems on ebay with Kingston ram, test it and RMA the bad ram for working
ram. (I mean, Kingston is still a cheap ram brand. If I had infinite money, I
could do better, but as far as cheap ram brands go, they are my favorite.)

If you don't want to build yourself, there are all sorts of people willing to
build you Supermicro stuff. I suggest getting multiple quotes for the
specification you want, then go back and pick your own parts, then get
multiple quotes with the parts you picked (for example, most of the time the
ram that the builder uses costs more than the Kingston, even though they use
no-name ram without a transferable warranty, while Kingston has a lifetime
'no questions' warranty even if you got it on ebay.)

If you want a specific builder recommendation, I like kingstarusa.com. I'm
renting an office above their location on Kifer and Wolfe in Sunnyvale. If
you are local, they are pretty great for parts if you are building yourself.
If you check the price on Provantage they will match it, which saves you a
lot on shipping (assuming you pay your use tax like I do, so the Provantage
"ship from California to Ohio and back to California to avoid sales tax"
thing doesn't get you anything but high shipping costs and a long wait.) And
at least once, they've RMA'd a Supermicro part for me even though I told them
that I bought it somewhere else.
I haven't bought anything built from them personally, just 'cause I like
building that stuff myself, but I hear good things.

But yeah, the dual-socket 56xx CPUs with either 12 or 18 8GiB ram modules is
the way to go right now, hardware-wise, if you ask me. 18 modules means 3
modules per channel, which means you are running at 800MHz, but eh, that's
still a lot of ram. I think you get 1066 with 12 dual-rank modules.
On Thu, Feb 23, 2012 at 10:10 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> Also, it might be so that both nodes in the future will be geographically
> separated, so a third one is less convenient,

geo-separated nodes, running drbd protocol C? Good luck :P

Anyway, Luke already gave his suggestions, which are very reasonable. At work
I'm stuck with big-brand names (e.g. HP), which probably wouldn't be suitable
for you.

One last suggestion from me: make sure you also test whatever device you're
going to use for fencing (e.g. IPMI).

--
Fajar
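[For what it's worth, an IPMI-based fencing (stonith) resource in the crm
shell usually looks roughly like the sketch below. The node name, BMC address
and credentials are invented, and the exact parameter names depend on the
stonith plugin your distribution ships.]

    # crm configure -- illustrative fencing sketch only
    primitive st_node1_ipmi stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.1.11" \
               userid="fenceadmin" passwd="secret" interface="lanplus" \
        op monitor interval="60m" timeout="60s"

    # never let a node run its own fencing device
    location l_st_node1 st_node1_ipmi -inf: node1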
On Feb 24, 2012 11:42 AM, "Fajar A. Nugraha" <list@fajar.net> wrote:
> On Thu, Feb 23, 2012 at 10:10 PM, Bart Coninckx
> <bart.coninckx@telenet.be> wrote:
>> Also, it might be so that both nodes in the future will be geographically
>> separated, so a third one is less convenient,
>
> geo-separated nodes, running drbd protocol C? Good luck :P
>
> Anyway, Luke already gave his suggestions, which is very reasonable.
> At work I'm stuck with big-brand names (e.g. HP), which probably
> wouldn't be suitable for you.

Besides, Telkom bought their big irons straight from the principals, right? :-)

> One last suggestion from me: make sure you also tested whatever device
> you're going to use for fencing (e.g. IPMI)
>
> --
> Fajar

BTW, is there a way to contact you off-list ("japri"), Pak Fajar? The email
address you're using seems to reject private emails.

Rgds,
On 02/23/12 16:28, Fajar A. Nugraha wrote:
> On Thu, Feb 23, 2012 at 10:06 PM, Bart Coninckx
> <bart.coninckx@telenet.be> wrote:
>>> It will daily sync their files and mails (rsync, imapsync) and in case of
>>> disaster they can access this info by means of web based applications.
>
> ... and I'm kinda confused with this one. Why would you need rsync?
>
> You can setup drbd to replicate all changes automatically on
> block-device level, either sync or async (small delay, but MUCH faster
> compared to daily manual sync).

It's a backup setup for data that lives elsewhere. The data needs to get
there, hence rsync. Can't put it more clearly, sorry,

B.
Thaddeus,

I get Supermicro pointers from different places. I think that will be the way
to go,

thx,

B.

On 02/23/12 18:18, thaddeus@thogan.com wrote:
> I have used DRBD for Xen block devices before, without dual primary
> and no cluster fs. It worked very well for me.
>
> I tend to like supermicro based systems for budget builds. Their IPMI
> management features are excellent. Aberdeen is a system builder that
> uses supermicro chassis and boards: http://www.aberdeeninc.com
>
> -- Thaddeus
>
> [...]
On 02/24/12 05:35, Fajar A. Nugraha wrote:
> On Thu, Feb 23, 2012 at 10:10 PM, Bart Coninckx
> <bart.coninckx@telenet.be> wrote:
>> Also, it might be so that both nodes in the future will be geographically
>> separated, so a third one is less convenient,
>
> geo-separated nodes, running drbd protocol C? Good luck :P

no problem, I have 100 Mbit guaranteed between the two datacenters.

> Anyway, Luke already gave his suggestions, which is very reasonable.
> At work I'm stuck with big-brand names (e.g. HP), which probably
> wouldn't be suitable for you.
>
> One last suggestion from me: make sure you also tested whatever device
> you're going to use for fencing (e.g. IPMI)

I always use APC PDUs, more reliable than IPMI, unless the IPMI cards are
battery powered.

B.
On Fri, Feb 24, 2012 at 4:53 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> On 02/24/12 05:35, Fajar A. Nugraha wrote:
>>
>> On Thu, Feb 23, 2012 at 10:10 PM, Bart Coninckx
>> <bart.coninckx@telenet.be> wrote:
>>>
>>> Also, it might be so that both nodes in the future will be geographically
>>> separated, so a third one is less convenient,
>>
>> geo-separated nodes, running drbd protocol C? Good luck :P
>
> no problem, I have 100 mbit guaranteed between two datacenters.

The problem is not bandwidth. It's latency, which pretty much kills sync
performance, since every write has to be acknowledged by the remote node
before it completes. Async isn't affected by latency, which is why I prefer
using async wherever possible, even at the cost of live migration.

Again, if you've tested that it fits your requirement, or your workload is
extremely low, then you should be fine.

--
Fajar
On 02/24/12 10:58, Fajar A. Nugraha wrote:
> On Fri, Feb 24, 2012 at 4:53 PM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>> On 02/24/12 05:35, Fajar A. Nugraha wrote:
>>> On Thu, Feb 23, 2012 at 10:10 PM, Bart Coninckx
>>> <bart.coninckx@telenet.be> wrote:
>>>> Also, it might be so that both nodes in the future will be geographically
>>>> separated, so a third one is less convenient,
>>>
>>> geo-separated nodes, running drbd protocol C? Good luck :P
>>
>> no problem, I have 100 mbit guaranteed between two datacenters.
>
> The problem is not bandwidth. It's latency (which pretty much kills
> sync performance). Async aren't affected by latency, which is why I
> prefer using async wherever possible. Even at the cost of live
> migration.
>
> Again, if you've tested that it fits your requirement, or your
> workload is extremely low, then you should be fine.

I've been assured that the line performs close to a LAN connection. And, as
you state, in case of problems protocol A (async) is still possible.

no worries,

b.
On Fri, Feb 24, 2012 at 5:05 AM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> I've been assured that the line performs close to a LAN connection.

don't believe that until you see hard round-trip-time numbers. LAN latencies
are under a single millisecond (unless badly configured), but even short WANs
are hard pressed to go under 10 msec.

WAN vendors like to say "it's like a local LAN", but they omit that they're
talking about bandwidth, not latency. and for other latency-sensitive
protocols (SMB file sharing is one of the worst), they have specific
'accelerators' (in short, big proxies caching most of the metadata going each
way). if you deploy your own 'weird' protocol, you're on your own, and get
the whole ugly scene.

but, if you get your own dark fiber, then it might be just right! (if you do
good traffic shaping.... a whole dark art on its own)

--
Javier
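[A quick way to get those hard numbers before committing to synchronous
replication is simply to measure the round-trip time over the actual link
between the two nodes; the hostname below is a placeholder.]

    # 100 pings over the replication link, summary only
    ping -c 100 -q drbd-peer.example.com
    # the rtt min/avg/max line is what every synchronous write will pay on top
    # of local disk latency: ~0.1-0.5 ms is LAN territory, 10+ ms will hurt
    # DRBD protocol C badly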
> -----Original Message-----
> From: xen-users-bounces@lists.xen.org [mailto:xen-users-bounces@lists.xen.org]
> On Behalf Of Javier Guerra Giraldez
> Sent: Friday, February 24, 2012 3:11 PM
>
> On Fri, Feb 24, 2012 at 5:05 AM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> > I've been assured that the line performs close to a LAN connection.
>
> don't believe that until you see hard round-trip-time numbers. LAN latencies
> are under a single milisecond (unless badly configured), but even short WANs
> are hard pressed to go under 10msec.

It clearly depends on distance. The hard limit here is the speed of light. So
the best theoretical latency between, say, New York and L.A. is about 30 ms
round-trip. In practice, if you're seeing double that (60 ms), you're doing
well, since the fiber isn't a straight line and each hop adds a little
latency of its own.

-Jeff
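[For anyone who wants to check the figure, the back-of-envelope calculation
goes roughly as follows; the great-circle distance is approximate.]

    \text{RTT}_{\min} \approx \frac{2d}{c} \approx \frac{2 \times 3940\ \text{km}}{299\,792\ \text{km/s}} \approx 26\ \text{ms}

[Light in fiber travels at roughly two thirds of c, so over real fiber the
floor is closer to 40 ms even before routing detours, which is consistent
with the "double it and you're doing well" rule of thumb above.]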
On Fri, Feb 24, 2012 at 03:11:00PM -0500, Javier Guerra Giraldez wrote:
> On Fri, Feb 24, 2012 at 5:05 AM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> > I've been assured that the line performs close to a LAN connection.
>
> don't believe that until you see hard round-trip-time numbers. LAN
> latencies are under a single milisecond (unless badly configured), but
> even short WANs are hard pressed to go under 10msec.
>
> WAN vendors like to say "it's like local LAN", but they omit that
> they're talking about bandwidth, not latency. and for other
> latency-sensitive protocols (SMB file sharing is one of the worst),
> they have specific 'accelerators' (in short, big proxies cacheing most
> of the metadata going each way). if you deploy your own 'weird'
> protocol, you're on your own, and get the whole ugly scene.
>
> but, if you ge your own dark fiber, then it might be just right! (if
> you do good traffic shaping.... a whole dark art on its own)

Why would you need to do traffic shaping on dark fiber? With simple 10G-LR
optics you can do 10G over a pair. I'm currently experimenting and haven't
gotten a working system up yet, but I /think/ for under ten grand in used
DWDM stuff, I can do 30x that. With real money, you can get a giant wad of
100G channels; you can do multiple terabits/sec with modern DWDM gear and
100G optics, but again, we're talking real money. As far as I can tell, once
you pay for the fiber, you can incrementally add bandwidth: 10G, 40G, or 100G
a wave.

I'm experimenting with this now; there is cheap municipal fiber in Santa
Clara, and I have a friend with a bunch of surplus Cisco 15540 units. Sure,
they eat half a rack, but they are cheap and you can get a whole lot of 10G
waves over a single pair with them. Active, too, so your 'client' interface
is just a whole bunch of 10G-LR optics.

The problem with dark fiber (I mean, the problem besides finding what links
are in the ground. Even the municipal fiber places only publish very rough
maps before NDA) is going to be the distance. I mean, as another poster
pointed out, you can't go faster than light. But, if you are going within a
city and your gear is good, you might be close to the 'lan performance' you
are talking about.

Now, if you buy a lit wavelength on fiber lit by someone else, again, unless
they are oversubscribing (and as I said, there's no reason to oversubscribe a
dark fiber run, unless it's a really long dark fiber run, and a 'wave' or
'lambda' usually refers to a DWDM channel, meaning it can't be
oversubscribed), performance should be the same as having your own dark fiber
run.

Of course, when buying 'lit' point-to-point links, I find that it's often
hard to get the sales people to distinguish between a 'wave' (a
non-oversubscribable link) and an MPLS connection (an oversubscribable link),
and everyone oversubscribes when they can, and nobody admits to it, and
further, the cost of a 10G lit connection, in my experience, is pretty close
to the cost of a pair of dark fiber. (Note, I only explored this over short
runs, namely from 55 S. Market to 250 Stockton in San Jose, and from place to
place in Santa Clara. YMMV, etc, etc; I'm a poor negotiator and it's quite
possible that the economics are very different on longer runs, and I haven't
actually gotten anything working or signed any papers yet, so I could be
completely wrong about all of this.)
On 02/24/12 22:02, Jeff Sturm wrote:
>> -----Original Message-----
>> From: xen-users-bounces@lists.xen.org [mailto:xen-users-bounces@lists.xen.org]
>> On Behalf Of Javier Guerra Giraldez
>> Sent: Friday, February 24, 2012 3:11 PM
>>
>> On Fri, Feb 24, 2012 at 5:05 AM, Bart Coninckx <bart.coninckx@telenet.be> wrote:
>>> I've been assured that the line performs close to a LAN connection.
>>
>> don't believe that until you see hard round-trip-time numbers. LAN latencies
>> are under a single milisecond (unless badly configured), but even short WANs
>> are hard pressed to go under 10msec.
>
> It clearly depends on distance. The hard limit here is the speed of light.
> So the best theoretical latency between, say, New York and L.A. is 30ms
> round-trip. In practice, if you're seeing double that (60ms), you're doing
> well, since the fiber isn't a straight line and each hop adds a little
> latency of its own.
>
> -Jeff

This is in Belgium. Belgium is about the size of New York if I'm not
mistaken,

B.
On 02/24/12 22:02, Jeff Sturm wrote:
> It clearly depends on distance. The hard limit here is the speed of light.
> So the best theoretical latency between, say, New York and L.A. is 30ms
> round-trip. In practice, if you're seeing double that (60ms), you're doing
> well, since the fiber isn't a straight line and each hop adds a little
> latency of its own.
>
> -Jeff

mmm, a bit off there - it's way bigger, roughly the same number of
inhabitants. Anyway, we're small.

B.
On Fri, Feb 24, 2012 at 4:42 PM, Luke S. Crawford <lsc@prgmr.com> wrote:
> Why would you need to do traffic shaping on dark fiber? with simple
> 10G-LR optics you can do 10G over a pair.

because he's getting only 100Mbit/s. of course, if he's paying for dark fiber
(unlikely) somebody is skimping on the transceivers. much more likely is that
it's a normal VPN setup, where latency can be anywhere from 3ms if he's lucky
to 50ms if not.

in any case, with just 100Mbit/s, traffic shaping is a must. the question is
who will be doing it, the customer or the provider.

--
Javier
On Fri, Feb 24, 2012 at 05:51:59PM -0500, Javier Guerra Giraldez wrote:
> On Fri, Feb 24, 2012 at 4:42 PM, Luke S. Crawford <lsc@prgmr.com> wrote:
> > Why would you need to do traffic shaping on dark fiber? with simple
> > 10G-LR optics you can do 10G over a pair.
>
> because he's getting only 100Mbit/s. of course, if he's paying for
> dark fiber (unlikely) somebody is skimping on the tranceivers.

Right; someone else was saying that dark fiber would solve the problem except
for the traffic shaping problem. If you only have 100Mbps, you probably have
a lower-tier product, and traffic shaping is going to be important to
performance, as that's just not a lot of bandwidth.

My point was just that if you are lighting your own dark fiber, getting way
more bandwidth than you can send is probably the way to go. If you lease a
pair of fiber and own the equipment at either end, the cost for the dark
fiber is the same regardless of whether you buy equipment that can do
100Mbit/s or equipment that can do 2Tbit/s, so you might as well get a fast
enough link.