Displaying 16 results from an estimated 16 matches for "nlb".
2012 Nov 29
0
Windows NLB crashing VM's
Hi All,
We have a somewhat serious issue around NLB on Windows 2012 and Xen.
First, let me describe our environment and then I'll let you know what's
wrong.
2 X Debian-squeeze boxes running the latest provided AMD64 Xen kernel and
about 100GB of RAM.
These boxes are connected via InfiniBand, and DRBD is running over
this (IPoIB).
Ea...
2013 Jan 19
7
load balancer recommendations
Hello all,
The question is not necessarily CentOS-specific - but there are lots of
bright people on here, and - quite possibly - the final implementation will
be on CentOS, hence I figured I'd ask it here. Here is the situation.
I need to configure a Linux-based network load balancer (NLB) solution. The
idea is this. Let us say I have a public facing load balancer machine with
a public IP of, say, 50.50.50.50. It is to receive the traffic (let's say,
HTTP traffic) and then route it to two private HTTP servers, let's say,
192.168.10.10 and 192.168.10.11. It has to have persi...
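A minimal LVS/NAT sketch of the setup described in the question (the IPs are the ones given above; the port, round-robin scheduler, and 5-minute persistence timeout are assumptions for illustration):

```shell
# On the 50.50.50.50 director: enable forwarding, then define the
# virtual HTTP service with round-robin scheduling and source-IP
# persistence (-p keeps a client pinned to the same real server).
echo 1 > /proc/sys/net/ipv4/ip_forward
ipvsadm -A -t 50.50.50.50:80 -s rr -p 300

# Two real servers behind it, reached via NAT/masquerading (-m).
ipvsadm -a -t 50.50.50.50:80 -r 192.168.10.10:80 -m
ipvsadm -a -t 50.50.50.50:80 -r 192.168.10.11:80 -m
```

With NAT mode the real servers must use the director as their default gateway so return traffic passes back through it.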
2017 Jul 05
3
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
...o_blk_discard *range;
+ struct bio *bio;
+
+ if (block_size < 512 || !block_size)
+ return -1;
+
+ range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
+ if (!range)
+ return -1;
+
+ __rq_for_each_bio(bio, req) {
+ u64 slba = (bio->bi_iter.bi_sector << 9) / block_size;
+ u32 nlb = bio->bi_iter.bi_size / block_size;
+
+ range[n].reserved = cpu_to_le32(0);
+ range[n].nlba = cpu_to_le32(nlb);
+ range[n].slba = cpu_to_le64(slba);
+ n++;
+ }
+
+ if (WARN_ON_ONCE(n != segments)) {
+ kfree(range);
+ return -1;
+ }
+
+ req->special_vec.bv_page = virt_to_page(range);
+...
2008 Apr 04
2
simple load balancing/failover for OWA
We are building an exchange cluster with two front end Outlook Web
Access servers. We would like to at least have some sort of failover,
and preferably load balancing for them.
The MS recommended way is to use NLB, but for various reasons that's not
working with our set up.
We are looking to set up a single linux server and use something like
LVS to load balance/fail over the connections.
Looking at LVS, it seems it hasn't been updated in a while. Is it
stable? Is it still the preferred...
2017 Jul 04
0
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
...if (block_size < 512 || !block_size)
> + return -1;
> +
> + range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
> + if (!range)
> + return -1;
> +
> + __rq_for_each_bio(bio, req) {
> + u64 slba = (bio->bi_iter.bi_sector << 9) / block_size;
> + u32 nlb = bio->bi_iter.bi_size / block_size;
> +
> + range[n].reserved = cpu_to_le32(0);
> + range[n].nlba = cpu_to_le32(nlb);
> + range[n].slba = cpu_to_le64(slba);
> + n++;
> + }
> +
> + if (WARN_ON_ONCE(n != segments)) {
> + kfree(range);
> + return -1;
> + }
&g...
2017 Jul 05
2
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
...> + return -1;
> > +
> > + range = kmalloc_array(segments, sizeof(*range), GFP_ATOMIC);
> > + if (!range)
> > + return -1;
> > +
> > + __rq_for_each_bio(bio, req) {
> > + u64 slba = (bio->bi_iter.bi_sector << 9) / block_size;
> > + u32 nlb = bio->bi_iter.bi_size / block_size;
> > +
> > + range[n].reserved = cpu_to_le32(0);
> > + range[n].nlba = cpu_to_le32(nlb);
> > + range[n].slba = cpu_to_le64(slba);
> > + n++;
> > + }
> > +
> > + if (WARN_ON_ONCE(n != segments)) {
> > +...
2005 Nov 29
1
wavelet transform
...Department of Mathematics and Computer Science
:: ul.Umultowska 87, room: B4-5
:: 61-614 Poznań
:: email: dominikz@amu.edu.pl, dominik.zalewski@gmail.com
:: mobile: +48 692484801, phone: +48 618295333
:: gg: 2662959
:: www: http://www.staff.amu.edu.pl/~dominikz,
http://tri.wmid.amu.edu.pl, http://nlb.amu.edu.pl
2014 Feb 09
1
[Bug 900] New: Bridging issue: IP packets with Multicast Ethernet Address
...Severity: enhancement
Priority: P5
Component: bridging
AssignedTo: netfilter-buglog at lists.netfilter.org
ReportedBy: sophal.lee at live.com
Estimated Hours: 0.0
Non-IP multicast/broadcast Ethernet does get flooded to all bridge ports as
expected (e.g. ARP/NLB/STP all get through).
IP packets with a multicast Ethernet address get dropped instead of being
broadcast to all bridge ports. I've managed to work around the problem by
forwarding IP to the broadcast address of ff:ff:ff:ff:ff:ff.
However, I think the bridging should determine whether IP packets i...
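The workaround described (rewriting the multicast destination MAC to broadcast so the bridge floods the frame) might look like this ebtables rule; the 01:00:5e prefix/mask is the standard IPv4 multicast MAC range, and the table/chain choice is an assumption:

```shell
# Rewrite IPv4-multicast destination MACs (01:00:5e:00:00:00 through
# 01:00:5e:7f:ff:ff) to broadcast so the bridge floods the frame to
# every port instead of dropping it.
ebtables -t nat -A PREROUTING -p IPv4 \
    -d 01:00:5e:00:00:00/ff:ff:ff:80:00:00 \
    -j dnat --to-destination ff:ff:ff:ff:ff:ff
```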
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all,
These 2 patches add virtio-nvme to the kernel and QEMU,
basically modified from the virtio-blk and NVMe code.
As the title says, this is a request for your comments.
Play it in Qemu with:
-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
The goal is to have a full NVMe stack from VM guest(virtio-nvme)
to host(vhost_nvme) to LIO NVMe-over-fabrics
2014 Dec 21
3
PJSIP ports, multiple IP addresses and wrong owner
...e Source Destination Protocol Length Info
9225 7.503015 192.168.20.48 xx.xxx.xx.xxx SIP/SDP 886 Request: INVITE sip:004982349663847 at fpbx.de |
Frame 9225: 886 bytes on wire (7088 bits), 886 bytes captured (7088 bits)
Ethernet II, Src: MS-NLB-PhysServer-01_01:01:05:01 (02:01:01:01:05:01), Dst: D-Link_03:a4:18 (00:1b:11:03:a4:18)
Internet Protocol Version 4, Src: 192.168.20.48 (192.168.20.48), Dst: xx.xxx.xx.xxx (xx.xxx.xx.xxx)
User Datagram Protocol, Src Port: 5060 (5060), Dst Port: 5060 (5060)
Session Initiation Protocol (INVITE)
R...
2014 Dec 22
0
PJSIP ports, multiple IP addresses and wrong owner
...Destination Protocol Length Info
> 9225 7.503015 192.168.20.48 xx.xxx.xx.xxx SIP/SDP 886 Request: INVITE sip:004982349663847 at fpbx.de |
>
> Frame 9225: 886 bytes on wire (7088 bits), 886 bytes captured (7088 bits)
> Ethernet II, Src: MS-NLB-PhysServer-01_01:01:05:01 (02:01:01:01:05:01), Dst: D-Link_03:a4:18 (00:1b:11:03:a4:18)
> Internet Protocol Version 4, Src: 192.168.20.48 (192.168.20.48), Dst: xx.xxx.xx.xxx (xx.xxx.xx.xxx)
> User Datagram Protocol, Src Port: 5060 (5060), Dst Port: 5060 (5060)
> Session Initiation Protocol...
2004 Oct 13
5
Looking for large-ish deployment advice
Colleagues-
I am working on the design of a fairly large samba deployment, and I am
looking for feedback on some of my design ideas.
I have 10 buildings spread out in and around a city, all interconnected
via 1.5Mb leased lines. There are samba servers in each building. I have
some users that move from building to building. We are using primarily
windows 98 desktops, with a few 2K and XPP
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.