Displaying 18 results from an estimated 18 matches similar to: "Windows NLB crashing VM's"
2004 Dec 08
0
Re: Spandsp loading via asterisk app_rxfax.c brokenpipe.
It is probably an mpg123 problem, not a spandsp problem.
Stop Asterisk, run make clean and make install, then start Asterisk again.
Have fun.
"Ariel Batista" <arielb27@hotmail.com> wrote in message
news:<BAY22-DAV14862521E60E81DB568FDCDBB60@phx.gbl>...
I have compiled Spandsp without any problems and got no errors. I have
also applied the patch without any error. I have tried pre4
2013 Jan 19
7
load balancer recommendations
Hello all,
The question is not necessarily CentOS-specific, but there are lots of
bright people on here, and quite possibly the final implementation will
be on CentOS, hence I figured I'd ask it here. Here is the situation.
I need to configure a Linux-based network load balancer (NLB) solution. The
idea is this: let us say I have a public-facing load balancer machine with
a public IP
2017 Jul 05
3
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
Currently the virtio-blk driver does not provide a discard feature flag, so
filesystems built on top of the block device will not send discard
commands. This is fine for an HDD backend, but it hurts performance
for an SSD backend.
Add a feature flag VIRTIO_BLK_F_DISCARD and command VIRTIO_BLK_T_DISCARD
to extend the existing virtio-blk protocol, defining a 16-byte discard descriptor
for each discard
2008 Apr 04
2
simple load balancing/failover for OWA
We are building an Exchange cluster with two front-end Outlook Web
Access servers. We would like to have at least some sort of failover,
and preferably load balancing, for them.
The MS-recommended way is to use NLB, but for various reasons that's not
working with our setup.
We are looking to set up a single Linux server and use something like
LVS to load balance/fail over the
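A minimal sketch of what such an LVS setup could look like, using keepalived to drive IPVS; the virtual IP, real-server addresses, and timeouts below are hypothetical placeholders, not taken from the thread:

```
virtual_server 203.0.113.10 443 {
    delay_loop 10
    lb_algo rr                 # round-robin between the two OWA front ends
    lb_kind NAT
    protocol TCP
    persistence_timeout 1800   # keep a client session on one OWA server

    real_server 192.168.1.11 443 {
        TCP_CHECK { connect_timeout 5 }
    }
    real_server 192.168.1.12 443 {
        TCP_CHECK { connect_timeout 5 }
    }
}
```

The TCP_CHECK health checks give the failover behavior asked for: a front end that stops answering on 443 is dropped from the pool until it recovers.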
2017 Jul 04
0
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
On 05/07/2017 10:44, Changpeng Liu wrote:
> Currently the virtio-blk driver does not provide a discard feature flag, so
> filesystems built on top of the block device will not send discard
> commands. This is fine for an HDD backend, but it hurts performance
> for an SSD backend.
>
> Add a feature flag VIRTIO_BLK_F_DISCARD and command VIRTIO_BLK_T_DISCARD
> to extend
2017 Jul 05
2
[PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini at redhat.com]
> Sent: Tuesday, July 4, 2017 5:24 PM
> To: Liu, Changpeng <changpeng.liu at intel.com>; virtualization at lists.linux-
> foundation.org
> Cc: stefanha at gmail.com; hch at lst.de; mst at redhat.com
> Subject: Re: [PATCH v2] virtio-blk: add DISCARD support to virtio-blk driver
>
>
2014 Feb 09
1
[Bug 900] New: Bridging issue: IP packets with Multicast Ethernet Address
https://bugzilla.netfilter.org/show_bug.cgi?id=900
Summary: Bridging issue: IP packets with Multicast Ethernet
Address
Product: netfilter/iptables
Version: unspecified
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: P5
Component: bridging
AssignedTo:
2005 Nov 29
1
wavelet transform
Hello,
I am thinking about plugging in a Discrete Wavelet Transform as
described in the Vorbis I spec:
1.1.2. Classification
Vorbis I is a forward-adaptive monolithic transform CODEC based on the
Modified Discrete Cosine Transform. The codec is structured to allow
addition of a hybrid wavelet filterbank in Vorbis II to offer better
transient response and reproduction using a transform better suited to
2007 Oct 29
12
wxRuby Socket Demo finally available
Hello All,
I've just sent off the source code for the wxRuby Socket demo to Sean and
Alex to do some final testing on Linux and Mac OS X, to ensure
compatibility of my design with those two operating systems. Once
confirmed, the demo will be posted to the SVN. What is basically
entailed within these two demos is the implementation of a server with
a GUI front end, and the
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all,
These two patches add virtio-nvme to the kernel and QEMU,
basically modified from the virtio-blk and NVMe code.
As the title says, this is a request for your comments.
Play it in Qemu with:
-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
The goal is to have a full NVMe stack from VM guest(virtio-nvme)
to host(vhost_nvme) to LIO NVMe-over-fabrics
2011 Sep 23
4
Gotchas around upgrading from an old version (0.25.4) to a newer version.
I've been tasked at my workplace with upgrading our Puppet installation to
a more modern version. Currently all the environments run a RubyGem
version of Puppet 0.25.4 on mostly RHEL/CentOS 5.3 - 5.5 (there are,
like most environments, a few laggards running RHEL4 or new machines
running CentOS 6).
The plan is to upgrade these to the most stable version of Puppet,
which at the time of
2014 Dec 21
3
PJSIP ports, multiple IP addresses and wrong owner
Dear list,
I am currently trying to send faxes via T.38 using PJSIP (newest version 2.3) with Asterisk 13.0.2. After configuring PJSIP, I have seen several things whose cause I would like to understand.
1) Ports and IP addresses which PJSIP bind to
I have configured one transport like that:
[tr_wZCMk5MvC2ATNzAr]
type = transport
protocol = udp
bind = 192.168.20.48
Nevertheless, PJSIP
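For reference, a PJSIP transport can also be pinned to an explicit port by including it in the bind line. A minimal sketch extending the example above (the port shown is an assumption; 5060 is the usual SIP default when no port is given):

```
[tr_wZCMk5MvC2ATNzAr]
type = transport
protocol = udp
bind = 192.168.20.48:5060   ; explicit address and port for this transport
```

Note that this only constrains the SIP signaling socket; media (RTP/UDPTL for T.38) ports are allocated separately.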
2014 Dec 22
0
PJSIP ports, multiple IP addresses and wrong owner
On Sun, Dec 21, 2014 at 4:54 AM, Recursive <lists at binarus.de> wrote:
> Dear list,
>
> I am currently trying to send faxes via T.38 using PJSIP (newest version 2.3) with Asterisk 13.0.2. After configuring PJSIP, I have seen several things whose cause I would like to understand.
>
> 1) Ports and IP addresses which PJSIP bind to
>
> I have configured one transport
2004 Oct 13
5
Looking for large-ish deployment advice
Colleagues-
I am working on the design of a fairly large samba deployment, and I am
looking for feedback on some of my design ideas.
I have 10 buildings spread out in and around a city, all interconnected
via 1.5Mb leased lines. There are samba servers in each building. I have
some users who move from building to building. We are primarily using
Windows 98 desktops, with a few 2K and XPP
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.
Signed-off-by: Scott Seago <sseago at redhat.com>
---
AUTHORS | 17 ++++++
README | 10 +++
conf/ovirt-agent | 12 ++++
conf/ovirt-db-omatic | 12 ++++
conf/ovirt-host-browser | 12 ++++