Displaying 7 results from an estimated 7 matches for "maxwait".
2010 Apr 15
1
STP default behavior for bridged (off) and NAT (on) networking in libvirt
...and the official
libvirt wiki does it exactly this way here [1]. NAT networking is
already configured by libvirt, which creates a bridge called libvirt0, while
bridged networking has to be configured manually by the user.
The libvirt0 bridge for NAT networking is configured by default like this:
STP=on
MAXWAIT=0
The suggested br0 bridge for bridged networking, on the other hand, is configured like this:
STP=off
MAXWAIT=5
Please, can you explain to me why STP is on for NAT but should be off for
bridged networking? It seems to me much easier to create loops when
using bridged networking than NAT. Moreover, reading this old...
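
For reference, the two setups being compared look roughly like this. The
NAT side comes from libvirt's default network definition (in current
libvirt the bridge is usually named virbr0 rather than libvirt0); the
manual br0 stanza is only a sketch using Debian /etc/network/interfaces
syntax, since the thread does not show the exact file:

  <network>
    <name>default</name>
    <forward mode='nat'/>
    <bridge name='virbr0' stp='on' delay='0'/>
  </network>

  # /etc/network/interfaces (assumed syntax and interface name)
  auto br0
  iface br0 inet dhcp
      bridge_ports eth0
      bridge_stp off
      bridge_maxwait 5

With delay='0' the NAT bridge forwards immediately even though STP is on,
and bridge_maxwait only limits how long the ifup scripts wait for the
ports to reach forwarding state; it is not an STP timer.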
2010 Nov 16
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
...414786] Bridge firewalling registered
[ 54.418823] initcall br_init+0x0/0xae [bridge] returned 0 after 3980 usecs
[ 54.445254] device eth1 entered promiscuous mode
[ 54.503370] bnx2: eth1: using MSIX
[ 54.506813] ADDRCONF(NETDEV_UP): eth1: link is not ready
Waiting for xenbr1 to get ready (MAXWAIT is 2 seconds).
WARNING: Could not open /proc/net/vlan/config. Maybe you need to load the 8021q module, or maybe you are not using PROCFS??
[ 56.612284] calling vlan_proto_init+0x0/0xa1 [8021q] @ 1115
[ 56.617849] 802.1Q VLAN Support v1.8 Ben Greear <greearb at candelatech.com>
[ 56.6...
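
The VLAN warning above usually just means the 8021q module was not yet
loaded when the scripts looked for /proc/net/vlan/config; a minimal check,
with eth1 and VLAN 100 only as example names:

  # modprobe 8021q
  # cat /proc/net/vlan/config
  # ip link add link eth1 name eth1.100 type vlan id 100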
2009 Apr 27
3
[Bridge] Ubuntu: network bridging between wireless and wired connection fails
Hi everybody,
First of all, let me say that I searched a lot on the internet. I
spent several hours sitting in front of my notebook, but I can't get
my network bridge configured in Ubuntu. I'm really desperate, so I hope
somebody can point out to me what I'm doing wrong.
Hardware:
-----------------------------------------------------------
$ lspci | grep controller
06:05.0 Network controller: Intel
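
A plain bridge between a wired and a wireless interface tends to fail
because most wireless drivers refuse to add a client (station) interface
to a bridge unless it runs in 4-address mode, which the access point must
also support; a rough sketch, with eth0/wlan0 as assumed names:

  # iw dev wlan0 set 4addr on   # many drivers require this before the interface may join a bridge
  # brctl addbr br0
  # brctl addif br0 eth0
  # brctl addif br0 wlan0
  # dhclient br0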
2005 Mar 08
29
Interrupt levels
I'm tracking performance on the machine I installed yesterday.
mutt running on one Xen instance, accessing another instance via imap,
which in turn accesses the maildir via nfs on yet another instance, seems
a little laggy when moving up and down the message index list.
Network latency seems low, < 30ms on average.
So I was tracking vmstat.
On the mutt instance it seems reasonable:
[nic@shell:~]
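
The vmstat output itself is cut off in this result; the kind of sampling
described would look roughly like this (the host name is a placeholder):

  $ vmstat 1 10            # one line per second; watch the 'wa' and 'cs' columns for I/O wait and context-switch churn
  $ ping -c 20 imap-host   # placeholder host, to check whether the lag tracks network latency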
2010 Aug 21
24
Freeze with 2.6.32.19 and xen-4.0.1rc5
...DRCONF(NETDEV_CHANGE): eth0: link becomes ready
Added VLAN with VID == 100 to IF -:eth0:-
[ 14.352613] device eth0.100 entered promiscuous mode
[ 14.357464] device eth0 entered promiscuous mode
[ 14.368491] xenbr100: port 1(eth0.100) entering learning state
Waiting for xenbr100 to get ready (MAXWAIT is 20 seconds).
[ 23.369074] xenbr100: port 1(eth0.100) entering forwarding state
if-up.d/mountnfs[xenbr100]: waiting for interface eth0.100 before doing
NFS mounts (warning).
if-up.d/mountnfs[xenbr100]: waiting for interface eth0.200 before doing
NFS mounts (warning).
if-up.d/mountnfs[xenbr100]:...
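
The nine-second gap between "entering learning state" and "entering
forwarding state" is the bridge forward delay at work (the port sits in
listening/learning before it forwards), which is why mountnfs ends up
waiting; if the topology cannot form a loop, one workaround is to shorten
or disable it on that bridge (xenbr100 as in the log):

  # brctl setfd xenbr100 0    # forward delay 0: ports go straight to forwarding
  # brctl stp xenbr100 off    # or turn STP off entirely for this bridge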
2010 May 28
3
Problems with PCI pass-through
Hello!
I'm having problems getting PCI pass-through to work.
This is on a AMD64 system, paravirtualized with xen-hypervisor-4.0-amd64
4.0.0-1~experimental.1, dom0: linux-image-2.6.32-5-xen-amd64 2.6.32-12.
From IRC, earlier today:
<tschwinge> waldi: Aren't the Debian xen domU-capable kernels supposed to
contain the PCI frontend (needed for PCI pass-through)? I'm getting:
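
For context, PCI pass-through with a pvops dom0 kernel of that era needs
the device hidden from dom0 by xen-pciback and then listed in the guest
configuration; a rough sketch, where 0000:01:00.0 is only a placeholder
address:

  # dom0 boot parameter (or module option) to hide the device:
  xen-pciback.hide=(0000:01:00.0)

  # guest configuration file:
  pci = [ '0000:01:00.0' ]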