2006 Mar 02
2
getting rid of lmhashes?
Hi,
Is there a way to disable the creation of the (insecure) LM hash in
the passdb backend of a Samba 3 PDC?
Mark
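A minimal smb.conf sketch for this, assuming Samba 3.x (if I recall correctly, disabling LanMan auth also stops Samba from storing the LM hash the next time a password is set or changed; existing hashes remain until then):

  [global]
      # assumption: with LanMan auth off, no LM hash is written to the
      # passdb on subsequent password changes
      lanman auth = no
      # optionally also refuse to send LM responses as a client
      client lanman auth = no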
2010 May 14
2
Samba4-alpha11
Just thought I'd say that Samba 4 is working quite nicely: a Samba 4 DC
on Ubuntu Server, with a W2k3R2 and a W2k8R2 server added as additional
DCs. It took a little bit of playing around to get it done, but not much.
The only thing I've noticed so far (still in early lab stage) is a GC
issue.
Now if I can upgrade a Samba3-LDAP domain....
Cheers,
TMS III
2010 Jun 11
2
Samba 4 -- Something's decidedly broken
Hmmm... not quite sure where to go to fix this up.
Samba 4 PDC, one W2K3R2 and one W2K8R2 additional DC. samba.log is
perpetually spewing:
[Fri Jun 11 14:47:42 2010 PDT, 0
librpc/rpc/dcerpc_util.c:619:dcerpc_pipe_auth_recv()]
Failed to bind to uuid e3514235-4b06-11d1-ab04-00c04fc2dcd2 -
NT_STATUS_INVALID_PARAMETER
[Fri Jun 11 14:47:42 2010 PDT, 0
2012 Mar 07
1
[HELP!] GFS2 in Xen 4.1.2 does not work!
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello,
I have a problem that is confusing me, and I hope someone can help.
I followed what I think is a "best practice": using dedicated ZFS filesystems for my virtual machines.
Commands (for completion):
  zfs create rpool/vms
  zfs create rpool/vms/vm1
  zfs create -V 10G rpool/vms/vm1/vm1-dsk
This command creates the file system /rpool/vms/vm1/vm1-dsk and the
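Worth noting: zfs create -V creates a zvol, which shows up as a block device node rather than a mounted filesystem. A quick way to check (the device paths vary by platform, so both lines below are assumptions about the layout):

  ls -l /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk   # Solaris-derived systems
  ls -l /dev/zvol/rpool/vms/vm1/vm1-dsk       # Linux ZFS ports

That block-device path, not the dataset name, is presumably what the virtual machine definition should point at.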
2007 Jul 01
1
'qemu-dm' process just died
One of my W2K3R2 domains just stopped working, and the 'qemu-dm' process
is in the 'defunct' state, which means it has seg faulted or
something...
Of course this happens after it has been running perfectly for a week
and we think all of these problems are behind us!
There wasn't time to attach the debugger to the process unfortunately...
is there any
2007 Jul 22
11
Many same managed domain
Hi,
When I ran the xm new command repeatedly without the uuid parameter,
I ended up with many identical managed domains, as follows.
# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 941 2 r----- 51.9
# xm new /xen/vm1.conf
Using config file "/xen/vm1.conf".
# xm new /xen/vm1.conf
Using config file
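A hedged cleanup sketch (how xm delete behaves when several managed domains share one name is exactly what this thread is asking about, so treat this as a starting point only):

  xm delete vm1            # remove a managed (inactive) domain entry
  xm list                  # check whether duplicates remain; repeat if so
  xm new /xen/vm1.conf     # finally, re-register the config exactly once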
2011 Feb 16
4
[PATCH] xen: use freeze/restore/thaw PM events for suspend/resume/chkpt
Use PM_FREEZE, PM_THAW and PM_RESTORE power events for
suspend/resume/checkpoint functionality, instead of PM_SUSPEND
and PM_RESUME. Using these PM events fixes a Xen guest hang
when taking checkpoints. When a suspend event is cancelled
(while taking checkpoints once or continuously), we use PM_THAW
instead of PM_RESUME; PM_RESTORE is used when the suspend is not
cancelled. See
2007 Feb 16
3
[PATCH][XEND] Don't call destroy() on exception in start()
destroy() is being called on exception in both start() and create(). It
needs to be called only in create().
Signed-off-by: Aravindh Puthiyaparambil
<aravindh.puthiyaparambil@unisys.com>
2006 Apr 26
8
Xen 3.0 on FC4 - guest domains can't ping host domain
I installed Xen 3.0 on a Fedora Core 4 (2.6.12-1.1454_FC4xen0) machine.
It is currently running one guest domain:
[root@]# xm list
Name Id Mem(MB) CPU VCPU(s) State Time(s)
Domain-0 0 128 0 1 r---- 49.1
fc4-vm1 1 63 0 1 -b--- 18.5
Following is the network configuration for Domain-0:
eth0 Link
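Some hypothetical first checks for dom0-to-domU connectivity under Xen 3.0's default bridged networking (the bridge name xenbr0 and the vif numbering are assumptions based on that release's defaults):

  brctl show          # is fc4-vm1's vif enslaved to xenbr0?
  ifconfig xenbr0     # did the network-bridge script bring the bridge up?
  ifconfig vif1.0     # backend interface for domain id 1, eth0 in the guest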
2017 Jun 30
2
Re: recovering from deleted snapshot
On Fri, Jun 30, 2017 at 09:23:29 -0400, Doug Hughes wrote:
> On Jun 30, 2017 6:22 AM, "Peter Krempa" <pkrempa@redhat.com> wrote:
> > On Fri, Jun 30, 2017 at 12:05:47 +0200, Peter Krempa wrote:
> > > On Thu, Jun 22, 2017 at 11:02:41 -0400, Doug Hughes wrote:
[...]
> file or directory
> > $ virsh blockcommit --active --pivot fedora23 vda
> >
>
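For context, a sketch of the external-snapshot round trip this thread is debugging (the domain name fedora23 and disk target vda come from the quoted commands; the snapshot name snap1 is made up):

  virsh snapshot-create-as fedora23 snap1 --disk-only --atomic
  # the guest now writes to a qcow2 overlay; the base image stays read-only
  virsh blockcommit fedora23 vda --active --pivot
  # merges the overlay back into the base and pivots the domain onto it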
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
of both host and guest, but it was too aggressive in some cases, since
any delay or blocking of a single packet could delay or block the entire
guest transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+
2014 Nov 12
3
Put virbr0 in promiscuous
Hi ,
I have two virtual machines, VM1 and VM2, and I have added my VMs'
interfaces to the 'default' network.
Use case :-
I want to monitor all traffic on virbr0('default' network).
Steps followed :-
1. Add VM1 eth0 to virbr0
2. Add VM2 eth1 to virbr0
3. brctl setageing virbr0 0 (to put the bridge into flooding, i.e. promiscuous-like, mode)
Now I am running tcpdump on eth1 of VM2 and trying to ping
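A sketch of the intended setup, assuming the stock libvirt 'default' network (setting the ageing time to 0 makes the bridge flood every frame out of all ports, so a guest can observe its neighbours' traffic):

  brctl setageing virbr0 0     # on the host: bridge now forwards like a hub
  tcpdump -n -e -i eth1        # inside VM2: should now see VM1's packets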
2014 Mar 13
3
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>> > We used to stop the handling of tx when the number of pending DMAs
>> > exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> > of both host and guest. But it was too aggressive in some cases, since
>> > any delay or blocking
2015 Apr 27
5
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>
>> The motivation for making VM-to-VM fast is that while software
>> switches on the host are efficient today (thanks to vhost-user), there
>> is no efficient solution if the software switch is a VM.
>
>
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs
exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation
of both host and guest, but it was too aggressive in some cases, since
any delay or blocking of a single packet could delay or block the entire
guest transmission. Consider the following setup:
+-----+ +-----+
| VM1 | | VM2 |
+--+--+