Displaying 20 results from an estimated 110 matches similar to: "OCFS2 issue reports, any ideas or patches, Thanks"
2013 Sep 03
3
Is virsh blockcommit supported? Thanks a lot
I tested the virsh blockcommit command, but it failed, with libvirt version 1.1.0 and QEMU version 1.6.0.
Is this feature still being developed? Thanks
root@cvk-31:/vms/images# virsh -v
1.1.0
root@cvk-31:/vms/images# qemu-img -V
qemu-img version 1.6.0, Copyright (c) 2004-2008 Fabrice Bellard
usage: qemu-img command [command options]
root@cvk-31:/vms/images# virsh blockcommit Vmtest
2013 Sep 04
2
Is virsh blockcommit supported? Thanks a lot
Hi,
I have another question: when I run the blockcommit command I get the result "Top image as the active layer is currently unsupported". Is that feature being developed?
root@cvk-31:/vms/images# virsh blockcommit Vmtest /vms/images/Vmtest1;echo $?
error: internal error unable to execute QEMU command 'block-commit': Top image as the active layer is currently unsupported
1
But as I
2013 Sep 03
0
Re: Is virsh blockcommit supported? Thanks a lot
[dropping libvir-list - this is a usage question, not a development
question]
On 09/02/2013 11:29 PM, Guozhonghua wrote:
> I tested the virsh blockcommit command, but it failed, with libvirt version 1.1.0 and QEMU version 1.6.0.
> Is this feature still being developed? Thanks
The feature is supported, but you have to use it correctly. In
particular, your backing chain MUST label the
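The kind of invocation the reply is pointing at looks roughly like the sketch below; the disk target vda, the snapshot file names, and the base image path are assumptions, and the exact options available depend on the libvirt version:

# build a chain base <- snap1 <- snap2 (snap2 becomes the active layer)
virsh snapshot-create-as Vmtest snap1 --disk-only --no-metadata \
    --diskspec vda,file=/vms/images/Vmtest-snap1.qcow2
virsh snapshot-create-as Vmtest snap2 --disk-only --no-metadata \
    --diskspec vda,file=/vms/images/Vmtest-snap2.qcow2

# commit the intermediate image snap1 back into its base; committing the
# active (top) layer itself is what this qemu version rejects
virsh blockcommit Vmtest vda --base /vms/images/Vmtest.img \
    --top /vms/images/Vmtest-snap1.qcow2

# progress can be checked with: virsh blockjob Vmtest vda --info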
2007 Jul 20
2
safe_strcpy errors from winbindd (Samba 3.0.25b)
Hi,
we recently upgraded from Samba 3.0.23 to 3.0.25b. Since then, we've been getting these
error messages from winbindd:
Jul 17 12:00:23 cvk027 winbindd[20772]: [2007/07/17 12:00:23, 0] lib/util_str.c:safe_strcpy_fn(659)
The volume of these messages is very high, and they often appear together with unreadable garbage characters.
Is there any way to correct or suppress these messages?
Our
2006 Jun 07
1
knn - 10 fold cross validation
Hi,
I was trying to find the optimal 'k' for kNN. To do this I was using the following function:
library(class)  # provides knn.cv (leave-one-out cross-validation)

knn.cvk <- function(datmat, cl, k = 2:9) {
  cv.err <- cl.pred <- c()
  for (i in k) {
    newpre <- as.vector(knn.cv(datmat, cl, k = i))
    cl.pred <- cbind(cl.pred, newpre)
    cv.err <- c(cv.err, sum(cl != newpre))
  }
  names(cv.err) <- k
  cv.err  # misclassification count for each candidate k
}
2005 May 02
1
Problems with ipsec roadwarrior
Hello,
I have a problem with the configuration of a roadwarrior IPsec VPN tunnel with Shorewall 2.2.3.
I read the Shorewall Kernel 2.6 IPSEC documentation and followed the instructions up to the point
where the hosts file is modified with the following parameters:
vpn eth0:0.0.0.0/0 ipsec
But I have an entry like
net eth0:0.0.0.0/0
even in the same file:
If I
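Putting the two quoted entries together, the /etc/shorewall/hosts file in question would look roughly like this (a sketch assuming the standard three-column layout of that file):

#ZONE   HOST(S)            OPTIONS
vpn     eth0:0.0.0.0/0     ipsec
net     eth0:0.0.0.0/0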
2013 Sep 04
0
Re: Is virsh blockcommit supported? Thanks a lot
On 09/03/2013 06:56 PM, Guozhonghua wrote:
> Hi,
>
> I have another question: when I run the blockcommit command I get the result "Top image as the active layer is currently unsupported". Is that feature being developed?
> root@cvk-31:/vms/images# virsh blockcommit Vmtest /vms/images/Vmtest1;echo $?
> error: internal error unable to execute QEMU command 'block-commit':
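Committing the active (top) layer was added to qemu/libvirt later; on a sufficiently new stack the invocation would look roughly like this sketch (the disk target vda is an assumption):

# commit the top image into its backing file and pivot the guest back to the base
virsh blockcommit Vmtest vda --active --pivot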
2011 Sep 05
0
Slow performance
Hello again,
We have hit a performance problem today in one of our clusters. The
performance suddenly drops from the normal rate (about
30 Mbytes/s, read/write) to a few Kbytes/s (about 200 Kbytes/s), read
only, for a while, and then, as suddenly as it started, it returns to the normal
read/write performance, cycling randomly. When the "read only" phase occurs
on one node, the other shows only
2009 Mar 06
0
[PATCH 1/1] ocfs2: recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, those will be left
unrecovered.
If the dead node is the last one to die, is holding orphans in other slots,
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in the offline slots.
This patch queues complete_recovery
2009 Mar 06
1
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount (revised)
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, those will be left
unrecovered.
If the dead node is the last one to die, is holding orphans in other slots,
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in the offline slots.
This patch queues complete_recovery
2005 Jul 02
6
Loadbalancing how to ? ? ? ?
I have 2 ADSL lines, ad1 and ad2, and one PC as my firewall (with some daemons on it)
with 3 Ethernet interfaces: eth0 connects to my LAN (192.168.60.0/24) and the other two
connect to ad1 and ad2
                                                     |--- eth1 (10.0.1.2) ------ ad1 (ADSL 1)
My LAN (192.168.60.0/24) --- eth0 (192.168.60.2) --> PC
                                                     |--- eth2 (10.0.2.2) ------ ad2 (ADSL 2)
All computers in the LAN have the default router =
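Shorewall configuration aside, the kernel-level mechanism for balancing two uplinks is an iproute2 multipath default route; a rough sketch using the interfaces from the post, with the two ADSL gateway addresses (10.0.1.1 and 10.0.2.1) assumed:

# spread new connections across both ADSL gateways (gateway addresses assumed)
ip route replace default scope global \
    nexthop via 10.0.1.1 dev eth1 weight 1 \
    nexthop via 10.0.2.1 dev eth2 weight 1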
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its own slot and in the slots of the dead node(s). But
if the dead nodes were holding orphans in offline slots, those will be left
unrecovered.
If the dead node is the last one to die, is holding orphans in other slots,
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in the offline slots.
This patch queues complete_recovery
2009 May 19
2
[PATCH 1/1] OCFS2: timer to queue scan of all orphan slots
On unlink, all nodes check for the dentry in the dcache and, if present, mark
the inode as unlinked. The last node that purges the inode will clean it from the
orphan directory. When there is memory pressure, the dentry may not be around,
and hence the inode is not marked as deleted, which leaves the file in
the orphan directory until the slot is re-used during the next mount.
This patch initiates
2009 Apr 20
2
BUG: soft lockup - CPU#1 stuck for 61s
Hi,
I have a cluster with 5 nodes hosting a web application. All web servers
write log info into a shared access.log file. There is an awstats log
analyzer on the first node. Sometimes this node fails with the
following messages (captured on another server):
Apr 20 17:31:16 um-be-2 [145813.022112] o2net: connection to node
um-fe-1 (num 1) at 192.168.10.10:7777 has been idle for 30.0 seconds,
shutting it
2006 Aug 01
6
Mongrel crash
Hi. A mongrel/rails installation (proxied through Apache) is crashing
for some reason with the following error found in the mongrel.log:
ERROR: meta.c (179): wmf_header_read: this isn't a wmf file
/root/local/radiant/config/../vendor/rails/activerecord/lib/active_record/base.rb:2068:
[BUG] Segmentation fault
ruby 1.8.4 (2005-12-24) [i386-linux]
Fedora 5
mongrel: 0.3.13.3
Apache/2.2.0
2009 Jun 04
3
Patches that add delayed orphan scan timer (rev 3)
Resending after implementing review comments.
2009 Jun 02
3
Patches that add delayed orphan scan timer (rev 2)
Resending after implementing review comments.
2009 Jun 02
3
Patches that add delayed orphan scan timer
Resending after adding another patch to display delayed orphan scan statistics.
2013 Feb 27
2
ocfs2 bug reports, any advice? thanks
Hi,
I set up two nodes, 192.168.20.20 and 192.168.20.21.
The OS is Ubuntu 12.04 with kernel version 3.2:
root@Server21:~# uname -a
Linux Server21 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Server20 rebooted because of a disconnection from the iSCSI SAN, so Server20 recovered resource locks for Server21.
Server20:
Feb 27 09:29:31 Server20 kernel:
2013 Feb 27
2
ocfs2 bug reports, any advice? thanks
Hi,
I set up two nodes, 192.168.20.20 and 192.168.20.21.
The OS is Ubuntu 12.04 with kernel version 3.2:
root@Server21:~# uname -a
Linux Server21 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Server20 rebooted because of a disconnection from the iSCSI SAN, so Server20 recovered resource locks for Server21.
Server20:
Feb 27 09:29:31 Server20 kernel: