Displaying 8 results from an estimated 8 matches for "xenblkd".
2005 Jul 15 (1 reply): xend start fails
............
root@Eureka:~ #
fine, no errors there.
root@Eureka:~ # cat /var/log/xend.log
[2005-07-15 10:52:47 xend] INFO (SrvDaemon:610) Xend Daemon started
Looks promising. However, following this:
root@Eureka:~ # ps auxw|grep xen
root 754 0.0 0.0 0 0 ? S Jul14 0:00 [xenblkd]
No xend. xm fails with a "connection refused" error. Running
"strace /usr/sbin/xend start" yields the following log:
www.ultracode.com/xend.start.gz
Solutions, thoughts, ideas, suggestions for further debugging?
Thanks,
Eric
eric@first-circle.net
2005 Apr 16 (0 replies): ksoftirqd time
...free, 23152k buffers
Swap: 3148732k total, 688k used, 3148044k free, 93588k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3 root 34 19 0 0 0 S 0.0 0.0 150:58.34 ksoftirqd/0
215 root 15 0 0 0 0 S 1.3 0.0 7:35.51 xenblkd
1241 root 16 0 3948 560 468 S 0.0 0.2 3:45.44 nifd
1960 root 16 0 15680 9136 2564 S 0.0 2.6 3:42.15 python
1207 root 16 0 1788 476 404 S 0.0 0.1 2:29.08 irqbalance
6 root 10 -5 0 0 0 S 0.0 0.0 2:04.54 events/0
89 root 15 0...
2005 Jun 18 (6 replies): how much ram for xen0 - only ssh
... 0.0 0.0 0:06.93 kblockd/0
72 root 15 -10 0 0 0 S 0.0 0.0 0:00.00 aio/0
71 root 15 0 0 0 0 S 0.0 0.0 0:32.39 kswapd0
655 root 25 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
698 root 15 0 0 0 0 S 0.0 0.0 0:21.67 xenblkd
714 root 15 0 0 0 0 S 0.0 0.0 0:12.01 kjournald
1035 root 16 0 1560 588 492 S 0.0 0.5 0:04.13 syslogd
1038 root 16 0 2068 1088 448 S 0.0 0.9 0:00.13 klogd
1047 root 15 0 3400 1488 1264 S 0.0 1.2 0:07.53 sshd
1052 daemon 16 ...
2011 Apr 04 (0 replies): [PATCH] linux-2.6.18/backends: use xenbus_be.ko interfaces instead of open-coding them
...>refcnt);
-
- tap_blkif_unmap(blkif);
-}
-
void tap_blkif_kmem_cache_free(blkif_t *blkif)
{
if (!atomic_dec_and_test(&blkif->refcnt))
--- a/drivers/xen/blktap/xenbus.c
+++ b/drivers/xen/blktap/xenbus.c
@@ -187,7 +187,7 @@ static int blktap_remove(struct xenbus_d
if (be->blkif->xenblkd)
kthread_stop(be->blkif->xenblkd);
signal_tapdisk(be->blkif->dev_num);
- tap_blkif_free(be->blkif);
+ tap_blkif_free(be->blkif, dev);
tap_blkif_kmem_cache_free(be->blkif);
be->blkif = NULL;
}
@@ -342,7 +342,7 @@ static void blkif_disconnect(blkif_t *bl
}...
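For context: the xenblkd pointer tested in the blktap_remove() hunk above is the per-device kernel thread that services block requests (it is what shows up as [xenblkd] in the ps and top listings earlier). A minimal sketch of that lifecycle, assuming the standard kthread API; the loop body and the thread function name here are illustrative, not the driver's exact code:

#include <linux/kthread.h>
#include <linux/delay.h>

/* Illustrative service loop: kthread_stop(), as called from
 * blktap_remove() above, makes kthread_should_stop() return true
 * and then waits for this function to return. */
static int blkif_service_thread(void *arg)
{
	blkif_t *blkif = arg;

	while (!kthread_should_stop()) {
		/* dequeue and submit frontend block requests ... */
		msleep(10);
	}
	return 0;
}

/* Started when the frontend connects; "xenblkd" is the name ps shows. */
blkif->xenblkd = kthread_run(blkif_service_thread, blkif, "xenblkd");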
2012 Aug 16 (0 replies): [RFC v1 5/5] VBD: enlarge max segment per request in blkfront
...-backend", blkif);
+ if (err < 0) {
+ xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
+ blkif->blk_rings_v2.common.sring = NULL;
+ return err;
+ }
+ blkif->irq = err;
+
+ return 0;
+}
+
static void xen_blkif_disconnect(struct xen_blkif *blkif)
{
if (blkif->xenblkd) {
@@ -192,10 +269,18 @@ static void xen_blkif_disconnect(struct xen_blkif *blkif)
blkif->irq = 0;
}
- if (blkif->blk_rings.common.sring) {
+ if (blkif->blk_backring_type == BACKRING_TYPE_1 &&
+ blkif->blk_rings.common.sring) {
xenbus_unmap_ring_vfree(blkif->be...
2012 Sep 19 (27 replies): [PATCH] Persistent grant maps for xen blk drivers
...truct xen_vbd {
struct backend_info;
+
+struct pers_gnt {
+ struct page *page;
+ grant_ref_t gnt;
+ uint32_t handle;
+ uint64_t dev_bus_addr;
+};
+
struct xen_blkif {
/* Unique identifier for this interface. */
domid_t domid;
@@ -190,6 +200,12 @@ struct xen_blkif {
struct task_struct *xenblkd;
unsigned int waiting_reqs;
+ /* frontend feature information */
+ u8 can_grant_persist:1;
+ struct pers_gnt *pers_gnts[BLKIF_MAX_PERS_REQUESTS_PER_DEV *
+ BLKIF_MAX_SEGMENTS_PER_REQUEST];
+ unsigned int pers_gnt_c;
+
/* statistics */
unsigned long st_print;
int st_rd_req;
d...
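The pers_gnts[] array and pers_gnt_c count added above amount to a small per-device cache of grants that stay mapped across requests. A hedged sketch of the lookup a backend would do for each segment; the helper name is illustrative, not necessarily the patch's own:

/* Scan the persistent-grant cache for a grant ref that is already
 * mapped for this interface; NULL means it still has to be mapped
 * (and can then be added to pers_gnts[]). */
static struct pers_gnt *find_pers_gnt(struct xen_blkif *blkif,
				      grant_ref_t gref)
{
	unsigned int i;

	for (i = 0; i < blkif->pers_gnt_c; i++)
		if (blkif->pers_gnts[i]->gnt == gref)
			return blkif->pers_gnts[i];

	return NULL;
}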
2010 Sep 15 (15 replies): xenpaging fixes for kernel and hypervisor
Patrick,
the following patches fix xenpaging for me.
Grant-table handling is incomplete. If a page is gone, GNTST_eagain
should be returned to the caller to indicate that the hypercall has to
be retried after a while, until the page is available again.
Please review.
Olaf
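A minimal sketch of the retry contract described above, assuming the caller drives GNTTABOP_map_grant_ref itself; the back-off interval is illustrative:

#include <linux/delay.h>
#include <xen/grant_table.h>

/* Keep reissuing the map until the paged-out page is resident again;
 * GNTST_eagain means "retry later", not a hard failure. */
static void map_grant_with_retry(struct gnttab_map_grant_ref *op)
{
	do {
		if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, op, 1))
			BUG();		/* the hypercall itself failed */
		if (op->status == GNTST_eagain)
			msleep(10);	/* give xenpaging time to page in */
	} while (op->status == GNTST_eagain);
}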
2005 Mar 08 (29 replies): Interrupt levels
I'm tracking performance on the machine I installed yesterday.
mutt running in one Xen instance, talking IMAP to another
instance, which reaches the maildir over NFS in yet another, seems a
little laggy when moving up and down the message index list.
Network latency seems low, < 30 ms on average.
So I was watching vmstat.
On the mutt instance it seems reasonable:
[nic@shell:~]