Displaying 20 results from an estimated 62 matches for "nr_segments".
2006 Aug 24
1
block ring interface: nr_segments = 0 results in BLKIF_RSP_ERROR
I am currently developing a blkfront.c for a custom OS on top of Xen 3.0.2-2. Typical I/O is working; however, I ran into an error while testing a corner case.
On standard I/O, where { 1 <= nr_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST }, blkif_int()'s bret->status returns BLKIF_RSP_OKAY.
Yet when { nr_segments == 0 }, blkif_int()'s bret->status is non-zero. (Yes, I realize this is a zero-length I/O call.)
I checked the documentation and section "8.2.2 Bl...
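The corner case above boils down to a simple invariant a frontend can enforce before queuing a request. A minimal, self-contained sketch of that guard (not the poster's blkfront.c; the function name and the standalone test harness are assumptions, while the segment limit of 11 matches the classic blkif headers):

#include <stdio.h>

/* Classic blkif interface limit (11 in the in-tree headers). */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

/* blkback answers zero-segment requests with BLKIF_RSP_ERROR, so the
 * frontend should validate nr_segments before putting a request on the
 * ring and complete zero-length I/O locally instead. */
static int blkif_nr_segments_valid(unsigned int nr_segments)
{
        return nr_segments >= 1 &&
               nr_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST;
}

int main(void)
{
        printf("0 segments valid? %d\n", blkif_nr_segments_valid(0)); /* 0 */
        printf("1 segment  valid? %d\n", blkif_nr_segments_valid(1)); /* 1 */
        return 0;
}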
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list,
This is V4 of the trim support for xen-blkfront/blkback.
BLKIF_OP_TRIM has been renamed to BLKIF_OP_DISCARD, all "trim"
wording has been dropped from the patches in favour of "discard",
and the blkif_x86_{32|64}_request helpers have been updated so that
non-native protocols keep working.
The patches have been tested with both an SSD and a raw file;
with the SSD we will
2008 Jul 10
2
Minor synchronisation quibble in scsifront
I've been having a look through scsifront again, and I saw this bit:
ring_req->timeout_per_command = (sc->timeout_per_command / HZ);
ring_req->nr_segments = 0;
spin_unlock_irq(host->host_lock);
scsifront_do_request(info);
wait_event_interruptible(info->shadow[ring_req->rqid].wq_reset,
info->shadow[ring_req->rqid].wait_reset);
in scsifront_dev_reset_handler(). Looking at scsifront_do_request():
static void scsifront_...
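For context, here is a heavily hedged sketch of one possible reordering that keeps the shared-ring manipulation under host_lock. It only illustrates the ordering concern and is not necessarily the fix discussed in the thread; all names are taken from the excerpt above:

ring_req->timeout_per_command = (sc->timeout_per_command / HZ);
ring_req->nr_segments = 0;

/* Push the reset request while still holding host_lock, so the ring
 * producer index is not updated concurrently with the normal command
 * queueing path, then drop the lock before sleeping. */
scsifront_do_request(info);
spin_unlock_irq(host->host_lock);

wait_event_interruptible(info->shadow[ring_req->rqid].wq_reset,
                         info->shadow[ring_req->rqid].wait_reset);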
2008 May 30
5
[PATCH 1/4] pvSCSI driver
pvSCSI backend driver
Signed-off-by: Tomonari Horikoshi <t.horikoshi@jp.fujitsu.com>
Signed-off-by: Jun Kamada <kama@jp.fujitsu.com>
-----
Jun Kamada
2012 Feb 25
9
[xen-unstable bisection] complete test-amd64-i386-rhel6hvm-amd
branch xen-unstable
xen branch xen-unstable
job test-amd64-i386-rhel6hvm-amd
test redhat-install
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg
*** Found and reproduced problem
2012 Dec 27
30
[PATCH v3 00/11] xen: Initial kexec/kdump implementation
Hi,
This set of patches contains initial kexec/kdump implementation for Xen v3.
Currently only dom0 is supported; however, almost all of the infrastructure
required for domU support is ready.
Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86 code.
This could simplify things and reduce the kernel code size a bit. However, this solution
requires some changes in the baremetal x86 code. First of
2012 Aug 16
0
[RFC v1 5/5] VBD: enlarge max segment per request in blkfront
...blkif_seg_req_v2(struct xen_blkif *blkif)
+{
+ struct blkif_request_header *req = (struct blkif_request_header *)blkif->req;
+ struct blkif_segment_back_ring *blk_segrings = &blkif->blk_segrings;
+ int i;
+ RING_IDX rc;
+
+ rc = blk_segrings->req_cons;
+ for (i = 0; i < req->u.rw.nr_segments; i++) {
+ memcpy(&blkif->seg_req[i], RING_GET_REQUEST(blk_segrings, rc++),
+ sizeof(struct blkif_request_segment));
+ }
+ blk_segrings->req_cons = rc;
+}
+
/*
* Function to copy from the ring buffer the 'struct blkif_request'
* (which has the sectors we want,...
2012 May 22
1
[PATCH v2] kexec: simply pass LINUX_REBOOT_CMD_KEXEC to reboot
...X_REBOOT_CMD_RESTART2 0xA1B2C3D4
-#define LINUX_REBOOT_CMD_EXEC_KERNEL 0x18273645
#define LINUX_REBOOT_CMD_KEXEC_OLD 0x81726354
#define LINUX_REBOOT_CMD_KEXEC_OLD2 0x18263645
#define LINUX_REBOOT_CMD_KEXEC 0x45584543
@@ -70,12 +58,6 @@ static inline long kexec_load(void *entry, unsigned long nr_segments,
return (long) syscall(__NR_kexec_load, entry, nr_segments, segments, flags);
}
-static inline long kexec_reboot(void)
-{
- return (long) syscall(__NR_reboot, LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, LINUX_REBOOT_CMD_KEXEC, 0);
-}
-
-
#define KEXEC_ON_CRASH 0x00000001
#define KEXEC_PRESERV...
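Going by the subject line, the removed kexec_reboot() wrapper is replaced by handing LINUX_REBOOT_CMD_KEXEC straight to reboot(). A minimal sketch of that call (the helper name is an assumption; glibc's reboot() supplies the LINUX_REBOOT_MAGIC* values itself):

#include <sys/reboot.h>

#ifndef LINUX_REBOOT_CMD_KEXEC
#define LINUX_REBOOT_CMD_KEXEC 0x45584543
#endif

/* Start the previously loaded kexec image via the ordinary reboot()
 * call instead of a hand-rolled syscall wrapper. */
static int start_kexec_image(void)
{
        return reboot(LINUX_REBOOT_CMD_KEXEC);
}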
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
...R_REQUEST];
+ struct pers_gnt *new_pers_gnts[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+ struct pers_gnt *pers_gnts[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+ struct page *pages_to_gnt[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+ struct pers_gnt *pers_gnt;
+ phys_addr_t addr;
int i;
+ int new_map;
int nseg = req->u.rw.nr_segments;
+ int segs_to_init = 0;
int ret = 0;
+ int use_pers_gnts;
+ use_pers_gnts = (pending_req->blkif->can_grant_persist &&
+ pending_req->blkif->pers_gnt_c <
+ BLKIF_MAX_SEGMENTS_PER_REQUEST *
+ BLKIF_MAX_PERS_REQUESTS_PER_DEV);
+
+ pending_req->is_pers = use_pers...
2012 May 07
14
Little help with blk ring
...rring blocks the id
on the response is different from the request.
This is the boot-up read, count 0.
The guest requests block 0, which has to be located at 7c00.
I go ahead and create a REQUEST with this data:
ring_req = RING_GET_REQUEST(priv,priv->req_prod_pvt);
ring_req->id = 9;
ring_req->nr_segments=1;
ring_req->operation = BLKIF_OP_READ;
ring_req->sector_number = (int)op->lba; //sector to be read
ring_req->seg[0].gref = (bi->buffer_gref); //this should be get_free_gref();
ring_req->seg[0].first_sect = 0;//op->lba;
ring_req->seg[0].last_sect = 7;//op->lba + op->co...
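What normally follows the request setup shown above is advancing the private producer index, pushing the request onto the shared ring, and notifying the backend. A minimal sketch, under the assumption that priv is the blkif front ring and an event channel port is available (the function name and parameters are assumptions, not the poster's code):

static void blkfront_submit(struct blkif_front_ring *priv,
                            evtchn_port_t port)
{
        int notify;

        /* The request filled in above sits at req_prod_pvt; account for it. */
        priv->req_prod_pvt++;

        /* Publish the new producer index to the backend and check whether
         * it asked to be notified. */
        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(priv, notify);
        if (notify)
                notify_remote_via_evtchn(port);
}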
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass
request descriptors to tapdisk, as well as responses from tapdisk to the
front-end. Requests from this ring end up in tapdisk's standard request queue.
When the tapback daemon detects that the front-end tries to connect to the
back-end, it spawns a tapdisk and tells it to connect to the shared ring. The
shared
2012 Nov 02
2
[PATCH] xen-blk: persistent-grants fixes
...-blkfront.c
@@ -852,6 +852,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
rq_for_each_segment(bvec, s->request, iter) {
BUG_ON((bvec->bv_offset + bvec->bv_len) > PAGE_SIZE);
i = offset >> PAGE_SHIFT;
+ BUG_ON(i >= s->req.u.rw.nr_segments);
shared_data = kmap_atomic(
pfn_to_page(s->grants_used[i]->pfn));
bvec_data = bvec_kmap_irq(bvec, &flags);
@@ -1069,7 +1070,7 @@ again:
goto abort_transaction;
}
err = xenbus_printf(xbt, dev->nodename,
- "feature-persistent-grants", "%u",...
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...struct blkif_request_segment *seg_req,
struct pending_req *pending_req,
struct seg_buf seg[])
{
- struct gnttab_map_grant_ref map[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+ struct gnttab_map_grant_ref *map = pending_req->map;
int i;
int nseg = req->u.rw.nr_segments;
int ret = 0;
@@ -362,7 +391,7 @@ static int xen_blkbk_map(struct blkif_request *req,
if (pending_req->operation != BLKIF_OP_READ)
flags |= GNTMAP_readonly;
gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
- req->u.rw.seg...
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
...@@ -1531,7 +1554,7 @@ static void dispatch_rw_block_io(blkif_t
goto fail_flush;
if (xen_feature(XENFEAT_auto_translated_physmap))
- down_write(&info->vma->vm_mm->mmap_sem);
+ down_write(&mm->mmap_sem);
/* Mark mapped pages as reserved: */
for (i = 0; i < req->nr_segments; i++) {
unsigned long kvaddr;
@@ -1545,13 +1568,13 @@ static void dispatch_rw_block_io(blkif_t
MMAP_VADDR(info->user_vstart,
usr_idx, i), pg);
if (ret) {
- up_write(&info->vma->vm_mm->mmap_sem);
+ up_write(&mm->mmap_sem);
goto fail_flush;...
2008 Jul 03
3
[PATCH 1/4] pvSCSI : Add white list to SCSI command emulation
Add "white list" control to SCSI command emulation. Current setting
allows following mandatory and safe commands.
TEST UNIT READY
REZERO UNIT
REQUEST SENSE
FORMAT UNIT
READ BLOCK LIMITS
READ(06)
WRITE(06)
WRITE FILEMARKS
SPACE
INQUIRY
ERASE
MODE SENSE(06)
SEND DIAGNOSTIC
READ CAPACITY
READ(10)
WRITE(10)
REPORT LUN
Signed-off-by: Tomonari Horikoshi <t.horikoshi@jp.fujitsu.com>
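A minimal, self-contained sketch of the kind of "white list" check the patch describes; the opcode table mirrors the command list above, but the function name and the standalone layout are assumptions, not the actual pvSCSI code:

#include <stdint.h>

static const uint8_t emulation_whitelist[] = {
        0x00, /* TEST UNIT READY   */  0x01, /* REZERO UNIT     */
        0x03, /* REQUEST SENSE     */  0x04, /* FORMAT UNIT     */
        0x05, /* READ BLOCK LIMITS */  0x08, /* READ(06)        */
        0x0a, /* WRITE(06)         */  0x10, /* WRITE FILEMARKS */
        0x11, /* SPACE             */  0x12, /* INQUIRY         */
        0x19, /* ERASE             */  0x1a, /* MODE SENSE(06)  */
        0x1d, /* SEND DIAGNOSTIC   */  0x25, /* READ CAPACITY   */
        0x28, /* READ(10)          */  0x2a, /* WRITE(10)       */
        0xa0, /* REPORT LUN        */
};

/* Return 1 if the opcode is on the white list, 0 otherwise. */
static int scsi_cmd_allowed(uint8_t opcode)
{
        unsigned int i;

        for (i = 0; i < sizeof(emulation_whitelist); i++)
                if (emulation_whitelist[i] == opcode)
                        return 1;
        return 0;
}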
2010 Sep 15
15
xenpaging fixes for kernel and hypervisor
Patrick,
the following patches fix xenpaging for me.
Grant table handling is incomplete. If a page is gone, GNTST_eagain
should be returned to the caller to indicate that the hypercall has to be
retried after a while, until the page is available again.
Please review.
Olaf
2011 Dec 01
11
[PATCH 0 of 2] Paging support updates for XCP dom0
This is a cherry pick of two patches that add support for guest paged out
frames in the XCP 2.6.32 dom0 patch queue.
First patch propagates the ENOENT returned by the hypervisor in the case
of a paged out page, all the way up the call chain to the MMAPBATCH_V2
ioctl. The ioctl is mainly used to harvest those return values and retry.
The second patch adds retry loops to all backend grant
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend
to process REQ_FUA requests as well. Prior to these patches
it only did REQ_FLUSH. There is also a bug fix for the logic
of how barriers/flushes were handled.
The patches are based on a branch which also has the 'feature-discard'
patches, so they won't apply natively on top of 3.1-rc5.
Please review and