search for: page_addr

Displaying 20 results from an estimated 42 matches for "page_addr".

2011 Aug 15
9
[PATCH 0/8] Staging: hv: vmbus: Driver cleanup
Further cleanup of the vmbus driver: 1) Cleanup the interrupt handler by inlining some code. 2) Ensure message handling is performed on the same CPU that takes the vmbus interrupt. 3) Check for events before messages (from the host). 4) Disable auto eoi for the vmbus interrupt since Linux will eoi the interrupt anyway. 5) Some general cleanup. Regards, K. Y
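A rough sketch of the auto-EOI change described in item 4, for orientation only: the SINT MSR for the vmbus interrupt would be programmed with its auto-EOI flag cleared, so the only acknowledgment comes from Linux's normal APIC EOI path. The union hv_synic_sint field names and MSR constants below are assumptions about the Hyper-V SynIC layout, not code quoted from the series.

    union hv_synic_sint shared_sint;       /* assumed layout */

    rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
    shared_sint.vector   = irq_vector;     /* vector Linux registered */
    shared_sint.masked   = false;
    shared_sint.auto_eoi = false;          /* Linux will EOI anyway */
    wrmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);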
2017 Feb 01
15
[PATCH 00/14] hyperv: vmbus related patches
This is a rebase/resend of earlier patches. I skipped the pure cosmetic patches for now. Mostly this is consolidation of earlier changes, removing dead code, etc. The important part is the change allowing a vmbus channel to get its callback directly in interrupt mode; this is necessary for NAPI support. Stephen Hemminger (14): vmbus: use kernel bitops for traversing interrupt mask vmbus: drop
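The "kernel bitops" item in the list above refers to walking only the set bits of the channel interrupt mask instead of scanning it word by word by hand. A minimal sketch of that idiom, assuming the mask is an array of unsigned long; recv_int_page, MAX_NUM_CHANNELS and vmbus_handle_event are illustrative names, not the driver's actual identifiers:

    #include <linux/bitops.h>

    static void scan_event_mask(unsigned long *recv_int_page)
    {
            int relid;

            /* visit only channels whose bit is set in the mask */
            for_each_set_bit(relid, recv_int_page, MAX_NUM_CHANNELS) {
                    if (sync_test_and_clear_bit(relid, recv_int_page))
                            vmbus_handle_event(relid);
            }
    }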
2011 Aug 27
46
[PATCH 0000/0046] Staging: hv: Driver cleanup
Further cleanup of the hv drivers. 1) Cleanup reference counting. 2) Handle all block devices using the storvsc driver. I have modified the implementation here based on Christoph's feedback on my earlier implementation. 3) Fix bugs. 4) Accommodate some host-side scsi emulation bugs. 5) In case of scsi errors, off-line the device. This patch-set further reduces the size of
2010 Nov 01
5
[PATCH 03/10] staging: hv: Convert camel cased struct fields in hv.h to lower cases
.../hv/vmbus.c b/drivers/staging/hv/vmbus.c
index 7c54ca9..7b68212 100644
--- a/drivers/staging/hv/vmbus.c
+++ b/drivers/staging/hv/vmbus.c
@@ -147,7 +147,7 @@ static void VmbusOnCleanup(struct hv_driver *drv)
 static void VmbusOnMsgDPC(struct hv_driver *drv)
 {
 	int cpu = smp_processor_id();
-	void *page_addr = gHvContext.synICMessagePage[cpu];
+	void *page_addr = g_hv_context.synic_message_page[cpu];
 	struct hv_message *msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
 	struct hv_message *copied;
@@ -208,7 +208,7 @@ static int VmbusOnISR(struct hv_driver *drv)
 	struct hv_message *msg...
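Besides the rename, the hunk above shows how the driver locates an incoming host message: each CPU has its own SynIC message page, laid out as an array of struct hv_message slots, and the vmbus message for that CPU sits in the slot indexed by its SINT. A condensed restatement of that access pattern; the HVMSG_NONE emptiness check is an assumption, not part of the quoted diff:

    int cpu = smp_processor_id();
    void *page_addr = g_hv_context.synic_message_page[cpu];
    struct hv_message *msg =
            (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;

    if (msg->header.message_type != HVMSG_NONE) {
            /* a host message is pending in this CPU's slot */
    }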
2013 Mar 27
0
[PATCH 04/22] block: Convert bio_for_each_segment() to bvec_iter
...>io_addr + bank->size;
 	transfered = 0;
-	bio_for_each_segment(vec, bio, idx) {
-		if (unlikely(phys_mem + vec->bv_len > phys_end)) {
+	bio_for_each_segment(vec, bio, iter) {
+		if (unlikely(phys_mem + vec.bv_len > phys_end)) {
 			bio_io_error(bio);
 			return;
 		}
-		user_mem = page_address(vec->bv_page) + vec->bv_offset;
+		user_mem = page_address(vec.bv_page) + vec.bv_offset;
 		if (bio_data_dir(bio) == READ)
-			memcpy(user_mem, (void *) phys_mem, vec->bv_len);
+			memcpy(user_mem, (void *) phys_mem, vec.bv_len);
 		else
-			memcpy((void *) phys_mem, user_mem, vec->b...
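For context, the conversion in these hunks changes the loop variables' types: the old interface took a struct bio_vec pointer plus an integer index, while the bvec_iter interface hands back a struct bio_vec by value and tracks position in a struct bvec_iter, hence the switch from '->' to '.'. A minimal sketch of the post-conversion idiom; copy_bio_to_buf is a hypothetical helper, not code from the patch, and it assumes the pages are permanently mapped:

    #include <linux/bio.h>
    #include <linux/mm.h>       /* page_address() */
    #include <linux/string.h>   /* memcpy() */

    static void copy_bio_to_buf(struct bio *bio, void *buf)
    {
            struct bio_vec vec;     /* segment, now held by value */
            struct bvec_iter iter;  /* cursor replacing the old index */

            bio_for_each_segment(vec, bio, iter) {
                    memcpy(buf, page_address(vec.bv_page) + vec.bv_offset,
                           vec.bv_len);
                    buf += vec.bv_len;
            }
    }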
2013 Aug 07
0
[PATCH 07/22] block: Convert bio_for_each_segment() to bvec_iter
...>io_addr + bank->size;
 	transfered = 0;
-	bio_for_each_segment(vec, bio, idx) {
-		if (unlikely(phys_mem + vec->bv_len > phys_end)) {
+	bio_for_each_segment(vec, bio, iter) {
+		if (unlikely(phys_mem + vec.bv_len > phys_end)) {
 			bio_io_error(bio);
 			return;
 		}
-		user_mem = page_address(vec->bv_page) + vec->bv_offset;
+		user_mem = page_address(vec.bv_page) + vec.bv_offset;
 		if (bio_data_dir(bio) == READ)
-			memcpy(user_mem, (void *) phys_mem, vec->bv_len);
+			memcpy(user_mem, (void *) phys_mem, vec.bv_len);
 		else
-			memcpy((void *) phys_mem, user_mem, vec->b...
2013 Oct 29
0
[PATCH 07/23] block: Convert bio_for_each_segment() to bvec_iter
...>io_addr + bank->size;
 	transfered = 0;
-	bio_for_each_segment(vec, bio, idx) {
-		if (unlikely(phys_mem + vec->bv_len > phys_end)) {
+	bio_for_each_segment(vec, bio, iter) {
+		if (unlikely(phys_mem + vec.bv_len > phys_end)) {
 			bio_io_error(bio);
 			return;
 		}
-		user_mem = page_address(vec->bv_page) + vec->bv_offset;
+		user_mem = page_address(vec.bv_page) + vec.bv_offset;
 		if (bio_data_dir(bio) == READ)
-			memcpy(user_mem, (void *) phys_mem, vec->bv_len);
+			memcpy(user_mem, (void *) phys_mem, vec.bv_len);
 		else
-			memcpy((void *) phys_mem, user_mem, vec->b...
2013 Jun 09
0
[PATCH 06/26] block: Convert bio_for_each_segment() to bvec_iter
...>io_addr + bank->size;
 	transfered = 0;
-	bio_for_each_segment(vec, bio, idx) {
-		if (unlikely(phys_mem + vec->bv_len > phys_end)) {
+	bio_for_each_segment(vec, bio, iter) {
+		if (unlikely(phys_mem + vec.bv_len > phys_end)) {
 			bio_io_error(bio);
 			return;
 		}
-		user_mem = page_address(vec->bv_page) + vec->bv_offset;
+		user_mem = page_address(vec.bv_page) + vec.bv_offset;
 		if (bio_data_dir(bio) == READ)
-			memcpy(user_mem, (void *) phys_mem, vec->bv_len);
+			memcpy(user_mem, (void *) phys_mem, vec.bv_len);
 		else
-			memcpy((void *) phys_mem, user_mem, vec->b...
2013 Nov 27
0
[PATCH 07/25] block: Convert bio_for_each_segment() to bvec_iter
...>io_addr + bank->size;
 	transfered = 0;
-	bio_for_each_segment(vec, bio, idx) {
-		if (unlikely(phys_mem + vec->bv_len > phys_end)) {
+	bio_for_each_segment(vec, bio, iter) {
+		if (unlikely(phys_mem + vec.bv_len > phys_end)) {
 			bio_io_error(bio);
 			return;
 		}
-		user_mem = page_address(vec->bv_page) + vec->bv_offset;
+		user_mem = page_address(vec.bv_page) + vec.bv_offset;
 		if (bio_data_dir(bio) == READ)
-			memcpy(user_mem, (void *) phys_mem, vec->bv_len);
+			memcpy(user_mem, (void *) phys_mem, vec.bv_len);
 		else
-			memcpy((void *) phys_mem, user_mem, vec->b...