search for: host_addr

Displaying results from an estimated 63 matches for "host_addr".

2023 Jan 17
1
[Bridge] [RFC PATCH net-next 2/5] net: dsa: propagate flags down towards drivers
...ent the same way without having to change function interfaces. > > Signed-off-by: Hans J. Schultz <netdev at kapio-technology.com> > --- > @@ -3364,6 +3368,7 @@ static int dsa_slave_fdb_event(struct net_device *dev, > struct dsa_port *dp = dsa_slave_to_port(dev); > bool host_addr = fdb_info->is_local; > struct dsa_switch *ds = dp->ds; > + u16 fdb_flags = 0; > > if (ctx && ctx != dp) > return 0; > @@ -3410,6 +3415,9 @@ static int dsa_slave_fdb_event(struct net_device *dev, > orig_dev->name, fdb_info->addr, fdb_info->...
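For context, the hunk above only introduces the new u16 fdb_flags word; a minimal, self-contained sketch (hypothetical struct and flag names, not the ones defined by the patch series) of the idea of folding per-entry attributes into a single u16 that can be handed down without changing every function signature:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical flag bits; the real encoding is whatever the series defines. */
    #define FDB_FLAG_DYNAMIC  (1u << 0)
    #define FDB_FLAG_LOCKED   (1u << 1)

    struct fdb_info_sketch {
            bool is_local;      /* host address, as tested in the hunk above */
            bool is_dynamic;    /* entry should age out in hardware */
    };

    uint16_t pack_fdb_flags(const struct fdb_info_sketch *info)
    {
            uint16_t fdb_flags = 0;

            if (info->is_dynamic)
                    fdb_flags |= FDB_FLAG_DYNAMIC;
            /* further attributes would OR in additional bits here */
            return fdb_flags;
    }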
2014 Feb 27
3
[PATCH] xen/grant-table: Refactor gnttab_[un]map_refs to avoid m2p_override
...n_to_pfn); +int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops, + struct gnttab_map_grant_ref *kmap_ops, + struct page **pages, unsigned int count); +{ + int i; + + for (i = 0; i < count; i++) { + if (map_ops[i].status) + continue; + set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT, + map_ops[i].dev_bus_addr >> PAGE_SHIFT); + } + + return 0; +} +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping); + +int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops, + struct gnttab_map_grant_ref *kmap_ops, + struct page **pages, un...
2007 Oct 03
0
[PATCH 3/3] TLB flushing and IO memory mapping
...- { - if ( !rd->is_dying ) - gdprintk(XENLOG_WARNING, "Could not pin grant frame %lx\n", frame); - rc = GNTST_general_error; - goto undo_out; - } - - if ( op->flags & GNTMAP_host_map ) - { - rc = create_grant_host_mapping(op->host_addr, frame, op->flags); - if ( rc != GNTST_okay ) - { - if ( !(op->flags & GNTMAP_readonly) ) - put_page_type(mfn_to_page(frame)); - put_page(mfn_to_page(frame)); + if ( op->flags & GNTMAP_host_map ) + { + /* Could be an...
2023 Jan 18
1
[Bridge] [RFC PATCH net-next 2/5] net: dsa: propagate flags down towards drivers
...change function interfaces. >> >> Signed-off-by: Hans J. Schultz <netdev at kapio-technology.com> >> --- >> @@ -3364,6 +3368,7 @@ static int dsa_slave_fdb_event(struct net_device >> *dev, >> struct dsa_port *dp = dsa_slave_to_port(dev); >> bool host_addr = fdb_info->is_local; >> struct dsa_switch *ds = dp->ds; >> + u16 fdb_flags = 0; >> >> if (ctx && ctx != dp) >> return 0; >> @@ -3410,6 +3415,9 @@ static int dsa_slave_fdb_event(struct net_device >> *dev, >> orig_dev->n...
2012 May 25
0
[PATCH 3/3] gnttab: cleanup
...- spin_unlock(&rd->grant_table->lock); + spin_unlock(&rgt->lock); if ( put_handle ) { op->map->flags = 0; @@ -1020,7 +1025,7 @@ __gnttab_unmap_grant_ref( struct gnttab_unmap_grant_ref *op, struct gnttab_unmap_common *common) { - common->host_addr = op->host_addr; + common->host_addr = op->host_addr; common->dev_bus_addr = op->dev_bus_addr; common->handle = op->handle; @@ -1083,9 +1088,9 @@ __gnttab_unmap_and_replace( struct gnttab_unmap_and_replace *op, struct gnttab_unmap_common *common) { - c...
2020 Jul 16
0
[RFC for qemu v4 2/2] virtio_balloon: Add dcvq to deflate continuous pages
...size_t size) > { > void *addr = memory_region_get_ram_ptr(mr) + mr_offset; > ram_addr_t rb_offset; > @@ -153,10 +154,11 @@ static void balloon_deflate_page(VirtIOBalloon *balloon, > rb_page_size = qemu_ram_pagesize(rb); > > host_addr = (void *)((uintptr_t)addr & ~(rb_page_size - 1)); > + size &= ~(rb_page_size - 1); > > /* When a page is deflated, we hint the whole host page it lives > * on, since we can't do anything smaller */ > - ret = qemu_madvise(host_addr, rb_page_size, QEMU_M...
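The masking in the hunk above relies on rb_page_size being a power of two; a small stand-alone sketch of the same round-down arithmetic, with made-up example values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uintptr_t page_size = 2UL * 1024 * 1024;   /* e.g. a 2 MiB host huge page */
            uintptr_t addr      = 0x7f3a2c0f3000UL;    /* arbitrary address inside a page */
            size_t    size      = 5UL * 1024 * 1024;   /* requested deflate length */

            /* Same expressions as the quoted hunk: round the start address and
             * the length down to whole host pages before hinting. */
            uintptr_t host_addr = addr & ~(page_size - 1);
            size_t    hint_size = size & ~(page_size - 1);

            printf("host_addr=%#lx hint_size=%zu\n",
                   (unsigned long)host_addr, hint_size);
            return 0;
    }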
2005 Nov 06
2
Bug in use of grant tables in blkback.c error path?
...t_flush area is called. Here's the bit of fast_flush_area that uses pending_handle: for (i = 0; i < nr_pages; i++) { handle = pending_handle(idx, i); <<<<<<<<<<<<<<<<< if (handle == BLKBACK_INVALID_HANDLE) continue; unmap[i].host_addr = MMAP_VADDR(idx, i); unmap[i].dev_bus_addr = 0; unmap[i].handle = handle; pending_handle(idx, i) = BLKBACK_INVALID_HANDLE; invcount++; } I also checked the implementation of gnttab_map_grant_ref: static long gnttab_map_grant_ref( gnttab_map_grant_ref_t *uop, unsigned...
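The question hinges on the usual idiom of marking every slot invalid before any mapping is attempted, so that a partial-failure teardown can skip slots that were never mapped; a self-contained sketch (hypothetical names, not blkback's) of that pattern:

    #include <stdio.h>

    #define INVALID_HANDLE  (-1)
    #define NR_SLOTS        8

    static int handles[NR_SLOTS];

    /* Pretend mapper: fails on slot 5 to exercise the error path. */
    static int map_one(int i)
    {
            return (i == 5) ? INVALID_HANDLE : 100 + i;
    }

    /* Teardown: only touches slots that were actually mapped. */
    static void flush_area(void)
    {
            for (int i = 0; i < NR_SLOTS; i++) {
                    if (handles[i] == INVALID_HANDLE)
                            continue;          /* never mapped, nothing to undo */
                    /* ... unmap slot i here ... */
                    handles[i] = INVALID_HANDLE;
            }
    }

    static int map_area(void)
    {
            /* Without this loop, slots that were never mapped would still hold
             * zero or leftover values from an earlier request, and flush_area()
             * would try to unmap them. */
            for (int i = 0; i < NR_SLOTS; i++)
                    handles[i] = INVALID_HANDLE;

            for (int i = 0; i < NR_SLOTS; i++) {
                    int h = map_one(i);
                    if (h == INVALID_HANDLE) {
                            flush_area();      /* safe: unmapped slots stay invalid */
                            return -1;
                    }
                    handles[i] = h;
            }
            return 0;
    }

    int main(void)
    {
            printf("map_area() -> %d\n", map_area());
            return 0;
    }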
2010 Dec 08
2
[PATCH] xen: gntdev: move use of GNTMAP_contains_pte next to the map_op
This flag controls the meaning of gnttab_map_grant_ref.host_addr and specifies that the field contains a reference to the pte entry to be used to perform the mapping. Therefore move the use of this flag to the point at which we actually use a reference to the pte instead of something else; splitting up the usage of the flag in this way is confusing and potentiall...
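A hedged sketch (simplified stand-alone types; the flag values are illustrative, not Xen's actual ones, see Xen's public grant_table.h header for the real definitions) of how the flag changes what the host_addr field of a map operation is expected to hold:

    #include <stdint.h>

    /* Illustrative flag values only. */
    #define SKETCH_GNTMAP_host_map      (1u << 1)
    #define SKETCH_GNTMAP_contains_pte  (1u << 4)

    struct map_op_sketch {
            uint64_t host_addr;   /* a virtual address, OR the address of a PTE */
            uint32_t flags;
            uint32_t ref;
            uint16_t dom;
    };

    /* Ordinary host mapping: host_addr is the virtual address to map at. */
    void setup_map_by_vaddr(struct map_op_sketch *op, uint64_t vaddr,
                            uint32_t ref, uint16_t dom)
    {
            op->flags     = SKETCH_GNTMAP_host_map;
            op->host_addr = vaddr;
            op->ref       = ref;
            op->dom       = dom;
    }

    /* With contains_pte set: host_addr instead names the PTE to be rewritten. */
    void setup_map_by_pte(struct map_op_sketch *op, uint64_t pte_maddr,
                          uint32_t ref, uint16_t dom)
    {
            op->flags     = SKETCH_GNTMAP_host_map | SKETCH_GNTMAP_contains_pte;
            op->host_addr = pte_maddr;
            op->ref       = ref;
            op->dom       = dom;
    }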
2019 Apr 04
1
Proof of concept for GPU forwarding for Linux guest on Linux host.
Hi, This is a proof of concept of GPU forwarding for a Linux guest on a Linux host. I'd like to get comments and suggestions from the community before I put more time into it. To summarize what it is: 1. It's a solution to bring GPU acceleration to a Linux VM guest on a Linux host. It could work with different GPUs, although the current proof of concept only works with Intel GPUs. 2. The basic idea
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, This is the first attempt to add a new qemu nvme backend using the in-kernel nvme target. Most of the code is ported from qemu-nvme, and some is borrowed from Hannes Reinecke's rts-megasas. It's similar to vhost-scsi, but doesn't use virtio. The advantage is that the guest can run an unmodified NVMe driver, so the guest can be any OS that has an NVMe driver. The goal is to get as good performance as
2023 Feb 17
1
[Bridge] [PATCH net-next 5/5] net: dsa: mv88e6xxx: implementation of dynamic ATU entries
...now that we'll end up doing something with that FDB entry once the deferred work does get scheduled: /* Check early that we're not doing work in vain. * Host addresses on LAG ports still require regular FDB ops, * since the CPU port isn't in a LAG. */ if (dp->lag && !host_addr) { if (!ds->ops->lag_fdb_add || !ds->ops->lag_fdb_del) return -EOPNOTSUPP; } else { if (!ds->ops->port_fdb_add || !ds->ops->port_fdb_del) return -EOPNOTSUPP; } What you should be doing is using the pahole tool to find a good place for a new unsigne...
2012 Mar 05
11
[PATCH 0001/001] xen: multi page ring support for block devices
...alloc_pv(struct xenbus_device *dev, if (!node) return -ENOMEM; - area = alloc_vm_area(PAGE_SIZE, &pte); + area = alloc_vm_area(PAGE_SIZE * nr_grefs, pte); if (!area) { kfree(node); return -ENOMEM; } - op.host_addr = arbitrary_virt_to_machine(pte).maddr; + for (i = 0; i < nr_grefs; i++) { + op[i].flags = GNTMAP_host_map | GNTMAP_contains_pte, + op[i].ref = gnt_ref[i], + op[i].dom = dev->otherend_id, + op[i].host_addr = arbitrary_virt_to_m...
2013 Jul 25
0
How to get the PFN of a vmalloc'ed address in a domU ?
...s\n", buffer_num_pages); goto err_unmap; } x->buffer_addr = (unsigned long)x->buffer_area->addr; grefp = &d->buffer_first_gref; for (i = 0; i < buffer_num_pages; i++) { printk(KERN_INFO "Mounting GREF %d\n", *grefp); memset(&op, 0, sizeof(op)); op.host_addr = x->buffer_addr + i * PAGE_SIZE; op.flags = GNTMAP_host_map; op.ref = *grefp; op.dom = x->otherend_id; rc = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1); if (rc == -ENOSYS) { goto err_unmap; } if (op.status) { DPRINTK("error...
2010 Mar 09
1
Bugs with ovirt-awake
...c/init.d/ovirt-awake start $SRV_HOST $SRV_PORT $krb5_tab if [ $? -ne 0 ]; then log "ovirt-awake failed"; return 1 fi It raises another problem: [...] Mar 08 17:07:52 Starting ovirt Starting ovirt-awake: /etc/init.d/ovirt-awake: line 71: /dev/tcp/"host_addr":12120: No such file or directory /etc/init.d/ovirt-awake: line 73: connect-to-server: command not found /etc/init.d/ovirt-awake: line 49: 3: Bad file descriptor /etc/init.d/ovirt-awake: line 75: [: ==: unary operator expected /etc/init.d/ovirt-awake: line 45: 3: Bad file descriptor /etc/init....
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all, this patch series enables xen-swiotlb on arm and arm64. It has been heavily reworked compared to the previous versions in order to achieve better performance and to address review comments. We are not using dma_mark_clean to ensure coherency anymore. We call the platform implementation of map_page and unmap_page. We assume that dom0 has been mapped 1:1 (physical address == machine
2007 Mar 20
62
RFC: [0/2] Remove netloop by lazy copying in netback
Hi Keir: These two patches remove the need for netloop by performing the copying in netback, and only if it is necessary. The rationale is that most packets will be processed without delay, allowing them to be freed without copying at all. So instead of copying every packet destined for dom0, we'll only copy those that linger longer than a specified amount of time (currently 0.5s). As it
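A stand-alone sketch (hypothetical names; only the 0.5 s value is taken from the description above) of the "copy only packets that have lingered too long" decision:

    #include <stdbool.h>
    #include <stdint.h>

    #define LINGER_THRESHOLD_NS  (500ULL * 1000 * 1000)   /* 0.5 s */

    struct pending_pkt_sketch {
            uint64_t queued_ns;   /* when the foreign page was handed to dom0 */
    };

    /* Copy the data (so the foreign page can be returned to its owner) only
     * when dom0 has held the packet longer than the threshold; fast-path
     * packets are freed without any copy at all. */
    bool needs_copy(const struct pending_pkt_sketch *pkt, uint64_t now_ns)
    {
            return now_ns - pkt->queued_ns > LINGER_THRESHOLD_NS;
    }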