Displaying 20 results from an estimated 48 matches for "1022,7".
2005 Sep 27
1
--delete and --dirs
rsync-2.6.6 manpage says:
--delete
[...]
This option has no effect unless directory recursion is enabled.
True. In fact, I noted that --delete doesn't delete anything if --dirs
is used rather than --recursive.
Is there any reason for --delete not to delete when used with --dirs?
Is there a way to get rsync to actually delete files on the receiving
end when using
2016 Nov 04
0
[PATCH] nouveau: remove unused variables
...ref, n_ref;
bool upper = false;
*M1 = 1;
@@ -1010,7 +1010,6 @@ gk104_pll_calc_hiclk(int target_khz, int crystal,
/* we found a better combination */
if (cur_err < best_err) {
best_err = cur_err;
- best_clk = cur_clk;
*N2 = cur_N;
*N1 = n_ref;
*P1 = p_ref;
@@ -1022,7 +1021,6 @@ gk104_pll_calc_hiclk(int target_khz, int crystal,
- target_khz;
if (cur_err < best_err) {
best_err = cur_err;
- best_clk = cur_clk;
*N2 = cur_N;
*N1 = n_ref;
*P1 = p_ref;
--
2.10.1
2019 Aug 09
0
DANGER WILL ROBINSON, DANGER
...TE 0x4
Uh. How do you know page->mapping would otherwise have bit 2 clear?
Who's guaranteeing that?
This is an awfully big patch to the memory management code, buried in
the middle of a gigantic series which almost guarantees nobody would
look at it. I call shenanigans.
> @@ -1021,7 +1022,7 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
> * __page_set_anon_rmap - set up new anonymous rmap
> * @page: Page or Hugepage to add to rmap
> * @vma: VM area to add page to.
> - * @address: User virtual address of the mapping
> + * @address: Us...
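For context on the reviewer's objection: page->mapping overloads its low bits, and in mainline the existing flags are PAGE_MAPPING_ANON (0x1) and PAGE_MAPPING_MOVABLE (0x2), which are safe only because every structure the pointer can reference is at least 4-byte aligned. A 0x4 flag, which is what the patch appears to add, would additionally require 8-byte alignment of every possible target, and that is exactly what is being questioned here. The standalone sketch below is not kernel code; the names are illustrative, and MAPPING_REMOTE is hypothetical. It just shows the tagging scheme and why a flag bit is only free if the pointee's alignment guarantees it is clear.
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#define MAPPING_ANON    0x1UL
#define MAPPING_MOVABLE 0x2UL
#define MAPPING_REMOTE  0x4UL   /* hypothetical, analogous to what the patch appears to add */
#define MAPPING_FLAGS   (MAPPING_ANON | MAPPING_MOVABLE | MAPPING_REMOTE)
/* stand-in for the structures page->mapping can point at */
struct anon_vma { _Alignas(8) int dummy; };
static void *tag(struct anon_vma *av, unsigned long flags)
{
    /* tagging only works if the flag bits are guaranteed clear in the pointer */
    assert(((uintptr_t)av & flags) == 0);
    return (void *)((uintptr_t)av | flags);
}
static struct anon_vma *untag(void *mapping)
{
    return (struct anon_vma *)((uintptr_t)mapping & ~MAPPING_FLAGS);
}
int main(void)
{
    struct anon_vma *av = aligned_alloc(8, sizeof(*av));
    void *mapping = tag(av, MAPPING_ANON | MAPPING_REMOTE);
    printf("anon?   %d\n", !!((uintptr_t)mapping & MAPPING_ANON));
    printf("remote? %d\n", !!((uintptr_t)mapping & MAPPING_REMOTE));
    printf("pointer restored: %d\n", untag(mapping) == av);
    free(av);
    return 0;
}
With only 4-byte alignment guaranteed, the assert in tag() is exactly the guarantee the reviewer is asking someone to produce for bit 2.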
2019 Sep 05
0
[PATCH 17/18] virtiofs: Remove TODO to quiesce/end_requests
...2 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -208,7 +208,6 @@ static void virtio_fs_free_devs(struct virtio_fs *fs)
if (!fsvq->fud)
continue;
- /* TODO need to quiesce/end_requests/decrement dev_count */
fuse_dev_free(fsvq->fud);
fsvq->fud = NULL;
}
@@ -1022,7 +1021,6 @@ static int virtio_fs_fill_super(struct super_block *sb)
if (i == VQ_REQUEST)
continue; /* already initialized */
fuse_dev_install(fsvq->fud, fc);
- atomic_inc(&fc->dev_count);
}
/* Previous unmount will stop all queues. Start these again */
--
2.20.1
2019 Sep 05
0
DANGER WILL ROBINSON, DANGER
...> > This is an awfully big patch to the memory management code, buried
> > > > > in the middle of a gigantic series which almost guarantees nobody
> > > > > would look at it. I call shenanigans.
> > > > >
> > > > > > @@ -1021,7 +1022,7 @@ void page_move_anon_rmap(struct page
> > *page, struct vm_area_struct *vma)
> > > > > > * __page_set_anon_rmap - set up new anonymous rmap
> > > > > > * @page: Page or Hugepage to add to rmap
> > > > > > * @vma: VM area to add p...
2009 Apr 07
1
Backport to 1.4 of patch that recovers orphans from offline slots
The following patch is a backport of the patch that recovers orphans from offline
slots. It is being backported from mainline to 1.4
mainline patch: 0001-Patch-to-recover-orphans-in-offline-slots-during-rec.patch
Thanks,
--Srini
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This helps
reduce cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy, and do batching only for datacopy, since
zerocopy already has its own batching.
TX PPS increased by 25.8%, and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
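A rough idea of what "batched updating of the used ring" means, as a standalone sketch rather than the actual vhost code (the struct and function names below are made up): completed TX heads are parked in a small local array and published to the used ring in one pass, so the shared used index is written once per batch instead of once per packet.
#include <stdint.h>
#include <stdio.h>
#define BATCH 64
struct used_elem { uint32_t id; uint32_t len; };
struct used_ring {
    struct used_elem ring[256];
    uint16_t idx;               /* the contended, guest-visible index in the real ring */
};
struct tx_batch {
    struct used_elem pending[BATCH];
    int n;
};
static void flush_batch(struct used_ring *ur, struct tx_batch *b)
{
    for (int i = 0; i < b->n; i++)
        ur->ring[(ur->idx + i) % 256] = b->pending[i];
    /* one index update (and one guest notification) per batch */
    ur->idx += b->n;
    b->n = 0;
}
static void complete_tx(struct used_ring *ur, struct tx_batch *b,
                        uint32_t head, uint32_t len)
{
    b->pending[b->n++] = (struct used_elem){ head, len };
    if (b->n == BATCH)
        flush_batch(ur, b);
}
int main(void)
{
    struct used_ring ur = { .idx = 0 };
    struct tx_batch b = { .n = 0 };
    for (uint32_t h = 0; h < 100; h++)
        complete_tx(&ur, &b, h, 0);
    flush_batch(&ur, &b);       /* drain whatever is left in the batch */
    printf("used idx = %u\n", (unsigned)ur.idx);
    return 0;
}
Zerocopy completions arrive out of band, which is why the cover letter only batches the datacopy path.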
2016 Nov 08
0
[PATCH 2/3] Split internal stuff out of guestfs.h
...uestfs-internal.h\"
#include \"guestfs_protocol.h\"
@@ -905,6 +936,7 @@ and generate_client_structs_compare () =
#include <errno.h>
#include \"guestfs.h\"
+#include \"guestfs-private.h\"
#include \"guestfs-internal.h\"
";
@@ -990,6 +1022,7 @@ and generate_client_structs_copy () =
#include <errno.h>
#include \"guestfs.h\"
+#include \"guestfs-private.h\"
#include \"guestfs-internal.h\"
";
@@ -1173,6 +1206,7 @@ and generate_client_structs_cleanup () =
#include <stdlib.h>
#incl...
2016 Nov 08
4
[PATCH 1/3] generator: c: move internal functions
Move the generate_all_structs and generate_all_headers functions,
previously internal within the implementation of generate_guestfs_h, to
be usable by other functions in the same "C" module (but not public).
Only code motion.
---
generator/c.ml | 163 +++++++++++++++++++++++++++++----------------------------
1 file changed, 82 insertions(+), 81 deletions(-)
diff --git a/generator/c.ml
2016 Dec 14
1
[PATCH V2] vhost: introduce O(1) vq metadata cache
...0;
@@ -950,6 +1012,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
ret = -EFAULT;
break;
}
+ vhost_vq_meta_reset(dev);
if (vhost_new_umem_range(dev->iotlb, msg->iova, msg->size,
msg->iova + msg->size - 1,
msg->uaddr, msg->perm)) {
@@ -959,6 +1022,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
vhost_iotlb_notify_vq(dev, msg);
break;
case VHOST_IOTLB_INVALIDATE:
+ vhost_vq_meta_reset(dev);
vhost_del_umem_range(dev->iotlb, msg->iova,
msg->iova + msg->size - 1);
break;
@@ -1102,12 +1166,26 @@ static...
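The pattern visible in these hunks, sketched standalone (this is not the actual vhost code; names are illustrative): each virtqueue caches the translation of its hot metadata so the fast path avoids the IOTLB lookup, and the cache is thrown away whenever an IOTLB update or invalidate message arrives, which is what the added vhost_vq_meta_reset() calls do.
#include <stddef.h>
#include <stdio.h>
struct vq_meta_cache {
    void *desc;                 /* cached translation, NULL = not cached */
};
static void *slow_iotlb_translate(unsigned long iova)
{
    /* stands in for the slower per-access IOTLB lookup */
    printf("slow lookup of iova %#lx\n", iova);
    return (void *)iova;
}
static void *get_desc(struct vq_meta_cache *c, unsigned long iova)
{
    if (!c->desc)                       /* miss: translate once and cache */
        c->desc = slow_iotlb_translate(iova);
    return c->desc;                     /* hit: O(1) */
}
static void iotlb_changed(struct vq_meta_cache *c)
{
    /* mirrors the role of vhost_vq_meta_reset() in the hunks above:
     * any IOTLB update or invalidate drops the cached translations */
    c->desc = NULL;
}
int main(void)
{
    struct vq_meta_cache c = { .desc = NULL };
    get_desc(&c, 0x1000);   /* slow path */
    get_desc(&c, 0x1000);   /* cached */
    iotlb_changed(&c);
    get_desc(&c, 0x1000);   /* slow path again after invalidation */
    return 0;
}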
2013 Apr 27
0
[PATCH] vhost: Move vhost-net zerocopy support fields to net.c
...-859,8 +970,8 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
if (r)
goto err_used;
- oldubufs = vq->ubufs;
- vq->ubufs = ubufs;
+ oldubufs = nvq->ubufs;
+ nvq->ubufs = ubufs;
n->tx_packets = 0;
n->tx_zcopy_err = 0;
@@ -911,6 +1022,7 @@ static long vhost_net_reset_owner(struct vhost_net *n)
vhost_net_stop(n, &tx_sock, &rx_sock);
vhost_net_flush(n);
err = vhost_dev_reset_owner(&n->dev);
+ vhost_net_reset_ubuf_info(n);
done:
mutex_unlock(&n->dev.mutex);
if (tx_sock)
@@ -986,11 +1098,17 @@ static...
2016 Dec 14
2
[PATCH] vhost: introduce O(1) vq metadata cache
...0;
@@ -950,6 +1012,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
ret = -EFAULT;
break;
}
+ vhost_vq_meta_reset(dev);
if (vhost_new_umem_range(dev->iotlb, msg->iova, msg->size,
msg->iova + msg->size - 1,
msg->uaddr, msg->perm)) {
@@ -959,6 +1022,7 @@ int vhost_process_iotlb_msg(struct vhost_dev *dev,
vhost_iotlb_notify_vq(dev, msg);
break;
case VHOST_IOTLB_INVALIDATE:
+ vhost_vq_meta_reset(dev);
vhost_del_umem_range(dev->iotlb, msg->iova,
msg->iova + msg->size - 1);
break;
@@ -1102,12 +1166,26 @@ static...
2011 Apr 29
17
[RESEND] [PATCH 00/18] Staging: hv: Cleanup vmbus driver code
This is a resend of the patches yet to be applied.
This patch-set addresses some of the bus/driver model cleanup that
Greg suggested over the last couple of days. In this patch-set we
deal with the following issues:
1) Cleanup error handling in the vmbus_probe() and
vmbus_child_device_register() functions. Fixed a
bug in the probe failure path as part of this cleanup.
2) The Windows
2019 Aug 09
6
[RFC PATCH v6 71/92] mm: add support for remote mapping
...enced(struct page *page,
if (!page_rmapping(page))
return 0;
- if (!is_locked && (!PageAnon(page) || PageKsm(page))) {
+ if (!is_locked && (!PageAnon(page) || PageKsm(page) || PageRemote(page))) {
we_locked = trylock_page(page);
if (!we_locked)
return 1;
@@ -1021,7 +1022,7 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
* __page_set_anon_rmap - set up new anonymous rmap
* @page: Page or Hugepage to add to rmap
* @vma: VM area to add page to.
- * @address: User virtual address of the mapping
+ * @address: User virtual address of the...
2009 Jan 28
2
7.1 new install halts on BTX error
I upgraded my 7.0 system to 7.1-RELEASE with freebsd-update only to find
that it no longer boots correctly, instead crashing with a BTX backtrace.
If I break to the loader prompt and use 'ls /boot', I also get a
backtrace.
A new install of 7.1 on this hardware using a separate SCSI card and drive
array also leads to a BTX backtrace. I have copied this below as the first
(most