search for: restort

Displaying 11 results from an estimated 11 matches for "restort".

2018 Jun 08
2
/etc/gai.conf fails to prefer IPv4 over IPv6 for NFS
...? I've watched the automounter daemon in foreground debug mode, and it does show the NFS server mount attempt with IPv6 first, but the actual API call name isn't displayed there. This seems the most likely explanation, I'd just like to know for certain before I give up on gai.conf and restort to disabling IPv6 or other workarounds (e.g. /etc/nfsmount.conf). Thanks, sr. -- || Steve Rikli ||| Well, we've stared at it... that oughta || || Systems Administrator ||| fix it! Let's get outta here. || || Genyosha Networks |||...
2018 Jun 09
0
/etc/gai.conf fails to prefer IPv4 over IPv6 for NFS
On 06/08/2018 03:23 PM, Steve Rikli wrote: > > This seems the most likely explanation, I'd just like to know for certain > before I give up on gai.conf and restort to disabling IPv6 or other > workarounds (e.g. /etc/nfsmount.conf). Have you tried specifying "proto=tcp" as a mount option? That *should* limit the client to IPv4.
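For reference, a minimal sketch of that suggestion as an ad-hoc mount on the client; the server name and paths are placeholders, not taken from the thread:

    # force the TCP transport; per nfs(5), proto=tcp means TCP over IPv4
    # while proto=tcp6 would select TCP over IPv6
    mount -t nfs -o proto=tcp nfsserver.example.com:/export /mnt/export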
2018 Jun 09
2
/etc/gai.conf fails to prefer IPv4 over IPv6 for NFS
...cle <bc751efa-07f1-91de-9248-5f90f961bcb4 at gmail.com>, Gordon Messmer <centos at centos.org> wrote: >On 06/08/2018 03:23 PM, Steve Rikli wrote: >> >> This seems the most likely explanation, I'd just like to know for certain >> before I give up on gai.conf and restort to disabling IPv6 or other >> workarounds (e.g. /etc/nfsmount.conf). > >Have you tried specifying "proto=tcp" as a mount option? That *should* >limit the client to IPv4. Yes -- that's the other workaround I tried successfully. Either in auto.master mount options fo...
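An illustrative sketch of the two places such an option could live on the client; map and share names here are hypothetical, and the exact keywords should be checked against auto.master(5) and nfsmount.conf(5) for the release in use:

    # autofs map entry (in the map referenced from auto.master)
    export  -proto=tcp  nfsserver.example.com:/export

    # /etc/nfsmount.conf -- apply the transport to every NFS mount on this client
    [ NFSMount_Global_Options ]
    Proto=tcp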
2018 Jun 08
2
/etc/gai.conf fails to prefer IPv4 over IPv6 for NFS
Using CentOS 6.9 with IPv4 configured, and _not_ disabling IPv6 yet, we want this NFS client system to prefer the IPv4 address of a dual-stack remote NFS server, which has both A and AAAA records in DNS. Otherwise we get a minutes long pause while automounter tries to mount the IPv6 address -- I can see automounter's '/bin/mount' running, using the AAAA record of the server, until
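For context, the gai.conf setting the subject line refers to is typically a precedence rule like the following generic RFC 3484/6724-style tweak (a sketch, not a line quoted from the thread):

    # /etc/gai.conf -- rank IPv4-mapped addresses above native IPv6
    precedence ::ffff:0:0/96  100

The caveat the thread circles around is that gai.conf only influences programs that resolve and sort addresses through getaddrinfo(); a mount helper or automounter that picks the AAAA record via some other path will not honor it.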
2012 Apr 12
7
Run rsync even not connected
I hope this makes sense. How do you make rsync run even when not physically connected to the server? In other words, I run rsync from the terminal via vnc and when I log out of the connection, rsync stops running. Is there a script or something I can use? Sent from my iPhone
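A hedged sketch of the usual answers (host and paths are placeholders): detach the transfer from the login session with nohup, or run it inside screen/tmux so it survives logging out of the VNC/terminal session:

    # option 1: nohup + background; output goes to a log file
    nohup rsync -av /data/ backup.example.com:/data/ > ~/rsync.log 2>&1 &

    # option 2: run inside screen, detach with Ctrl-a d, then log out freely
    screen -S backup
    rsync -av /data/ backup.example.com:/data/

Either way the rsync process is no longer tied to the terminal that started it.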
2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...otify about mm change > - * > - * @mirror: the hmm_mirror (mm) is about to update > - * @update: the update start, end address > - * > - * We temporarily evict all BOs between start and end. This > - * necessitates evicting all user-mode queues of the process. The BOs > - * are restorted in amdgpu_mn_invalidate_range_end_hsa. > - */ > -static int > -amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror, > - const struct mmu_notifier_range *update) > +static const struct mmu_range_notifier_ops amdgpu_mn_hsa_ops = { > + .invalidate = amdgpu_mn_invalidate...
2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...otify about mm change > - * > - * @mirror: the hmm_mirror (mm) is about to update > - * @update: the update start, end address > - * > - * We temporarily evict all BOs between start and end. This > - * necessitates evicting all user-mode queues of the process. The BOs > - * are restorted in amdgpu_mn_invalidate_range_end_hsa. > - */ > -static int > -amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror, > - const struct mmu_notifier_range *update) > +static const struct mmu_range_notifier_ops amdgpu_mn_hsa_ops = { > + .invalidate = amdgpu_mn_invalidate...
2019 Oct 28
2
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...sync_pagetables_hsa - callback to notify about mm change - * - * @mirror: the hmm_mirror (mm) is about to update - * @update: the update start, end address - * - * We temporarily evict all BOs between start and end. This - * necessitates evicting all user-mode queues of the process. The BOs - * are restorted in amdgpu_mn_invalidate_range_end_hsa. - */ -static int -amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror, - const struct mmu_notifier_range *update) +static const struct mmu_range_notifier_ops amdgpu_mn_hsa_ops = { + .invalidate = amdgpu_mn_invalidate_hsa, +}; + +static int amdgpu...
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up. This was based on top of Christoph's seven patches: "hmm_range_fault related fixes and legacy API removal v3". I assume this will go into Jason's tree since there will likely be more HMM changes in this cycle. Changes from v1 to v2: Added AMD GPU to hmm_update removal. Added 2 patches from Christoph. Added 2 patches as
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
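To make the pattern being consolidated concrete, here is a purely illustrative, user-space C analogy (names invented for the sketch, not the kernel API): each driver keeps a set of ranges it cares about and, in its invalidate_range_start-style callback, first checks whether the invalidating range overlaps any of them before doing work.

    /* Illustrative user-space analogy only -- not kernel code.  A "driver"
     * tracks a few address ranges and, on an invalidate_range_start-style
     * callback, checks the invalidating range for overlap before acting. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct tracked_range { unsigned long start, end; };   /* [start, end) */

    static const struct tracked_range tracked[] = {
        { 0x1000, 0x2000 },
        { 0x8000, 0x9000 },
    };

    /* The per-driver "is this invalidation interesting?" test. */
    static bool overlaps_tracked(unsigned long start, unsigned long end)
    {
        for (size_t i = 0; i < sizeof(tracked) / sizeof(tracked[0]); i++)
            if (start < tracked[i].end && tracked[i].start < end)
                return true;
        return false;
    }

    static void invalidate_range_start(unsigned long start, unsigned long end)
    {
        if (!overlaps_tracked(start, end))
            return;                             /* not interested */
        printf("invalidate [%#lx, %#lx): evict/refault affected buffers\n",
               start, end);
    }

    int main(void)
    {
        invalidate_range_start(0x0500, 0x0fff); /* no overlap -> ignored */
        invalidate_range_start(0x1800, 0x8800); /* overlaps both ranges  */
        return 0;
    }

The series described in this cover letter replaces each driver's open-coded version of that check (a list or an interval_tree plus its own locking) with a shared interval-tree-backed notifier.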
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others