search for: group_leaders

Displaying 9 results from an estimated 9 matches for "group_leaders".

2014 Mar 21
0
[PATCH RFC V2 4/4] tools: virtio: add a top-like utility for displaying vhost statistics
...oup(1))
+                     for x in os.listdir('/sys/devices/system/cpu')
+                     if re.match(cpure, x)]
+        import resource
+        nfiles = len(self.cpus) * 1000
+        resource.setrlimit(resource.RLIMIT_NOFILE, (nfiles, nfiles))
+        events = []
+        self.group_leaders = []
+        for cpu in self.cpus:
+            group = Group(cpu)
+            for name in _fields:
+                tracepoint = name
+                filter = None
+                # for field like kvm_exit(MMIO)
+                m = re.match(r'(.*)\((.*)\)', name)
+                if m...
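The excerpt above is the core of the tool's setup: it enumerates CPUs from sysfs, raises RLIMIT_NOFILE, and builds one event group per CPU, keeping the leaders in self.group_leaders. Below is a self-contained sketch of that pattern; Group here is a hypothetical stand-in for the class in the patch, which would attach real perf events to each group.

    import os
    import re
    import resource

    CPU_RE = r'cpu([0-9]+)'

    # Example event names; the kvm_exit(MMIO) form mirrors the comment in
    # the excerpt, vhost_virtio_update_used_idx is named in the cover letter.
    FIELDS = ['vhost_virtio_update_used_idx', 'kvm_exit(MMIO)']

    class Group(object):
        """Hypothetical stand-in for the per-CPU perf event group in the patch."""
        def __init__(self, cpu):
            self.cpu = cpu
            self.events = []      # (tracepoint, filter) pairs for this CPU

    def build_group_leaders(fields):
        # Enumerate CPUs the same way the excerpt does: sysfs entries named cpu<N>.
        cpus = [int(re.match(CPU_RE, x).group(1))
                for x in os.listdir('/sys/devices/system/cpu')
                if re.match(CPU_RE, x)]

        # One perf fd per event per CPU adds up quickly, so raise the soft
        # open-file limit (capped at the current hard limit) before opening them.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        nfiles = len(cpus) * 1000
        if hard != resource.RLIM_INFINITY:
            nfiles = min(nfiles, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (nfiles, hard))

        group_leaders = []
        for cpu in cpus:
            group = Group(cpu)
            for name in fields:
                # Split off an optional filter suffix such as kvm_exit(MMIO).
                m = re.match(r'(.*)\((.*)\)', name)
                tracepoint, flt = (m.group(1), m.group(2)) if m else (name, None)
                group.events.append((tracepoint, flt))
            group_leaders.append(group)
        return group_leaders

    if __name__ == '__main__':
        for g in build_group_leaders(FIELDS):
            print(g.cpu, g.events)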
2012 Apr 10
3
[PATCH 0/2] adding tracepoints to vhost
To help with vhost analysis, the following series adds basic tracepoints to vhost. Operations of both virtqueues and vhost work are traced in the current implementation; the net code is untouched. A top-like statistics display script is introduced to help with troubleshooting. TODO: - net specific tracepoints? --- Jason Wang (2): vhost: basic tracepoints tools: virtio: add a
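As a rough illustration of how such tracepoints are usually consumed, the sketch below enables a trace event group through the tracefs interface and tails the trace pipe. The mount points are the common ones, and the events/vhost/ directory is an assumption: it would only exist with the series applied and the tracepoints compiled in.

    import os

    # Assumption: tracefs is mounted at one of the usual locations.
    TRACEFS_CANDIDATES = ['/sys/kernel/tracing', '/sys/kernel/debug/tracing']

    def tracefs_root():
        for path in TRACEFS_CANDIDATES:
            if os.path.isdir(path):
                return path
        raise RuntimeError('tracefs not mounted')

    def enable_vhost_events():
        # Hypothetical event group name; the series would create
        # events/vhost/... entries for its tracepoints.
        enable = os.path.join(tracefs_root(), 'events', 'vhost', 'enable')
        with open(enable, 'w') as f:
            f.write('1')

    def tail_trace(lines=20):
        with open(os.path.join(tracefs_root(), 'trace_pipe')) as pipe:
            for _ in range(lines):
                print(pipe.readline().rstrip())

    if __name__ == '__main__':
        enable_vhost_events()   # needs root
        tail_trace()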
2014 Mar 21
5
[PATCH RFC V2 0/4] Adding tracepoints to vhost/net
Recent debugging of vhost net zerocopy shows the need for tracepoints. So, to help with vhost{net} debugging and performance analysis, the following series adds basic tracepoints to vhost. Operations of both vhost and vhost_net are traced in the current implementation. A top-like statistics display script is introduced to help with troubleshooting: vhost statistics vhost_virtio_update_used_idx
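The "top-like" display can be approximated in a few lines: tally event names coming out of trace_pipe and print per-second counts. This is only a sketch of the idea; the actual script in the series builds perf event groups per CPU (see the group_leaders excerpt above) rather than parsing text, and the trace_pipe path is an assumed tracefs mount point.

    import collections
    import re
    import time

    TRACE_PIPE = '/sys/kernel/tracing/trace_pipe'   # assumed tracefs mount point
    # Rough parse: grab the event name that follows the timestamp field,
    # e.g. "... 139.238372: vhost_virtio_update_used_idx: ...".
    EVENT_RE = re.compile(r': (\w+):')

    def top_like(interval=1.0):
        counts = collections.Counter()
        deadline = time.time() + interval
        with open(TRACE_PIPE) as pipe:
            while True:
                line = pipe.readline()   # blocks until the next event arrives
                m = EVENT_RE.search(line)
                if m:
                    counts[m.group(1)] += 1
                if time.time() >= deadline:
                    # Print the busiest events, top-style, then start a new window.
                    for name, n in counts.most_common(10):
                        print('%-40s %10.0f/s' % (name, n / interval))
                    print('-' * 55)
                    counts.clear()
                    deadline = time.time() + interval

    if __name__ == '__main__':
        top_like()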
2016 Nov 30
1
slow directory access, convert_string_internal: Conversion error: Incomplete multibyte sequence
.../append_data/read_xattr/write_xattr/execute/delete_child
    /read_attributes/write_attributes/delete/read_acl/write_acl
    /write_owner:file_inherit/dir_inherit:allow
2:group@:list_directory/read_data/read_xattr/execute/read_attributes
    /read_acl:file_inherit/dir_inherit:allow
3:group:group_leaders:list_directory/read_data/read_xattr/execute
    /read_attributes/read_acl:file_inherit/dir_inherit:allow
4:user:10157:list_directory/read_data/read_xattr/execute/read_attributes
    /read_acl:file_inherit/dir_inherit/no_propagate:allow
5:user:10162:list_directory/read_data/add_file/write_da...
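Each of those entries appears to be the verbose NFSv4 ACE text form, index:type[:principal]:permissions:inheritance_flags:allow|deny, as listed by Solaris/illumos-style servers. A small parser sketch, assuming exactly that layout (field names here are my own, not from the post):

    import collections

    Ace = collections.namedtuple('Ace', 'index type principal perms flags access')

    def parse_ace(line):
        parts = line.strip().split(':')
        index, ace_type = parts[0], parts[1]
        if ace_type in ('user', 'group'):
            # Named principal: user:<name-or-uid> / group:<name-or-gid>
            principal = parts[2]
            perms, flags, access = parts[3], parts[4], parts[5]
        else:
            # owner@ / group@ / everyone@ carry no separate principal field.
            principal = None
            perms, flags, access = parts[2], parts[3], parts[4]
        return Ace(int(index), ace_type, principal,
                   perms.split('/'), flags.split('/'), access)

    example = ('3:group:group_leaders:list_directory/read_data/read_xattr/execute'
               '/read_attributes/read_acl:file_inherit/dir_inherit:allow')
    print(parse_ace(example))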
2016 Nov 30
2
slow directory access, convert_string_internal: Conversion error: Incomplete multibyte sequence
There are definitely some files with some weird names; in an ssh session they don't even have regular characters, e.g.: -rw-rw---- 1 xxx xxx 114985112 Oct 31 14:39 ▒^t Not sure if that is related to the problems, though. The top command shows: Memory: 12G phys mem, 343M free mem, 2048M total swap, 2048M free swap. This is in the evening, so there should not be much load, but I think
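Filenames like that are usually what triggers the "Incomplete multibyte sequence" conversion error: the on-disk byte name is not valid in the configured unix charset. A hedged sketch for locating such entries, assuming the share's charset is UTF-8:

    import os
    import sys

    def find_undecodable(root, encoding='utf-8'):
        """Walk root and report directory entries whose raw byte names
        do not decode in the given charset."""
        # Use a bytes path so os.walk returns raw, undecoded names.
        for dirpath, dirnames, filenames in os.walk(os.fsencode(root)):
            for name in dirnames + filenames:
                try:
                    name.decode(encoding)
                except UnicodeDecodeError:
                    # Print an escaped bytes representation of the bad name.
                    print(os.path.join(dirpath, name))

    if __name__ == '__main__':
        find_undecodable(sys.argv[1] if len(sys.argv) > 1 else '.')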
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree; the others
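The pattern being consolidated is essentially: on every invalidate_range_start(start, end), look up whether that span overlaps any range the driver registered, and bail out early if not. A userspace Python illustration of that filtering idea only (not kernel code; the series replaces each driver's private lookup with a shared interval-tree implementation):

    import bisect

    class RangeNotifierDemo(object):
        """Illustration of the per-driver filtering step in invalidate_range_start."""

        def __init__(self):
            self.ranges = []   # sorted (start, end) spans the "driver" cares about

        def register(self, start, end):
            bisect.insort(self.ranges, (start, end))

        def interested(self, inv_start, inv_end):
            # Linear scan for clarity; an interval tree makes this lookup cheap.
            return any(start < inv_end and inv_start < end
                       for start, end in self.ranges)

    demo = RangeNotifierDemo()
    demo.register(0x1000, 0x2000)
    print(demo.interested(0x1800, 0x3000))   # True: overlaps the registered range
    print(demo.interested(0x4000, 0x5000))   # False: driver can ignore this one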
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree; the others