search for: proc_nam

Displaying 20 results from an estimated 32 matches for "proc_nam".

2011 Sep 08
4
[PATCH] Staging: hv: storvsc: Show the modulename in /sys/class/scsi_host/*/proc_name
mkinitrd relies on /sys/class/scsi_host/*/proc_name instead of /sys/block/sd*/device/../../../modalias to get the scsi driver module name. As a fallback the sysfs driver name could be used, which does not match the module name either ('storvsc' vs. 'hv_storvsc'). Signed-off-by: Olaf Hering <olaf at aepfle.de> --- drivers/sta...
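proc_name is just a field in a driver's struct scsi_host_template, which the SCSI midlayer exports as /sys/class/scsi_host/hostN/proc_name. As a rough illustration of the kind of fix described above (a hypothetical driver, not the actual storvsc patch; KBUILD_MODNAME is the kbuild-provided macro that expands to the module name):

    #include <linux/module.h>
    #include <scsi/scsi_host.h>

    static struct scsi_host_template example_host_template = {
            .module    = THIS_MODULE,
            .name      = "example_hba",
            /* Exported as /sys/class/scsi_host/hostN/proc_name; using
             * KBUILD_MODNAME keeps it identical to the module name, so
             * tools like mkinitrd can map a SCSI host back to its module. */
            .proc_name = KBUILD_MODNAME,
            .this_id   = -1,
            /* .queuecommand and the other mandatory hooks are omitted
             * from this sketch. */
    };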
2010 Nov 03
1
kernel bug fixed in later kernels
...e <sys/stat.h> #include <sys/types.h> #include <unistd.h> int main(int argc, char** argv) { int fd = open("foo", O_RDONLY); if (setuid(1000)) { printf("could not setuid, run as root with correct uid\n"); return 1; } char proc_name[1024]; sprintf(proc_name, "/proc/self/fd/%d", fd); struct stat stat_buf; int rc = stat(proc_name, &stat_buf); if (rc == 0) { printf("all good\n"); } else { printf("busted, could not access %s\n", proc_name); }...
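The excerpt above is the truncated reproducer from that report. A self-contained version of the same test, reconstructed here for readability (the file name "foo" and uid 1000 are simply the values used in the excerpt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
            /* Open any file while still privileged. */
            int fd = open("foo", O_RDONLY);
            if (fd < 0) {
                    perror("open foo");
                    return 1;
            }

            /* Drop privileges; run as root so setuid() can succeed. */
            if (setuid(1000)) {
                    printf("could not setuid, run as root with correct uid\n");
                    return 1;
            }

            /* On affected kernels, stat() on /proc/self/fd/N fails after
             * the uid change even though the descriptor is still open. */
            char proc_name[1024];
            snprintf(proc_name, sizeof(proc_name), "/proc/self/fd/%d", fd);

            struct stat stat_buf;
            if (stat(proc_name, &stat_buf) == 0)
                    printf("all good\n");
            else
                    printf("busted, could not access %s\n", proc_name);
            return 0;
    }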
2014 May 24
0
[PATCH] virtio-scsi: Implement change_queue_depth for virtscsi targets
...th)); + break; + default: + return -EOPNOTSUPP; + } + + return sdev->queue_depth; +} + static int virtscsi_abort(struct scsi_cmnd *sc) { struct virtio_scsi *vscsi = shost_priv(sc->device->host); @@ -684,6 +715,7 @@ static struct scsi_host_template virtscsi_host_template_single = { .proc_name = "virtio_scsi", .this_id = -1, .queuecommand = virtscsi_queuecommand_single, + .change_queue_depth = virtscsi_change_queue_depth, .eh_abort_handler = virtscsi_abort, .eh_device_reset_handler = virtscsi_device_reset, @@ -700,6 +732,7 @@ static struct scsi_host_template virtscsi...
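The hunk above wires a change_queue_depth handler into the virtio-scsi host templates. Below is a rough reconstruction of what such a handler looks like against the pre-blk-mq SCSI API of that era; scsi_track_queue_full(), scsi_adjust_queue_depth(), scsi_get_tag_type() and the SCSI_QDEPTH_* reason codes belong to that old midlayer interface, and the body is a sketch rather than the exact patch:

    static int virtscsi_change_queue_depth(struct scsi_device *sdev,
                                           int qdepth, int reason)
    {
            struct Scsi_Host *shost = sdev->host;
            int max_depth = shost->cmd_per_lun;

            switch (reason) {
            case SCSI_QDEPTH_QFULL:      /* device returned QUEUE FULL */
                    scsi_track_queue_full(sdev, qdepth);
                    break;
            case SCSI_QDEPTH_RAMP_UP:    /* midlayer raising the depth again */
            case SCSI_QDEPTH_DEFAULT:    /* user/sysfs request */
                    scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev),
                                            min(max_depth, qdepth));
                    break;
            default:
                    return -EOPNOTSUPP;
            }

            return sdev->queue_depth;
    }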
2017 Nov 07
2
Problem with getting restapi up&running
...r when_ready: <function when_ready at 0x2842d70> pre_fork: <function pre_fork at 0x2842ed8> cert_reqs: 0 preload_app: False keepalive: 2 accesslog: /var/log/glusterrest/access.log group: 0 graceful_timeout: 30 do_handshake_on_connect: False spew: False workers: 2 proc_name: None sendfile: None pidfile: /var/run/glusterrest.pid umask: 0 on_reload: <function on_reload at 0x2842c08> pre_exec: <function pre_exec at 0x2847668> worker_tmp_dir: None limit_request_fields: 100 pythonpath: None on_exit: <function on_exit at 0x2847e60> con...
2011 Jun 16
7
[PATCH] replace fchmod()-based heartbeat with raindrops
...in Mongrel). diff --git a/lib/unicorn/http_server.rb b/lib/unicorn/http_server.rb index 059f040..0a9af86 100644 --- a/lib/unicorn/http_server.rb +++ b/lib/unicorn/http_server.rb @@ -373,7 +373,7 @@ class Unicorn::HttpServer self.pid = pid.chomp('.oldbin') if pid proc_name 'master' else - worker = WORKERS.delete(wpid) and worker.tmp.close rescue nil + worker = WORKERS.delete(wpid) and worker.close rescue nil m = "reaped #{status.inspect} worker=#{worker.nr rescue 'unknown'}" status....
2008 Jun 04
0
Finding module name for SCSI host adapter for a given SCSI target
...trols that target. The objective is to be able to unload and reload the kernel module when the drive gets into a state that requires a SCSI bus reset for recovery. The best I've been able to come up with so far is: SCSIMOD=$(cat /sys/class/scsi_tape/${DEV##*/}/device/../../scsi_host:host*/proc_name) Anyone know of a way that is a bit less convoluted? -- Bob Nichols "NOSPAM" is really part of my email address. Do NOT delete it.
2009 Sep 23
0
jbd/kjournald oops on 2.6.30.1
...3 lists but could not find any similar oops/reports). == Oops =================== BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 IP: [<ffffffff80373520>] __journal_remove_journal_head+0x10/0x120 PGD 0 Oops: 0000 [#1] SMP last sysfs file: /sys/class/scsi_host/host0/proc_name CPU 0 Pid: 3834, comm: kjournald Not tainted 2.6.30.1_test #1 RIP: 0010:[<ffffffff80373520>] [<ffffffff80373520>] __journal_remove_journal_head+0x10/0x120 RSP: 0018:ffff880c7ee11d80 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000034 RDX: 00000000000000...
2006 Aug 02
0
[PATCH 0/6] SCSI frontend and backend drivers
...er-space daemon before starting domainUs (otherwise Domain0 crashes). /usr/sbin/tgtd -f -d8 The above command runs the daemon in the foreground. So start the VMs from another terminal. If everything goes well, a SCSI host shows up in your VM (DomainU). clematis:/# cat /sys/class/scsi_host/host0/proc_name scsifront And you can see the disk device: clematis:/# ls /sys/class/scsi_device/0\:0\:0\:0/device/ block:sda iocounterbits queue_depth scsi_level vendor bus iodone_cnt queue_type state delete ioerr_cnt rescan timeout device_bl...
2018 May 02
0
More oddities...
...096 May 2 10:15 em_message_type -r--r--r--. 1 root root 4096 May 2 10:15 host_busy --w-------. 1 root root 4096 May 2 10:15 host_reset -rw-r--r--. 1 root root 4096 May 1 18:00 link_power_management_policy drwxr-xr-x. 2 root root 0 May 2 10:15 power/ -r--r--r--. 1 root root 4096 May 2 10:15 proc_name -r--r--r--. 1 root root 4096 May 2 10:15 prot_capabilities -r--r--r--. 1 root root 4096 May 2 10:15 prot_guard_type --w-------. 1 root root 4096 May 2 10:15 scan -r--r--r--. 1 root root 4096 May 2 10:15 sg_prot_tablesize -r--r--r--. 1 root root 4096 May 2 10:15 sg_tablesize -rw-r--r--. 1 root...
2010 Jan 28
3
How to map ata#.# numbers to /dev/sd numbers?
On my C5 machine (a Dell XPS420) I have a 500Gb disk on the internal SATA controller. I also have a SiI3132 dual-port multi-device eSATA card. This is connected to an external SATA array of disks. Now occasionally I see something like this in my logs ata7.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 ata7.01: irq_stat 0x00060002, device error via D2H FIS ata7.01: cmd
2013 Feb 12
6
[PATCH v3 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). The patches build on top of the new virtio APIs at http://permalink.gmane.org/gmane.linux.kernel.virtualization/18431; the new API simplifies the locking of the virtio-scsi driver nicely, thus it makes sense to require them as a prerequisite.
2013 Mar 19
6
[PATCH V5 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches. We hope this can go into virtio-next together with the virtio ring rework patches. V5: improving the grammar of 1/5 (Paolo) move the dropping of sg_elems to 'virtio-scsi: use
2013 Mar 11
7
[PATCH V4 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches. We hope this can go into virtio-next together with the virtio ring rework patches. V4: rebase on virtio ring rework patches (rusty's pending-rebases branch) V3 can be found
2013 Dec 09
2
[PATCH] rework master-to-worker signaling to use a pipe
...ocess(worker) + worker.atfork_child # we'll re-trap :QUIT later for graceful shutdown iff we accept clients EXIT_SIGS.each { |sig| trap(sig) { exit!(0) } } exit!(0) if (SIG_QUEUE & EXIT_SIGS)[0] @@ -608,23 +605,27 @@ class Unicorn::HttpServer SIG_QUEUE.clear proc_name "worker[#{worker.nr}]" START_CTX.clear - init_self_pipe! WORKERS.clear + + after_fork.call(self, worker) # can drop perms and create listeners LISTENERS.each { |sock| sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) } - after_fork.call(self, worker) # can drop perms...
2013 Mar 20
7
[PATCH V6 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches, which have already gone into virtio-next today. We hope this can go into virtio-next together with the virtio ring rework patches. V6: rework "redo allocation of target data"
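The steering idea behind these series, heavily simplified here and using illustrative names rather than the driver's actual ones: keep every request for a target on one request virtqueue while that target has I/O in flight, and only re-seed the queue choice (e.g. from the submitting CPU) once the target goes idle.

    #include <linux/atomic.h>
    #include <linux/smp.h>
    #include <linux/spinlock.h>

    struct example_vq { /* wraps one request virtqueue; details omitted */ };

    struct example_host {
            unsigned int num_queues;
            struct example_vq *req_vqs;   /* array of num_queues request queues */
    };

    struct example_target {
            spinlock_t lock;
            struct example_vq *req_vq;    /* queue currently serving this target */
            atomic_t reqs;                /* requests in flight to this target;
                                             decremented on completion (not shown) */
    };

    static struct example_vq *example_pick_vq(struct example_host *host,
                                              struct example_target *tgt)
    {
            struct example_vq *vq;
            unsigned long flags;

            spin_lock_irqsave(&tgt->lock, flags);
            if (atomic_inc_return(&tgt->reqs) > 1) {
                    /* Target already busy: stick to the same virtqueue so all
                     * outstanding requests complete on one queue. */
                    vq = tgt->req_vq;
            } else {
                    /* Target was idle: re-seed the choice from the submitting
                     * CPU to spread different targets across the queues. */
                    vq = &host->req_vqs[smp_processor_id() % host->num_queues];
                    tgt->req_vq = vq;
            }
            spin_unlock_irqrestore(&tgt->lock, flags);
            return vq;
    }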