Displaying 20 results from an estimated 22 matches for "job1".
2010 Oct 17
0
Help on choosing the appropriate analysis method
..."right" strategy for a
particular dataset.
We conducted 24-hour electric field measurements on 90 subjects. The
subjects are grouped by job (2 categories) and location (3 categories), and
four exposure metrics are recorded for each subject.
An excerpt from the data:
n job location M OA UE all
0 job1 dist_200 0.297 0.072 0.171 0.297
1 job1 dist_200 0.083 0.529 0.066 0.529
2 job1 dist_200 0.105 0.145 1.072 1.072
3 job1 dist_200 0.096 0.431 0.099 0.431
4 job1 dist_200 0.137 0.077 0.092 0.137
5 job1 dist_20 NA 0.296 0.107 0.296
6 job1 dist_200 NA 1.595 0.293 1.595
7 job1 dist_20 NA 0.085 0.076 0.0...
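The tabulated excerpt can be summarized per group with a short script. A minimal sketch using only the Python standard library, with the first six rows of the excerpt pasted in verbatim (the grouping logic is an illustration, not the poster's analysis method):

```python
from collections import defaultdict

# First six rows of the excerpt above; NA marks a missing measurement.
raw = """\
0 job1 dist_200 0.297 0.072 0.171 0.297
1 job1 dist_200 0.083 0.529 0.066 0.529
2 job1 dist_200 0.105 0.145 1.072 1.072
3 job1 dist_200 0.096 0.431 0.099 0.431
4 job1 dist_200 0.137 0.077 0.092 0.137
5 job1 dist_20 NA 0.296 0.107 0.296
"""

# Mean of the M metric per (job, location), skipping NA values.
groups = defaultdict(list)
for line in raw.splitlines():
    n, job, location, m, oa, ue, allv = line.split()
    if m != "NA":
        groups[(job, location)].append(float(m))

means = {k: sum(v) / len(v) for k, v in groups.items()}
```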
2017 Jan 19
2
undefined symbols during linking LLDB 4.0 RC1
.../usr
-DLLDB_DISABLE_PYTHON=1
-DTARGET_TRIPLE="x86_64-pc-linux-gnu"
-DLIBCXX_INSTALL_EXPERIMENTAL_LIBRARY=ON
-DLLVM_ENABLE_LLD=ON
The list of undefined symbols and the invocation follow:
[ 89%] Linking CXX executable ../../../../bin/lldb
cd /opt/bamboo-agent-01/xml-data/build-dir/CLANG-BFRH-JOB1/build/RELEASE_40_RC1/Linux/x86_64/llvm_build_phase1/tools/lldb/tools/driver && /usr/local/bin/cmake -E cmake_link_script CMakeFiles/lldb.dir/link.txt --verbose=1
/usr/bin/clang++ -stdlib=libc++ -fPIC -fvisibility-inlines-hidden -Wall -W -Wno-unused-parameter -Wwrite-strings -Wcast-qual -...
2017 Sep 06
2
GlusterFS as virtual machine storage
...ted
> replicated volume with arbiter (2+1) and VM on KVM (via Openstack)
> with disk accessible through gfapi. Volume group is set to virt
> (gluster volume set gv_openstack_1 virt). VM runs current (all
> packages updated) Ubuntu Xenial.
>
> I set up the following fio job:
>
> [job1]
> ioengine=libaio
> size=1g
> loops=16
> bs=512k
> direct=1
> filename=/tmp/fio.data2
>
> When I run fio fio.job and reboot one of the data nodes, the IO statistics
> reported by fio drop to 0KB/0KB and 0 IOPS. After a while, the root
> filesystem gets remounted as read-only....
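As an aside, a fio job file like the one quoted is plain INI, so it can be generated or sanity-checked programmatically. A minimal sketch using only the Python standard library (the job body is copied from the message; configparser is not part of fio itself):

```python
import configparser

# The fio job from the message above, reproduced verbatim.
job_text = """\
[job1]
ioengine=libaio
size=1g
loops=16
bs=512k
direct=1
filename=/tmp/fio.data2
"""

# fio job files are INI-style: each [section] is one job definition.
cfg = configparser.ConfigParser()
cfg.read_string(job_text)
job1 = dict(cfg["job1"])
```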
2013 May 21
2
rsync behavior on copy-on-write filesystems
...t
0.057u 17.389s 0:42.29 41.2% 0+0k 19737984+20971520io 0pf+0w
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/jobarchive-Ajobarchivetest2
300G 20G 274G 7% /vol/jobarchive_Ajobarchivetest2
## 4) Make a snapshot of the second volume called job1. Note that it
takes up almost no space.
$ btrfs subvolume snapshot current job1
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/jobarchive-Ajobarchivetest2
300G 21G 273G 7% /vol/jobarchive_Ajobarchivetest2
## 5) Change the first 4k bytes of...
2017 Sep 06
0
GlusterFS as virtual machine storage
...ith Gluster 3.10.5 on CentOS 7. I created
replicated volume with arbiter (2+1) and VM on KVM (via Openstack)
with disk accessible through gfapi. Volume group is set to virt
(gluster volume set gv_openstack_1 virt). VM runs current (all
packages updated) Ubuntu Xenial.
I set up the following fio job:
[job1]
ioengine=libaio
size=1g
loops=16
bs=512k
direct=1
filename=/tmp/fio.data2
When I run fio fio.job and reboot one of the data nodes, the IO statistics
reported by fio drop to 0KB/0KB and 0 IOPS. After a while, the root
filesystem gets remounted as read-only.
If you care about infrastructure, setup details...
2017 Sep 07
3
GlusterFS as virtual machine storage
...tack)
> > > with disk accessible through gfapi. Volume group is set to virt
> > > (gluster volume set gv_openstack_1 virt). VM runs current (all
> > > packages updated) Ubuntu Xenial.
> > >
> > > I set up the following fio job:
> > >
> > > [job1]
> > > ioengine=libaio
> > > size=1g
> > > loops=16
> > > bs=512k
> > > direct=1
> > > filename=/tmp/fio.data2
> > >
> > > When I run fio fio.job and reboot one of the data nodes, IO statistics
> > > reported by fio dr...
2017 Sep 06
0
GlusterFS as virtual machine storage
...iter (2+1) and VM on KVM (via Openstack)
> > with disk accessible through gfapi. Volume group is set to virt
> > (gluster volume set gv_openstack_1 virt). VM runs current (all
> > packages updated) Ubuntu Xenial.
> >
> > I set up the following fio job:
> >
> > [job1]
> > ioengine=libaio
> > size=1g
> > loops=16
> > bs=512k
> > direct=1
> > filename=/tmp/fio.data2
> >
> > When I run fio fio.job and reboot one of the data nodes, IO statistics
> > reported by fio drop to 0KB/0KB and 0 IOPS. After a while, root...
2017 Sep 03
3
GlusterFS as virtual machine storage
On 30-08-2017 17:07, Ivan Rossi wrote:
> There has been a bug associated with sharding that led to VM corruption
> and that has been around for a long time (difficult to reproduce, as I
> understood). I have not seen reports of it for some time after the
> last fix, so hopefully VM hosting is now stable.
Mmmm... this is precisely the kind of bug that scares me... data
corruption :|
Any
2013 Jan 21
1
btrfs_start_delalloc_inodes livelocks when creating snapshot under IO
...nvolved) with 8 threads each like this:
fio --thread --directory=/btrfs/subvol1 --rw=randwrite --randrepeat=1
--fadvise_hint=0 --fallocate=posix --size=1000m --filesize=10737418240
--bsrange=512b-64k --scramble_buffers=1 --nrfiles=1 --overwrite=1
--ioengine=sync --filename=file-1 --name=job0 --name=job1 --name=job2
--name=job3 --name=job4 --name=job5 --name=job6 --name=job7
The files are preallocated with fallocate before the fio run.
Mount options: noatime,nodatasum,nodatacow,nospace_cache
Can somebody please advise on how to address this issue, and, if
possible, how to solve it on kernel 3.6....
2003 Jan 04
0
Anyone here use lightwave/screamernet
...way, I am requesting any information people here have about their
experience with running screamernet under wine. Currently I can run it
in a console and it seems to run fine. The command I use is this:
wine --debugmsg fixme-all -- x:/Programs/lwsn.exe -2
-c"x:/LWConfigFiles" "w:/job1" "w:/ack1"
that runs screamernet and I get the expected output.
The problem comes when I try to run this command through my
interface: it seems to just "lock". Not hard, since you can kill
it, but it just seems to sit there and do nothing.
it seems to say the o...
2017 Sep 07
0
GlusterFS as virtual machine storage
...h disk accessible through gfapi. Volume group is set to virt
>> > > (gluster volume set gv_openstack_1 virt). VM runs current (all
>> > > packages updated) Ubuntu Xenial.
>> > >
>> > > I set up the following fio job:
>> > >
>> > > [job1]
>> > > ioengine=libaio
>> > > size=1g
>> > > loops=16
>> > > bs=512k
>> > > direct=1
>> > > filename=/tmp/fio.data2
>> > >
>> > > When I run fio fio.job and reboot one of the data nodes, IO statistics
&...
2017 Sep 07
2
GlusterFS as virtual machine storage
...h gfapi. Volume group is set to virt
>>> > > (gluster volume set gv_openstack_1 virt). VM runs current (all
>>> > > packages updated) Ubuntu Xenial.
>>> > >
>>> > > I set up the following fio job:
>>> > >
>>> > > [job1]
>>> > > ioengine=libaio
>>> > > size=1g
>>> > > loops=16
>>> > > bs=512k
>>> > > direct=1
>>> > > filename=/tmp/fio.data2
>>> > >
>>> > > When I run fio fio.job and reboot one of...
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...1234
Here are the test results:
local NVMe: 860MB/s
qemu-nvme: 108MB/s
qemu-nvme+google-ext: 140MB/s
qemu-nvme-google-ext+eventfd: 190MB/s
root@wheezy:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
norandommap
group_reporting
gtod_reduce=1
numjobs=8
[job1]
filename=/dev/nvme0n1
rw=read
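fio merges the [global] section into every job unless the job overrides an option, so [job1] above effectively inherits bs, ioengine, and the rest. That merge can be sketched with Python's configparser (allow_no_value accepts flag-style options such as time_based; this is an illustration of the semantics, not fio's own parser):

```python
import configparser

# The job file quoted above, reproduced verbatim.
job_text = """\
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
norandommap
group_reporting
gtod_reduce=1
numjobs=8
[job1]
filename=/dev/nvme0n1
rw=read
"""

# Flag options like time_based carry no value, hence allow_no_value.
cfg = configparser.ConfigParser(allow_no_value=True)
cfg.read_string(job_text)

# Sketch of fio's semantics: [global] supplies defaults for every job.
job1 = dict(cfg["global"])
job1.update(cfg["job1"])
```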
2017 Jan 23
2
undefined symbols during linking LLDB 4.0 RC1
...nu"
>> -DLIBCXX_INSTALL_EXPERIMENTAL_LIBRARY=ON
>> -DLLVM_ENABLE_LLD=ON
>>
>> The list of undefined symbols and the invocation follow:
>>
>> [ 89%] Linking CXX executable ../../../../bin/lldb
>> cd
>> /opt/bamboo-agent-01/xml-data/build-dir/CLANG-BFRH-JOB1/build/RELEASE_40_RC1/Linux/x86_64/llvm_build_phase1/tools/lldb/tools/driver
>> && /usr/local/bin/cmake -E cmake_link_script CMakeFiles/lldb.dir/link.txt
>> --verbose=1
>> /usr/bin/clang++ -stdlib=libc++ -fPIC -fvisibility-inlines-hidden -Wall
>> -W -Wno-unused-par...
2001 Dec 11
0
Wine 2001.11.08, FreeBSD 4.{2,4}, Lightwave 5.6 Screamernet Node
...m
from the Z: drive, giving it a couple of parameters that point back to
the share.
A typical invocation on the first node would be (after sharing C: on the
controller as c_drive)
C:\> net use z: \\controller\c_drive
C:\> z:
Z:\> newtek\programs\lwsn.exe -2 z:\newtek\programs\job1 \
z:\newtek\programs\ack1
The second node uses ...\job2 and ...\ack2, and so on, for each
additional node. The controller and the nodes co-ordinate with one
another by reading and writing to these files.
lwsn.exe is an NT console application; it doesn't use the GUI. Notice
that n...
2017 Sep 08
0
GlusterFS as virtual machine storage
...set to virt
>>>> > > (gluster volume set gv_openstack_1 virt). VM runs current (all
>>>> > > packages updated) Ubuntu Xenial.
>>>> > >
>>>> > > I set up the following fio job:
>>>> > >
>>>> > > [job1]
>>>> > > ioengine=libaio
>>>> > > size=1g
>>>> > > loops=16
>>>> > > bs=512k
>>>> > > direct=1
>>>> > > filename=/tmp/fio.data2
>>>> > >
>>>> > > When I r...
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>> }
>>>
>>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> - cq->head = new_head;
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
...u-nvme+google-ext: 100MB/s
virtio-blk: 174MB/s
virtio-scsi: 118MB/s
I'll show you the qemu-vhost-nvme+google-ext numbers later.
root@guest:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=120
time_based
rw=randread
norandommap
group_reporting
gtod_reduce=1
numjobs=2
[job1]
filename=/dev/nvme0n1
#filename=/dev/vdb
#filename=/dev/sda
rw=read
Patches also available at:
kernel:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
qemu:
http://www.minggr.net/cgit/cgit.cgi/qemu/log/?h=nvme-google-ext
Thanks,
Ming