Displaying 20 results from an estimated 22 matches for "time_base".
Did you mean: time_based
2019 Jul 11 (2) Need help with streaming to Icecast
...odec->codec_type == AVMediaType.AVMEDIA_TYPE_VIDEO)
    {
        pInputVideoStream = _pInputFormatContext->streams[i];
    }
}

_pOutputStream->avg_frame_rate = pInputVideoStream->avg_frame_rate;
_pOutputStream->time_base = pInputVideoStream->time_base;
_pOutputStream->sample_aspect_ratio = pInputVideoStream->sample_aspect_ratio;
ffmpeg.avcodec_parameters_copy(_pOutputStream->codecpar, pInputVideoStream->codecpar);
_pOutputStream->codecpar->codec_type = AVMedi...
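For reference, the truncated snippet above is C# using the FFmpeg.AutoGen bindings; in the plain C libavformat API it maps roughly onto the following minimal sketch (function and variable names are illustrative, not taken from the thread):

    /* Sketch only: mirror an input video stream's parameters onto a new
     * output stream, as the thread's C# code appears to do. */
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static int mirror_video_stream(AVFormatContext *in_ctx, AVFormatContext *out_ctx)
    {
        AVStream *in_stream = NULL;

        /* Find the first video stream in the input. */
        for (unsigned i = 0; i < in_ctx->nb_streams; i++) {
            if (in_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
                in_stream = in_ctx->streams[i];
                break;
            }
        }
        if (!in_stream)
            return AVERROR_STREAM_NOT_FOUND;

        /* Create the output stream and copy timing and codec parameters. */
        AVStream *out_stream = avformat_new_stream(out_ctx, NULL);
        if (!out_stream)
            return AVERROR(ENOMEM);

        out_stream->time_base           = in_stream->time_base;
        out_stream->avg_frame_rate      = in_stream->avg_frame_rate;
        out_stream->sample_aspect_ratio = in_stream->sample_aspect_ratio;
        return avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
    }

One caveat worth keeping in mind when remuxing to a streaming output: avformat_write_header() may replace the stream's time_base with whatever the muxer requires, so packet timestamps are usually rescaled (for example with av_packet_rescale_ts()) from the input stream's time_base to the output stream's actual time_base before writing.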
2007 Jan 06 (7) FFmpeg Theora encoding patch
Hi,
Attached is my patch to add theora encoding to ffmpeg's libavcodec (by
using libtheora). I am requesting help to fix the bug I mention below
and am seeking general comments before I submit the patch properly.
Files encoded using this encoder have a problem playing in VLC. The
files will not play unless "Drop late frames" has been unticked in the
advanced video settings.
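Frames that a player treats as "late" are often a presentation-timestamp problem, so for context here is a generic sketch (modern libavcodec API, not the 2007-era code in the attached patch) of how an encoder's time_base and frame pts normally relate; the names are illustrative only:

    /* Generic illustration: the encoder's time_base defines the unit of
     * frame->pts, so a 1/fps time_base lets pts simply count frames. */
    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    static void set_frame_timing(AVCodecContext *enc, AVFrame *frame,
                                 int fps, int64_t frame_index)
    {
        enc->time_base = (AVRational){1, fps};   /* one tick per frame */
        frame->pts = frame_index;                /* pts in time_base units */
    }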
2019 Jul 11 (1) Need help with streaming to Icecast
...> pInputVideoStream = _pInputFormatContext->streams[i];
> > }
> > }
> >
> > _pOutputStream->avg_frame_rate = pInputVideoStream->avg_frame_rate;
> > _pOutputStream->time_base = pInputVideoStream->time_base;
> > _pOutputStream->sample_aspect_ratio = pInputVideoStream->sample_aspect_ratio;
> > ffmpeg.avcodec_parameters_copy(_pOutputStream->codecpar, pInputVideoStream->codecpar);
> > _...
2019 Jul 19 (3) Samba async performance - bottleneck or bug?
...onor the sync request, similarly to ZFS.
So to summarize, this is the situation:
1) I run a fio benchmark requesting small, random, async writes. Command is "fio --direct=1 --sync=0 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --size=32k --time_based". I run this command both on the host and on the Samba client, on the same exact ZFS dataset
2) The ZFS dataset only writes async, converting sync to async writes at all times
3) That same dataset, being shared through Samba, also only performs async writes (strict sync = no)
4) With t...
2019 Jul 11 (0) Need help with streaming to Icecast
...PE_VIDEO)
> {
>     pInputVideoStream = _pInputFormatContext->streams[i];
> }
> }
>
> _pOutputStream->avg_frame_rate = pInputVideoStream->avg_frame_rate;
> _pOutputStream->time_base = pInputVideoStream->time_base;
> _pOutputStream->sample_aspect_ratio = pInputVideoStream->sample_aspect_ratio;
> ffmpeg.avcodec_parameters_copy(_pOutputStream->codecpar, pInputVideoStream->codecpar);
> _pOutputStream->codecpa...
2019 Jul 18 (2) Samba async performance - bottleneck or bug?
...do a random sync write benchmark on the host on this dataset, it will use RAM to do the write, drastically speeding up random writes.
The below benchmark command on the ZFS host:
fio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --time_based
Has an average speed of 520MB/s (which is the maximum speed of my SATA SSD). Despite requesting a sync write, ZFS turns it into an async write, dramatically speeding it up. Clearly the results are great when I directly benchmark from the host into the sync=disabled ZFS dataset. But this doesn't...
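For readers unfamiliar with the fio flags being discussed: roughly speaking, --sync=1 corresponds to opening the target with O_SYNC, so every write must be stable before it returns, while --sync=0 leaves writes buffered and asynchronous. A minimal POSIX C sketch of that difference (it ignores --direct/O_DIRECT and its alignment requirements for brevity; the file names are illustrative):

    /* Illustrative only: the syscall-level difference between fio's
     * --sync=1 (O_SYNC) and --sync=0 (plain buffered write). */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        memset(buf, 0xab, sizeof buf);

        /* --sync=1 equivalent: O_SYNC makes each 4K write a sync write. */
        int fd_sync  = open("sync.dat",  O_WRONLY | O_CREAT | O_SYNC, 0644);
        /* --sync=0 equivalent: buffered write, completed asynchronously. */
        int fd_async = open("async.dat", O_WRONLY | O_CREAT, 0644);
        if (fd_sync < 0 || fd_async < 0)
            return EXIT_FAILURE;

        /* The first write returns only once the data is stable (unless the
         * filesystem, e.g. ZFS with sync=disabled, downgrades it); the
         * second returns as soon as the data reaches the page cache. */
        if (write(fd_sync,  buf, sizeof buf) != (ssize_t)sizeof buf ||
            write(fd_async, buf, sizeof buf) != (ssize_t)sizeof buf)
            return EXIT_FAILURE;

        close(fd_sync);
        close(fd_async);
        return EXIT_SUCCESS;
    }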
2018 Jul 25 (2) [RFC 0/4] Virtio uses DMA API for all devices
...rw-r--r-- 1 libvirt-qemu kvm 5.0G Jul 24 06:26 disk2.img
mount:
size=21G on /mnt type tmpfs (rw,relatime,size=22020096k)
TEST CONFIG
===========
FIO (https://linux.die.net/man/1/fio) is being run with and without
the patches.
Read test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=read
unlink=1
iodepth=256
Write test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=write
unlink=1
iodepth=256
The virtio block device comes up as /dev/vda on the guest with
/sys/block/vda/queue/n...
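As a rough illustration of what "ioengine=libaio" with a deep iodepth asks fio to do, here is a minimal C sketch using the libaio userspace API: queue a batch of direct 4K reads against the device without waiting for each one, then reap the completions. The device path, queue depth, and error handling are simplified and illustrative only (build against the libaio headers and link with -laio):

    /* Sketch of one submit/reap cycle at iodepth 256; fio keeps the queue
     * refilled as completions arrive, which this one-shot example omits. */
    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define QUEUE_DEPTH 256
    #define BLOCK_SIZE  4096

    int main(void)
    {
        int fd = open("/dev/vda", O_RDONLY | O_DIRECT);
        if (fd < 0)
            return EXIT_FAILURE;

        io_context_t ctx = 0;
        if (io_setup(QUEUE_DEPTH, &ctx) != 0)
            return EXIT_FAILURE;

        struct iocb cbs[QUEUE_DEPTH], *cbp[QUEUE_DEPTH];
        for (int i = 0; i < QUEUE_DEPTH; i++) {
            void *buf;
            if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE))  /* O_DIRECT needs alignment */
                return EXIT_FAILURE;
            io_prep_pread(&cbs[i], fd, buf, BLOCK_SIZE, (long long)i * BLOCK_SIZE);
            cbp[i] = &cbs[i];
        }

        /* Queue all 256 reads at once, then wait for all completions. */
        if (io_submit(ctx, QUEUE_DEPTH, cbp) != QUEUE_DEPTH)
            return EXIT_FAILURE;

        struct io_event events[QUEUE_DEPTH];
        io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);

        io_destroy(ctx);
        close(fd);
        return EXIT_SUCCESS;
    }

fio additionally spreads this work across numjobs workers and keeps resubmitting for the configured runtime, which is what the time_based/runtime options in the job files above control.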
2018 Jul 25 (2) [RFC 0/4] Virtio uses DMA API for all devices
...rw-r--r-- 1 libvirt-qemu kvm 5.0G Jul 24 06:26 disk2.img
mount:
size=21G on /mnt type tmpfs (rw,relatime,size=22020096k)
TEST CONFIG
===========
FIO (https://linux.die.net/man/1/fio) is being run with and without
the patches.
Read test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=read
unlink=1
iodepth=256
Write test config:
[Sequential]
direct=1
ioengine=libaio
runtime=5m
time_based
filename=/dev/vda
bs=4k
numjobs=16
rw=write
unlink=1
iodepth=256
The virtio block device comes up as /dev/vda on the guest with
/sys/block/vda/queue/n...
2019 Jul 25 (0) Samba async performance - bottleneck or bug?
...6:55 +0000, douxevip wrote:
> So to summarize, this is the situation:
>
> 1) I run a fio benchmark requesting small, random, async writes. Command is "fio --direct=1 --sync=0 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --size=32k --time_based". I run this command both on the host and on the Samba client, on the same exact ZFS dataset
>
> 2) The ZFS dataset only writes async, converting sync to async writes at all times
>
> 3) That same dataset being shared through Samba, also only performs async writes (strict s...
2015 Nov 18 (3) [RFC PATCH 0/2] Google extension to improve qemu-nvme performance
...emu.
I use a ram disk as the backend to compare performance.
qemu-nvme: 29MB/s
qemu-nvme+google-ext: 100MB/s
virtio-blk: 174MB/s
virtio-scsi: 118MB/s
I'll show you the qemu-vhost-nvme+google-ext numbers later.
root at guest:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=120
time_based
rw=randread
norandommap
group_reporting
gtod_reduce=1
numjobs=2
[job1]
filename=/dev/nvme0n1
#filename=/dev/vdb
#filename=/dev/sda
rw=read
Patches also available at:
kernel:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
qemu:
http://www.minggr.net/cgit/cgit.c...
2015 Nov 18 (3) [RFC PATCH 0/2] Google extension to improve qemu-nvme performance
...emu.
I use a ram disk as the backend to compare performance.
qemu-nvme: 29MB/s
qemu-nvme+google-ext: 100MB/s
virtio-blk: 174MB/s
virtio-scsi: 118MB/s
I'll show you the qemu-vhost-nvme+google-ext numbers later.
root at guest:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=120
time_based
rw=randread
norandommap
group_reporting
gtod_reduce=1
numjobs=2
[job1]
filename=/dev/nvme0n1
#filename=/dev/vdb
#filename=/dev/sda
rw=read
Patches also available at:
kernel:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
qemu:
http://www.minggr.net/cgit/cgit.c...
2019 Aug 06 (1) Samba async performance - bottleneck or bug?
...p wrote:
>
> > So to summarize, this is the situation:
> >
> > 1. I run a fio benchmark requesting small, random, async writes. Command is "fio --direct=1 --sync=0 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --size=32k --time_based". I run this command both on the host and on the Samba client, on the same exact ZFS dataset
> >
> > 2. The ZFS dataset only writes async, converting sync to async writes at all times
> >
> > 3. That same dataset being shared through Samba, also only performs as...
2018 Oct 15 (0) [Qemu-devel] virtio-console downgrade the virtio-pci-blk performance
...mu issue or kernel issue.
> > >
> > > It sounds odd; can you provide more details on:
> > > a) The benchmark you're using.
> > I'm using fio, the config is:
> > [global]
> > ioengine=libaio
> > iodepth=128
> > runtime=120
> > time_based
> > direct=1
> >
> > [randread]
> > stonewall
> > bs=4k
> > filename=/dev/vdb
> > rw=randread
> >
> > > b) the host and the guest config (number of cpus etc)
> > The qemu cmd is : /usr/libexec/qemu-kvm --device virtio-balloon -m 16G...
2019 Jul 19 (0) Samba async performance - bottleneck or bug?
...m sync write benchmark on the host on this dataset, it will use RAM to do the write, drastically speeding up random writes.
> The below benchmark command on the ZFS host:
> fio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --time_based
> Has an average speed of 520MB/s (which is the maximum speed of my SATA SSD). Despite requesting a sync write, ZFS turns it into an async write, dramatically speeding it up. Clearly the results are great when I directly benchmark from the host into the sync=disabled ZFS dataset. But this doesn...
2015 Nov 20 (0) [PATCH -qemu] nvme: support Google vendor extension
...vme0n1,format=raw,if=none,id=D22 \
-device nvme,drive=D22,serial=1234
Here is the test results:
local NVMe: 860MB/s
qemu-nvme: 108MB/s
qemu-nvme+google-ext: 140MB/s
qemu-nvme-google-ext+eventfd: 190MB/s
root at wheezy:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
norandommap
group_reporting
gtod_reduce=1
numjobs=8
[job1]
filename=/dev/nvme0n1
rw=read
2018 Jul 23 (2) [RFC 0/4] Virtio uses DMA API for all devices
On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote:
> On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote:
>> This patch series is the follow up on the discussions we had before about
>> the RFC titled [RFC,V2] virtio: Add platform specific DMA API translation
>> for virito devices (https://patchwork.kernel.org/patch/10417371/). There
>> were suggestions
2018 Jul 23 (2) [RFC 0/4] Virtio uses DMA API for all devices
On 07/20/2018 06:46 PM, Michael S. Tsirkin wrote:
> On Fri, Jul 20, 2018 at 09:29:37AM +0530, Anshuman Khandual wrote:
>> This patch series is the follow up on the discussions we had before about
>> the RFC titled [RFC,V2] virtio: Add platform specific DMA API translation
>> for virito devices (https://patchwork.kernel.org/patch/10417371/). There
>> were suggestions
2015 Nov 18 (0) [PATCH -qemu] nvme: support Google vendor extension
...hz Xeon E5-1650), running QEMU:
$ bin/opt/native/x86_64-softmmu/qemu-system-x86_64 \
-enable-kvm -m 2048 -smp 4 \
-drive if=virtio,file=debian.raw,cache=none \
-drive file=nvme.raw,if=none,id=nvme-dev \
-device nvme,drive=nvme-dev,serial=nvme-serial
Using "fio":
vm # fio -time_based --name=benchmark --ioengine=libaio --iodepth=32 \
--numjobs=1 --runtime=30 --blocksize=4k --filename=/dev/nvme0n1 \
--nrfiles=1 --invalidate=1 --verify=0 --direct=1 --rw=randread
I get about 20k IOPs with the original code and about 85k IOPs with
the vendor extension changes applied (and...
2015 Nov 20 (2) [PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>> }
>>>
>>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> - cq->head = new_head;
2015 Nov 20 (2) [PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>> }
>>>
>>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> - cq->head = new_head;