Displaying 20 results from an estimated 189 matches for "fileio".
2020 Sep 15
0
[PATCH 01/18] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT flag
...if (q->memory != memory) {
dprintk(q, 1, "memory model mismatch\n");
return -EINVAL;
}
- if (!verify_consistency_attr(q, consistent_mem))
- return -EINVAL;
}
num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers);
@@ -2581,7 +2547,7 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
fileio->memory = VB2_MEMORY_MMAP;
fileio->type = q->type;
q->fileio = fileio;
- ret = vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+ ret = vb2_core_reqbufs(q, fileio->memory, &fileio->count);
if (ret)
goto err_kfre...
2003 Aug 24
1
readdir() and read() errors ignored
...() fail: under these circumstances rsync will transfer zeroes instead
of the actual file data and will again report no errors to the user.
The attached patch fixes both of these problems.
I am not a list member; please Cc me on all replies.
Michael
-------------- next part --------------
Index: fileio.c
===================================================================
RCS file: /cvsroot/rsync/fileio.c,v
retrieving revision 1.6
diff -u -r1.6 fileio.c
--- fileio.c 22 May 2003 23:24:44 -0000 1.6
+++ fileio.c 23 Aug 2003 23:03:01 -0000
@@ -191,7 +191,10 @@
}
if ((nread=read(map->fd,map-...
2009 Jan 15
2
Problem syncing large dataset
Hi,
When using rsync-3.0.2 through 3.0.5, I get this error on a large
dataset syncing from machine-a to machine-b:
$ /bin/rsync -aHSz /local/. machine-b:/local/.
invalid len passed to map_ptr: -1737287498
rsync error: error in file IO (code 11) at fileio.c(188) [sender=3.0.5]
This happens no matter which side initiates the connection, so this
fails in the same way:
$ /bin/rsync -aHSz machine-a:/local/. /local/.
Using rsyncd, same thing so it's not caused by SSH:
$ /bin/rsync -aHSz rsync://machine-a/local/. /local/.
invalid len passed to ma...
2012 Dec 25
2
[LLVMdev] [DragonEgg] Strange call to @"\01__isoc99_fscanf"
...clang generates "__isoc99_fscanf", while DragonEgg gives
"\01__isoc99_fscanf". We generally use DragonEgg as our compiler frontend,
so that case matters more to us. What could be the reason for the "\01" issue?
Thanks,
- Dima.
marcusmae at M17xR4:~/forge/kernelgen/tests/behavior/fileio$ cat fileio.c
#include <inttypes.h>
#include <stdio.h>
const char* filename = "fileio.txt";
int main(int argc, char* argv[])
{
if (argc != 2)
{
printf("Test KernelGen support for File I/O\n");
printf("Usage: %s <szarray>\n",...
2008 Dec 23
0
[jra@samba.org: Patch to improve Samba write speeds on Linux ext3 with 3.2.x]
...(lp_syncalways, bSyncAlways)
-FN_LOCAL_BOOL(lp_strict_allocate, bStrictAllocate)
+FN_LOCAL_INTEGER(lp_strict_allocate, iStrictAllocate)
FN_LOCAL_BOOL(lp_strict_sync, bStrictSync)
FN_LOCAL_BOOL(lp_map_system, bMap_system)
FN_LOCAL_BOOL(lp_delete_readonly, bDeleteReadonly)
diff --git a/source/smbd/fileio.c b/source/smbd/fileio.c
index 60aeeef..e23f391 100644
--- a/source/smbd/fileio.c
+++ b/source/smbd/fileio.c
@@ -127,9 +127,10 @@ static ssize_t real_write_file(struct smb_request *req,
if (pos == -1) {
ret = vfs_write_data(req, fsp, data, n);
} else {
+ enum smb...
2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
...with one server that has an RDMA volume and one client
connected to the same RDMA network.
Both machines have the same environment:
- Distro : CentOS 6.9
- Kernel : 4.12.9
- GlusterFS : 3.10.5
- tcmu-runner : 1.2.0
- iscsi-initiator-utils : 6.2.0.873
And these are the results from the tests.
1st. FILEIO on FUSE mounted - 333MB/sec
2nd. glfs user backstore - 140MB/sec
3rd. FILEIO on FUSE mounted with tgtd - 235MB/sec
4th. glfs user backstore with tgtd - 220MB/sec
5th. FILEIO on FUSE mounted (iSER) - 643MB/sec
6th. glfs user backstore (iSER) - 149MB/sec
7th. FILEIO on FUSE mounted with tgtd (iSER)...
2003 Apr 27
4
Bogus rsync "Success" message when out of disk space
...write_file().
And write_file is called only in those two places. So that is the
appropriate location to patch. Especially since the obvious fix is
to use the rewrite code already there for the sparse file writes.
-------------------------------------8<-------------------------------------
--- fileio.c.orig Fri Jan 25 15:07:34 2002
+++ fileio.c Sat Apr 26 12:16:25 2003
@@ -69,25 +69,28 @@
return len;
}
int write_file(int f,char *buf,size_t len)
{
int ret = 0;
- if (!sparse_files) {
- return write(f,buf,len);
- }
-
while (len&...
2004 Aug 02
4
reducing memmoves
...s difference
well. If you do, feedback from testing is especially welcome. [glances
in wally's direction] :)
Overall, I think this should never hurt performance, but with large
datasets and much memory, it should improve performance.
-chris
-------------- next part --------------
Index: fileio.c
===================================================================
RCS file: /cvsroot/rsync/fileio.c,v
retrieving revision 1.15
diff -u -r1.15 fileio.c
--- fileio.c 20 Jul 2004 21:35:52 -0000 1.15
+++ fileio.c 2 Aug 2004 02:31:02 -0000
@@ -23,6 +23,7 @@
#include "rsync.h"
extern in...
2008 Mar 23
1
[PATCH] allow to change the block size used to handle sparse files
...parameter.
For example, using a sparse write size of 32KB, I've been able to
increase the transfer rate by an order of magnitude when copying the output
files of scientific applications from GPFS to GPFS or GPFS to SAN FS.
-Andrea
---
Allow to change the block size used to handle sparse files.
fileio.c | 3 ++-
options.c | 9 +++++++++
rsync.yo | 10 ++++++++++
3 files changed, 21 insertions(+), 1 deletions(-)
diff --git a/fileio.c b/fileio.c
index f086494..39cae92 100644
--- a/fileio.c
+++ b/fileio.c
@@ -26,6 +26,7 @@
#endif
extern int sparse_files;
+extern long sparse_files_blo...
2016 Jul 27
2
ext4 error when testing virtio-scsi & vhost-scsi
...t2
3. Instead of vhost, also tried loopback and no problem.
Using loopback, host can use the new block device, while vhost is used
by guest (qemu).
http://www.linux-iscsi.org/wiki/Tcm_loop
Testing directly on the host, I do not see the ext4 error.
The issue occurs with:
1. vhost_scsi & virtio_scsi, ext4
a. iblock
b. fileio, with the backing file in /tmp (ram), not device-based.
2. I have tried 4.7-r2 and 4.5-rc1 on the D02 board; both show the issue.
Since I need a kvm-specific patch for the D02, I cannot freely switch
to an older version.
3. Also tested ext4 with the journal disabled:
mkfs.ext4 -O ^has_journal /dev/sda
Do you have any su...
2015 Sep 18
3
[RFC PATCH 0/2] virtio nvme
...e able to
> decode nvme Read/Write/Flush operations and translate -> submit to
> existing backend drivers.
Let me call the "eventfd based LIO NVMe fabric driver" as
"tcm_eventfd_nvme"
Currently, LIO frontend drivers (iscsi, fc, vhost-scsi, etc.) talk to LIO
backend drivers (fileio, iblock, etc.) using SCSI commands.
Do you mean the "tcm_eventfd_nvme" driver needs to translate NVMe
commands to SCSI commands and then submit them to a backend driver?
But I thought the future "LIO NVMe target" could let frontend drivers
talk to backend drivers directly with NVMe comm...
2013 Nov 19
6
[PATCH] Btrfs: fix very slow inode eviction and fs unmount
...when the inode has many pages. In
some scenarios I have experienced umount times higher than 15
minutes, even when there's no pending IO (after a btrfs fs sync).
A quick way to reproduce this issue:
$ mkfs.btrfs -f /dev/sdb3
$ mount /dev/sdb3 /mnt/btrfs
$ cd /mnt/btrfs
$ sysbench --test=fileio --file-num=128 --file-total-size=16G \
--file-test-mode=seqwr --num-threads=128 \
--file-block-size=16384 --max-time=60 --max-requests=0 run
$ time btrfs fi sync .
FSSync '.'
real 0m25.457s
user 0m0.000s
sys 0m0.092s
$ cd ..
$ time umount /mnt/btrfs
real 1m38.234s
user 0...
2005 Sep 09
2
C macros and Makevars/package building
...having a subdirectory /
src/sdk together with a Makevars file. This file basically looks like
PKG_CPPFLAGS+=\
-imacros R_affx_constants.h\
-Isdk/files\
(... + a lot of other -I statements telling CPP to include
subdirectories of src/sdk)
Then we have a
SOURCES.SDK = \
sdk/files/FileIO.cpp \
(... + a lot of other .cpp files)
SOURCES.OURS = \
R_affx_cdf.cpp
and then finally a
OBJS=$(SOURCES.SDK:.cpp=.o) $(SOURCES.OURS:.cpp=.o)
We seem to need the last statement since it seems that .cpp is not
automatically a C++ suffix (but is it done the "right" way for
port...
1999 Jul 31
0
Samba 2.0.5a & Solaris 2.6 Assistance
...his
error. Has anyone seen this before and can point me in the right
direction to address it? Thanks, Andy
[1999/07/31 15:14:30, 1] smbd/service.c:make_connection(521)
agibson (209.31.144.167) connect to service agibson as user
agibson (uid=2107,
gid=200) (pid 1043)
[1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
seek_file: sys_lseek failed. Error was Invalid argument
[1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
seek_file: sys_lseek failed. Error was Invalid argument
[1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
seek_file: sys_lseek failed. Error was Invalid argument
1999 Aug 03
2
Samba 2.0.5a & Solaris 2.6 Assistance (PR#19370)
...ore and can point me in the right
> direction to to address? Thanks Andy
>
> [1999/07/31 15:14:30, 1] smbd/service.c:make_connection(521)
> agibson (209.31.144.167) connect to service agibson as user
> agibson (uid=2107,
> gid=200) (pid 1043)
> [1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
> seek_file: sys_lseek failed. Error was Invalid argument
> [1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
> seek_file: sys_lseek failed. Error was Invalid argument
> [1999/07/31 15:15:18, 0] smbd/fileio.c:seek_file(54)
> seek_file: sys_lseek failed. Error...
2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
...> submit to
> > > existing backend drivers.
> >
> > Let me call the "eventfd based LIO NVMe fabric driver" as
> > "tcm_eventfd_nvme"
> >
> > Currently, LIO frontend driver(iscsi, fc, vhost-scsi etc) talk to LIO
> > backend driver(fileio, iblock etc) with SCSI commands.
> >
> > Did you mean the "tcm_eventfd_nvme" driver need to translate NVMe
> > commands to SCSI commands and then submit to backend driver?
> >
>
> IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with...
2011 Nov 03
5
Fully-Virtualized XEN domU not Booting over iSCSI
...on bootup.
************************************************************
Boot from Hard Disk failed: could not read the hard disk
FATAL: No bootable device
************************************************************
The iSCSI target is offering the domUs as file images (.img) to the
initiator, i.e. Type=fileio.
My host's processor (as you may have deduced) supports Intel VT-x. Does my
shared storage need to have this capability to be able to
boot my fully-virtualized domU?
Any pointers/suggestions would be appreciated.
Phillip
BASIC SYSTEM INFO:
-----------------------------------------------...