search for: 512

Displaying 20 results from an estimated 8933 matches for "512".

2010 Nov 11
8
zpool import panics
...epct 4% metaslab 5 offset 14000000000 spacemap 1533 free 10.6G segments 3483 maxsize 9.26G freepct 4% metaslab 6 offset 18000000000 spacemap 1534 free 10.2G segments 512 maxsize 10.2G freepct 3% metaslab 7 offset 1c000000000 spacemap 1571 free 10.0G segments 908 maxsize 9.9G freepct 3% metaslab 8 offset 20000000000 spacemap 1581 free 10.0G...
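
For reference, per-metaslab summaries of that form can be produced with zdb; a minimal sketch (the pool name "tank" is hypothetical; -e reads an exported/not-yet-imported pool, which is useful when the import itself panics) would be:

  # Dump metaslab and space map statistics for each vdev without importing the pool.
  zdb -e -m tank
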
2006 Jul 10
2
ArcView + Samba: Performance nightmare under Linux, ok under Solaris or HP-UX
...n/covers/dhm_offset/o1000c/arc.adf): pos = 114688, size = 4096, returned 4096 [...] With the Linux samba server, it looks like this: [...] read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 4096, returned 4096 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 512, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 512, size = 512, returned 512 read_file (date...
2006 Jul 10
1
ArcView + Samba: Performance nightmare under Linux, o k under Solaris or HP-UX
...n/covers/dhm_offset/o1000c/arc.adf): pos = 114688, size = 4096, returned 4096 [...] With the Linux samba server, it looks like this: [...] read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 4096, returned 4096 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 512, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 0, size = 512, returned 512 read_file (daten/covers/dhm_offset/o1000c/arc.adf): pos = 512, size = 512, returned 512 read_file (date...
2012 Mar 10
3
problem: The decoded frame is not as the original one
...=================================*/ printf("\n nbBytes: "); printf("%i",nbBytes); printf("\n frame_size= "); printf("%i",frame_size); printf ("\n"); //----------------- printf("end of run!"); return 0; } OUTPUT: //the original frame 1 -512 16384 512 -768 -2048 -1280 256 -1024 12288 0 8192 253 256 -768 12288 0 -16 -768 -512 -1 0 -512 -768 -1536 -512 -512 -768 16384 0 8192 -512 16384 512 -768 -2048 -1280 256 -1024 12288 0 8192 253 256 -768 12288 0 -16 -768 -512 -1 0 -512 -768 -1536...
2020 Mar 17
0
[nbdkit PATCH 3/4] tests: Don't let test-parallel-* hang on nbdkit bug
...rt for parallel requests"; exit 77; } @@ -43,8 +44,8 @@ cleanup_fn rm -f test-parallel-file.data test-parallel-file.out # Populate file, and sanity check that qemu-io can issue parallel requests printf '%1024s' . > test-parallel-file.data -qemu-io -f raw -c "aio_write -P 1 0 512" -c "aio_write -P 2 512 512" \ - -c aio_flush test-parallel-file.data || +timeout 10s </dev/null qemu-io -f raw -c "aio_write -P 1 0 512" \ + -c "aio_write -P 2 512 512" -c aio_flush test-parallel-file.data || { echo "'qemu-io'...
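
For context, the timeout(1) wrapper added in the hunk above is the stock GNU coreutils idiom; a minimal hedged sketch of the pattern (the command name here is a placeholder) looks like this:

  # Bound a possibly-hanging command; </dev/null keeps it from waiting on stdin.
  # GNU timeout exits with status 124 when the time limit is hit.
  timeout 10s </dev/null some-command-that-might-hang
  if [ $? -eq 124 ]; then
      echo "command timed out" >&2
  fi
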
2006 Oct 06
2
smbd hanging on OS X 10.4.8
Hi, It processes all my mounts prior to the "..." bit below. It looks as if it's hanging while processing some printer config. On that assumption, and since I don't have and never have had any printers attached, I've commented out the '[printers]' section from smb.conf. No section contains a 'printable = yes'. I've tried with all sections containing 'printable
2013 Sep 07
1
Re: Error Attaching Seventh VirtIO-SCSI Device to Guest
> -----Original Message----- > From: Osier Yang [mailto:jyang@redhat.com] > Sent: Friday, September 06, 2013 10:54 PM > To: McEvoy, James > Cc: libvirt-users@redhat.com > Subject: Re: [libvirt-users] Error Attaching Seventh VirtIO-SCSI Device to Guest > > On 04/09/13 09:34, McEvoy, James wrote: > > I have run into a problem attempting to attach the seventh
2008 Jan 24
1
zfs showing more filesystem using ls than df actually has
...s <dataset> returns the wrong result, which I believe to be related to this issue. Has anyone seen this before, and do you have a workaround or any information and advice to share? # df -k Filesystem kbytes used avail capacity Mounted on /dev/dsk/c0t1d0s0 13283479 10825523 2325122 83% / /devices 0 0 0 0% /devices ctfs 0 0 0 0% /system/contract proc 0 0 0 0% /proc mnttab 0 0 0 0% /etc/mnttab swap 4...
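
When df and ls/du disagree on ZFS, snapshots and reservations are the usual suspects; a hedged sketch of commands for comparing the two views (the dataset name tank/home is hypothetical) is:

  df -k /tank/home          # what df reports for the mounted filesystem
  du -sk /tank/home         # what ls/du can actually see
  zfs get used,referenced,available,quota,reservation tank/home
  zfs list -t snapshot      # snapshots quietly holding on to space
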
2018 Mar 02
1
[nbdkit PATCH] tests: Make parallel tests work at 512-byte granularity
qemu-io 2.12 will be changing its default alignment for unknown servers so that it does read-modify-write for anything less than 512 bytes. If we implement NBD_OPT_GO, then we can keep qemu-io using 1-byte alignment; but until then, this breaks our parallel tests when using 1-byte alignment, because they end up with more delays than expected (thanks to the read-modify-write). Revamp the tests to not rely on sub-sector alignment...
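
A minimal sketch of the sector-aligned variant the revamped test moves to, using the same qemu-io syntax that appears in the other test hunks in these results (file name as in those tests); every offset and length is a multiple of 512, so qemu-io 2.12 has no reason to read-modify-write:

  # Populate a 1 KiB file, then issue two non-overlapping 512-byte writes in
  # parallel at sector-aligned offsets, followed by a flush.
  printf '%1024s' . > test-parallel-file.data
  qemu-io -f raw -c "aio_write -P 1 0 512" -c "aio_write -P 2 512 512" \
          -c aio_flush test-parallel-file.data
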
2018 Mar 06
0
[PATCH nbdkit 1/2] tests: Remove QEMU_IO / HAVE_QEMU_IO.
...: missing qemu-io" + exit 77 +fi trap 'rm -f test-parallel-file.data test-parallel-file.out' 0 1 2 3 15 # Populate file, and sanity check that qemu-io can issue parallel requests printf '%1024s' . > test-parallel-file.data -$QEMU_IO -f raw -c "aio_write -P 1 0 512" -c "aio_write -P 2 512 512" \ +qemu-io -f raw -c "aio_write -P 1 0 512" -c "aio_write -P 2 512 512" \ -c aio_flush test-parallel-file.data || - { echo "'$QEMU_IO' can't drive parallel requests"; exit 77; } + { echo "'...
2020 Mar 17
9
[nbdkit PATCH 0/4] Fix testsuite hang with nbd-standalone
Either patch 1 or patch 2 in isolation is sufficient to fix the problem that Rich forwarded on from an archlinux tester (name so I can credit them?). But both patches should be applied, as well as backported to appropriate stable branches, to maximize cross-version interoperability of nbdkit vs. plugins. Patch 3 will let us detect future similar bugs much faster. I want patch 4 to ensure that
2012 Mar 11
0
problem: The decoded frame is not as the original one
...=================================*/ printf("\n nbBytes: "); printf("%i",nbBytes); printf("\n frame_size= "); printf("%i",frame_size); printf ("\n"); //----------------- printf("end of run!"); return 0; } OUTPUT: //the original frame 1 -512 16384 512 -768 -2048 -1280 256 -1024 12288 0 8192 253 256 -768 12288 0 -16 -768 -512 -1 0 -512 -768 -1536 -512 -512 -768 16384 0 8192 -512 16384 512 -768 -2048 -1280 256 -1024 12288 0 8192 253 256 -768 12288 0 -16 -768 -512 -1 0 -512 -768 -1536...
2017 Dec 19
0
kernel: blk_cloned_rq_check_limits: over max segments limit., Device Mapper Multipath, iBFT, iSCSI COMSTAR
...ipt after boot for the boot device I get this: # max_sectors_kb | grep -e 3600144f00000000000005a2769c70001 -e sda -e sdd -e sde -e sdk -e sdj -e max Sys Block Node : Device max_sectors_kb max_hw_sectors_kb /sys/block/dm-1 : 3600144f00000000000005a2769c70001 512 32767 /sys/block/dm-5 : 3600144f00000000000005a2769c70001p1 512 32767 /sys/block/dm-6 : 3600144f00000000000005a2769c70001p2 512 32767 /sys/block/dm-7 : 3600144f00000000000005a2769c70001p3 512 32767...
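
The numbers in that listing come straight from sysfs; a minimal sketch that dumps them for every block device (standard Linux block-layer queue attributes) is:

  # Print the current and hardware-maximum request sizes per block device.
  for q in /sys/block/*/queue; do
      dev=${q%/queue}; dev=${dev##*/}
      echo "$dev: max_sectors_kb=$(cat "$q/max_sectors_kb") max_hw_sectors_kb=$(cat "$q/max_hw_sectors_kb")"
  done
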
2006 Mar 14
3
what are those .nfsXXXXX files with mbox ?
On Solaris 9, using: default_mail_env = mbox:~/mail:INBOX=/var/mail/%u Here is a listing of ~/mail with UW IMAP: .: total 8 drwx------ 2 testu3 sysadmin 512 Mar 14 14:40 . drwxr-xr-x 4 testu3 sysadmin 512 Mar 14 14:40 .. -rw------- 1 testu3 sysadmin 496 Mar 14 14:40 Drafts -rw------- 1 testu3 sysadmin 496 Mar 14 14:40 Sent Items with Dovecot: .: total 16 drwx------ 3 testu3 sysadmin 512 Mar 14 14:52 . drwxr-xr-x 4 te...
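
Those .nfsXXXX entries are the NFS client's "silly rename" of files that were removed while a process still had them open; a quick hedged check of which process is still holding them (stock Solaris fuser) is:

  # Report the PIDs that still have the silly-renamed files open.
  fuser ~/mail/.nfs*
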
2018 Mar 06
4
[PATCH nbdkit 0/2] tests: Minor reworking of tests.
Small reworking of tests to remove $QEMU_IO, making that consistent with other test tools, and to test IPv6 connections.
2003 Apr 17
0
Install of R-1.7.0; permissions.
...the permissions on the files in the installed version (in /usr/local/lib/R) were wrong. E.g. here's a listing of /usr/local/lib/R/library: ===+===+===+===+===+===+===+===+===+===+===+===+===+===+===+===+===+===+=== {erdos} /usr/local/lib/R/library ## ls -l total 28 drwxr-xr-x 10 root 512 Apr 16 15:52 KernSmooth/ drwxr-xr-x 13 root 512 Apr 16 15:52 MASS/ -rw-r--r-- 1 root 608 Apr 16 15:52 R.css drwx------ 11 root 512 Apr 16 15:52 base/ drwxr-xr-x 10 root 512 Apr 16 15:52 boot/ drwxr-xr-x 10 root 512 Apr 16 15:52 class/ drwxr-xr-x 1...
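
If an install really does leave modes like the drwx------ base/ above, one hedged fix (assuming the files themselves are intact) is to reopen read and search permission recursively; the capital X adds execute only to directories and to files that are already executable:

  # Run as root: make the installed R library tree world-readable again.
  chmod -R a+rX /usr/local/lib/R/library
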
2005 Dec 12
2
Extremely slow Samba3 performance with ArcView/WinXP
...found that there are a _lot_ of small packets going between the WinXP client and the Samba3 server. Example: No. Time Source Destination Protocol Info [...] 75260 81.686777 aaa.bb.ccc.110 aaa.bb.ccc.1 SMB Read AndX Request, FID: 0x272f, 512 bytes at offset 720896 75261 81.687706 aaa.bb.ccc.1 aaa.bb.ccc.110 SMB Read AndX Response, FID: 0x272f, 512 bytes 75262 81.687873 aaa.bb.ccc.110 aaa.bb.ccc.1 SMB Read AndX Request, FID: 0x2738, 512 bytes at offset 116736 75264 81.688963 aaa.bb.c...
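
A trace like that can be captured on the Samba server itself for offline analysis; a minimal hedged sketch (the interface name is a placeholder, addresses as anonymized above) is:

  # Capture full-size SMB packets from the one client for later inspection.
  tcpdump -i eth0 -s 0 -w smb-trace.pcap host aaa.bb.ccc.110 and '(port 139 or port 445)'
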
2006 Nov 21
2
Memory leak in ocfs2/dlm?
...AM. A simple `ls -Rn' on a filesystem with lots of files makes the box leak so much RAM that the OOM killer starts to kick in. With slab alloc debugging turned on, we see this: # mount; ls -Rn; wait some seconds; Ctrl-C [root@lnxp-1038:/backend1]$ cat /proc/slab_allocators | egrep '(size.512|ocfs2_inode_cache)' | grep ocfs | sort -k 2 -n size-512: 1 o2hb_heartbeat_group_make_item+0x1b/0x79 [ocfs2_nodemanager] size-512: 1 o2hb_map_slot_data+0x22/0x2fa [ocfs2_nodemanager] size-512: 1 ocfs2_initialize_super+0x55e/0xd7f [ocfs2] size-512: 26439 ocfs2_dentry_attach_lock+0x2d1/0x423 [ocfs...
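
The same per-call-site accounting can be pulled out of /proc with a couple of one-liners (a sketch; it assumes the kernel was booted with slab-allocator debugging so that /proc/slab_allocators exists, as in the report above):

  # Call sites with the largest live allocation counts, then the ocfs2 caches.
  sort -k2 -rn /proc/slab_allocators | head -20
  grep -E '(^size-512|ocfs2)' /proc/slabinfo
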
2015 Oct 29
4
[PATCH net-next rfc V2 0/2] basic busy polling support for vhost_net
...queue Results show a very large improvement in both tx (at most 158%) and rr (at most 53%), while rx is about the same as before. In most cases the CPU utilization is also improved: Guest TX: size/session/+thu%/+normalize% 64/ 1/ +17%/ +6% 64/ 4/ +9%/ +17% 64/ 8/ +34%/ +21% 512/ 1/ +48%/ +40% 512/ 4/ +31%/ +20% 512/ 8/ +39%/ +22% 1024/ 1/ +158%/ +99% 1024/ 4/ +20%/ +11% 1024/ 8/ +40%/ +18% 2048/ 1/ +108%/ +74% 2048/ 4/ +21%/ +7% 2048/ 8/ +32%/ +14% 4096/ 1/ +94%/ +77% 4096/ 4/ +7%/ -6% 4096/...