search for: 16k

Displaying 20 results from an estimated 676 matches for "16k".

2010 Nov 11
8
zpool import panics
...metaslab  99   offset 18c000000000   spacemap  0   free  256G
metaslab 100   offset 190000000000   spacemap  0   free  256G
Dataset mos [META], ID 0, cr_txg 4, 342M, 923 objects

    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         0    2   16K   16K   501K  2.02M  22.36  DMU dnode
         1    1   16K   16K  10.5K    32K 100.00  object directory
         2    1   16K   512      0    512   0.00  DSL directory
         3    1   16K   512  1.50K    512 100.00  DSL props
         4    1   16K   512  1.50K    512 100.00  DS...
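The dump above resembles zdb's MOS object listing; a hedged sketch of how such output might be produced for a pool that panics on import (the pool name tank, and the use of -e for a pool that is not currently imported, are assumptions):

    # -e reads the pool without importing it; repeated -d raises dump verbosity
    zdb -e -dddd tank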
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum, I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2006 Mar 05
2
ATTN Andreas Klauer: ASCII art + comments, please?
...I would very much appreciate it if you would draw what you see and criticize the following. Hopefully I'll better understand after that! TIA, gypsy

tc qdisc add dev imq0 root handle 1: htb default 20
tc class add dev imq0 parent 1: classid 1:2 htb rate 4522kbit ceil \
    4760kbit burst 16k cburst 16k quantum 1500
tc class add dev imq0 parent 1:2 classid 1:1 htb rate 4522kbit ceil \
    4760kbit burst 16k cburst 16k
tc class add dev imq0 parent 1:1 classid 1:10 htb rate 2487kbit \
    ceil 4760kbit burst 16k cburst 16k quantum 1500 prio 1
tc class add dev imq0 parent 1:1 classid 1:20...
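Reading only the commands that are visible, the HTB class tree they build looks roughly like this (a sketch; the excerpt is cut off at class 1:20, so its parameters are not shown):

    imq0 qdisc 1: (htb, default 20)
      +- class 1:2    rate 4522kbit ceil 4760kbit
         +- class 1:1    rate 4522kbit ceil 4760kbit
            +- class 1:10  rate 2487kbit ceil 4760kbit prio 1
            +- class 1:20  (default class; truncated in the excerpt)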
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the reference. Thanks, Brian -- - Brian Gupta http://opensolaris.org/os/project/nycosug/
2011 Jun 09
0
No subject
...        18,758.48  19,112.50  18,597.07  19,252.04
>  25      80,500.50  78,801.78  80,590.68  78,782.07
>  50      80,594.20  77,985.44  80,431.72  77,246.90
> 100      82,023.23  81,325.96  81,303.32  81,727.54
>
> Here's the local guest-to-guest summary for 1 VM pair doing TCP_STREAM with
> 256, 1K, 4K and 16K message size in Mbps:
>
> 256:
> Instances  Base      V0        V1        V2
> 1          961.78    1,115.92  794.02    740.37
> 4          2,498.33  2,541.82  2,441.60  2,308.26
>
> 1K:
> 1          3,476.61  3,522.02  2,170.86  1,395.57
> 4          6,344.30  7,056.57  7,275.16  7,174.09
>
> 4K:...
2009 May 29
1
Possible typo in "HowTos/Disk_Optimization"
Dear all, In "HowTos/Disk_Optimization", the calculated value of stride size and stripe width appears to have the "K" suffix incorrectly appended to them. Eg:

* (64K/4K) = 16K
* (3*16K) = 48K
* (16K+16K) = 32K

The values provided on the mkfs.ext3 command line however, do drop the "K" suffix. I'm no expert on RAID, but the "K" suffixes do look a bit suspect. Regards, Timothy Lee
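The poster's point is that stride and stripe-width are counts of filesystem blocks, not sizes, so the K suffix does not belong: 64K chunk / 4K block = 16, and 3 data disks * 16 = 48. A minimal sketch of the corrected command, assuming 4K blocks, a 64K RAID chunk, three data disks, and a hypothetical /dev/md0:

    # stride = chunk / block = 64K / 4K = 16 (blocks, not 16K)
    # stripe-width = data disks * stride = 3 * 16 = 48
    mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0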
2005 Nov 30
2
Trying to understand volblocksize ?
Hi, I am trying to understand the use of volblocksize in emulated volumes. If I create a volume in a pool and I want a database engine to read and write, say, 16K blocks, should I then set volblocksize to 16K? Regards, Patrik
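For reference, volblocksize is fixed when the zvol is created (it cannot be changed afterwards), so the choice has to be made up front. A minimal sketch, assuming a pool named tank and a 10 GB volume:

    # -V creates an emulated volume (zvol); volblocksize must be set at creation
    zfs create -V 10G -o volblocksize=16K tank/dbvol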
2009 Mar 03
8
zfs list extentions related to pNFS
...var
rpool/dump                       9.77G  37.0G  9.77G  -
rpool/export                       40K  37.0G    21K  /export
rpool/export/home                  19K  37.0G    19K  /export/home
rpool/pnfsds                       31K  37.0G    15K  -    <--- pNFS dataset
rpool/pnfsds/47C80414080A4A42      16K  37.0G    16K  -    <--- pNFS dataset
rpool/swap                       1.97G  38.9G  4.40M  -

(pnfs-17-21:/home/lisagab):7 % zfs list -t pnfsdata
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool/pnfsds                    31K  37.0G    15K  -
rpool/pnfsds/47C80414080A4A4...
2011 Jun 19
2
RFT: virtio_net: limit xmit polling
OK, different people seem to test different trees. In the hope of getting everyone on the same page, I created several variants of this patch so they can be compared. Whoever's interested, please check out the following and tell me how these compare:

kernel: git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
virtio-net-limit-xmit-polling/base - this is net-next baseline to test
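A minimal way to fetch and test the baseline variant named above (only the /base branch appears in this excerpt; the other variant names are not shown):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
    cd vhost
    git checkout virtio-net-limit-xmit-polling/base   # net-next baseline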
2020 Nov 05
3
BIOS RAID0 and differences between disks
My computer running CentOS 7 is configured to use BIOS RAID0 and has two identical SSDs which are also encrypted. I had a crash the other day and due to a bug in the operating system update, I am unable to boot the system in RAID mode since dracut does not recognize the disks in grub. After modifying the grub command line I am able to boot the system from one of the harddisks after entering the
2009 Dec 15
2
Regression in wideband encoding quality between b1 and rc1
...y / complexity in VBR, and the result was really great with speex beta1. With rc1 (or beta3), there is a clear degradation for fricatives, which gives a very audible (and annoying) feeling of a muffled voice. This problem does not seem to affect CBR encoding, only VBR. It does not appear to affect 16k files as much either. Hope that will help you improve speex again! Blaise P.S.: Is there any plan to make uwb mode really usable? At the moment, compressed 32k wave files sound worse than 16k in maximum quality. Would it be a lot of work to make the bitrate of the upper band to depend on the qual...
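A hedged sketch of the A/B comparison being described, assuming speexenc builds from beta1 and rc1 and WAV inputs at 16 kHz (wideband) and 32 kHz (ultra-wideband); the file names are illustrative:

    # VBR at maximum quality and complexity, per the report
    speexenc --vbr --quality 10 --comp 10 voice-16k.wav out-wb.spx    # 16 kHz -> wideband
    speexenc --vbr --quality 10 --comp 10 voice-32k.wav out-uwb.spx   # 32 kHz -> ultra-wideband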
2006 May 22
0
smbd process grows to 25Mb resident size
...r/local/lib/libasn1.so.6.1.0
> FEAF6000    8K rwx--  /usr/local/lib/libasn1.so.6.1.0
> FEB00000  264K r-x--  /usr/local/lib/libkrb5.so.17.4.0
> FEB50000   24K rwx--  /usr/local/lib/libkrb5.so.17.4.0
> FEB60000   80K r-x--  /usr/local/lib/libgssapi.so.4.0.0
> FEB82000   16K rwx--  /usr/local/lib/libgssapi.so.4.0.0
> FEB90000   80K r-x--  /lib/nss_ldap.so.1
> FEBB2000   16K rwx--  /lib/nss_ldap.so.1
> FEBB6000   40K rwx--  /lib/nss_ldap.so.1
> FEBD0000   24K r-x--  /lib/nss_files.so.1
> FEBE6000    8K rwx--  /lib/nss_files.so.1
> FEBF00...
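The listing resembles Solaris pmap output; a hedged one-liner to reproduce the per-mapping map and resident-size total for a running smbd (pgrep and pmap assumed available):

    pmap -x $(pgrep smbd | head -1)    # extended map with RSS per mapping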
2006 Oct 31
0
6389368 fat zap should use 16k blocks (with backwards compatability)
Author: ahrens
Repository: /hg/zfs-crypto/gate
Revision: 0fdac67554fe0f4938120fb4f0cb35cbbcd38c0b
Log message:
6389368 fat zap should use 16k blocks (with backwards compatability)
Files:
update: usr/src/uts/common/fs/zfs/dbuf.c
update: usr/src/uts/common/fs/zfs/dmu_tx.c
update: usr/src/uts/common/fs/zfs/dnode.c
update: usr/src/uts/common/fs/zfs/sys/zap_impl.h
update: usr/src/uts/common/fs/zfs/sys/zap_leaf.h
update: usr/src/uts/com...
2019 Apr 30
6
Disk space and RAM requirements in docs
...t/Modules/Output/dependency-dump-dependent-module.m.tmp/vfs/usr/home/petr/src/llvm/trunk/llvm/tools/clang/test/Modules/Inputs
620K  build/tools/clang/test/Modules/Output/dependency-dump.m.tmp/vfs/usr/home/petr/src/llvm/trunk/llvm
620K  build/tools/clang/test/Modules/Output/cxx-many-overloads.cpp.tmp
616K  build/tools/clang/test/Modules/Output/dependency-dump.m.tmp/vfs/usr/home/petr/src/llvm/trunk/llvm/tools
616K  build/tools/clang/test/Modules/Output/cxx-many-overloads.cpp.tmp/3VM8S92M4CDQU
612K  build/tools/clang/test/Modules/Output/dependency-dump.m.tmp/vfs/usr/home/petr/src/llvm/trunk/llvm/tools/cl...
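The figures read like per-directory disk usage sorted largest-first; one hedged way to regenerate such a listing from a build tree (the path is assumed, and sort -h requires GNU sort):

    du -h build/tools/clang/test/Modules/Output | sort -rh | head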
2020 Nov 12
1
BIOS RAID0 and differences between disks
On 11/04/2020 10:21 PM, John Pierce wrote: > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid stripe size is). > > if its a raid 1 mirror, then either disk by itself has the complete file > system on it, so you should be able to remirror the changed disk onto the...
2020 Nov 05
1
BIOS RAID0 and differences between disks
> On Nov 4, 2020, at 9:21 PM, John Pierce <jhn.pierce at gmail.com> wrote: > > is it RAID 0 (striped) or raid1 (mirrored) ?? > > if you wrote on half of a raid0 stripe set, you basically trashed it. > blocks are striped across both drives, so like 16k on the first disk, then > 16k on the 2nd then 16k back on the first, repeat (replace 16k with > whatever your raid stripe size is). > > if its a raid 1 mirror, then either disk by itself has the complete file > system on it, so you should be able to remirror the changed disk onto th...
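The stripe arithmetic quoted above can be made concrete; a small illustrative shell sketch, assuming a 16 KiB chunk size and two disks:

    # Which disk and per-disk chunk hold logical byte OFFSET in a 2-disk RAID0?
    CHUNK=$((16 * 1024))    # 16 KiB chunk size, per the quoted message
    NDISKS=2
    OFFSET=$((100 * 1024))  # example: 100 KiB into the array
    echo "disk $(( (OFFSET / CHUNK) % NDISKS )), chunk $(( OFFSET / CHUNK / NDISKS ))"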
2017 Apr 24
2
IMAP hibernate and scalability in general
Hello, Just to follow up on this, we've hit over 16k (default client limit here) hibernated sessions:
---
dovecot  119157  0.1  0.0  63404  56140  ?  S  Apr01  62:05  dovecot/imap-hibernate [11291 connections]
dovecot  877825  0.2  0.0  28512  21224  ?  S  Apr23   1:34  dovecot/imap-hibernate [5420 connections]
---
No issues other than t...
2023 Sep 11
0
[PATCH V11 04/17] locking/qspinlock: Improve xchg_tail for number of cpus >= 16k
On 9/10/23 04:28, guoren at kernel.org wrote:
> From: Guo Ren <guoren at linux.alibaba.com>
>
> The target of xchg_tail is to write the tail to the lock value, so
> adding prefetchw could help the next cmpxchg step, which may
> decrease the cmpxchg retry loops of xchg_tail. Some processors may
> utilize this feature to give a forward guarantee, e.g., RISC-V
> XuanTie