Displaying 20 results from an estimated 1624 matches for "1k".
2009 Sep 26
5
raidz failure, trying to recover
...dev/dsk/c0t1d0 -n -s 256
block=507 (7ec00) transaction=15980522
Now let's say I want to go back in time on this; the program can help me do that. If I wanted to go back in time to txg 15980450...
bash-3.00# /tmp/findUberBlock /dev/dsk/c0t1d0 -t 15980450
dd if=/dev/zero of=/dev/dsk/c0t1d0 bs=1k oseek=180 count=1 conv=notrunc
dd if=/dev/zero of=/dev/dsk/c0t1d0 bs=1k oseek=181 count=1 conv=notrunc
dd if=/dev/zero of=/dev/dsk/c0t1d0 bs=1k oseek=182 count=1 conv=notrunc
dd if=/dev/zero of=/dev/dsk/c0t1d0 bs=1k oseek=183 count=1 conv=notrunc
dd if=/dev/zero of=/dev/dsk/c0t1d0 bs=1k oseek=184 c...
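The repeated dd invocations above can be collapsed into a loop. A minimal sketch, with two assumptions: a throwaway scratch file stands in for /dev/dsk/c0t1d0 so the example is safe to run, and the portable seek= spelling replaces the Solaris dd alias oseek=.

```shell
# Zero the 1 KB blocks at offsets 180-184, as in the thread's dd commands.
# TARGET is a throwaway scratch file here, NOT the real /dev/dsk/c0t1d0.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1k count=256 2>/dev/null   # dummy 256 KB label area
for slot in 180 181 182 183 184; do
    # conv=notrunc overwrites one 1 KB block at offset $slot KB with zeros
    # while leaving the rest of the target, and its size, intact
    dd if=/dev/zero of="$TARGET" bs=1k seek="$slot" count=1 conv=notrunc 2>/dev/null
done
wc -c < "$TARGET"    # still 262144 bytes: notrunc does not shrink the file
rm -f "$TARGET"
```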
2009 Oct 15
8
sub-optimal ZFS performance
...0 90 55 34 1G 3G
22:12:31 164 41 25 41 24 0 61 41 25 1G 3G
22:22:31 161 40 24 40 24 0 68 40 24 1G 3G
arcstat second run:
Time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
22:35:52 1K 447 24 429 23 17 47 436 26 1G 3G
22:45:52 163 40 24 40 24 0 75 40 24 1G 3G
22:55:52 161 40 25 40 24 0 86 40 25 1G 3G
23:05:52 159 40 25 39 25 0 71 40 25 1G 3G
23:15:52...
2020 May 03
2
Understanding VDO vs ZFS
...vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
output from just created vdoas
[root@localhost ~]# vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
physical blocks : 10483712
logical blocks : 15728640
1K-blocks : 41934848
1K-blocks used : 4212024
1K-blocks available : 37722824
used percent : 10
saving percent : 99
[root@localhost ~]#
FIRST copy CentOS-7-x86_64-Minimal-2003.iso (1.1...
2012 Aug 29
1
Destination file is larger than source file
....
/opt/rsync/bin/rsync -av -e "ssh -l root" --delete
--exclude-from=/var/scripts/exclude
--password-file=/var/scripts/transfer.passwd <username>@<source
host>::<source dir>/ /<destination dir>
Source system
<source host>:<source dir># du -sh *
1K nohup.out
20G file1.dbf
3.9G file2.dbf
7.6G file3.dbf
1K x1
1K x2
Destination system
bash-3.00# du -sh *
1K nohup.out
20G file1.dbf
16G file2.dbf
7.6G file3.dbf
1K x1
1K x2
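One way to probe the file2.dbf discrepancy above (3.9G at the source, 16G at the destination): du reports allocated blocks while ls -l reports apparent size, so a sparse source file copied without rsync's --sparse option can legitimately grow on the destination. This sketch only demonstrates the du/ls gap on a throwaway sparse file; the file and sizes are illustrative, not taken from the thread.

```shell
# Create a 16 MB sparse scratch file: one byte written at the far end,
# nothing allocated in between.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=$((16 * 1024 * 1024 - 1)) 2>/dev/null
ls -l "$f" | awk '{print $5}'   # apparent size: 16777216 bytes
du -k "$f" | awk '{print $1}'   # allocated 1K-blocks: far fewer than 16384
rm -f "$f"
```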
2011 Jun 09
0
No subject
...V2
> 1 18,758.48 19,112.50 18,597.07 19,252.04
> 25 80,500.50 78,801.78 80,590.68 78,782.07
> 50 80,594.20 77,985.44 80,431.72 77,246.90
> 100 82,023.23 81,325.96 81,303.32 81,727.54
>
> Here's the local guest-to-guest summary for 1 VM pair doing TCP_STREAM with
> 256, 1K, 4K and 16K message size in Mbps:
>
> 256:
> Instances Base V0 V1 V2
> 1 961.78 1,115.92 794.02 740.37
> 4 2,498.33 2,541.82 2,441.60 2,308.26
>
> 1K:
> 1 3,476.61 3,522.02 2,170.86 1,395.57
> 4 6,344.30 7,056.57 7,275.16 7,174.09
>...
2020 Jun 08
1
[PATCH RFC 03/13] vhost: batching fetches
...> length memory regions for us to translate.
>>>>
>>> Yes but I don't see the relevance. This tells us how many descriptors to
>>> batch, not how many IOVs.
>> Yes, but questions are:
>>
>> - this introduces another obstacle to supporting more than 1K queue size
>> - if we support a 1K queue size, does it mean we need to cache 1K descriptors,
>> which seems like a large stress on the cache
>>
>> Thanks
>>
>>
> Still don't understand the relevance. We support up to 1K descriptors
> per buffer just for IOV si...
2020 May 03
0
Understanding VDO vs ZFS
...vdoas | grep -B6 'saving percent'
>output from just created vdoas
>
>[root@localhost ~]# vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>physical blocks : 10483712
> logical blocks : 15728640
> 1K-blocks : 41934848
> 1K-blocks used : 4212024
> 1K-blocks available : 37722824
> used percent : 10
> saving percent : 99
>[root@localhost ~]#
>
>FIRST copy CentOS-7-...
2009 Jan 24
3
zfs read performance degrades over a short time
I appear to be seeing the performance of a local ZFS file system degrading over a short period of time.
My system configuration:
32 bit Athlon 1800+ CPU
1 Gbyte of RAM
Solaris 10 U6
SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc
2x250 GByte Western Digital WD2500JB IDE hard drives
1 zfs pool (striped with the two drives, 449 GBytes total)
1 hard drive has
1998 May 18
1
DOS-Client with TCPIP and SAMBA
...keybife2.com
MEM /C
Modules using memory below 1 MB:
Name Total = Conventional + Upper Memory
-------- ---------------- ---------------- ---------------
MSDOS 50.621 (49K) 50.621 (49K) 0 (0K)
HIMEM 1.168 (1K) 1.168 (1K) 0 (0K)
EMM386 3.136 (3K) 3.136 (3K) 0 (0K)
COMMAND 3.296 (3K) 3.296 (3K) 0 (0K)
LPC 99.216 (97K) 99.216 (97K) 0 (0K)
UMB 960 (1K) 272 (0K) 688...
2020 Jun 05
2
[PATCH RFC 03/13] vhost: batching fetches
...iov, e.g. userspace may pass several 1-byte
>> length memory regions for us to translate.
>>
> Yes but I don't see the relevance. This tells us how many descriptors to
> batch, not how many IOVs.
Yes, but questions are:
- this introduces another obstacle to supporting more than 1K queue size
- if we support a 1K queue size, does it mean we need to cache 1K
descriptors, which seems like a large stress on the cache
Thanks
>
2011 Mar 03
1
Ploting Histogram with Y axis is percentage of sample for each bin
...percentage of the
total sample that each bin represents.
I know how to plot a histogram with the counts and density... but can't find
anything that gives me percent of sample on the y axis.
Any help is appreciated.
Below is the script I'm working with
par(mfrow=c(1,2))
hist(ISIS$ASH_BA1K_ISIS[ISIS$Pest_Status=="-1"], main="Ash BA 1K Negative Detection", xlab="ASH BA 1K")
lines(density(ISIS$ASH_BA1K_ISIS), col="blue")
hist(ISIS$ASH_BA1K_ISIS[ISIS$Pest_Status=="1"], main="Ash BA 1K Positive Detection", xlab="Ash BA 1K"...
2008 Aug 21
1
ext2online with 1k blocks not working
Hello,
As Virtuozzo users we have the majority of our disk space formatted with -i 1024 -b 1024.
Lately I discovered that on CentOS 4.6 ext2online barfs when I try to grow such a filesystem. Running it with -v -d, it prints lots of lines like:
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2online: 873646830 is a bad size for an ext2 fs! rounding down to 873644033
...
group NNN inode table has
2001 Jan 09
3
openssh 2.3.0p1 closing connection before command output comes through?
i'm getting some very strange behavior with openssh 2.3.0p1 that i don't
recall seeing with 2.2.0p1. here's some short output that will probably sum
up what's going on better than i can explain it:
admin2:~$ ssh downtown1 df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda3 8457624 2881868 5139192 36% /
/dev/sda1 15522 1662 13059 11% /boot
/dev/sdb1 8605584 5633920 2527472 69% /content
/dev/sdc1 8605584 3261568 4899824 40% /logs
admin2:~$...
2017 Dec 18
2
interval or event to evaluate free disk space?
...space on the bricks is calculated. It seems to me that this does not happen for every write call (naturally) but at some interval or that some other event triggers this.
i.e., if I write two files quickly (that together would fill a brick) I'd get an error message:
dd if=/dev/zero of=a bs=1k count=15000 && sleep 1 && dd if=/dev/zero of=aa bs=1k count=15000
# yields: dd: error writing 'aa': No space left on device
# (brick1 is full, but glusterd still tries to place file "aa" on the same brick)
dd if=/dev/zero of=a bs=1k count=15000 && sleep 60 &...
2008 Oct 27
3
dumpe2fs and repquota not agreeing on block size
...tells me the block size is 4k.
But, if I do:
repquota /var
It is telling me that one of my users is
currently using 10264 blocks. But, if I look
at their mail file, it is 10493792 bytes,
which means they should be using 2562 blocks
or so.
Also, if I do:
df /var
I get this
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md2 39674140 20401792 17224468 55% /var
Which tells me the blocks are 1K in size. The repquota
result makes much more sense with a 1K block size.
Any idea why dumpe2fs is giving a 4K block size?
Thanks,
Neil
--
Neil Aggarwal, (83...
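The arithmetic in the post above is consistent with repquota counting in 1K units; a quick check with pure shell arithmetic (no filesystem access needed):

```shell
# Mail file size from the post, in bytes.
bytes=10493792
echo $(( bytes / 4096 ))   # 4 KB blocks: 2561 (the post's "2562 or so")
echo $(( bytes / 1024 ))   # 1 KB blocks: 10247, close to repquota's 10264
# On many systems repquota reports usage in 1 KB units regardless of the
# filesystem block size, which would reconcile the two tools.
```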
2011 Jun 19
2
RFT: virtio_net: limit xmit polling
OK, different people seem to test different trees. In the hope to get
everyone on the same page, I created several variants of this patch so
they can be compared. Whoever's interested, please check out the
following, and tell me how these compare:
kernel:
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
virtio-net-limit-xmit-polling/base - this is net-next baseline to test
2020 May 03
9
Understanding VDO vs ZFS
...s the smell test. I would expect that if the logical volume
contains three copies of essentially identical data, I should see
deduplication numbers close to 3.00, but instead I'm seeing numbers
like 1.15. I compute the compression number as follows:
Use df and extract the value for "1k blocks used" from the third column
use vdostats --verbose and extract the number titled "1K-blocks used"
Divide the first by the second.
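The three steps above can be sketched as follows. The df figure here is hypothetical, and the vdostats number is the one quoted earlier in the thread, supplied as canned text so the sketch runs without a VDO device; on a live system both inputs would come from `df -k <mountpoint>` (column 3) and `vdostats --verbose <device>`.

```shell
# Step 1: "1K-blocks used" from df, third column (hypothetical figure).
df_used=4817000
# Step 2: "1K-blocks used" from vdostats --verbose, canned from the thread.
vdo_used=$(echo '1K-blocks used : 4212024' | awk -F: '{gsub(/ /,"",$2); print $2}')
# Step 3: divide. ~1.14 here, i.e. little dedup benefit, versus the ~3.00
# expected for three essentially identical copies of the data.
awk -v a="$df_used" -v b="$vdo_used" 'BEGIN { printf "%.2f\n", a/b }'
```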
Can you provide any advice on my use of ZFS or VDO without telling me
that I should be doing backups differently?
Thanks
David