Displaying 20 results from an estimated 47 matches for "mkfile".
2010 May 02
8
zpool mirror (dumb question)
...idea, but hey, does it hurt to ask?
I have been thinking: would it be a good idea to have, on the 2TB
drives, say 1TB or 500GB "files" and then mount them as mirrors? So
basically, have a 2TB hard drive set up like:
(where drive1 and drive2 are the paths to the mount points)
mkfile 465g /drive1/drive1part1
mkfile 465g /drive1/drive1part2
mkfile 465g /drive1/drive1part3
mkfile 465g /drive1/drive1part4
mkfile 465g /drive2/drive2part1
mkfile 465g /drive2/drive2part2
mkfile 465g /drive2/drive2part3
mkfile 465g /drive2/drive2part4
(I use 465gb, as 2TB = 2 trillion bytes,...
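As a sketch of the idea above (the pool name 'tank' is a placeholder, and note that mkfile is lowercase and takes a bare `g` suffix rather than "gb"), each mirror would pair one file from each drive, so losing one physical disk degrades the mirrors rather than destroying them:

```shell
# Hypothetical sketch: file-backed vdevs mirrored across two drives.
# 'tank' is a placeholder pool name; sizes shortened to two files per drive.
mkfile 465g /drive1/drive1part1 /drive1/drive1part2
mkfile 465g /drive2/drive2part1 /drive2/drive2part2
zpool create tank \
    mirror /drive1/drive1part1 /drive2/drive2part1 \
    mirror /drive1/drive1part2 /drive2/drive2part2
```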
2009 Oct 05
1
bsd mkfile command in centos - a wish
I enjoy the convenience of the mkfile command found in IRIX and BSD-based
distros.
This command allows me to make files of any size;
usage: mkfile [-nv] size[b|k|m|g] filename ...
I've looked here and there and can't seem to find it for CentOS.
Anyone have ideas on where I can get at least the source for it?
I got someth...
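Since mkfile itself doesn't ship with CentOS, the usual stand-ins are stock coreutils and util-linux tools; a rough sketch of the equivalents (fallocate needs a reasonably recent util-linux and a filesystem that supports it):

```shell
# Rough CentOS/Linux equivalents of "mkfile size filename":

# Fully written file (like plain mkfile) -- slow, writes real zeros:
dd if=/dev/zero of=testfile bs=1M count=1024   # ~1 GiB

# Preallocated file -- fast, reserves blocks without writing them:
fallocate -l 1G testfile

# Sparse file (like "mkfile -n") -- size set, no blocks allocated yet:
truncate -s 1G testfile
```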
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey, guys
I just did a test using loop devices as vdevs for a zpool.
Procedure as follows:
1) mkfile -v 100m disk1
mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
error info here:
bash-3.00# zpool import pool1_1and2
cannot import 'p...
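Two things stand out in the transcript above: the import command names `pool1_1and2` while the pool was created as `pool_1and2`, and `zpool import` only scans /dev/dsk by default, so lofi-backed pools usually need `-d`. A cleaned-up sketch of the same test (Solaris; paths are placeholders):

```shell
# lofi-backed zpool round-trip; lofiadm -a prints the device it assigns
mkfile -v 100m /var/tmp/disk1 /var/tmp/disk2
lofiadm -a /var/tmp/disk1              # prints e.g. /dev/lofi/1
lofiadm -a /var/tmp/disk2              # prints e.g. /dev/lofi/2
zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
zpool export pool_1and2
zpool import -d /dev/lofi pool_1and2   # -d: scan /dev/lofi, not /dev/dsk
```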
2007 Jan 29
3
dumpadm and using dumpfile on zfs?
...zfs. This
worked for both a standard UFS slice and a SVM mirror using zfs.
Is there something that I'm doing wrong, or is this not yet supported on
ZFS?
Note this is Solaris 10 Update 3, but I don't think that should matter..
thanks,
peter
Using ZFS
========
HON hcb116 ~ $ mkfile -n 1g /var/adm/crash/dump-file
HON hcb116 ~ $ dumpadm -d /var/adm/crash/dump-file
dumpadm: dumps not supported on /var/adm/crash/dump-file
Using UFS
========
HON hcb115 ~ $ mkfile -n 1g /data/0/test
HON hcb115 ~ $ dumpadm -d /data/0/test
Dump content: kernel pages
Dump device: /da...
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found out that on average ~ 500 MB/s seems to be the maximum one can reach (tried initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would assume, that much more and at least in...
2007 Mar 28
20
Gzip compression for ZFS
Adam,
With the blog entry[1] you've made about gzip for ZFS, it raises
a couple of questions...
1) It would appear that a ZFS filesystem can support files of
varying compression algorithm. If a file is compressed using
method A but method B is now active, if I truncate the file
and rewrite it, is A or B used?
2) The question of whether or not to use bzip2 was raised in
the
2007 Oct 08
16
Fileserver performance tests
...982ops/s 0.0mb/s 0.0ms/op 27us/op-cpu
12746: 65.266:
IO Summary: 8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s, 508us cpu/op, 0.2ms
12746: 65.266: Shutting down processes
filebench>
I expected to see some higher numbers really...
a simple "time mkfile 16g lala" gave me something like 280Mb/s.
Would anyone comment on this?
TIA,
Tom
This message posted from opensolaris.org
2006 Sep 15
1
[Blade 150] ZFS: extreme low performance
...0t2d0s4 ONLINE 0 0 0
Then i created a ZFS with no extra options:
# zfs create mypool/zfs01
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 106K 27,8G 25,5K /mypool
mypool/zfs01 24,5K 27,8G 24,5K /mypool/zfs01
When I now run mkfile on the new FS, the performance of the whole system drops to near zero:
# mkfile 5g test
last pid: 25286; load avg: 3.54, 2.28, 1.29; up 0+01:44:26 16:16:24
66 processes: 61 sleeping, 3 running, 1 zombie, 1...
2010 Mar 17
1
How to reserve space for a file on a zfs filesystem
Hi all,
How to reserve space on a zfs filesystem? mkfile or dd will write
data to the blocks, which is time consuming, while "mkfile -n" will not
really hold the space.
And zfs's set reservation only works on a filesystem, not on a file?
Could anyone provide a solution for this?
Thanks very much
Vincent
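One common answer (a sketch, with 'tank' as a placeholder pool name): ZFS reservations apply to datasets, not to individual files, so the space is held by giving the backing file its own dataset, or by using a zvol instead of a file:

```shell
# Reserve space via a dedicated dataset rather than per file
zfs create tank/backing
zfs set reservation=10G tank/backing   # 10G held for this filesystem

# Or use a zvol, which reserves its full size by default:
zfs create -V 10g tank/vol01
```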
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi,
I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:
#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the filesystem after the backing store has gone.
#date
#sleep 60
#date
#zpool sta...
2007 Mar 28
6
ZFS and UFS performance
...a Sun V240 with 2 CPUS and 8 GB of memory. This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is configured as HW RAID 5 with 10 disks and 2 spares and it's exported to the V240 as a single LUN.
We create iso images of our product in the following way (high-level):
# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt
# cd /mnt; zcat /product/myproduct.tar.Z | tar xf -
and we finally use mkisofs to create the iso image.
UFS performance
----------------------
We created a UFS file system on the above LUN...
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM, OpenSolaris upgraded to snv_134.
The zpool
2009 Nov 24
9
Best practices for zpools on zfs
...about variable block sizes and the implications for performance.
1. http://hub.opensolaris.org/bin/view/Community+Group+zones/zoss
Suppose that on the storage server, an NFS shared dataset is created
without tuning the block size. This implies that when the client
(ldom or zone v12n server) runs mkfile or similar to create the
backing store for a vdisk or a zpool, the file on the storage server
will be created with 128K blocks. Then when Solaris or OpenSolaris is
installed into the vdisk or zpool, files of a wide variety of sizes
will be created. At this layer they will be created with variable...
2007 Nov 29
10
ZFS write time performance question
...128KB (default)
VxFS Block Size: 8KB(default)
The only thing different in setup for the ZFS vs. VxFS tests is the file system, and an array support module (ASM) was installed for the RAID in the VxFS test case.
Test Case: Run 'iostat', then write a 1GB file using 'mkfile 1g testfile' and then run iostat again.
ZFS Test Results: The KB written per second averaged around 250KB.
VxFS Test Results: The KB written per second averaged around 70KB.
When I fixed the ZFS record size to 8KB the KB written per second averaged 110KB.
My questions may be too general...
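For reference, the 8KB record size mentioned above is set per dataset (the dataset name here is a placeholder), and it only affects newly written blocks:

```shell
# Match ZFS recordsize to the application's 8KB writes
# ('tank/fs' is a placeholder dataset; affects newly written files only)
zfs set recordsize=8k tank/fs
zfs get recordsize tank/fs
```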
2006 Mar 17
2
> 1TB filesystems with ZFS and 32-bit Solaris?
Solaris in 32-bit mode has a 1TB device limit. UFS filesystems in 32-bit
mode also have a 1TB limit, even if using a logical volume manager to
span smaller than 1TB devices.
So, what kind of limit does ZFS have when running under 32-bit Solaris?
--
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
Santa Clara, CA
2009 Oct 29
2
Difficulty testing an SSD as a ZIL
Hi all,
I received my SSD, and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported, I get an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest
Created a new pool:
zpool create ziltest2 /data01/test2/1gtest
Added the SSD as a log:
zpool add -f ziltest2 log c7t1d0
(c7t1d0 is my SSD. I used the -f option since I had done this before with a pool called 'ziltest', same results)
A 'zpool s...
2010 Jan 22
0
Removing large holey file does not free space 6792701 (still)
...o,
I mentioned this problem a year ago here and filed 6792701 and I know it has been discussed since. It should have been fixed in snv_118, but I can still trigger the same problem. This is only triggered if the creation of a large file is aborted, for example by loss of power, crash or SIGINT to mkfile(1M). The bug should probably be reopened but I post it here since some people were seeing something similar.
Example and attached zdb output:
filer01a:/$ uname -a
SunOS filer01a 5.11 snv_130 i86pc i386 i86pc Solaris
filer01a:/$ zpool create zp...
2006 Nov 20
1
Temporary mount Properties, small bug?
Hi,
Just playing with zfs and the admin manual ...
# mkfile 100m /export/zfs/disk1
# zpool create data /export/zfs/disk1
# zfs create data/users
# zfs mount -o remount,noatime data/users
# zfs get all data/users
NAME PROPERTY VALUE SOURCE
data/users type filesystem -
data/users cr...
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in cluster environment it would be nice to be able
to have some SSDs as local drives (not on SAN) and when pool switches
over to the other node, zfs would pick up the node's local disk drives as
L2ARC.
To clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
2008 Jan 15
4
Moving zfs to an iSCSI Equallogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We currently are going to attach this system to an Equallogic box, and will attach an iSCSI LUN from the Equallogic box to the V440 of about 200GB. The Equallogic box is configured as a hardware RAID 50 (two hot spares for redundancy).
My question is what's the best approach to moving the ZFS