similar to: How to delegate filesystems from different pools to non-global zone

Displaying 20 results from an estimated 2000 matches similar to: "How to delegate filesystems from different pools to non-global zone"

2009 Jan 12
1
ZFS size is different?
Hi all, I have 2 questions about ZFS. 1. I have created a snapshot in my pool1/data1 and zfs send/recv it to pool2/data2, but I found the USED in zfs list is different:

    NAME         USED  AVAIL  REFER  MOUNTPOINT
    pool2/data2  160G  1.44T  159G   /pool2/data2
    pool1/data   176G  638G   175G   /pool1/data1

It holds about 30,000,000 files. The content of p_pool/p1 and backup/p_backup
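A minimal shell sketch of the workflow being described, with illustrative dataset and snapshot names; USED counts snapshot and descendant space as well, so the per-category breakdown is what to compare:

    # dataset/snapshot names are assumptions, not from the thread
    zfs snapshot pool1/data1@backup
    zfs send pool1/data1@backup | zfs recv pool2/data2
    # break USED into snapshot/dataset/child components on both sides:
    zfs list -o space pool1/data1 pool2/data2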
2011 Mar 30
2
Lists of tables and conditional statements
Hi R-users, I have a list containing numeric tables of differing row length. I want to make a new list that contains only rows from tables with a "Sum" greater than 3, plus the names of each table. I was wondering whether there is an elegant way to do this using apply or related functions, as this list has many thousands of such tables. Here is an example of the list: $AACS
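A hedged R sketch of one reading of the requirement, assuming each list element is a matrix or data frame with a column named "Sum" (the actual structure is not shown in the excerpt):

    ## tbls: named list of tables, each with a "Sum" column (assumed)
    filtered <- lapply(tbls, function(tb) tb[tb[, "Sum"] > 3, , drop = FALSE])
    ## names(tbls) carry over through lapply; drop tables left empty:
    filtered <- Filter(function(tb) nrow(tb) > 0, filtered)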
2007 Jun 26
2
NFS, nested ZFS filesystems and ownership
Hello, I'm sure there is a simple solution, but I am unable to figure this one out. Assuming I have tank/fs, tank/fs/fs1, tank/fs/fs2, and I set sharenfs=on for tank/fs (child filesystems inherit it as well), and I chown user:group /tank/fs, /tank/fs/fs1 and /tank/fs/fs2, I see:

    ls -la /tank/fs
    user:group .
    user:group fs1
    user:group fs2
    user:group some_other_file

If I mount
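For context, each ZFS filesystem is its own NFS export, so a client that mounts only the parent sees stub directories where the children live; a hedged sketch of mounting each export explicitly (server name and client paths are assumptions):

    # Solaris client syntax; one mount per child filesystem
    mount -F nfs server:/tank/fs     /tank/fs
    mount -F nfs server:/tank/fs/fs1 /tank/fs/fs1
    mount -F nfs server:/tank/fs/fs2 /tank/fs/fs2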
2011 Aug 11
6
unable to mount zfs file system.. please help
    # uname -a
    Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
    # rpm -qa | grep zfs
    zfs-test-0.5.2-1
    zfs-modules-0.5.2-1_2.6.18_194.el5
    zfs-0.5.2-1
    zfs-modules-devel-0.5.2-1_2.6.18_194.el5
    zfs-devel-0.5.2-1
    # zfs list
    NAME        USED  AVAIL  REFER  MOUNTPOINT
    pool1       120K  228G   21K    /pool1
    pool1/fs1   21K   228G   21K    /vik
    [root at
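If the problem is just the stale /vik mountpoint visible in the listing, a hedged first step (dataset name taken from the output above, target path assumed):

    zfs get mountpoint pool1/fs1
    zfs set mountpoint=/pool1/fs1 pool1/fs1   # assumed desired path
    zfs mount -a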
2006 Jan 13
26
A couple of issues
I've been testing ZFS since it came out on b27, and this week I BFUed to b30. I've seen two problems, one I'll call minor and the other major. The hardware is a Dell PowerEdge 2600 with 2 3.2GHz Xeons, 2GB memory and a perc3 controller. I have created a filesystem for over 1000 users on it and take hourly snapshots, which destroy the one from 24 hours ago, except the
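A hedged sketch of the hourly rotation being described; the dataset name and timestamp format are assumptions, and date -d is GNU date (adjust on Solaris):

    NOW=$(date +%Y%m%d%H)
    OLD=$(date -d '24 hours ago' +%Y%m%d%H)   # GNU date syntax
    zfs snapshot tank/users@$NOW
    zfs destroy tank/users@$OLD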
2018 Oct 10
6
same netbios aliases on multiple servers
Hi, can I set the same netbios name on multiple servers? More precisely, why does it not work?

server A:
[global]
  netbios name = FS1
  netbios aliases = fs fs.example.ru

server B:
[global]
  netbios name = FS2
  netbios aliases = fs fs.example.ru
2013 Jan 07
5
mpt_sas multipath problem?
Greetings, we're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
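A hedged starting point for the mpxio question, using the standard Solaris/illumos tools (a reboot follows stmsboot):

    stmsboot -D mpt_sas -e    # enable mpxio for mpt_sas-attached devices
    mpathadm list lu          # after reboot: list multipathed logical units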
2001 Nov 22
5
How to setup Rsync as an NT Service
For a recent project I needed to run Rsync as a service on Windows NT. The following link points to the instructions I created so you can recreate my steps. http://members.home.net/cbollerud2/projects/rsync/NTService.html The "no-fork" patch used here is very similar to the "no-detach" option mentioned in many previous posts. I wish I could take credit for it, but I basically put
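For reference, modern rsync ships the --no-detach daemon option for exactly this service-wrapper use case; a hedged sketch (config path assumed):

    rsync --daemon --no-detach --config=/etc/rsyncd.conf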
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances, but I don't have any experience with them. Anybody using these in their ZFS systems, and have you had good luck? Also, if
2007 Apr 19
14
Experience with Promise Tech. arrays/jbod's?
Greetings, in looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2007 Sep 28
4
Sun 6120 array again
Greetings, last April, in this discussion... http://www.opensolaris.org/jive/thread.jspa?messageID=143517 ...we never found out how (or if) the Sun 6120 (T4) array can be configured to ignore cache flush (sync-cache) requests from hosts. We're about to reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable zfs_nocacheflush is not going to serve us well (there is a ZFS
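For reference, the tunable mentioned is set via /etc/system; a hedged sketch, appropriate only when the array's write cache is nonvolatile:

    # takes effect after a reboot
    echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system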
2010 Oct 04
3
hot spare remains in use
Hi, I had a hot spare used to replace a failed drive, but then the drive appears to be fine anyway. After clearing the error it shows that the drive was resilvered, but keeps the spare in use.

    zpool status pool2
      pool: pool2
     state: ONLINE
     scrub: none requested
    config:
            NAME        STATE     READ WRITE CKSUM
            pool2       ONLINE       0     0     0
              raidz2
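The usual way to release a spare once the original drive checks out is zpool detach; a hedged sketch with an assumed device name:

    zpool detach pool2 c3t5d0   # c3t5d0: the in-use spare (name assumed)
    zpool status pool2          # spare should return to AVAIL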
2004 Feb 06
4
memory reduction
As those of you who watch CVS will be aware, Wayne has been making progress in reducing the memory requirements of rsync. Much of what he has done is the product of discussions between him and me that started a month ago with John Van Essen. Most recently Wayne has changed how the file_struct and its associated data are allocated, eliminating the string areas. Most of these changes have been
2007 Dec 05
2
zfs mirroring question
I create two zfs's on one pool of four disks with two mirrors, such as:

    zpool create tank mirror disk1 disk2 mirror disk3 disk4
    zfs create tank/fs1
    zfs create tank/fs2

Are fs1 and fs2 striped across all four disks? If two disks fail that represent a 2-way mirror, do I lose data? Brian.
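For what it's worth, ZFS stripes writes dynamically across top-level vdevs, so both filesystems span both mirrors, and losing both disks of one mirror loses the pool; the layout can be confirmed with:

    zpool status tank    # shows the two mirror vdevs side by side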
2018 Oct 10
2
same netbios aliases on multiple servers
We have many branches with differing link quality. Each branch has a Samba DC with a file server, and in order not to create separate labels for each branch, we set up geo round-robin on the DNS server.

branch A: net 192.168.1.0/24, DC and FS 192.168.1.1, name fs1
branch B: net 192.168.2.0/24, DC and FS 192.168.2.1, name fs2

If a user opens fs.example in branch A, they reach fs1; if they open fs.example in
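A hedged sketch of publishing one DNS name with an A record per branch file server, using plain round-robin (zone, DC name and credentials are assumptions; true geo-awareness needs more than this):

    samba-tool dns add dc1 example.ru fs A 192.168.1.1 -U administrator
    samba-tool dns add dc1 example.ru fs A 192.168.2.1 -U administrator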
2018 Oct 10
3
same netbios aliases on multiple servers
How do I set one DNS name on different Samba DC and FS servers?

> Having said all that, you do have different netbios names (I do hope
> the two machines are called fs1 & fs2) and you don't actually need to
> set it in smb.conf, Samba will do it for you.
>
> You are trying to set the same 'netbios aliases' on both Samba
> servers and, for the same reasons as the
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS, and an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
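A hedged first diagnostic for this kind of scaling cliff is watching pool and per-disk latency while the 80-client load runs (pool name assumed):

    zpool iostat -v tank 1
    iostat -xn 1           # high %b or asvc_t points at saturated disks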
2012 Jun 07
2
Performance optimization tips Gluster 3.3? (small files / directory listings)
Hi, I'm using Gluster 3.3.0-1.el6.x86_64 on two storage nodes in replicated mode (fs1, fs2). Node specs: CentOS 6.2, Intel quad-core 2.8GHz, 4GB RAM, 3ware RAID, 2x500GB SATA 7200rpm (RAID1 for OS), 6x1TB SATA 7200rpm (RAID10 for /data), 1Gbit network. I've mounted the data partition on web1, a dual quad-core 2.8GHz, 8GB RAM machine, using glusterfs (also tried NFS -> Gluster mount). We have 50GB of
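A hedged sketch of common Gluster small-file tunables from this era; the volume name and values are assumptions, not recommendations:

    gluster volume set datavol performance.cache-size 512MB
    gluster volume set datavol performance.io-thread-count 32
    gluster volume info datavol    # confirm the options took effect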
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi, I read some articles on solarisinternals.com, like the "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling cache flush: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH . It seems to be the only serious article on the net about this subject. Could someone here state on this
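Before changing anything, the value currently in effect can be read on a live Solaris system; a hedged sketch:

    echo 'zfs_nocacheflush/D' | mdb -k    # 0 = flushes honored (default)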
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper w.r.t. read AND write performance. I did some simple mkfile 512G tests and found out that on average ~500 MB/s seems to be the maximum one can reach (tried initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
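A hedged sketch of reproducing the sequential-write test being described, with an assumed pool name:

    mkfile 512g /tank/bigfile &     # sequential write load
    zpool iostat tank 5             # watch aggregate write bandwidth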
Currently I''m trying to figure out the best zfs layout for a thumper wrt. to read AND write performance. I did some simple mkfile 512G tests and found out, that per average ~ 500 MB/s seems to be the maximum on can reach (tried initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would