similar to: "zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09

Displaying 20 results from an estimated 7000 matches similar to: ""zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09"

2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it fails because it DOES exist. I really expected one of those to work. So, what am I confused about now? (Running 2008.11) # zpool import -R /backups/bup-ruin bup-ruin # zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv bup-ruin/fsfs/zp1" cannot receive: specified fs (bup-ruin/fsfs/zp1)
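
For context, a hedged sketch using the names from the post (not necessarily the thread's resolution): a full zfs send -R stream received without -d goes to exactly the name you give, which must not exist yet, while with -d the named target must already exist and the sent path minus the pool name is appended under it. One way the intended copy is commonly done:

  # zfs create -p bup-ruin/fsfs
  # zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -v bup-ruin/fsfs/zp1

Here bup-ruin/fsfs exists as the parent, bup-ruin/fsfs/zp1 does not yet exist, and the receive creates it plus all the descendants carried in the recursive stream.
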
2007 Sep 27
6
Best option for my home file server?
I was recently evaluating much the same question, but with only a single pool and sizing my disks equally. I only need about 500GB of usable space, and so I was considering the value of 4x 250GB SATA drives versus 5x 160GB SATA drives. I had intended to use an AMS 5-disk-in-3 (5.25" bay) hot-swap backplane. http://www.american-media.com/product/backplane/sata300/sata300.html I priced
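
For comparison, a rough sketch of the layouts being weighed ('tank' and the cXtYd0 device names are placeholders; usable sizes are approximate, before filesystem overhead):

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0            (4x 250GB raidz, ~750GB usable)
  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0     (5x 160GB raidz, ~640GB usable)
  # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0    (two 250GB mirror pairs, ~500GB usable)
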
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all-in
2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got a full Bash trace of it, so I know exactly what was done. There are a moderate number of snapshots on the zp1 pool, and I'm intending to replicate the whole thing into the backup pool. After housekeeping, I take a current snapshot on the data pool (zp1). Since this is a new full backup, I then
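
The general replicate-whole-pool-into-a-backup-pool pattern being described looks roughly like this (a hedged sketch; the snapshot labels and the backup target 'bup/zp1' are placeholders, not the script's actual names):

  # zfs snapshot -r zp1@bup-new
  # zfs send -R zp1@bup-new | zfs receive -v bup/zp1                     (first full copy; bup/zp1 must not exist yet)
  # zfs snapshot -r zp1@bup-newer                                        (later runs: incremental replication)
  # zfs send -R -I zp1@bup-new zp1@bup-newer | zfs receive -vF bup/zp1
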
2006 Oct 05
1
solaris-supported 8-port PCI-X SATA controller
I've lucked into some big disks, so I'm thinking of biting the bullet (screaming loudly in the process) and superseding the SATA controllers on my motherboard with something that will work with hot-swap in Solaris. (Did I mention before I'm still pissed about this?) I have enough to populate all 8 bays (meaning adding 4 disks to what I have now), so the 6 ports on the
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
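
A generic form of this that is known to work (a hedged sketch; the host 'newserver' and the dataset names are placeholders): the receiving side needs root or an equivalent rights profile, e.g. via pfexec, and giving the full path to zfs avoids PATH surprises on the remote end.

  # zfs snapshot oldpool/media@move
  # zfs send oldpool/media@move | ssh newserver pfexec /usr/sbin/zfs receive -v newpool/media
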
2010 Jul 14
6
Xen cpu requirements
I'm installing Centos 5.5 on a new Dell R301 server. I wanted to run Xen and have the full virtualization possibilities (this is our development support server, so it runs a few real services and is available for playing with things; putting the "playing with things" functions into virtual servers would protect the "few real services", and make it easier to clean up
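
For the "full virtualization" part specifically, HVM guests need VT-x/AMD-V in the CPU. Two common checks (the second is more reliable once you are already booted on the Xen kernel, since dom0 may hide the CPU flags):

  # grep -E 'vmx|svm' /proc/cpuinfo        (on bare metal: vmx = Intel VT-x, svm = AMD-V)
  # xm info | grep xen_caps                (an hvm-3.0-... entry means HVM guests can run)
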
2010 Jan 25
24
Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard
My current home fileserver (running OpenSolaris 111b and ZFS) has an ASUS M2N-SLI DELUXE motherboard. This has 6 SATA connections, which are currently all in use (a mirrored pair of 80GB for the system zfs pool, two mirrors of 400GB both in my data pool). I've got two more hot-swap drive bays. And I'm getting up towards 90% full on the data pool. So, it's time to expand,
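
For the ZFS side of such an expansion, adding two more drives as a third mirror is a single zpool add (a sketch; the pool name and cXtYd0 device names are placeholders). Existing data stays where it is; new writes are spread across all three mirrors.

  # zpool add datapool mirror c5t0d0 c5t1d0
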
2006 Oct 11
41
ZFS Inexpensive SATA Whitebox
All, So I have started working with Solaris 10 at work a bit (I'm a Linux guy by trade) and I have a dying nfs box at home. So the long and short of it is as follows: I would like to set up a SATA II whitebox that uses ZFS as its filesystem. The box will probably be very lightly used; streaming media to my laptop and workstation would be the bulk of the work. However I do have quite a
2010 Jul 15
3
xm console -- what should I get?
If I type "xm console 6", say (when I have a virtual machine 6 running), what should I get? The documentation seems to indicate that I should get something that behaves like a telnet to a serial console. What I actually get is a connection that might show me a couple of lines of output that do look like they belonged on the console, but doesn't seem to accept any input (except that
2006 Aug 11
2
Looking for motherboard/chipset experience, again
What about the Asus M2N-SLI Deluxe motherboard? It has 7 SATA ports, supports ECC memory, socket AM2, generally looks very attractive for my home storage server. Except that it, and the nvidia nForce 570-SLI it's built on, don't seem to be on the HCL. I'm hoping that's just "yet", not reported yet. Anybody run Solaris on it? Or at least on any
2010 Jul 15
2
Finding DHCP IP of guest system
If I can log in to the guest through the console, I can of course find out what IP DHCP has assigned it. If I configure a static IP I can of course connect to the system there (if it runs services, the firewall allows it, all the usual caveats). Does there happen to be any way to determine from dom0 what IPs are participating in the network and which guests they belong to? (I'm configuring
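
One approach that sometimes works (a sketch, not a guaranteed method): read the guest's MAC address from dom0, then match it against the ARP table or the DHCP server's lease file. 'myguest' is a placeholder, and the guest must have generated some traffic for arp to know about it.

  # xm network-list myguest                      (shows the vif's MAC address)
  # arp -n | grep -i 00:16:3e                    (00:16:3e:* is the Xen vendor MAC prefix)
  # less /var/lib/dhcpd/dhcpd.leases             (if the DHCP server is one of your own boxes)
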
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering what drives to put in the bays. My chassis is a Supermicro SC846A, so the backplane supports SAS or SATA; my controllers are LSI3081E, again supporting SAS or SATA. Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM drive in both SAS and SATA configurations; the SAS model offers
2008 Sep 17
1
Setting VNC console port in virt-install
Using Centos 5.2 with Xen. I'm making a group of nodes behind an LVS load-director to perform computing services. Those nodes are only accessible from the LVS nodes (I'm using LVS NAT mode). Actually it's Xen virtual servers on the physical nodes behind the LVS boxes that I'm mostly concerned with. When you create a guest with virt-install, there's a vnc param and a vncport
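
If memory serves for the CentOS 5 era virt-install (a hedged sketch; the name, sizes, and install URL are placeholders): --vnc enables a VNC console and --vncport pins the port. Note the VNC server binds on dom0, so from outside the LVS NAT you would still reach it via the dom0 address, possibly with a (vnc-listen '0.0.0.0') setting in /etc/xen/xend-config.sxp or an ssh tunnel.

  # virt-install --name node1 --ram 1024 --file /var/lib/xen/images/node1.img \
        --paravirt --location http://mirror.centos.org/centos/5/os/x86_64/ \
        --vnc --vncport=5905
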
2010 Mar 18
13
ZFS/OSOL/Firewire...
An interesting thing I just noticed here testing out some Firewire drives with OpenSolaris. Setup: OpenSolaris 2009.06 and a dev version (snv_129); 2x 500GB Firewire 400 drives with integrated hubs for daisy-chaining (net: 4 devices on the chain) - one SATA bridge, one PATA bridge. Created a zpool with both drives as simple vdevs. Started a zfs send/recv to back up a local filesystem. Watching
2010 Jun 04
5
Depth of Scrub
Hi, I have a small question about the depth of scrub in a raidz/2/3 configuration. I'm quite sure scrub does not check spares or unused areas of the disks (though it could check whether the disks detect any errors there). But what about the parity? Obviously it has to be checked, but I can't find any indication of it in the literature. The man page only states that the data is being
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does zfs send / zfs receive mean it will defrag? -- This message posted from opensolaris.org
2010 Apr 27
42
Performance drop during scrub?
Hi all I have a test system with snv_134 and 8x 2TB drives in RAIDZ2 and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
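
On the "will ZIL or L2ARC help" part, a sketch with placeholder device names: a separate log device mainly helps synchronous NFS writes, and an L2ARC device helps repeated reads; neither directly lowers scrub's priority.

  # zpool add testpool log c9t0d0              (dedicated ZIL / slog device)
  # zpool add testpool cache c9t1d0            (L2ARC cache device)
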
2008 Aug 21
1
Xen "bridged" networking config
I've got a Centos guest and a Windows 2003 server guest running in Xen under Centos (5.2 in both cases), and they can get out to the network, and I can ping them from dom0. This is my first Xen install, and I haven't used Linux as a router before (I'm very familiar with it as a webserver and development platform) so I'm a bit weak on the bridging code and NAT / IP masquerading.
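
For orientation, a sketch of the stock CentOS 5 Xen bridging pieces (the MAC and bridge name are placeholders): xend's network-bridge script creates xenbr0 and enslaves the physical NIC, each guest's vif is attached to that bridge, and no NAT or masquerading is involved for plain bridged guests.

  /etc/xen/xend-config.sxp:
      (network-script network-bridge)
      (vif-script vif-bridge)
  guest config (/etc/xen/guestname):
      vif = [ "mac=00:16:3e:12:34:56, bridge=xenbr0" ]
  # brctl show                                  (verify the vifs are attached to the bridge)
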
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5TB drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
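
That layout is just a pool of mirror vdevs; ZFS stripes across them automatically, so there is no separate "stripe" step (a sketch with placeholder device names; roughly 6TB usable from 8x 1.5TB):

  # zpool create tank mirror c0t1d0 c0t5d0 mirror c0t2d0 c0t6d0 \
        mirror c0t3d0 c0t7d0 mirror c0t4d0 c0t8d0
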