similar to: send/recv over ssh

Displaying 20 results from an estimated 3000 matches similar to: "send/recv over ssh"

2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc) and still be able to use the full space of drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
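For the new-machine case, the usual recipe is a recursive snapshot piped over ssh; a minimal sketch with hypothetical pool and host names (assumes the receiving side can run zfs recv as root or with delegated permissions):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | ssh newhost zfs recv -dF tank2

With -d the sending pool's name is stripped, so tank/home lands as tank2/home on the new box.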
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering what drives to put in the bays. My chassis is a Supermicro SC846A, so the backplane supports SAS or SATA; my controllers are LSI3081E, again supporting SAS or SATA. Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM drive in both SAS and SATA configurations; the SAS model offers
2010 Jun 04
5
Depth of Scrub
Hi, I have a small question about the depth of scrub in a raidz/2/3 configuration. I'm quite sure scrub does not check spares or unused areas of the disks (it could check whether the disks detect any errors there). But what about the parity? Obviously it has to be checked, but I can't find any indications for it in the literature. The man page only states that the data is being
2012 Dec 14
12
any more efficient way to transfer snapshot between two hosts than ssh tunnel?
Assuming a secure and trusted env, we want to get the maximum transfer speed without the overhead from ssh. Thanks. Fred
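In a trusted network the stock answer is to take ssh out of the data path entirely and pipe the stream over a raw TCP socket; a minimal netcat sketch with hypothetical host, port and dataset names (some netcat builds want "nc -l -p 9090" instead of "nc -l 9090"):

(receiving host)
# nc -l 9090 | zfs recv -F tank/backup
(sending host)
# zfs send tank@today | nc recvhost 9090

mbuffer does the same job with large memory buffers on both ends, which usually keeps both sides streaming instead of stalling.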
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it fails because it DOES exist. I really expected one of those to work. So, what am I confused about now? (Running 2008.11) # zpool import -R /backups/bup-ruin bup-ruin # zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv bup-ruin/fsfs/zp1 cannot receive: specified fs (bup-ruin/fsfs/zp1)
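As I read the zfs(1M) man page, recv -d strips only the pool name from the sent dataset and appends whatever is left to the target, so for a top-level send of zp1 there is nothing left to append and the stream tries to land in the target itself, which explains the "doesn't exist"/"does exist" whiplash. A hedged sketch of getting the layout the post seems to want, naming the destination explicitly instead of using -d (parent dataset assumed to exist first):

# zfs create bup-ruin/fsfs
# zfs send -R zp1@bup-20090222-054457UTC | zfs recv -vF bup-ruin/fsfs/zp1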
2010 May 28
6
zfs send/recv reliability
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv. Currently I'm attempting to create a script that will allow me to write a zfs stream to a tape via tar like below. # zfs send -R pool@something | tar -c > /dev/tape I'm primarily concerned with the possibility
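A variant often suggested is to skip tar (tar -c wants file names to archive, not a stream on stdin) and write the raw stream straight to the tape device with dd; a sketch assuming a Solaris non-rewinding tape device at /dev/rmt/0n and the hypothetical names from the post:

# zfs send -R pool@something | dd of=/dev/rmt/0n obs=1048576
# dd if=/dev/rmt/0n ibs=1048576 | zfs recv -dvF pool2

The usual caveat applies: a zfs stream has no internal redundancy, so comparing a checksum of what was written against the original stream is worth the extra pass before trusting the tape.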
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. Both are CPU limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp (arcfour) the gz file to the
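A common suggestion for the mbuffer broken-pipe problem is to use the same block size (-s) on both ends and a generous buffer (-m); a sketch with hypothetical host, port and dataset names:

(receiver)
# mbuffer -I 9090 -s 128k -m 2G | zfs recv -dF tank
(sender)
# zfs send -R tank@snap | mbuffer -O recvhost:9090 -s 128k -m 2G

That keeps ssh and its cipher overhead out of the data path entirely; the intermediate-gzip-file approach trades wire speed for an extra full write and read of the data.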
2010 Oct 19
8
Balancing LVOL fill?
Hi all I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored to add compression and now dedup seems to be a send / receive pipe similar to: zfs send -R <old fs>@snap | zfs recv -d <new fs> However, according to the man page,
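One wrinkle worth knowing: send -R preserves the source properties, so the old compression/dedup settings ride along with the data, whereas a plain send (no -R, no -p) carries no properties and the received copy inherits them from its new parent. A minimal sketch with hypothetical dataset names:

# zfs set dedup=on tank
# zfs snapshot tank/data@rewrite
# zfs send tank/data@rewrite | zfs recv tank/data-dedup

Since the plain stream carries no properties, tank/data-dedup inherits dedup=on from tank and the blocks are deduplicated as they are rewritten.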
2010 Jul 14
6
Xen cpu requirements
I'm installing Centos 5.5 on a new Dell R301 server. I wanted to run Xen and have the full virtualization possibilities (this is our development support server, so it runs a few real services and is available for playing with things; putting the "playing with things" functions into virtual servers would protect the "few real services", and make it easier to clean up
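Full virtualization (HVM) hinges on VT-x/AMD-V being present and enabled in the BIOS. A quick check, assuming a stock CentOS 5 install: count the CPU flags on the regular kernel, then look at the hypervisor capabilities on the Xen kernel (dom0 kernels typically mask vmx/svm in /proc/cpuinfo, so the first test is only meaningful outside Xen):

# egrep -c 'vmx|svm' /proc/cpuinfo
# xm info | grep xen_caps

A non-zero count from the first, and hvm- entries in xen_caps from the second, mean HVM guests should be possible alongside paravirtualized ones.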
2007 Sep 27
6
Best option for my home file server?
I was recently evaluating much the same question but with only a single pool and sizing my disks equally. I only need about 500GB of usable space and so I was considering the value of 4x 250GB SATA drives versus 5x 160GB SATA drives. I had intended to use an AMS 5-disk-in-3-bay 5.25" hot-swap backplane. http://www.american-media.com/product/backplane/sata300/sata300.html I priced
2010 Jul 15
3
xm console -- what should I get?
If I type "xm console 6", say (when I have a virtual machine 6 running), what should I get? The documentation seems to indicate that I should get something that behaves like a telnet to a serial console. What I actually get is a connection that might show me a couple of lines of output that do look like they belonged on the console, but doesn't seem to accept any input (except that
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out what datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd. I ran # zdb -DDDD which gave output like: index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1> [L0 deduplicated block] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=20000L/20000P
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim at tcsac.net> wrote: > Perfect. Which means good ol' supermicro would come through :) WOHOO! > > AOC-USAS-L8i > > http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm Is this card new? I'm not finding it at the usual places like Newegg, etc. It looks like the LSI SAS3081E-R, but probably at 1/2 the
2010 Jul 15
2
Finding DHCP IP of guest system
If I can log in to the guest through the console, I can of course find out what IP DHCP has assigned it. If I configure a static IP I can of course connect to the system there (if it runs services, the firewall allows it, all the usual caveats). Does there happen to be any way to determine from dom0 what IPs are participating in the network and which guests they belong to? (I'm configuring
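dom0 does know each guest's MAC address, so one low-tech approach is to map MAC to IP from dom0's ARP table once the guests have done some network I/O; a sketch with a hypothetical domain name (00:16:3e is the Xen vendor prefix):

# xm network-list guest1
# arp -an | grep -i 00:16:3e

The first command lists the vif MACs for that domain; matching them against the ARP entries gives the DHCP-assigned addresses, at least for guests that have been talking on the segment recently.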
2012 Mar 05
10
Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS
Greetings, Quick question: I am about to acquire some disks for use with ZFS (currently using zfs-fuse v0.7.0). I'm aware of some 4k alignment issues with Western Digital advanced format disks. As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses 512B sectors and so I presume does not suffer from such issues (because it doesn't lie about the physical layout
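For what it's worth, the thing that matters on the ZFS side is the ashift each vdev was created with; once a pool exists you can see it in zdb's cached-config dump (ashift=9 means 512-byte alignment, ashift=12 means 4K). I'm assuming the zfs-fuse zdb behaves like the OpenSolaris one here:

# zdb | grep ashift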
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5tb drives, wouldn't I: - mirror drive 1 and 5 - mirror drive 2 and 6 - mirror drive 3 and 7 - mirror drive 4 and 8 Then stripe 1,2,3,4 Then stripe 5,6,7,8 How does one do this with ZFS?
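ZFS doesn't make you stripe explicitly; it stripes dynamically across every top-level vdev in the pool, so the RAID 10 equivalent is simply a pool built from mirror pairs. A sketch with hypothetical device names for the 8-drive layout described:

# zpool create tank mirror c1t1d0 c1t5d0 mirror c1t2d0 c1t6d0 \
    mirror c1t3d0 c1t7d0 mirror c1t4d0 c1t8d0

That gives four 2-way mirrors with writes spread across all four, i.e. the 1+5 / 2+6 / 3+7 / 4+8 pairing from the post.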
2009 Jan 21
8
cifs perfomance
Hello! I am setting up a zfs / cifs home storage server, and now have low performance playing movies stored on this zfs from a windows client. The server hardware is not new, but on windows its performance was normal. CPU is an AMD Athlon Burton Thunderbird 2500, running at 1.7GHz, 1024MB RAM, and storage: usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
2009 Nov 12
8
"zfs send" from solaris 10/08 to "zfs receive" on solaris 10/09
I built a fileserver on solaris 10u6 (10/08) intending to back it up to another server via zfs send | ssh othermachine 'zfs receive'. However, the new server is too new for 10u6 (10/08) and requires a later version of solaris; presently available is 10u8 (10/09). Is it crazy for me to try the send/receive with these two different versions of OSes? Is it possible the
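As I understand it, the stream format only moves forward: a stream produced on an older zpool/zfs version can be received on a newer release, but not the other way around, so 10u6 to 10u8 should be the safe direction. Comparing what each side supports is just (pool name hypothetical; the first two list every version the installed release understands):

# zpool upgrade -v
# zfs upgrade -v
# zpool get version tank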