Displaying 20 results from an estimated 50000 matches similar to: "Backing up ZFS snapshots"
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
with Oracle database?
We are experiencing some slowness on writes to the ZFS filesystem. It takes
about 530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database with about 18gb storage allocated.
Are there tunable parameters that we can apply to
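A common starting point for the question above is aligning ZFS record size with the database block size. This is a minimal sketch, not the thread's answer; the dataset names (tank/oradata, tank/oralogs) are hypothetical, and logbias requires a release that supports the property.

```shell
# Match the dataset record size to Oracle's db_block_size (commonly 8K)
# so each database write rewrites one 8K record instead of a 128K default.
# Set this BEFORE loading data; it only affects newly written blocks.
zfs set recordsize=8k tank/oradata

# Redo logs are written sequentially; on releases with logbias support,
# a separate dataset biased for throughput can help.
zfs set logbias=throughput tank/oralogs

# Confirm the properties took effect.
zfs get recordsize,logbias tank/oradata tank/oralogs
```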
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1.
I want to add "phase 2" which is another 7x1.5tb raidz1
Can I add the second phase to the first phase and basically have two
raid5's striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
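The operation asked about is a plain `zpool add`; ZFS then stripes writes across both raidz1 vdevs. A sketch with hypothetical device names follows; note that on these builds adding a vdev is permanent (it cannot be removed again).

```shell
# Add a second 7-disk raidz1 vdev to the existing pool 'tank'.
# Device names are placeholders; substitute your own. This is
# irreversible: the new vdev cannot later be detached from the pool.
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# Optionally bring the pool's on-disk format up to the newest version
# supported by the running build, then verify the layout.
zpool upgrade tank
zpool status tank
```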
2010 Mar 18
13
ZFS/OSOL/Firewire...
An interesting thing I just noticed here testing out some Firewire drives with OpenSolaris.
Setup :
OpenSolaris 2009.06 and a dev version (snv_129)
2x 500GB FireWire 400 drives with integrated hubs for daisy-chaining (net: 4 devices on the chain)
- one SATA bridge
- one PATA bridge
Created a zpool with both drives as simple vdevs
Started a zfs send/recv to backup a local filesystem
Watching
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives and some normal 512B-sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
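For what it's worth, ashift is recorded per top-level vdev, so a pool can carry mixed values if its vdevs were created at different times. A quick way to check (pool name 'tank' is hypothetical):

```shell
# Print the cached pool config and pull out each vdev's ashift.
# ashift=9 means 512-byte sectors; ashift=12 means 4K sectors.
zdb -C tank | grep ashift
```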
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick:
> Log message:
> PSARC 2009/571 ZFS Deduplication Properties
> 6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
Via c0t0d0s0.org.
2009 Oct 28
4
compiling 3.2.15: cifs.upcall not found after RPM build
Hello,
Trying to compile Samba 3.2.15 on a RHEL AS 4u2 (i686) and I'm getting the
following result from 'sh makerpms.sh':
> Provides: samba-doc = 3.2.15-1
> Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(VersionedDependencies) <=
3.0.3-1
>
>
> RPM build errors:
> File not found:
2011 Jun 14
10
ZFS for Linux?
Hello,
A college friend of mine is using Debian Linux on his desktop,
and wondered if he could tap into ZFS goodness without adding
another server in his small quiet apartment or changing the
desktop OS. According to his research, there are some kernel
modules for Debian which implement ZFS, or a FUSE variant.
Can anyone comment how stable and functional these are?
Performance is a
2008 Oct 31
14
questions on zfs backups
On Thu, Oct 30, 2008 at 11:05 PM, Richard Elling <Richard.Elling at sun.com> wrote:
> Philip Brown wrote:
>> I've recently started down the road of production use for zfs, and am hitting my head on some paradigm shifts. I'd like to clarify whether my understanding is correct, and/or whether there are better ways of doing things.
>> I have one question for
2009 Apr 19
21
[on-discuss] Reliability at power failure?
Casper.Dik at Sun.COM wrote:
>
> I would suggest that you follow my recipe: not check the boot-archive
> during a reboot. And then report back. (I'm assuming that that will take
> several weeks)
>
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool status
2010 Apr 10
41
Secure delete?
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
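Since snapshots are read-only, the only way to drop every copy of a file is to remove it from the live dataset and destroy each snapshot that still references its blocks. A sketch with hypothetical dataset and snapshot names:

```shell
# Remove the file from the live filesystem first.
rm /tank/data/secret.bin

# List the snapshots that may still reference the file's blocks...
zfs list -t snapshot -o name -r tank/data

# ...and destroy each one. Only after the last referencing snapshot is
# gone are the blocks actually freed. ZFS itself offers no way to
# overwrite the freed blocks in place.
zfs destroy tank/data@monday
zfs destroy tank/data@tuesday
```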
2009 Feb 04
26
ZFS snapshot splitting & joining
Hello everyone,
I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for offsite storage. In many cases, the snapshots greatly exceed the 8GB I can stuff onto a single DVD-DL.
In order to make this work, I have used the "split" utility to break the images into smaller, fixed-size chunks that will fit onto a DVD. For example:
#split -b8100m
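The mechanics described above can be sketched end-to-end. The pool/dataset names and burn directory below are hypothetical, and a plain file stands in for the send stream so the split/rejoin step is demonstrable; note that split's suffixes (aa, ab, ...) sort lexically, so a shell glob reassembles the chunks in order.

```shell
# Real usage (hypothetical names) pipes the send stream into split,
# then rejoins the chunks with cat on the receiving side:
#   zfs send tank/home@backup | split -b 8100m - /burn/home.snap.
#   cat /burn/home.snap.* | zfs receive tank/home_restored

# Demonstrate with a plain file that split + cat round-trips exactly.
dd if=/dev/urandom of=/tmp/stream.bin bs=1024 count=1024 2>/dev/null
split -b 256k /tmp/stream.bin /tmp/stream.part.
cat /tmp/stream.part.* > /tmp/stream.joined
cmp /tmp/stream.bin /tmp/stream.joined && echo "round-trip OK"
```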
2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html
Good stuff for ZFS.
Fred
2010 Aug 28
5
native ZFS on Linux
This just popped up:
> In terms of how native ZFS for Linux is being handled by [KQ
> Infotech], they are releasing their ported ZFS code under the Common
> Development & Distribution License and will not be attempting to go
> for mainline integration. Instead, this company will just be
> releasing their CDDL source-code as a build-able kernel module for
> users and
2012 Nov 01
2
"starting" dovecot
My system never issues the "dovecot start" command. I do, however, run
/usr/local/libexec/dovecot/imap on port 9xxx. I talk to the server
through port 9xxx and through the preauth tunnel. Is this arrangement
OK? Are there some things that will only work if "dovecot" is invoked?
Thanks,
--
Dave Abrahams
BoostPro Computing Software Development
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identifying itself as:
Seagate-External-SG11-2.73TB
Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance
2012 Dec 04
2
"no longer mounted" warnings
Dovecot seems to be warning about every volume it's ever seen in the
past. Is this normal? Can I make it stop?
--8<---------------cut here---------------start------------->8---
12/4/12 12:33:38.148 PM dovecot[2658]: master: Warning: /Volumes/fs is no longer mounted. See http://wiki2.dovecot.org/Mountpoints
12/4/12 12:33:38.148 PM dovecot[2658]: master: Warning: /Volumes/dave is no
2007 Sep 27
6
Best option for my home file server?
I was recently evaluating much the same question, but with only a
single pool and sizing my disks equally.
I only need about 500GB of usable space and so I was considering the
value of 4x 250GB SATA Drives versus 5x 160GB SATA drives.
I had intended to use an AMS 5 disk in 3 5.25" bay hot-swap backplane.
http://www.american-media.com/product/backplane/sata300/sata300.html
I priced
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60Gb disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience.
This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2007 Feb 10
16
How to backup a slice ? - newbie
... though I tried, read and typed the last 4 hours; still no clue.
Please, can anyone give a clear idea on how this works:
Get the content of c0d1s1 to c0d0s7 ?
c0d1s1 is pool home and active; c0d0s7 is not active.
I have followed the suggestion on
http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
% sudo zfs snapshot home@backup
% zfs list
NAME USED AVAIL REFER