Displaying 20 results from an estimated 70 matches for "altroot".
2009 Aug 07
0
This is happening too
creating /altroot/samba-3.4.0/source3/exports/libnetapi.syms
Linking shared library bin/libnetapi.so.0
Compiling libsmb/libsmb_cache.c
In file included from libsmb/libsmb_cache.c:24:
include/libsmbclient.h:78:25: sys/statvfs.h: No such file or directory
In file included from libsmb/libsmb_cache.c:24:
include/libsmbc...
2007 Nov 13
3
this command can cause zpool to core dump!
in Solaris 10 U4,
type:
-bash-3.00# zpool create -R filepool mirror /export/home/f1.dat /export/home/f2.dat
invalid alternate root ''
Segmentation Fault (core dumped)
--
This message posted from opensolaris.org
2009 Aug 02
1
libsmb error
...ared inside
parameter list
In file included from libsmb/libsmb_cache.c:25:
include/libsmb_internal.h:509: warning: `struct statvfs' declared inside
parameter list
include/libsmb_internal.h:515: warning: `struct statvfs' declared inside
parameter list
The following command failed:
gcc -I. -I/altroot/samba-3.3.7/source -O -O -D_SAMBA_BUILD_=3
-I/altroot/samba-3.3.7/source/popt
-I/altroot/samba-3.3.7/source/iniparser/src -Iinclude -I./include -I. -I.
-I./lib/replace -I./lib/talloc -I./lib/tdb/include -I./libaddns -I./librpc
-DHAVE_CONFIG_H -I/usr/include/kerberosV -Iinclude -I./include -I. -I...
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it
fails because it DOES exist. I really expected one of those to work. So,
what am I confused about now? (Running 2008.11)
# zpool import -R /backups/bup-ruin bup-ruin
# zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv
bup-ruin/fsfs/zp1
cannot receive: specified fs (bup-ruin/fsfs/zp1)
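A sketch of how `zfs receive -d` derives destination names may clarify the confusion above: with -d, the pool component of the sent dataset's name is dropped and the remainder is appended to the target, and the target itself must already exist. This plain-shell function (a sketch, no real pools needed; names are illustrative) mimics that mapping:

```shell
#!/bin/sh
# Mimic the name mapping of `zfs receive -d TARGET`: drop the sent
# dataset's pool component and append what is left to TARGET.
recv_d_name() {
  target=$1 sent=$2                # sent = dataset name inside the stream
  rest=${sent#*/}                  # strip the leading pool component
  [ "$rest" = "$sent" ] && rest=   # top-level dataset: nothing remains
  printf '%s\n' "${target}${rest:+/$rest}"
}

recv_d_name bup-ruin/fsfs zp1        # the top-level dataset lands at the target
recv_d_name bup-ruin/fsfs zp1/home   # children are appended under it
```

If that mapping holds, the likely intent is to receive into an existing bup-ruin/fsfs and let -d place zp1's children beneath it, rather than naming the leaf bup-ruin/fsfs/zp1 explicitly.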
2007 Apr 30
4
need some explanation
Hi,
OS : Solaris 10 11/06
zpool list doesn't reflect pool usage stats instantly. Why?
# ls -l
total 209769330
-rw------T 1 root root 107374182400 Apr 30 14:28 deleteme
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
wo 136G 100G 36.0G 73% ONLINE -
# rm deleteme
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
wo 136G 100G 36.0G 73% ONLINE - ---> why
..... time passes
# zpool list
NAME...
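One detail worth checking in the listing above: the deleteme file is exactly 100 GiB, which is where the 100G USED figure comes from. The delayed drop after rm is consistent with ZFS freeing blocks asynchronously after the removing transaction commits, rather than with the stats being wrong. A quick sanity check of the size (plain awk, nothing ZFS-specific):

```shell
# 107374182400 bytes is exactly 100 GiB -- the same 100G that
# `zpool list` reports as USED before the deferred free completes.
awk 'BEGIN { printf "%.0f GiB\n", 107374182400 / (1024 ^ 3) }'
```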
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OpenSolaris 2009.06 on it.
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradata_fs1 532G 119K 532G 0% DEGRADED -
rpool 136G 28.6G 107G 21% ONLINE -
#
Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.
2011 Aug 14
4
Space usage
...I'm just uploading all my data to my server and the space used is much more than what I'm uploading;
Documents = 147MB
Videos = 11G
Software= 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated;
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
dpool 27.2T 21.2G 27.2T 0% 1.00x ONLINE -
It doesn't look like any snapshots have been taken, according to zfs list -t snapshot. I've read about the 'copies' parameter but I didn't specify this when creating filesystems and I guess the default...
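The arithmetic above works out to gigabytes, not terabytes: 147 MB + 11 GB + 1.4 GB is roughly 12.5 GB, so the gap to the 21.2G ALLOC is about 9 GB, not orders of magnitude. Plausible (unconfirmed) explanations for the remainder include raidz parity, which zpool list counts in ALLOC, and filesystem metadata. The sum, checked with awk:

```shell
# 147 MB + 11 GB + 1.4 GB, using decimal MB/GB as in the post:
awk 'BEGIN { printf "%.3f GB\n", 147 / 1000 + 11 + 1.4 }'
```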
2025 Feb 14
0
[Bug 3789] New: Follow symlinks on saving keys from ssh-keygen
...: Linux
Status: NEW
Severity: enhancement
Priority: P5
Component: ssh-keygen
Assignee: unassigned-bugs at mindrot.org
Reporter: dbelyavs at redhat.com
ssh-keygen does not create the .ssh directory for alternate homes
e.g.
# ln -s /root /altroot
# ls -l /root/.ssh
# ssh-keygen -t rsa -f /altroot/.ssh/rsa.key -N ""
Generating public/private rsa key pair.
Saving key "/altroot/.ssh/rsa.key" failed: No such file or directory
Looks like we can use, e.g. sftp_realpath for normalizing the path
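Until such an enhancement lands, a workaround is to resolve the symlink and create the missing .ssh directory before calling ssh-keygen. A sketch, run against throwaway temp directories rather than the real /root (all paths here are illustrative stand-ins):

```shell
#!/bin/sh
# Sketch of the workaround: normalize the symlinked path, then make
# sure the .ssh parent directory exists before generating a key.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/root"                  # stand-in for /root
ln -s "$tmp/root" "$tmp/altroot"   # stand-in for the /altroot symlink

keyfile=$tmp/altroot/.ssh/rsa.key
home=$(cd "$(dirname "$(dirname "$keyfile")")" && pwd -P)  # resolve the link
mkdir -p "$home/.ssh"
# Now `ssh-keygen -t rsa -f "$keyfile" -N ""` would find its directory.
ls -ld "$tmp/root/.ssh"
```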
2011 Jun 06
3
Available space confusion
...d moved a bunch of movies onto them.
And then I noticed that I've somehow lost a full TB of space. Why?
nebol at filez:/$ zfs list tank2
NAME USED AVAIL REFER MOUNTPOINT
tank2 3.12T 902G 32.9K /tank2
nebol at filez:/$ zpool list tank2
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank2 5.44T 4.18T 1.26T 76% ONLINE -
I know that ZFS needs space for meta-data, but a full TB ???
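If tank2 is a 4-disk raidz1 (an assumption; the post does not say), the missing terabyte is parity rather than metadata: zpool list reports raw space including parity, while zfs list reports usable space. One quarter of the 5.44T raw size would be parity:

```shell
# Assuming a 4-disk raidz1: usable = raw * (disks - parity) / disks
awk 'BEGIN { printf "%.2fT usable\n", 5.44 * (4 - 1) / 4 }'
```

That lands near the 3.12T USED + 902G AVAIL (about 4.0T) that zfs list shows.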
2009 Jan 15
21
4 disk raidz1 with 3 disks...
...a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
cant delete/backup somewhere else)
> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
> ambry 592G 132K 592G 0% ONLINE -
I get this (592GB???). When I bring the virtual device offline, the pool
becomes degraded, yet I won't be able to copy my data over. I was
wondering if anyone else had a solution.
Thanks, Jonny
P.S. Please let me know if you need any extra information.
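The 592G is consistent with raidz sizing every member to the smallest device: if the sparse lofi backing file presented roughly 148 GB, a 4-way raidz1 reports about 4 x 148 GB of raw space in zpool list. (The ~148 GB figure is inferred from the output, not stated in the post.)

```shell
# raidz uses min(device sizes) for every member:
# raw = disks * smallest, usable = (disks - parity) * smallest
awk 'BEGIN { printf "%dG raw, %dG usable\n", 4 * 148, 3 * 148 }'
```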
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
...ool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a "dir /s" command on the share from a windows client cmd, I see the file size as 51,193,782,290 bytes. The alloc size reported by zpool along with the DEDUP of 1.13x does not add up to 51,193,782,290 bytes.
Accord...
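One way to reconcile part of the numbers: zpool's ALLOC is physical (post-dedup), so multiplying by the DEDUP ratio estimates the logical size written. That gives about 21.6 GB here, still well short of the ~51 GB that dir /s reports, so dedup accounting alone cannot close the gap; compression or sparse regions (not mentioned in the excerpt) would be the usual suspects for the rest.

```shell
# logical (pre-dedup) estimate = physical ALLOC * DEDUP ratio
awk 'BEGIN { printf "%.1f GB logical\n", 19.1 * 1.13 }'
```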
2011 Apr 01
15
Zpool resize
...55 sec 63>
/pci at 0,0/pci-ide at 1,1/ide at 0/cmdk at 1,0
1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
/iscsi/disk at 0000iqn.1992-08.com.netapp%3Asn.13510595203E9,0
Specify disk (enter its number): ^C
bash-3.00# zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
TEST 9,94G 93K 9,94G 0% ONLINE -
What can I do so that zpool shows the new value?
Albert
2009 Jan 25
2
Unable to destroy a pool
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
jira-app-zpool 272G 330K 272G 0% ONLINE -
The following command hangs forever. If I reboot the box, zpool list shows the pool as ONLINE, as in the output above.
# zpool destroy -f jira-app-zpool
How can I get rid of this pool and any reference to it?
bash-3.00# zpool status
pool: jira...
2010 Feb 10
5
zfs receive : is this expected ?
amber ~ # zpool list data
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
data 930G 295G 635G 31% 1.00x ONLINE -
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
canno...
2008 Jun 01
1
capacity query
...e
/dev/zvol/dsk/swap/vol 181,1 8 19922936 19687704
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
swap 9.61G 236M 24.5K /swap
swap/vol 9.61G 236M 9.61G -
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
swap 20G 19.2G 791M 96% ONLINE -
2008 Jan 15
4
Moving zfs to an iSCSI Equallogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an Equallogic box, and will attach an iSCSI LUN of about 200GB from the Equallogic box to the V440. The Equallogic box is configured as hardware RAID 50 (two hot spares for redundancy).
My question is: what's the best approach to moving the ZFS
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
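One hedged explanation for the mismatch: 1.5 TB (decimal) is about 1.36 TiB, and the three figures line up if the 5.3TB number came from zpool list (raw space, all four disks) while the 4.0TB and 2.67TB numbers came from zfs list (usable space, disks minus parity). Rough arithmetic:

```shell
# 4 x 1.5 TB drives; usable raidz space = (disks - parity) * size
awk 'BEGIN {
  tib = 1.5e12 / (1024 ^ 4)   # one 1.5 TB drive is ~1.36 TiB
  printf "raw %.1fT, raidz1 %.1fT, raidz2 %.1fT\n", 4*tib, 3*tib, 2*tib
}'
```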
2009 Oct 01
1
cachefile for snail zpool import mystery?
...eleted some LUNs on the array
without taking it off device trees?
2. we now have the burden of maintaining these cachefiles when
we change the zpool, say add/drop a lun. any advice?
It'd be nice if ZFS kept a cache file (other than /etc/zfs/zpool.cache)
for the pools imported under an altroot, made it persistent, and
verified/updated its entries at the proper events. At the least, I wish
ZFS allowed us to create the cachefiles while the pools are not currently
imported, so that a simple daily job could maintain the cache files
on every node of a cluster automatically.
Thanks.
Max