similar to: ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

Displaying 20 results from an estimated 1000 matches similar to: "ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]"

2009 Jan 09
2
ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]
It was rumored that Nevada build 105 would have ZFS encrypted file systems integrated into the main source. In reviewing the change logs (URLs below) I did not see anything mentioning that this had come to pass. It's going to be another week before I have a chance to play with b105. Does anyone know specifically whether b105 has ZFS encryption? Thanks, Jerry -------- Original Message
2009 Nov 16
5
xVM fails on SXCE 127
During boot, I get the following error: Nov 16 09:16:41 sol11 svc.startd[7]: [ID 652011 daemon.warning] svc:/system/xvm/store:default: Method "/lib/svc/method/xenstored start" failed with exit status 96. Nov 16 09:16:41 sol11 svc.startd[7]: [ID 748625 daemon.error] system/xvm/store:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details) It
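A reasonable first diagnostic, assuming the standard SMF tooling and the usual log naming convention (the exact log path is an assumption):
  svcs -xv svc:/system/xvm/store:default
  tail /var/svc/log/system-xvm-store:default.log
  svcadm clear svc:/system/xvm/store:default   # retry the service once the underlying cause is fixed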
2010 Feb 24
3
How to know the recordsize of a file
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I would like to know the block size of a particular file. I know the block size of a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at
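One hedged way to get at that, assuming the file lives in a dataset such as tank/fs (names here are placeholders):
  ls -i /tank/fs/somefile          # the inode number is the ZFS object number
  zdb -ddddd tank/fs <object>      # the dblk column shows the block size used for the file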
2020 Jul 08
2
Can't use samba-tool gpo restore command
Hi, After I successfully dumped the GPO policies on my working domain controller, I would like to reuse them on a different domain server, but when I use the following command: samba-tool gpo restore B59E0B93-8226-40CA-A5C8-58A7AA1D139E /var/tmp/samba_gpo/policy/\{B59E0B93-8226-40CA-A5C8-58A7AA1D139E\} I get this error message: Using temporary directory /tmp/tmpo7huf4c0 (use --tmpdir to
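As a hedged sketch only (argument order and credentials are assumptions, not a confirmed fix for the error above), the restore subcommand expects a display name for the newly created GPO followed by the backup directory:
  samba-tool gpo restore RestoredGPO /var/tmp/samba_gpo/policy/\{B59E0B93-8226-40CA-A5C8-58A7AA1D139E\} -U Administrator --tmpdir=/var/tmp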
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all, I have a 5 drive RAIDZ volume with data that I'd like to recover. The long story runs roughly: 1) The volume was running fine under FreeBSD on motherboard SATA controllers. 2) Two drives were moved to a HP P411 SAS/SATA controller 3) I *think* the HP controllers wrote some volume information to the end of each disk (hence no more ZFS labels 2,3) 4) In its "auto
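A hedged first check, assuming the drives are still visible to the OS (device names are placeholders), is to dump whatever vdev labels survive on each disk:
  zdb -l /dev/dsk/c1t0d0s0    # prints labels 0-3; labels 2 and 3 missing would match the theory above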
2009 Sep 01
15
install sxce as paravirtual guest
I have tried to define a domain and to use virt-install; both failed. Could someone give me the correct command to do this? I used: virt-install --nographics --paravirt --os-type=solaris --os-variant=opensolaris --ram 1024 --name cam-host --disk path=/dev/dsk/c0t600A0B800049E902000008EB4A9CE744d0p0,driver=phy -l /export/media_images/sol-nv-b121-x86-dvd.iso thx, florian
2016 Jan 13
5
Test still failing in old CPUs
Opus 1.1.2. As experienced in previous releases: """ ./test-driver: line 107: 25185 Illegal instruction "$@" > $log_file 2>&1 FAIL: celt/tests/test_unit_mathops """ -- Jesús Cea Avión _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ Twitter: @jcea
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey, guys I just ran a test using loop devices as vdevs for a zpool. The procedure was as follows: 1) mkfile -v 100m disk1 mkfile -v 100m disk2 2) lofiadm -a disk1 /dev/lofi lofiadm -a disk2 /dev/lofi 3) zpool create pool_1and2 /dev/lofi/1 and /dev/lofi/2 4) zpool export pool_1and2 5) zpool import pool_1and2 error info here: bash-3.00# zpool import pool1_1and2 cannot import
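For reference, a minimal sketch of the same experiment with consistent names (note the excerpt creates pool_1and2 but tries to import pool1_1and2, and zpool import may need to be told where to look for lofi devices):
  mkfile 100m /var/tmp/disk1 /var/tmp/disk2
  lofiadm -a /var/tmp/disk1            # typically becomes /dev/lofi/1
  lofiadm -a /var/tmp/disk2            # typically becomes /dev/lofi/2
  zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
  zpool export pool_1and2
  zpool import -d /dev/lofi pool_1and2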
2014 Sep 28
1
"doveadm backup/sync" are badly documented
Most documents out there talk about "dsync", but the modern way is "doveadm backup". This command is not documented in the wiki and there are a few details missing, like how to use it through SSH. I am currently doing some tests on how to back up my mdbox. I can test locally using: $ doveadm backup -u jcea -m proveedores/dovecot mdbox:/tmp/aa/ This will
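A hedged sketch of the SSH form, assuming a reachable host (backuphost is a placeholder) with a doveadm binary on the remote end:
  doveadm backup -u jcea ssh backuphost doveadm dsync-server -u jcea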
2017 Jun 16
2
OPUS/FLAC Metadata
On 16/06/17 18:47, Marvin Scholz wrote: > Such a concatenation is a valid stream; it's called chaining, afaik. So > players not supporting it are broken and the authors should be > notified about it, I think. And my question is... Is THAT the way an OPUS source communicates new metadata to the icecast server? That is what I would like to know :-). -- Jesús Cea Avión
2007 Mar 28
6
ZFS and UFS performance
We are running Solaris 10 11/06 on a Sun V240 with 2 CPUs and 8 GB of memory. This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is configured as HW RAID 5 with 10 disks and 2 spares, and it's exported to the V240 as a single LUN. We create iso images of our product in the following way (high-level): # mkfile 3g /isoimages/myiso # lofiadm -a /isoimages/myiso
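The usual continuation of that lofi recipe, sketched here as an assumption about how the rest of the workflow goes (device numbers are placeholders):
  # lofiadm -a /isoimages/myiso        # typically returns /dev/lofi/1
  # newfs /dev/rlofi/1
  # mount /dev/lofi/1 /mnt
  ... populate /mnt with the product tree ...
  # umount /mnt; lofiadm -d /dev/lofi/1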
2009 Jun 04
5
Problem installing RCS on SXCE
I was shocked to find no RCS on SXCE 107. I needed it to update some RCS archives I had copied over. No problem - go to sunfreeware and copy it over. No OpenSolaris branch? The Solaris 10 package should work. Except that the package install only appears to complete and actually does not - as documented only in a log file, not on my screen. OK then, the source for RCS should be easily compiled and installed, I
2009 Sep 10
2
De-duplication before SXCE EOL ?
Can anyone answer whether we will get ZFS de-duplication before SXCE EOL? If possible, also answer the same about encryption. Thanks -- This message posted from opensolaris.org
2014 Jul 27
1
"Corrupted dbox file [...] purging found mismatched offsets"
Doing a "doveadm purge" today I got this: """ doveadm(jcea): Error: Corrupted dbox file /home/jcea/.thunderbird/dovecot/storage/m.686 (around offset=1385772): purging found mismatched offsets (1385742 vs 1380664, 185/275) doveadm(jcea): Warning: fscking index file /home/jcea/.thunderbird/dovecot/storage/dovecot.map.index doveadm(jcea): Warning: mdbox
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert carsten.aulbert at aei.mpg.de sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB
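The mbuffer pipeline usually cited for this, as a sketch (host, dataset and buffer sizes are placeholders):
  zfs send tank/data@snap | mbuffer -s 128k -m 1G | ssh desthost 'mbuffer -s 128k -m 1G | zfs receive tank/data'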
2020 Mar 30
3
Multithreaded encoding?
I am interested in being able to encode a single Opus stream using several CPU cores. I get a raw audio input and "opusenc" can transcode it at 1200% speed (Raspberry Pi 3B+). It saturates a single CPU core, but the other three are idle. Is there any project out there to add multithreading options to "opusenc", or something along those lines? Looking around, I have found this:
2019 Nov 28
2
ESEARCH is announced but it doesn't work
I am using Dovecot 2.3.4. I could upgrade if necessary. I see these capabilities after logging in: """ a OK [CAPABILITY IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS THREAD=ORDEREDSUBJECT MULTIAPPEND URL-PARTIAL CATENATE UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN
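A quick way to exercise ESEARCH by hand, assuming a logged-in IMAP session (tags are arbitrary):
  a SELECT INBOX
  b SEARCH RETURN (MIN MAX COUNT) ALL
An ESEARCH-capable server should answer the second command with an untagged * ESEARCH (TAG "b") MIN ... MAX ... COUNT ... line rather than a plain * SEARCH list.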
2015 May 19
2
"doveadm backup" doesn't work anymore after upgrading to 2.2.18
Until today I could do this to back up my primary IMAP4 server: """ doveadm backup ssh csi doveadm dsync-server """ It doesn't work anymore after upgrading to Dovecot 2.2.18: """ jcea at ubuntu:~$ doveadm backup ssh csi doveadm dsync-server Enter passphrase for key '/home/jcea/.ssh/id_rsa': dsync-remote(root): Error: Mailbox INBOX: Failed
2009 Nov 17
2
p2v for sxce snv115 to xvm on opensolaris host?
hi folks, is there a straightforward or well-documented way to migrate my physical sxce snv_115 (x64) system into an xvm ? searching for "p2v" in an opensolaris context seems to pick up a few hits on zones, but nothing obvious relating to xvm on opensolaris for what it's worth the host system is opensolaris (2010.02 snv_126), but i'm hoping that's not very
2007 Feb 05
6
snapdir visible recursively throughout a dataset
Is there an existing RFE for, what I'll wrongly call, "recursively visible snapshots"? That is, .zfs in directories other than the dataset root. Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible 1 dir down from the dataset root. The problem is that while ZFS and Zones work smoothly
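For context, the existing knob only affects the dataset root, e.g.:
  zfs set snapdir=visible tank/fs    # exposes /tank/fs/.zfs, but not .zfs inside subdirectories
which is the limitation the RFE above asks about.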