similar to: Any limit on pool hierarchy?

Displaying 20 results from an estimated 3000 matches similar to: "Any limit on pool hierarchy?"

2010 Nov 09
5
X4540 RIP
Oracle have discontinued the best ZFS platform I know, the X4540. Does anyone know of an equivalent system? None of the current Oracle/Sun offerings come close. -- Ian.
2011 Aug 09
2
Log entries after upgrade to 3.6.0
Hi! I upgraded to 3.6.0 on my Ubuntu 8.04 64-bit system, and since changing to max protocol = SMB2 and restarting nmbd/smbd I get many of these "NT_STATUS_END_OF_FILE" messages: /------------------------------------- [2011/08/09 15:55:11.130751, 1] smbd/process.c:456(receive_smb_talloc) read_smb_length_return_keepalive failed for client 192.168.1.109 read error = NT_STATUS_END_OF_FILE.
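For reference, a minimal smb.conf fragment matching the setting described above (the log level line is an assumption, shown only because the quoted message is logged at level 1):

    [global]
        max protocol = SMB2
        log level = 1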
2007 Feb 17
1
Filesystem won't mount because of "unsupported optional features (80)"
I made a filesystem (mke2fs -j) on a logical volume under kernel 2.6.20 on a 64-bit based system, and when I try to mount it, ext3 complains with EXT3-fs: dm-1: couldn't mount because of unsupported optional features (80). I first thought I just forgot to make the filesystem, so I remade it and the error is still present. I ran fsck on this freshly made filesystem, and it completed with
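A hedged way to check which feature flags the new filesystem carries, and to recreate it without one the running ext3 driver does not support (device name taken from the excerpt; resize_inode is only an example feature name):

    dumpe2fs -h /dev/dm-1 | grep -i features
    # recreate without a feature the running kernel's ext3 does not support (example feature shown)
    mke2fs -j -O ^resize_inode /dev/dm-1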
2011 Dec 15
2
ZFS coming to the Mac (again)
Here's something that might be of interest to Mac aficionados: http://tenscomplement.com/z-410-storage-main-features -- Maurice Volaski, maurice.volaski at einstein.yu.edu Computing Support, Dominick P. Purpura Department of Neuroscience Albert Einstein College of Medicine of Yeshiva University
2005 May 19
3
[Q] Where does all the space go?
I created a filesystem as follows: mke2fs -j -O dir_index -O sparse_super -T largefile /dev/drbd/6 Here's the output from df Filesystem Size Used Avail Use% /dev/drbd/6 475G 33M 452G 1% It seems that ext3 has taken 23 GB, which is about 5% of the total disk size, for itself. Is that right? If that is, indeed, the case, why does df just list 33M as being
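The missing ~23 GB matches ext3's default 5% reserved-block allowance; a quick, hedged way to confirm and, if desired, shrink it (device name from the excerpt):

    tune2fs -l /dev/drbd/6 | grep -i 'reserved block'
    tune2fs -m 1 /dev/drbd/6    # keep 1% reserved for root instead of the default 5%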
2009 Jul 26
4
Any word on when the ietf mib will be fixed for liebert?
This mib used to work, so is there a way to go back to the version prior to this one without downgrading the whole package? * Starting UPS drivers... Network UPS Tools - UPS driver controller 2.4.1 Network UPS Tools - Generic SNMP UPS driver 0.44 (2.4.1) Detected GXT2-2000RT120 on host upswallleft (mib: ietf 1.3) [upswallleft] nut_snmp_get: 1.3.6.1.2.1.33.1.4.4.1.4.0: Error in
2002 Jul 31
2
Patronizing the exclude * option
I need to get around the requirement, when using --exclude=*, of listing all the parent directories of the files and directories that are to be included. So given the file /startdirectory/subdirectory1/subdirectory2/filetobecopied I need to include / /subdirectory1 /subdirectory1/subdirectory2 I could just include the entire directory structure, but I would rather include just the needed paths. So the
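With a reasonably recent rsync, the parent directories do not have to be listed one by one; a hedged sketch (paths adapted from the excerpt for illustration):

    rsync -a -m \
      --include='*/' \
      --include='filetobecopied' \
      --exclude='*' \
      /startdirectory/ destination:/backup/startdirectory/
    # --include='*/' lets rsync descend into every directory,
    # --exclude='*' drops every other file, and -m prunes directories left empty.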
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
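For context, a rough sketch of the setup being described (pool, zvol, and device names are placeholders; the COMSTAR target and initiator plumbing is heavily abbreviated):

    # on each system: carve out a zvol and export it over COMSTAR iSCSI
    zfs create -V 100G tank/export1
    sbdadm create-lu /dev/zvol/rdsk/tank/export1
    stmfadm add-view <lu-guid>
    itadm create-target
    # on the importing system: mirror the local zvol with the remote iSCSI LUN
    # (<remote-lun> is the cXtYdZ device that appears after iSCSI discovery)
    zpool create mirrorpool mirror /dev/zvol/dsk/tank/export1 <remote-lun>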
2023 Jun 03
1
What could cause rsync to kill ssh?
Maurice R Volaski via rsync <maurice.volaski at lists.samba.org> wrote: > I have an rsync script that is copying one computer (over ssh) > to a shared CIFS mount on Gentoo Linux, kernel 6.3.4. The script > runs for a while and then at some point quits, knocking my ssh > session offline on all terminals, and it blocks ssh from being able > to connect again. Even restarting
2006 Feb 19
3
ext3 involved in kernel panic in 2.6.13?
Dual Opteron system running ext3 atop drbd (network RAID) devices, which, in turn, are atop LVM logical volumes. The underlying device is hardware SCSI RAID via a LSILogic HBA. The kernel is vanilla 2.6.13 on a Gentoo-based system. A panic occurred, which contains references to ext3 code. I'm not sure how others manage to get these typed out, but I'm manually typing it from
2023 Jun 03
3
What could cause rsync to kill ssh?
I have an rsync script that is copying one computer (over ssh) to a shared CIFS mount on Gentoo Linux, kernel 6.3.4. The script runs for a while and then at some point quits, knocking my ssh session offline on all terminals, and it blocks ssh from being able to connect again. Even restarting sshd doesn't help. Rsync has apparently killed it. I have to reboot.
2023 Jun 03
2
What could cause rsync to kill ssh?
Rsync 3.2.7 is running on the Gentoo computer, which doesn't have a version, other than that it's "current". I'm running the script from this computer. Rsync 3.1.2 is on the source computer, where the files come from, which is Ubuntu 18.04.6. I'm copying to a CIFS share mounted on the Gentoo computer. The rsync scripts are all similar to this one: /usr/bin/rsync -v -a
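The scripts appear to be plain rsync-over-ssh pulls into the CIFS mount; a hypothetical example of that shape (hostnames and paths are invented):

    /usr/bin/rsync -v -a -e ssh user@source-host:/export/data/ /mnt/cifs-backup/data/

Note that -a tries to preserve ownership and permissions, which a CIFS mount may only partially honour.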
2023 Jun 03
1
What could cause rsync to kill ssh?
Maurice R Volaski <maurice.volaski at einsteinmed.edu> wrote: > Rsync 3.2.7 is running on the Gentoo computer, which doesn't have > a version, other than that it's "current". I'm running the script from > this computer. > > Rsync 3.1.2 is on the source computer, where the files come from, > which is Ubuntu 18.04.6. > > I'm copying to a CIFS
2003 May 18
2
[Q] Why does it take so long for XP to logon?
If I log in to our Samba box (RedHat 7.1 + 2.4.20) under Windows 2K via Start->Run->share name, I get logged in almost immediately. It averages about 25 seconds under XP. This behavior has been true in various versions of Samba since, I believe, XP became available, and is reproducible on every Windows 2K/XP box (several) I have tried. This is a workgroup environment with Samba acting as
2010 May 08
6
Mirrored Servers
Let's say I have two servers, both running OpenSolaris with ZFS. I basically want to be able to create a filesystem where the two servers have a common volume that is mirrored between the two, meaning each server keeps an identical, real-time backup of the other's data directory. Set them both up as file servers, and load balance between the two for incoming requests. How would anyone
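ZFS itself does not provide a synchronous two-way mirror between hosts; a common hedged approximation is periodic one-way replication with snapshots (pool/dataset and host names are placeholders):

    # on server A, replicate the changes since the last replicated snapshot to server B
    zfs snapshot tank/data@now
    zfs send -i tank/data@prev tank/data@now | ssh serverB zfs receive -F tank/data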
2007 Jan 13
1
[Q] How can the directory location to dd output affect performance?
I have two Opteron-based Tyan systems being supported by PCI-e Areca cards. There is definitely an issue going on in the two systems that is causing significantly degraded performance of these cards. It appeared, initially, that the SATA backplane on the Tyan chassis was wholly to blame. But then I made an odd discovery. I'm running from the Ubuntu LiveCD for 64-bit. It uses kernel
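For comparison, a hedged version of the kind of dd test implied above, forcing data to disk so the page cache does not mask the difference (paths are placeholders):

    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync
    dd if=/dev/zero of=/tmp/testfile       bs=1M count=4096 conv=fdatasync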
2009 Jun 25
2
[Q] What might cause modification dates to shift later by an hour?
Recently, our backup software oddly decided to rebackup a good portion of our file server instead of just doing an incremental. When I examined various sets of presumably identical files, I discovered that the modification dates on these files were no longer the same. Many files were re-dated to exactly one hour later such that if a file had been modified on 3/24/04 at 2:24:53 PM, it's
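A shift of exactly one hour usually points at DST or timezone handling rather than real modifications; one hedged way to compare the raw timestamps on two copies (filename is a placeholder):

    stat -c '%Y  %y  %n' somefile    # %Y = raw seconds since the epoch, %y = local-time mtime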
2005 Jun 17
1
[Q] Is this true and does it mean there is dynamic defragmentation in ext2/3?
Someone recently posted the following statement midway down the page at http://forums.gentoo.org/viewtopic-t-305871-postdays-0-postorder-asc-highlight-ext3+ordered+data-start-25.html >You don't need to defragment ext2/ext3 because as you use the >filesystem file blocks and inodes are moved around and reallocated >to keep the data nearly contiguous. It's not perfect, but it
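To see how contiguous a given file actually is on ext2/ext3, a hedged example (path is a placeholder; filefrag may need root):

    filefrag -v /data/largefile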
2005 Jun 26
1
[Q] Is errors=panic safe to use, and will it detect a RAID gone psycho?
In years past I have seen hardware (SCSI) RAID controllers lose it electronically, causing the kernel to fill the logs with scary SCSI messages and ext3 to complain about "holes" in the filesystem, like so: Sep 7 14:47:17 thewarehouse1 kernel: EXT3-fs error (device sd(8,81)): ext3_readdir: directory #376833 contains a hole at offset 0 I'm using drbd and heartbeat so whatever
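Two hedged ways to get the errors=panic behaviour being asked about (device and mount point are placeholders):

    tune2fs -e panic /dev/drbd1              # make panic the filesystem's default error behaviour
    # or per mount, in /etc/fstab:
    /dev/drbd1  /data  ext3  defaults,errors=panic  0  2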
2005 May 19
1
mke2fs options for very large filesystems
>Yes, if you are creating larger files. By default e2fsck assumes the average >file size is 8kB and allocates a corresponding number of inodes there. If, >for example, you are storing lots of larger files there (digital photos, MP3s, >etc) that are in the MB range you can use "-t largefile" or "-t largefile4" >to specify an average file size of 1MB or 4MB
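The inode density is fixed by mke2fs when the filesystem is created; a hedged example of the options being discussed (device is a placeholder; in current e2fsprogs the usage-type option is spelled -T):

    mke2fs -j -T largefile4 /dev/drbd/6    # roughly one inode per 4 MB of space
    # equivalently, give the bytes-per-inode ratio explicitly:
    mke2fs -j -i 4194304 /dev/drbd/6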