similar to: zfs list extensions related to pNFS

Displaying 20 results from an estimated 200 matches similar to: "zfs list extensions related to pNFS"

2009 Jan 11
2
xvm and vlan on snv_105?
Hi, I updated my installation to snv_105 from pkg.opensolaris.org/dev. Booting the xvm kernel doesn't work. The last working xvm version that I tested was on snv_98. I filed a bug report earlier at http://defect.opensolaris.org/bz/show_bug.cgi?id=5905 Is anyone using xvm on OpenSolaris 2008.11 (stable or dev)? Is it supposed to work, or is it one of those
2009 Jan 09
2
bnx problems
Hi, I'm trying to BFU my own build of ON onto an snv_104 installation and I'm hitting problems with the bnx driver, which is coming from a set of snv_105 closed bins: WARNING: bnx0: Failed to allocate GLD MAC memory. I believe bnx is a closed-source GLDv3 driver. Were there any incompatible changes due to the crossbow integration that went in after snv_105?
2012 Jan 22
1
Samba CTDB with data coming via pNFS?
Greetings, Does anyone know whether I'll encounter problems serving out CIFS using Samba/CTDB where the servers are pNFS clients? Specifically, I'm thinking that I'll have a number of RHEL 6.2 boxes connecting to NetApp storage using pNFS. These boxes will then serve a variety of CIFS clients. JR
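A minimal sketch of the layering being described, assuming hypothetical hostnames, export paths, and share names; on RHEL 6.2 the NFSv4.1 (pNFS-capable) mount and the clustered Samba side might look roughly like:

    # mount the NetApp export with NFSv4.1 so the client can negotiate pNFS
    mount -t nfs4 -o minorversion=1 netapp1:/vol/cifs_data /srv/cifs_data

    # /etc/samba/smb.conf fragment for a CTDB-managed Samba instance
    [global]
        clustering = yes
    [data]
        path = /srv/cifs_data

Whether CTDB's recovery lock and locking semantics behave well on an NFS-backed path is part of the question being asked, so treat this as an illustration of the topology, not a recommendation.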
2015 Jun 23
2
CentOS 7, systemd, pNFS - hosed
I just updated a server that's running CentOS 7. I do have elrepo enabled, because this Rave computer has four early Tesla cards. It won't boot. Nor can I get it to boot with either of the other two kernels, and I'll bet the one that worked was erased. *Once* it complained that it couldn't fsck the large filesystem (IIRC, it's XFS). The other four or five times, I get pNFS
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert (carsten.aulbert at aei.mpg.de) sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack > > without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from, say, an X4540 takes up to 24 hours, for as little as 300GB
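A hedged sketch of the mbuffer approach mentioned in the thread, with placeholder dataset names, buffer sizes, and port:

    # sender: stream the snapshot through a large memory buffer over TCP
    zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O recvhost:9090

    # receiver: buffer the incoming stream and feed it to zfs recv
    mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/data

The buffer smooths out the bursty producer/consumer behaviour of zfs send and recv, which is why it tends to look so much faster than piping straight over ssh.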
2008 Nov 10
2
Parallel/Shared/Distributed Filesystems
I'm looking at using GFS for parallel access to shared storage, most likely an iSCSI resource. It will most likely work just fine, but I am curious whether folks are using anything with fewer system requisites (e.g. installing and configuring the Cluster Suite). Specific to our case, we have 50 nodes running in-house code (some in Java, some in C) which (among other things) receives JPGs,
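For context, a rough sketch of the client side of the setup being considered, assuming the newer GFS2 flavour; target address, device, and mount point are placeholders, and the cluster stack still has to be configured underneath, which is exactly the overhead in question:

    # discover and log in to the shared iSCSI target on each node
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node --login

    # mount the clustered filesystem (same shared device on every node)
    mount -t gfs2 /dev/sdb1 /mnt/shared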
2009 May 04
8
CentOS DomU on Opensolaris Dom0 - virt-install fails with error in virDomainCreateLinux()
Hi, I am trying to install CentOS on an Opensolaris Dom0. virt-install fails with an error in virDomainCreateLinux(). Is this a known issue? Am I missing some step? manoj@mowgli:~$ uname -a SunOS mowgli 5.11 snv_101b i86pc i386 i86xpv Solaris manoj@mowgli:~$ pfexec virt-install What is the name of your virtual machine? centos How much RAM should be allocated (in megabytes)? 512 What would
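The interactive prompts above map onto flags, so a hedged, non-interactive equivalent (install URL and disk image path are placeholders; option names are those of the Linux virt-install of that era and assumed to match the OpenSolaris port) would be roughly:

    pfexec virt-install --name centos --ram 512 --paravirt \
        --location http://mirror.centos.org/centos/5/os/i386/ \
        --file /export/xvm/centos.img --file-size 8 --nographics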
2010 Nov 01
1
How to migrate a linux kernel to xen
Hi all: I have a Linux kernel 2.6.32-pnfs and want to make it a paravirtualized domU of Xen. Can anyone tell me where to find the patches, and which is the nearest kernel? Thank you. Mingyang Guo
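Since 2.6.32 already carries mainline pvops domU support, this is often a matter of kernel configuration rather than out-of-tree patches; a hedged, non-exhaustive .config sketch for running such a kernel as a PV guest:

    # illustrative options only - the exact set depends on the 2.6.32-pnfs tree
    CONFIG_PARAVIRT=y
    CONFIG_PARAVIRT_GUEST=y
    CONFIG_XEN=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    CONFIG_HVC_XEN=y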
2008 Apr 19
6
sparc server plus zfs
I have heard rumors to the effect that there will be a SPARC release of Lustre incorporating ZFS. If this is true, does anyone know how I can get a copy, beta test, or sign up for the effort?
2019 Dec 20
1
GFS performance under heavy traffic
Hi David, Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases). That way, when the primary is lost, your client can reach a backup one without disruption. P.S.: The client may 'hang' if the primary server got
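A hedged example of that mount option for the native glusterfs client, with placeholder server and volume names:

    # fstab entry: server1 is primary; server2/server3 are only tried for fetching the volfile if it is down
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0

    # or directly on the command line
    mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/myvol /mnt/gluster

Note that the backup servers are only consulted to fetch the volume file at mount time; once mounted, a client of a replicated volume talks to all bricks directly.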
2013 Oct 22
3
htdocs on NFS share / any pitfalls?
Hi all, I have a new setup where the htdocs directory for the webserver is located on an NFS share. The client has cachefilesd configured. Compared to the old setup (htdocs directory on the local disk), the performance is not so gratifying. The disk is "faster" than the Ethernet link, but the cache should at least compensate for this a bit. Are there more pitfalls for such
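For reference, a hedged sketch of wiring FS-Cache into such a setup (server name and export path are placeholders); without the fsc mount option, cachefilesd is never consulted at all:

    # start the cache daemon (cache directory and limits live in /etc/cachefilesd.conf)
    service cachefilesd start

    # mount htdocs with FS-Cache enabled
    mount -t nfs -o ro,fsc webnfs:/export/htdocs /var/www/htdocs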
2008 Mar 13
3
Round-robin NFS protocol with ZFS
Hello all, I was wondering whether such a scenario could be possible: 1 - Export/import a ZFS filesystem on two Solaris servers. 2 - Export that filesystem (NFS). 3 - Mount that filesystem on clients at two different mount points (just to authenticate against both servers/UDP). 4a - Use some kind of "man-in-the-middle" to auto-balance the connections (the same IP on both servers), or 4b - Use different
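A hedged sketch of steps 1-3 of that scenario, with placeholder pool, host, and mount-point names (the balancing in step 4 is left aside):

    # on each Solaris server: share the imported ZFS filesystem over NFS
    zfs set sharenfs=on tank/export/data

    # on a client: mount the same filesystem from both servers at two mount points
    mount -F nfs server1:/tank/export/data /mnt/data-a
    mount -F nfs server2:/tank/export/data /mnt/data-b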
2016 Nov 04
4
RHEL 7.3 released
As a heads up, RHEL 7.3 is released: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html Pay careful attention when the CR repo starts churning out RPMs (if you have CR enabled), as there have been a few rebases in this release - notably firewalld, NetworkManager, FreeIPA, LibreOffice, and Samba, amongst others. If you have an IPv6 environment, ping is now
2011 Jan 15
1
Does mirroring provide failover protection?
Hi, I have two Gluster servers, A and B. They are mirrored. I mount A. What happens when server A goes down? I would like my applications to seamlessly use the data from Gluster storage B. Thanks, Sergiy.
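A hedged sketch of the replicated setup being described, with placeholder brick paths and volume name; with a replica 2 volume the native client writes to both servers, so losing A should still leave the data reachable through B:

    # on either server: create and start a two-way replicated volume
    gluster volume create gv0 replica 2 serverA:/bricks/gv0 serverB:/bricks/gv0
    gluster volume start gv0

    # on the client: mount via A, listing B as a fallback source for the volfile
    mount -t glusterfs -o backupvolfile-server=serverB serverA:/gv0 /mnt/gv0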
2009 Nov 06
7
Status of DTrace NFSv3/v4 client providers
We recently had a strange NFS performance anomaly between a V880 running snv_124 and two NetApp filers. To investigate, a DTrace NFSv4 (and eventually NFSv3) client provider would have been extremely helpful. Unfortunately, all I could find were a request for code review of a v3 client provider and another request for help developing a v4 provider. Nothing seems to have come from those initiatives,
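In the absence of a stable client provider, one hedged workaround is the unstable fbt provider against the nfs client module; the function names below come from the OpenSolaris NFS client of that era and may differ between builds:

    # count NFSv4 client reads/writes per process (illustrative only)
    dtrace -n 'fbt:nfs:nfs4_read:entry,fbt:nfs:nfs4_write:entry { @[execname, probefunc] = count(); }'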
2010 Apr 14
1
General Server Hardware Question
Hey List, How do people measure how many listeners they can have on a single NIC card? A 1Gig Ethernet NIC may be able, in terms of bandwidth, to serve 31,250 listeners at 32Kbps (not including overheads etc, just a flat calculation), but there is no way the NIC itself could handle 31k concurrent connections to 31k different IPs on 31k ports? (Obviously the OS comes into play here a bit
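The flat bandwidth figure quoted above works out as follows (ignoring protocol overhead, as the poster notes):

    # 1 Gbit/s = 1,000,000 Kbit/s; divide by 32 Kbit/s per listener
    echo $(( 1000000 / 32 ))    # prints 31250 - bandwidth-bound only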
2017 Oct 22
2
[PATCH v1 1/3] virtio-balloon: replace the coarse-grained balloon_lock
Wei Wang wrote: > The balloon_lock was used to synchronize the access demand to elements > of struct virtio_balloon and its queue operations (please see commit > e22504296d). This prevents the concurrent run of the leak_balloon and > fill_balloon functions, thereby resulting in a deadlock issue on OOM: > > fill_balloon: take balloon_lock and wait for OOM to get some memory; >
2017 Jan 28
6
make SCSI passthrough support optional
Hi all, this series builds on my previous changes in Jens' for-4.11/rq-refactor branch that split out the BLOCK_PC fields from struct request into a new struct scsi_request, and makes support for struct scsi_request and the SCSI passthrough ioctls optional. It is now only enabled by drivers that need it. In addition I've made SCSI passthrough support in the virtio_blk driver an optional