Displaying 20 results from an estimated 41 matches for "pnfs".
2009 Mar 03
8
zfs list extensions related to pNFS
Hi,
I am soliciting input from the ZFS engineers and/or ZFS users on an
extension to "zfs list". Thanks in advance for your feedback.
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding
a new DMU object set type which is used on the pNFS data server to
store pNFS stripe DMU objects. A pNFS dataset gets created with the
"zfs create" command and gets displayed using "zfs list".
Specific Questio...
2012 Jan 22
1
Samba CTDB with data coming via pNFS?
Greetings,
Does anyone know whether I'll encounter problems serving out CIFS using
Samba/CTDB where the servers are pNFS clients? Specifically, I'm
thinking that I'll have a number of RHEL 6.2 boxes connecting to NetApp
storage using pNFS. These boxes will then serve a variety of CIFS
clients.
JR
2015 Jun 23
2
CentOS 7, systemd, pNFS - hosed
...is Rave computer has four early Tesla cards.
It won't boot. Nor can I get it to boot with either of the other two
kernels, and I'll bet the one that worked was erased.
*Once* it complained that it couldn't fsck the large filesystem (IIRC,
it's XFS). The other four or five times, I get some pNFS "Dependency
failed" error, with systemd freezing execution.
So far, I'm googling, and not finding anything like this. Has anyone run
into this, or have any suggestions?
If I could get back to the fsck failure, I could at least tell it to
ignore /dev/sdb....
mark
2015 Jun 23
0
CentOS 7, systemd, pNFS - hosed
...rly Tesla cards.
>
> It won't boot. Nor can I get it to boot with either of the other two
> kernels, and I'll bet the one that worked was erased.
>
> *Once* it complained that it couldn't fsck the large filesystem (IIRC,
> it's XFS). The other four or five times, I get some pNFS "Dependency
> failed" error, with systemd freezing execution.
>
> So far, I'm googling, and not finding anything like this. Has anyone run
> into this, or have any suggestions?
>
> If I could get back to the fsck failure, I could at least tell it to
> ignore /dev/sdb....
>...
2018 Mar 06
0
pNFS
Hi list,
I am wondering why we need the Ganesha user-land NFS server in order to get pNFS working.
I understand Ganesha is necessary on the MDS, but a standard kernel-based NFS server should be sufficient on the DS bricks (which should bring us additional performance), right?
Could someone clarify?
Thanks,
Ondrej
2010 Nov 01
1
How to migrate a Linux kernel to Xen
Hi all:
I have a Linux kernel 2.6.32-pnfs and want to make it a
paravirtualized domU under Xen. Can anyone tell me where to find the patches,
and which kernel version is closest? Thank you.
Mingyang Guo
2018 May 09
0
3.12, ganesha and storhaug
...nfig/use that are clearly for the newest ganesha and gluster combo.
Can the gnfs support multiple transport methods?
Granted, I have a conflicted need: the NFS is needed to resolve the
mmap issue (and hopefully provide some speed-up that the users see), but the
real high-speed need is possibly through pNFS clients to NFS-Ganesha;
those connections will be over InfiniBand, so only TCP and no RDMA.
That drops my raw speed from 40Gbps to 10Gbps (ConnectX-2 gear). I can
run gnfs on RDMA (40Gbps!), but no NFSv4.1 for pNFS AND a manual mess
setting up HA for connections. The last connection fun is not al...
2008 Nov 10
2
Parallel/Shared/Distributed Filesystems
I'm looking at using GFS for parallel access to shared storage, most likely
an iSCSI resource. It will probably work just fine, but I am curious whether
folks are using anything with fewer system prerequisites (e.g. installing and
configuring the Cluster Suite).
Specifically to our case, we have 50 nodes running in-house code (some in
Java, some in C) which (among other things) receives JPGs,
2008 Apr 19
6
sparc server plus zfs
I have heard rumors to the effect that there will be a SPARC release
of Lustre incorporating ZFS. If this is true, does anyone know how I
can get a copy, beta test, or sign up for the effort?
2019 Dec 20
1
GFS performance under heavy traffic
...>>>
>>> Have you thought about upgrading to v6? There are some enhancements in v6 which could be beneficial.
>>>
>>> Yet, it is indeed strange that so much traffic is generated with FUSE.
>>>
>>> Another approach is to test with NFS-Ganesha, which supports pNFS and can natively speak with Gluster; that can bring you closer to the previous setup and also provide some extra performance.
>>>
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
2016 Nov 04
4
RHEL 7.3 released
...n the tech preview world, nftables joins the testing group for networking
(I'll have articles up exploring this new firewalling method in the coming
weeks). On the storage side, overlayfs and btrfs remain in tech preview
status, with cephfs joining them... as notable pieces.
There's also new pNFS stuff.
This is only a small snippet of the things that jumped out as relevant to me
personally. As always, make sure you read through the release notes in
full to be ready once CentOS starts producing the RPMs, and keep in
mind that this early in the lifecycle there are a fair few rebases and new
features imp...
2017 Oct 22
2
[PATCH v1 1/3] virtio-balloon: replace the coarse-grained balloon_lock
...m_notify: release some inflated memory via leak_balloon();
> leak_balloon: wait for balloon_lock to be released by fill_balloon.
>
> This patch breaks the lock into two fine-grained inflate_lock and
> deflate_lock, and eliminates the unnecessary use of the shared data
> (i.e. vb->pnfs, vb->num_pfns). This enables leak_balloon and
> fill_balloon to run concurrently and solves the deadlock issue.
>
> @@ -162,20 +160,20 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
> msleep(200);
> break;
> }
> - set_page_pfns(vb, vb->...
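The locking change quoted above amounts to replacing one coarse balloon_lock with two independent locks, one per direction, each guarding its own pfn buffer. A minimal userspace sketch of that pattern (the struct layout, the fake pfn values, and the pthread setup are illustrative only, not the kernel code) could look like this:

    /* Sketch: one mutex per direction instead of a single shared balloon_lock,
     * so the deflate path never has to wait for a long-running inflate. */
    #include <pthread.h>
    #include <stdio.h>

    #define PFN_BATCH 256

    struct balloon {
        pthread_mutex_t inflate_lock;   /* guards inflate_pfns / num_inflate */
        pthread_mutex_t deflate_lock;   /* guards deflate_pfns / num_deflate */
        unsigned long inflate_pfns[PFN_BATCH];
        unsigned long deflate_pfns[PFN_BATCH];
        size_t num_inflate;
        size_t num_deflate;
    };

    /* "fill": collect pfns to hand to the host; touches inflate state only. */
    static void *fill_balloon(void *arg)
    {
        struct balloon *b = arg;

        pthread_mutex_lock(&b->inflate_lock);
        while (b->num_inflate < PFN_BATCH) {
            b->inflate_pfns[b->num_inflate] = 0x1000 + b->num_inflate; /* fake pfn */
            b->num_inflate++;
        }
        pthread_mutex_unlock(&b->inflate_lock);
        return NULL;
    }

    /* "leak": give pages back; can run while fill_balloon holds its own lock,
     * which is what lets an OOM-notifier-style caller make progress. */
    static void *leak_balloon(void *arg)
    {
        struct balloon *b = arg;

        pthread_mutex_lock(&b->deflate_lock);
        while (b->num_deflate < PFN_BATCH) {
            b->deflate_pfns[b->num_deflate] = 0x2000 + b->num_deflate; /* fake pfn */
            b->num_deflate++;
        }
        pthread_mutex_unlock(&b->deflate_lock);
        return NULL;
    }

    int main(void)
    {
        struct balloon b;
        pthread_t t1, t2;

        pthread_mutex_init(&b.inflate_lock, NULL);
        pthread_mutex_init(&b.deflate_lock, NULL);
        b.num_inflate = 0;
        b.num_deflate = 0;

        pthread_create(&t1, NULL, fill_balloon, &b);
        pthread_create(&t2, NULL, leak_balloon, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("inflated %zu pfns, deflated %zu pfns\n", b.num_inflate, b.num_deflate);
        return 0;
    }

With a single shared lock the leak path would block behind a long-running fill; with the split, both loops above can hold their locks at the same time, which is the concurrency the patch description is after.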
2013 Oct 22
3
htdocs on NFS share / any pitfalls?
Hi all,
I have a new setup where the htdocs directory for the webserver
is located on an NFS share. The client has cachefilesd configured.
Compared to the old setup (htdocs directory on the local disk),
the performance is not so gratifying. The disk is "faster" than
the Ethernet link, but the cache should at least compensate for this
a bit. Are there more pitfalls for such
2017 Oct 22
0
[PATCH v1 1/3] virtio-balloon: replace the coarse-grained balloon_lock
...ome inflated memory via leak_balloon();
>> leak_balloon: wait for balloon_lock to be released by fill_balloon.
>>
>> This patch breaks the lock into two fine-grained inflate_lock and
>> deflate_lock, and eliminates the unnecessary use of the shared data
>> (i.e. vb->pnfs, vb->num_pfns). This enables leak_balloon and
>> fill_balloon to run concurrently and solves the deadlock issue.
>>
>> @@ -162,20 +160,20 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
>> msleep(200);
>> break;
>> }
>>...
2008 Mar 13
3
Round-robin NFS protocol with ZFS
Hello all,
I was wondering if such a scenario would be possible:
1 - Export/import a ZFS filesystem on two Solaris servers.
2 - Export that filesystem (NFS).
3 - Mount that filesystem on clients at two different mount points (just to authenticate on both servers/UDP).
4a - Use some kind of "man-in-the-middle" to auto-balance the connections (the same IP on servers)
or
4b - Use different
2011 Jan 15
1
Does mirroring provide failover protection?
Hi,
I have two Gluster servers A and B. They are mirrored. I mount A. What happens when server A goes down?
I would like my applications to seamlessly use the data from Gluster server B.
Thanks,
Sergiy.
2009 Nov 06
7
Status of DTrace NFSv3/v4 client providers
We recently had a strange NFS performance anomaly between a V880 running
snv_124 and two NetApp filers. To investigate, a DTrace NFSv4 (and
eventually NFSv3) client provider would have been extremely helpful.
Unfortunately, all I could find were a request for code review of a v3
client provider and another request for help developing a v4 provider.
Nothing seems to have come from those initiatives,
2017 Jan 28
6
make SCSI passthrough support optional
Hi all,
this series builds on my previous changes in Jens' for-4.11/rq-refactor
branch that split out the BLOCK_PC fields from struct request into a new
struct scsi_request, and makes support for struct scsi_request and the
SCSI passthrough ioctls optional. It is now only enabled by drivers that
need it.
In addition I've made SCSI passthrough support in the virtio_blk driver an
optional
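The series summary above describes the usual pattern for making a kernel feature optional: build the real helpers only when a config switch is selected, and give everyone else cheap inline stubs. A rough sketch of that idiom follows; the CONFIG_SCSI_PASSTHROUGH macro, the stand-in struct, and the helper name are made up for illustration and are not taken from the actual patches:

    /* Sketch of "optional support selected only by drivers that need it". */
    #include <stddef.h>
    #include <stdio.h>

    struct scsi_request_like {          /* stand-in for the split-out BLOCK_PC fields */
        unsigned char cmd[16];          /* CDB bytes */
        size_t cmd_len;
        int result;
    };

    #ifdef CONFIG_SCSI_PASSTHROUGH
    /* Real support: compiled only when some driver selects the option. */
    static int scsi_passthrough_prep(struct scsi_request_like *req)
    {
        req->cmd_len = sizeof(req->cmd);
        req->result = 0;
        return 0;
    }
    #else
    /* Option disabled: callers still compile, but get "not supported". */
    static inline int scsi_passthrough_prep(struct scsi_request_like *req)
    {
        (void)req;
        return -1;
    }
    #endif

    int main(void)
    {
        struct scsi_request_like req = { .cmd_len = 0, .result = 0 };

        /* With -DCONFIG_SCSI_PASSTHROUGH this prints 0, otherwise -1. */
        printf("passthrough prep: %d\n", scsi_passthrough_prep(&req));
        return 0;
    }

Built with the switch defined, the real helper is used; without it, drivers that never issue passthrough commands carry no extra code and simply see "not supported".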