similar to: [PATCH] appliance: Add support for btrfs, GFS, GFS2, JFS, HFS, HFS+, NILFS, OCFS2

Displaying 20 results from an estimated 7000 matches.

2018 Feb 19
1
[PATCH] appliance: include dash for Debian distros
Make sure that /bin/sh is available in the appliance, and that path is provided by dash on Debian distributions. --- appliance/packagelist.in | 1 + 1 file changed, 1 insertion(+) diff --git a/appliance/packagelist.in b/appliance/packagelist.in index 78aedad0b..f92a6ce95 100644 --- a/appliance/packagelist.in +++ b/appliance/packagelist.in @@ -61,6 +61,7 @@ ifelse(DEBIAN,1, dnl old name used in
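For context, appliance/packagelist.in is preprocessed with m4, so per-distro packages live inside ifelse(...) blocks (the "dnl" in the hunk above is an m4 comment). A minimal sketch of the kind of addition described, not the exact committed hunk:

    ifelse(DEBIAN,1,
      dnl dash provides /bin/sh in the Debian appliance
      dash
      ...
    )
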
2020 Jan 29
1
[PATCH] appliance: Add ntfs-3g-system-compression (RHBZ#1703463).
This package in Fedora enables optional, read-only support for Windows 10 "CompactOS" (file-level compression), which is sufficient for inspecting Windows guests and making certain types of modifications to them. Virt-v2v appears to work, but anything that involves modifying a compressed file may not. I couldn't find the equivalent package in Debian or SUSE.
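The change itself would be a one-line addition to the Red Hat/Fedora section of appliance/packagelist.in, along these lines (a sketch, not the committed hunk):

    ifelse(REDHAT,1,
      ntfs-3g-system-compression
      ...
    )
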
2010 Aug 10
1
GFS/GFS2 on CentOS
Hi all, If you have had experience hosting GFS/GFS2 on CentOS machines, could you share your general impression of it? Was it reliable? Fast? Any issues or concerns? Also, how feasible is it to start it on just one machine and then grow it out if necessary? Thanks. Boris.
2016 Oct 06
1
[PATCH] appliance: add/remove some packages for Arch Linux
Added: - cdrtools: added as alternative to cdrkit - multipath-tools: contains kpartx (in AUR) Removed: - ntfsprogs: the package is no longer available, it has been completely replaced by ntfs-3g (already in packagelist.in) - zfs-fuse: no longer in AUR Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com> --- appliance/packagelist.in | 4 ++-- 1 file changed, 2 insertions(+), 2
2014 Oct 02
4
[PATCH 0/3] RFC: appliance flavours
Hi, this is a prototype of something I've had around for some time. Basically it is about adding new appliances in addition to the main one currently used and kept up-to-date automatically: this way it is possible to create new appliances with extra packages, to be used in specific contexts (like virt-rescue, with more network/recovery tools) without bloating the main appliance. It's still
2012 Aug 14
1
[PATCH] Even on Debian, the package containing the diff binary has been diffutils for two years.
There had been a virtual package "diff" that depended on diffutils, but that's gone in wheezy/sid, too. --- appliance/packagelist.in | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/appliance/packagelist.in b/appliance/packagelist.in index b26ef23..4830962 100644 --- a/appliance/packagelist.in +++ b/appliance/packagelist.in @@ -23,7 +23,6 @@ btrfs-progs
2013 Aug 21
2
Dovecot tuning for GFS2
Hello, I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm using Courier over GFS. I'm testing Dovecot with these parameters: mmap_disable = yes mail_fsync = always mail_nfs_storage = yes mail_nfs_index = yes lock_method = fcntl Are they correct? Red Hat GFS supports mmap, so is it better to enable it or leave it disabled? The documentation suggests the
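For readability, here are the poster's parameters as they would appear in dovecot.conf; the annotations are mine and reflect the usual rationale for shared-storage setups, not answers from the thread:

    mmap_disable = yes      # avoid mmap where cache coherence across nodes is doubtful
    mail_fsync = always     # fsync writes so other nodes see consistent data
    mail_nfs_storage = yes  # flush attribute/data caches as for NFS-style storage
    mail_nfs_index = yes    # same treatment for index files
    lock_method = fcntl     # fcntl locks can propagate across cluster nodes
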
2008 Sep 24
3
Dovecot performance on GFS clustered filesystem
Hello All, We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix servers which share a fiber array via the GFS clustered filesystem. This all works very well for the most part, with the exception that certain operations are so inefficient on GFS that they generate significant I/O load and hurt performance. We are using the Maildir format on disk. We're also using
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2; we are also using Dovecot as a director for user-node persistence. Everything was OK until we started stress testing the solution with imaptest: we had many deadlocks, cluster filesystem corruption, and hangs, especially in the index filesystem. We have configured the backend as if it were on an NFS-like setup
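For reference, a minimal Dovecot 2.x director block of the kind the poster describes; addresses are placeholders and the exact settings should be checked against the Dovecot documentation for the version in use:

    director_servers = 192.0.2.11 192.0.2.12       # the director ring
    director_mail_servers = 192.0.2.21 192.0.2.22  # GFS2 backend nodes
    service director {
      unix_listener login/director {
        mode = 0666                  # socket used by the login processes
      }
      inet_listener {
        port = 9090                  # director-to-director communication
      }
    }
    service imap-login {
      executable = imap-login director  # route logins through the director
    }
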
2016 Feb 26
2
Re: [PATCH] added ntfscat_i api
On 26/02/16 10:12, Richard W.M. Jones wrote: > On Fri, Feb 26, 2016 at 12:16:22AM +0200, noxdafox wrote: >> According to autogen.sh output, Perl bindings and virt tools seem to >> be missing; could it be related to this? Are the tests relying on >> such dependencies? > Yes, the tests rely on Perl bindings working, so I guess you need to > install whatever missing
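For context, ntfscat_i downloads a file from an NTFS filesystem by inode number, which reaches files that have no regular path (e.g. NTFS metadata files). A guestfish usage sketch, assuming the API as discussed in the thread (on NTFS, inode 0 is the MFT; the image name is a placeholder):

    guestfish --ro -a windows.img run : ntfscat-i /dev/sda2 0 /tmp/mft
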
2012 Nov 27
6
CTDB / Samba / GFS2 - Performance - with Picture Link
Hello, maybe there is someone who can help and answer a question about why I get these network graphs on my CTDB clusters. I have two CTDB clusters, one physical and one in a VMware environment. When I transfer (copy) any files on a Samba share, I get network curves like these, with performance breaks. I don't see the transfer stop, but why is that so? Can I change anything, or does anybody know
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
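If memory serves, clean_start is an attribute of the fence_daemon element in cluster.conf: clean_start="1" makes fenced assume all nodes start in a clean state and skip startup fencing, while clean_start="0" (the safer default) does not. A sketch, with placeholder names:

    <cluster name="ctdb-cluster" config_version="1">
      <!-- clean_start="0": fence unclean nodes at startup (safer) -->
      <fence_daemon clean_start="0" post_join_delay="3"/>
      <clusternodes>
        <!-- node definitions... -->
      </clusternodes>
    </cluster>
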
2011 Feb 27
1
Recover botched DRBD GFS2 setup.
Hi. The short story... Rush job, never done clustered file systems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but CMAN/GFS2 not working, resulting in what was meant to be a DRBD primary/primary cluster being a primary/secondary cluster until the VLAN could be fixed, with GFS2 mounted on only one server. I got the single server
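For anyone reconstructing such a setup: a dual-primary DRBD resource must allow two primaries explicitly. A DRBD 8.x sketch, with placeholder resource, host, and device names:

    resource r0 {
      net {
        allow-two-primaries;     # required for primary/primary with GFS2
      }
      startup {
        become-primary-on both;  # promote both nodes at startup
      }
      on server1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
      on server2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
    }
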
2020 Nov 10
2
centos8 / anaconda EFI regression / HFS+ ESP
Hi folks, years ago I had no problem installing CentOS 7 on my iMac workstation (iMac Late 2015). The installation worked out of the box. Today I wanted to upgrade to CentOS 8, and while configuring the partitions I get an error that the installation cannot start because: "HFS+ ESP needed and mounted on /boot/efi". In fact it is the same partition as for CentOS 7. Is this a regression
2012 Mar 07
2
hfs with extended attribute support
Hi all, I've got an HFS+ (not journaled) volume connected to my CentOS 6.2 test server. I installed the kmod-hfs(plus) packages and read/write works all fine. But since I'm going to use this for serving Mac home folders via Netatalk, I would like to mount it with support for extended attributes and ACLs. So I add user_xattr and acl to my fstab options, but then it fails to mount. Checking the
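A plausible explanation is that the hfsplus kernel driver simply does not recognize the user_xattr and acl mount options, so their mere presence makes the mount fail. A quick way to check from the command line (device and mountpoint are placeholders):

    mount -t hfsplus /dev/sdb2 /srv/mac                    # works
    mount -t hfsplus -o user_xattr,acl /dev/sdb2 /srv/mac  # fails if the driver
                                                           # rejects these options
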
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, What I want to achieve: 1) having two storage servers replicating a partition with DRBD 2) exporting the DRBD device with GFS2 via GNBD from the primary server 3) importing the GNBD on some nodes and mounting it with GFS2 Assuming no logical errors in the above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
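From memory of the old RHCS GNBD tools, the export/import steps look roughly like this; treat the exact flags as assumptions and check the gnbd man pages:

    # On the primary storage server:
    gnbd_serv                               # start the GNBD server daemon
    gnbd_export -v -e gfsvol -d /dev/drbd0  # export the DRBD device as "gfsvol"

    # On each client node:
    gnbd_import -v -i server1               # import all exports from server1
    mount -t gfs2 /dev/gnbd/gfsvol /mnt/shared
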
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote: > On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote: >> I'm evaluating different cluster file systems that can work with large >> clustered environment, e.g. hundreds of nodes connected to a SAN over >> FC. >> >> So far I looked at OCFS2 and GFS2, they both worked nearly the same >> in terms of performance, but since I ran my
2005 Dec 23
1
GFS2, OCFS2, and FUSE cause xenU to oops.
I really need to share a filesystem and I'd rather not have to export it from one domU to another, so I tried mounting it with GFS2 and then OCFS2. Both caused the xenU kernel to oops just as the mount was attempted. I assumed that a FUSE-based solution would be a little less problematic (if only because it doesn't require kernel patches) but it also caused an oops right when
2008 May 29
3
GFS
Hello: I am planning to implement GFS for my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid-dynamics research images). The 10 servers will be using NIC bonding (1 Gb network). So, would GFS be ideal for this? I have been reading a lot
2020 Oct 17
1
[PATCH] Use guestfsd binary to auto-generate library dependencies for appliance
The ELF NEEDED entries are used to determine guestfsd's library dependencies, with help from the dynamic linker and the package manager. This was prompted by Debian bug #972241, which was caused by a libtirpc package rename in Debian/unstable after the SONAME had been changed. --- appliance/Makefile.am | 26 ++++++++++++++++- appliance/packagelist.in | 62
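The mechanism, roughly: read the DT_NEEDED entries from the guestfsd binary, resolve each soname to an on-disk path via the dynamic linker, then ask the package manager which package owns that path. A shell sketch of the Debian case (the library path is illustrative, and the actual Makefile.am rules in the patch may differ):

    objdump -p guestfsd | awk '$1 == "NEEDED" { print $2 }'  # list DT_NEEDED sonames
    ldconfig -p | grep -F libtirpc.so.3                      # soname -> on-disk path
    dpkg -S /lib/x86_64-linux-gnu/libtirpc.so.3              # path -> owning package
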