similar to: Gluster with ZFS

Displaying 20 results from an estimated 4000 matches similar to: "Gluster with ZFS"

2025 Apr 17
4
Gluster with ZFS
Hi Alexander, Thanks for the update. Initially, I also thought of deploying Ceph, but Ceph is quite difficult to set up and manage. Moreover, it's also hardware-demanding. I think it's most suitable for a very large set-up with hundreds of clients. What do you think of MooseFS? Have you or anyone else tried MooseFS? If yes, how was its performance?
2025 Apr 17
1
Gluster with ZFS
Gagan: Throwing my $0.02 in -- it depends on the system environment in which you are planning on deploying Gluster (and/or Ceph). I have Ceph running on my three-node HA Proxmox cluster using three OASLOA Mini PCs that only have the Intel N95 processor (4-core/4-thread) with 16 GB of RAM and a cheap Microcenter store-brand 512 GB NVMe M.2 2230 SSD, and my Ceph cluster has been running without any
2025 Apr 17
2
Gluster with ZFS
On Thu, 2025-04-17 at 14:44 +0530, gagan tiwari wrote: > Hi Alexander, > Thanks for the update. Initially, I also thought of deploying Ceph, but Ceph is quite difficult to set up and manage. Moreover, it's also hardware-demanding. You are of course entitled to your own opinion, but I'd like to point out that ZFS+Gluster carries a lot of
2025 Apr 17
1
Gluster with ZFS
On Thu, Apr 17, 2025 at 09:40:08AM +0530, gagan tiwari wrote: > Hi guys, > We have been using OpenZFS in our HPC environment for > quite some time, and OpenZFS has been working fine. > > But we are now running into scalability issues, since OpenZFS can't be > scaled out. Since ZFS is a local FS, you are essentially limited to how much storage you can stick into
2025 Apr 17
1
Gluster with ZFS
On Thu, Apr 17, 2025 at 02:44:28PM +0530, gagan tiwari wrote: > Hi Alexander, > Thanks for the update. Initially, I also > thought of deploying Ceph, but Ceph is quite difficult to set up and manage. > Moreover, it's also hardware-demanding. I think it's most suitable for a > very large set-up with hundreds of clients. I strongly disagree. I
2025 Apr 17
1
Gluster with ZFS
Gagan: I actually tried what Alexander mentioned below, as a separate experiment. I have two AMD Ryzen 9 5950X compute nodes and one AMD Ryzen 9 7950X compute node; each node has 128 GB of RAM and a Mellanox ConnectX-4 100 Gbps InfiniBand network card, and I was using, I think, one Intel 670p 1 TB NVMe SSD and two Silicon Power US70 1 TB NVMe SSDs. From the Ceph perspective, giving it
2019 Mar 04
2
Enable XAT_OFFLINE extended attribute in Samba
On Mon, 4 Mar 2019 10:25:59 -0800 Jeremy Allison via samba <samba at lists.samba.org> wrote: > On Mon, Mar 04, 2019 at 04:48:56PM +0100, Andrea Cucciarre' via samba > wrote: > > Hello, > > > > Does Samba support XAT_OFFLINE and XAT_ONLINE extended attribute? > > I have enabled "ea support = yes" but it seems to have no effect on > > that.
2009 Apr 23
1
Load a data from oracle database to R
Hello, I have been trying to load data into R by connecting R to the database the following way: > library(RODBC) > channel<-odbcConnect("gagan") Now, after I connect to the server by entering the password, I want to load a table named "temp" from the database into R so that I can do some descriptive statistics with it. When I try to do "data(temp)" it gives an error
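A minimal sketch of the usual RODBC pattern for this (the DSN name and credentials below are placeholders, not taken from the thread): data() only loads datasets bundled with installed packages, so a database table is normally pulled into R with sqlFetch() or sqlQuery() on the open channel:

    library(RODBC)
    channel <- odbcConnect("gagan", uid = "user", pwd = "password")  # DSN, uid and pwd are placeholders
    temp <- sqlFetch(channel, "temp")                  # read the whole table into a data frame
    # temp <- sqlQuery(channel, "SELECT * FROM temp")  # equivalent, via an explicit query
    summary(temp)                                      # descriptive statistics on the fetched data
    odbcClose(channel)                                 # release the connection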
2015 Jun 01
2
Native ZFS on Linux
> > > OK, please note that I am not willing to tolerate anti-OSS claims and will > continue to correct similar false claims. If you don't like those > discussions > at all, you should try to avoid false claims and the need for corrections. > If I were RedHat, including a non-GPL filesystem in my operating system would make me sweat a bit. Intel were facing a similar
2015 Sep 28
2
parse raw image to read block group desc table!
Hi, I am writing a piece of code to open a raw image file of a virtual machine which has Ubuntu installed in it. The virtual disk is formatted using the MBR partitioning method and has 3 primary and 1 extended partition. I want to open up that file and read the block group descriptor table and inode table for each partition. I have written some lines of code and have successfully been able to read the
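As a rough sketch of the first step (written in R to match the other code on this page; the image file name is a placeholder): the four 16-byte MBR partition entries start at byte offset 446 of the image, each holding a type byte plus a little-endian starting LBA and sector count; the ext superblock then sits 1024 bytes into a partition (LBA * 512 + 1024), with the block group descriptor table in the block(s) immediately after it, and the logical partitions inside the extended entry additionally require walking the EBR chain:

    con <- file("disk.img", "rb")               # raw image file (placeholder name)
    mbr <- readBin(con, what = "raw", n = 512)  # the MBR occupies the first 512-byte sector
    close(con)
    for (i in 0:3) {
      entry <- mbr[(446 + i * 16 + 1):(446 + i * 16 + 16)]   # i-th primary partition entry
      type  <- as.integer(entry[5])                          # partition type byte
      lba   <- sum(as.integer(entry[9:12])  * 256^(0:3))     # starting LBA, little-endian
      nsec  <- sum(as.integer(entry[13:16]) * 256^(0:3))     # sector count, little-endian
      cat(sprintf("entry %d: type 0x%02X, first LBA %.0f, sectors %.0f\n", i + 1, type, lba, nsec))
    }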
2023 Oct 28
2
State of the gluster project
I don't think it's worth it for anyone. It's a dead project since about 9.0, if not earlier. It's time to embrace the truth and move on. /Z On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: > Well, > > After the IBM acquisition, RH discontinued their support for many projects > including GlusterFS (certification exams were removed,
2015 Jun 01
2
Native ZFS on Linux
On 06/01/2015 07:42 AM, Joerg Schilling wrote: > Johnny Hughes <johnny at centos.org> wrote: > >> On 06/01/2015 06:42 AM, Joerg Schilling wrote: >>> Chuck Munro <chuckm at seafoam.net> wrote: >>> >>>> I have a question that has been puzzling me for some time ... what is >>>> the reason RedHat chose to go with btrfs rather than
2009 Apr 09
4
running a .r script and saving the output to a file
Hello, I want to run the following commands as a script (.r or .bat) and save the output in an external file under Windows. data<-read.csv(file="wgatever.csv", head=TRUE, sep=",") summary(data$SQFT) hist(data$STAMP) hist(data$STAMP, col='blue') hist(data$SHIP, col='blue') How could I do that? I am having great trouble using the sink() function
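One possible approach (a sketch only; the output file names below are placeholders): inside the script, sink() redirects printed output to a text file and pdf() redirects the plots to a file, and the script itself is then run non-interactively from a Windows command prompt.

    sink("results.txt")                               # capture printed output in a file
    data <- read.csv(file = "wgatever.csv", header = TRUE, sep = ",")
    print(summary(data$SQFT))                         # inside a script, print() the summary explicitly
    pdf("histograms.pdf")                             # send the plots to a PDF instead of a window
    hist(data$STAMP, col = "blue")
    hist(data$SHIP, col = "blue")
    dev.off()                                         # close the graphics device
    sink()                                            # stop redirecting output

Saved as, say, analysis.r (a hypothetical name), the script can be run with "Rscript analysis.r"; alternatively, sink() can be dropped and the script run with "R CMD BATCH analysis.r analysis.out", which writes the whole session transcript to analysis.out.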
2023 Oct 28
1
State of the gluster project
On Sat, Oct 28, 2023 at 11:07:52PM +0300, Zakhar Kirpichenko wrote: > I don't think it's worth it for anyone. It's a dead project since about > 9.0, if not earlier. It's time to embrace the truth and move on. Which is a shame, because I chose GlusterFS for one of my storage clusters _specifically_ due to the ease of emergency data recovery (for purely replicated volumes) even
2015 May 29
7
Native ZFS on Linux
I have a question that has been puzzling me for some time ... what is the reason RedHat chose to go with btrfs rather than working with the ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue, political, etc? Although btrfs is making progress, ZFS is far more mature, has a few more stable features (especially Raid-z3) and has worked flawlessly for me on CentOS-6 and Scientific Linux-6.
2015 Jun 01
2
Native ZFS on Linux
On 06/01/2015 06:42 AM, Joerg Schilling wrote: > Chuck Munro <chuckm at seafoam.net> wrote: > >> I have a question that has been puzzling me for some time ... what is >> the reason RedHat chose to go with btrfs rather than working with the >> ZFS-on-Linux folks (now OpenZFS)? Is it a licensing issue, political, etc? > > There is no licensing issue, but
2023 Dec 14
2
Gluster -> Ceph
Hi all, I am looking into Ceph and CephFS, and in my head I am comparing them with Gluster. The way I have been running Gluster over the years is either replicated or replicated-distributed clusters. The small setup we have had has been a replicated cluster with one arbiter and two fileservers. These fileservers have been configured with RAID6, and that RAID has been used as the brick. If disaster
2014 Sep 03
1
Effect of setting "store dos attributes = no" in Samba 4.1.11
Thanks for your help and replies. Yes, I meant "store dos attributes". It's pretty clear now that I need to keep the parameter 'store dos attributes = no', since 1) the server is an AD member server and 2) the map* parameters don't do the right thing under ZFS / NFSv4 ACLs. I've read that the steps Klaus Hartnegg listed resolve the issue on ZFS on Linux; however, I
2023 Dec 14
2
Gluster -> Ceph
Big RAID arrays aren't great as bricks. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With Ceph, if the placement database is corrupted, all your data is lost (this happened to my employer once, losing 5 PB of customer data). With Gluster, it's just files on disks, easily
2023 Dec 17
1
Gluster -> Ceph
On 14/12/2023 16:08, Joe Julian wrote: > With Ceph, if the placement database is corrupted, all your data is lost > (this happened to my employer once, losing 5 PB of customer data). From what I've been told (by experts), it's really hard to make that happen, even more so if proper redundancy of the MON and MDS daemons is implemented on quality hardware. > With Gluster, it's just files on