search for: 400tb

Displaying 8 results from an estimated 8 matches for "400tb".

2011 Jul 14
5
really large file systems with centos
I've been asked for ideas on building a rather large archival storage system for in-house use, on the order of 100-400TB, probably using CentOS 6. The existing system this would replace uses Solaris 10 and ZFS, but I want to explore using Linux instead. We have our own Tomcat-based archiving software that would run on this storage server, along with an NFS client and server. It's a write once, read almost n...
2017 Aug 30
2
Gluster status fails
Hi, I am running a 400TB five-node purely distributed Gluster setup. I am troubleshooting an issue where file creation sometimes fails. I found that volume status is not working: gluster volume status Another transaction is in progress for atlasglust. Please try again after sometime. When I tried from another node then I...
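The "Another transaction is in progress" message means another node currently holds the cluster-wide glusterd lock. A rough troubleshooting sketch, using the volume name "atlasglust" from the poster's output (substitute your own, and treat the log path as an assumption for a default install):

```shell
# Re-check once the cluster-wide lock has been released; transient
# transactions (e.g. a status query from another node) clear on their own:
gluster volume status atlasglust

# If it never clears, look for hints about which node holds a stale lock:
grep -i "lock" /var/log/glusterfs/glusterd.log | tail -20

# Restarting glusterd on the lock-holding node drops the stale lock;
# brick processes (glusterfsd) and client I/O are separate and keep running:
systemctl restart glusterd
```

This is an ops sketch, not the thread's confirmed resolution; the poster's root cause is not shown in the snippet.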
2017 Aug 31
0
Gluster status fails
...t; > Kashif > > On Thu, Aug 31, 2017 at 2:40 AM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> >> On Wed, 30 Aug 2017 at 20:55, mohammad kashif <kashif.alig at gmail.com> >> wrote: >> >>> Hi >>> >>> I am running a 400TB five node purely distributed gluster setup. I am >>> troubleshooting an issue where some times files creation fails. I found >>> that volume status is not working >>> >>> gluster volume status >>> Another transaction is in progress for atlasglust. Pleas...
2013 Dec 12
3
Is Gluster the wrong solution for us?
We are about to abandon GlusterFS as a solution for our object storage needs. I'm hoping to get some feedback to tell me whether we have missed something and are making the wrong decision. We're already a year into this project after evaluating a number of solutions. I'd like not to abandon GlusterFS if we just misunderstand how it works. Our use case is fairly straightforward.
2017 Jul 11
2
Extremely slow du
...as suggested in other threads. Now I am going to update my production server. I am planning to use the following optimization options; it would be very useful if you can point out any inconsistency or suggest some other options. My production setup has 5 servers consisting of 400TB storage and around 80 million files of varying lengths.

Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
cluster.readdir-optimize: off
performance.client-io-threads: on
performance.cache-size: 1GB
per...
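The options quoted above are applied per volume with `gluster volume set`. A minimal sketch, where "myvol" is a placeholder volume name (not from the thread):

```shell
# Placeholder volume name; substitute your own.
VOL=myvol

# Apply the tuning options quoted in the message above:
gluster volume set "$VOL" server.event-threads 4
gluster volume set "$VOL" client.event-threads 4
gluster volume set "$VOL" cluster.lookup-optimize on
gluster volume set "$VOL" cluster.readdir-optimize off
gluster volume set "$VOL" performance.client-io-threads on
gluster volume set "$VOL" performance.cache-size 1GB

# Verify what actually took effect:
gluster volume get "$VOL" all | grep -E "event-threads|lookup-optimize|cache-size"
```

Note that `volume set` changes take effect cluster-wide for that volume, so test on a non-production volume first.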
2013 Dec 06
2
How reliable is XFS under Gluster?
Hello, I am at the point of picking a FS for new brick nodes. I liked and used ext4 until now, but I recently read about an issue introduced by a patch in ext4 that breaks the distributed translator. At the same time, it looks like the recommended FS for a brick is no longer ext4 but XFS, which apparently will also be the default FS in the upcoming RedHat 7. On the other hand, XFS is being
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
...temp space
2) recovering from the bricks what was inaccessible from the mountpoint (keeping different file revisions for the conflicting ones)
3) destroying and recreating the volume
4) copying back the data from the backup
When gluster gets used because you need lots of space (we had more than 400TB on 3 nodes with 30x12TB SAS disks in "replica 3 arbiter 1"), where do you park the data? Is the official solution "just have a second cluster idle for when you need to fix errors"? It took more than a month of downtime this summer, and after less than 6 months I'd have to...
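The capacity figure quoted above can be sanity-checked with back-of-the-envelope arithmetic. In a "replica 3 arbiter 1" volume, each file has two full data copies plus a metadata-only arbiter copy, so, assuming the arbiter bricks consume negligible space, usable capacity is roughly half of raw:

```python
# Setup described above: 3 nodes, 30 x 12 TB SAS disks each,
# "replica 3 arbiter 1".
nodes = 3
disks_per_node = 30
disk_tb = 12

raw_tb = nodes * disks_per_node * disk_tb  # total raw capacity
# Two full data copies per file; arbiter stores metadata only
# (assumption: arbiter space cost is negligible).
usable_tb = raw_tb / 2

print(raw_tb, usable_tb)  # 1080 540.0
```

The ~540TB result is consistent with the poster's "more than 400TB" of stored data, leaving some headroom.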
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Were you able to solve the problem? Can it be treated like a "normal" split brain? 'gluster peer status' and 'gluster volume status' are ok, so it kinda looks like "pseudo"... hubert On Thu, Jan 18, 2024 at 08:28, Diego Zuccato <diego.zuccato at unibo.it> wrote: > > That's the same kind of errors I keep seeing on my 2 clusters, >