Displaying 20 results from an estimated 3000 matches similar to: "Bricks topology"
2011 Jun 24
1
How long should resize2fs take?
Hullo!
First mail, sorry if this is the wrong place for this kind of
question. I realise this is a "piece of string" type question.
tl;dr version: I have a resize2fs shrinking an ext4 filesystem from
~4TB to ~3TB and it's been running for ~2 days. Is this normal?
Strace shows lots of:-
lseek(3, 42978250752, SEEK_SET) = 42978250752
read(3,
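For reference, a hedged sketch of the shrink procedure for next time (device name hypothetical; the filesystem must be unmounted and checked first). Shrinking is much slower than growing because used blocks have to be relocated below the new boundary:
e2fsck -f /dev/sdb1         # resize2fs insists on a clean filesystem
resize2fs -p /dev/sdb1 3T   # -p prints a progress bar for each pass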
2013 Aug 12
1
Dell R515 with PERC H700 - JBOD?
Hello, I'm curious if anyone knows if it's possible to set up a Dell R515 (which has a PERC H700) as JBOD.
It seems the only options are RAID0 or RAID1.
I read posts where people say it can be done by making each disk its own RAID0. This works, but it wigs out when that disk is removed, and forgets a disk was ever there (unless I go back in the PERC and fix it).
My plan is to have a
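A hedged sketch of the per-disk RAID0 workaround mentioned above, assuming the LSI MegaCli utility (the PERC H700 is LSI-based); exact flags and adapter number vary by MegaCli version:
MegaCli -CfgEachDskRaid0 WB RA Direct NoCachedBadBBU -a0   # one single-disk RAID0 per unconfigured drive
The caveat from the post still applies: after swapping a drive, the controller has to be told to build a new single-disk array before the OS sees the replacement.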
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings,
until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.
As space was getting short I wanted to extend the filesystem by two
additional drives lying around, which both are 1.0 TiB in size.
Knowing little about the btrfs RAID implementation I thought I had to
switch to RAID10 mode, which I was told is
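For what it's worth, a hedged sketch (device names and mount point hypothetical): btrfs RAID1 copes with mixed-size devices, so the two 1.0 TiB drives can simply be added and the existing data rebalanced, with no conversion to RAID10 required:
btrfs device add /dev/sdc /dev/sdd /mnt   # add both 1.0 TiB drives to the mounted filesystem
btrfs balance start /mnt                  # respread existing RAID1 chunks across all four devices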
2019 Jan 31
0
C7, mdadm issues
> On 30/01/19 16:49, Simon Matter wrote:
>>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>>> On 29/01/19 20:42, mark wrote:
>>>>> Alessandro Baggi wrote:
>>>>>> On 29/01/19 18:47, mark wrote:
>>>>>>> Alessandro Baggi wrote:
>>>>>>>> On 29/01/19 15:03, mark wrote:
2012 Apr 27
1
geo-replication and rsync
Hi,
can someone tell me the difference between geo-replication and plain rsync?
At what frequency are files replicated with geo-replication?
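For reference, a hedged sketch of the geo-replication CLI (volume and host names hypothetical). Unlike a cron-driven rsync, geo-replication runs continuously and asynchronously, shipping changes as they are detected rather than on a fixed schedule:
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status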
2018 May 09
2
Some more questions
On Wed, May 9, 2018 at 9:31 PM Jim Kinney <jim.kinney at gmail.com>
wrote:
> correct. a new server will NOT add space in this manner. But the original
Q was about rebalancing after adding a 4th server. If you are using
distributed/replication, then yes, a new server will be adding a portion of
its space to add more space to the cluster.
Wait, in a distribute-replicate,
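A hedged sketch of the usual way capacity is added to a distribute-replicate volume (names hypothetical): bricks go in multiples of the replica count, and a rebalance then spreads existing data onto the new replica set:
gluster volume add-brick myvol replica 2 server5:/data/brick server6:/data/brick
gluster volume rebalance myvol start
gluster volume rebalance myvol status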
2018 May 09
0
Some more questions
It all depends on how you are set up on the distribute. Think RAID 10
with 4 drives - each pair mirrors (replicate) and data is striped
across the pairs (distribute).
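To make the analogy concrete, a hedged sketch (hostnames hypothetical): consecutive bricks in the create command are grouped into replica sets, so s1/s2 form one mirror pair and s3/s4 the other, with files distributed across the two pairs, much like RAID10's stripe of mirrors:
gluster volume create myvol replica 2 s1:/data/brick s2:/data/brick s3:/data/brick s4:/data/brick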
On Wed, 2018-05-09 at 19:34 +0000, Gandalf Corvotempesta wrote:
> On Wed, May 9, 2018 at 9:31 PM Jim Kinney <jim.kinney at gmail.com>
> wrote:
> > correct. a new server will NOT add space in this manner. But the
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about multiple bugs regarding data corruptions with
>> sharding enabled ?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Oct 13
1
small files performance
Where did you read 2K IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even
close to 2000 is impossible.
On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends what you need.
> 2K iops for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
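For context, a hedged sketch of measuring small-file write IOPS on a mounted volume with fio (mount path hypothetical; direct=1 may need the FUSE mount to permit O_DIRECT):
fio --name=smallfile --directory=/mnt/glustervol --rw=randwrite --bs=4k --size=512m --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --group_reporting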
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
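A hedged sketch of the sort of reproduction described above (paths hypothetical): keep writes running against the mount, then kill the brick process on one server and compare the two signals:
# on the client, sustained writes:
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=10000 oflag=direct
# on one server, in parallel:
killall glusterfsd      # SIGTERM, graceful exit
killall -9 glusterfsd   # SIGKILL, simulates a hard crash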
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
2017 Sep 08
0
GlusterFS as virtual machine storage
On Fri, Sep 8, 2017 at 12:48 PM, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:
> I think this should be considered a bug.
> If you have a server crash, the glusterfsd process obviously doesn't exit
> properly, and thus this could lead to an I/O stop?
I agree with you completely in this.
2017 Sep 23
1
EC 1+2
Already read that.
Seems that I have to use a multiple of 512, so 512*(3-2) is 512.
Seems fine
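For reference, the rule being applied is the Administrator Guide's optimal stripe size of 512 * (#bricks - redundancy) bytes. A hedged sketch with hypothetical names; note gluster normally requires more than 2 * redundancy bricks, so a 4+2 layout is shown here rather than 1+2:
# stripe = 512 * (6 - 2) = 2048 bytes
gluster volume create ecvol disperse 6 redundancy 2 s1:/b s2:/b s3:/b s4:/b s5:/b s6:/b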
On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote:
> Hi
>
> Take a look at this link (under "Optimal volumes"), for Erasure Coded
> volume optimal configuration
>
> http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
>
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> The init.d/systemd script doesn't kill gluster automatically on
> reboot/shutdown?
>
Sounds less like an issue with how it's shut down and more like an issue with
how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
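For reference, a hedged sketch of tuning the timeout in question (volume name hypothetical); lowering it shortens the client-side hang after a node dies, at the cost of spurious disconnects on a flaky network:
gluster volume set myvol network.ping-timeout 10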
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see client to work in case of crash (SIGKILL) and not in
> case of graceful exit of glusterfsd.
Exactly. If this happens, probably there
2016 Oct 27
4
Server migration
On 27 Oct 2016, at 15:29, Tanstaafl <tanstaafl at libertytrek.org> wrote:
>
> On 10/26/2016 2:38 AM, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>> This is much easier than dovecot replication as I can start immediately with
>> no need to upgrade the old server
>>
>> my only question is: how to manage the email received on the
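A hedged sketch of the two-pass rsync approach commonly used for such migrations (maildir path and hostname hypothetical): the bulk copy runs while the old server still accepts mail, then delivery is paused for a short final pass:
rsync -aH /var/vmail/ newserver:/var/vmail/            # pass 1: bulk copy, mail still flowing
rsync -aH --delete /var/vmail/ newserver:/var/vmail/   # pass 2: after stopping delivery, quick catch-up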
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All.
I have a server which uses RAID10 made of 4 partitions for / and boots from
it. It looks like so:
mdadm -D /dev/md1
/dev/md1:
Version : 00.90
Creation Time : Mon Apr 27 09:25:05 2009
Raid Level : raid10
Array Size : 973827968 (928.71 GiB 997.20 GB)
Used Dev Size : 486913984 (464.36 GiB 498.60 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
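For reference, a hedged note assuming the default near=2 layout for this array: in a 4-device RAID10, consecutive raid devices form mirror pairs, so RaidDevice 0/1 hold one copy of the data and 2/3 the other. The array survives losing one disk from each pair, but not both disks of the same pair. The pairing is visible in the device table at the bottom of:
mdadm -D /dev/md1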
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop) only glusterd is killed. Then I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill other gluster processes.
When you shut down a node, everything obviously gets killed, but the
client does not get notified
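A hedged sketch of stopping everything by hand (the helper script path is an assumption; it ships with many glusterfs packages but its location varies by distribution):
systemctl stop glusterd
pkill glusterfsd    # brick processes
pkill glusterfs     # client, self-heal and NFS processes
# or, where packaged:
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh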
2017 Nov 15
4
Re: [Qemu-devel] [qemu-img] support for XVA
I'm thinking about how to provide you with a sample XVA.
I have to create (and populate) a VM, because an empty image will result in
an empty XVA.
And a VM is 300-400 MB at minimum.
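For reference, a hedged sketch of producing a populated XVA on a XenServer host (the UUID placeholder is hypothetical):
xe vm-export vm=<vm-uuid> filename=sample.xva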
On 15 Nov 2017 10:30 PM, "Max Reitz" <mreitz@redhat.com> wrote:
> On 2017-11-15 21:41, Gandalf Corvotempesta wrote:
> > 2017-11-15 21:29 GMT+01:00 Richard W.M. Jones <rjones@redhat.com>:
>
2017 Oct 04
0
data corruption - any update?
Just so I know:
is it correct to assume that this corruption issue is ONLY involved if
you are doing rebalancing with sharding enabled?
So if I am not doing rebalancing, I should be fine?
-bill
On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
> <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote: