Displaying 16 results from an estimated 16 matches for "96tb".
2017 Jun 09
2
Extremely slow du
Hi
I have just moved our 400 TB HPC storage from lustre to gluster. It is part
of a research institute, and users have files ranging from very small to big
(a few KB to 20GB). Our setup consists of 5 servers, each with 96TB of RAID 6
disks. All servers are connected through 10G ethernet, but not all clients are.
Gluster volumes are distributed without any replication. There are
approximately 80 million files in the file system.
I am mounting with the glusterfs client on the clients.
I have copied everything from lustre to gluster but old fil...
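On a distribute-only volume, du over roughly 80 million files is dominated by per-file stat() and per-directory readdir() round trips to the bricks, so the usual gluster-side tuning is the client metadata cache and parallel readdir. A hedged sketch follows; the volume name gv0, the timeout values, and the availability of each option depend on the Gluster release and are assumptions, not details taken from this thread.

# Client-side metadata caching, to cut round trips for stat()-heavy
# workloads such as du (gv0 and the 600s timeouts are examples only)
gluster volume set gv0 features.cache-invalidation on
gluster volume set gv0 features.cache-invalidation-timeout 600
gluster volume set gv0 performance.stat-prefetch on
gluster volume set gv0 performance.cache-invalidation on
gluster volume set gv0 performance.md-cache-timeout 600

# Spread directory listing work across the bricks
gluster volume set gv0 performance.readdir-ahead on
gluster volume set gv0 performance.parallel-readdir on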
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as a distributed volume: 9 bricks, 96TB in total
(87TB after 10% of gluster disk space reservation)
For some reason I can't "heal" the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes are running.
Wh...
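The message itself only says that at least one brick process is unreachable; the usual next step is to ask gluster which one. A minimal sketch, reusing the volume name gv0 from the post and assuming systemd-managed servers; none of the output here is reproduced from the thread. Note also that heal is only meaningful on replicated or dispersed volumes, so on a pure distribute volume like this one there are no redundant copies for it to repair.

# Show each brick with its Online (Y/N) status, port and PID
gluster volume status gv0

# Confirm every peer is connected
gluster pool list

# On a server whose brick shows offline, check the management daemon,
# then restart only the missing brick processes
systemctl status glusterd
gluster volume start gv0 force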
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...tions are not described there or there are no explanation
> what is doing...
>
>
>
> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>
>> Hello,
>>
>> We have a very fresh gluster 3.10.10 installation.
>> Our volume is created as distributed volume, 9 bricks 96TB in total
>> (87TB after 10% of gluster disk space reservation)
>>
>> For some reasons I can't "heal" the volume:
>> # gluster volume heal gv0
>> Launching heal operation to perform index self heal on volume gv0 has
>> been unsuccessful on bricks that are down. Pl...
2017 Jun 09
0
Extremely slow du
...5 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from lustre to gluster. It is
> part of a research institute and users have very small files to big files
> ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB RAID 6
> disks. All servers are connected through 10G ethernet but not all clients.
> Gluster volumes are distributed without any replication. There are
> approximately 80 million files in file system.
> I am mounting using glusterfs on clients.
>
> I have copied everything from...
2017 Jun 09
2
Extremely slow du
...hif.alig at gmail.com>
> wrote:
>
>> Hi
>>
>> I have just moved our 400 TB HPC storage from lustre to gluster. It is
>> part of a research institute and users have very small files to big files
>> ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB RAID 6
>> disks. All servers are connected through 10G ethernet but not all clients.
>> Gluster volumes are distributed without any replication. There are
>> approximately 80 million files in file system.
>> I am mounting using glusterfs on clients.
>>
>> I have...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...doc.gluster.org? As far as I can see, many gluster options are not
described there, or there is no explanation of what they do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
>
> For some reasons I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all brick...
2017 Jun 12
2
Extremely slow du
...>
>>>> Hi
>>>>
>>>> I have just moved our 400 TB HPC storage from lustre to gluster. It is
>>>> part of a research institute and users have very small files to big files
>>>> ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB RAID 6
>>>> disks. All servers are connected through 10G ethernet but not all clients.
>>>> Gluster volumes are distributed without any replication. There are
>>>> approximately 80 million files in file system.
>>>> I am mounting using glusterfs on c...
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...re no explanation
>> what is doing...
>>
>>
>>
>> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>>
>>> Hello,
>>>
>>> We have a very fresh gluster 3.10.10 installation.
>>> Our volume is created as distributed volume, 9 bricks 96TB in total
>>> (87TB after 10% of gluster disk space reservation)
>>>
>>> For some reasons I can't "heal" the volume:
>>> # gluster volume heal gv0
>>> Launching heal operation to perform index self heal on volume gv0 has
>>> been un...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...re no explanation
>> what is doing...
>>
>>
>>
>> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>>
>>> Hello,
>>>
>>> We have a very fresh gluster 3.10.10 installation.
>>> Our volume is created as distributed volume, 9 bricks 96TB in total
>>> (87TB after 10% of gluster disk space reservation)
>>>
>>> For some reasons I can't "heal" the volume:
>>> # gluster volume heal gv0
>>> Launching heal operation to perform index self heal on volume gv0 has
>>> been unsuccessful on...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...org [1]? As far as I can see, many gluster options are not described there, or there is no explanation of what they do...
>
> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
>
> For some reasons I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please chec...
2017 Jun 10
0
Extremely slow du
...>> wrote:
>>
>>> Hi
>>>
>>> I have just moved our 400 TB HPC storage from lustre to gluster. It is
>>> part of a research institute and users have very small files to big files
>>> ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB RAID 6
>>> disks. All servers are connected through 10G ethernet but not all clients.
>>> Gluster volumes are distributed without any replication. There are
>>> approximately 80 million files in file system.
>>> I am mounting using glusterfs on clients.
>>...
2017 Jun 16
0
Extremely slow du
...> Hi
>>>>>
>>>>> I have just moved our 400 TB HPC storage from lustre to gluster. It is
>>>>> part of a research institute and users have very small files to big files
>>>>> ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB RAID 6
>>>>> disks. All servers are connected through 10G ethernet but not all clients.
>>>>> Gluster volumes are distributed without any replication. There are
>>>>> approximately 80 million files in file system.
>>>>> I am mounting using...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...oing...
>>>
>>>
>>>
>>> On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
>>>
>>>> Hello,
>>>>
>>>> We have a very fresh gluster 3.10.10 installation.
>>>> Our volume is created as distributed volume, 9 bricks 96TB in total
>>>> (87TB after 10% of gluster disk space reservation)
>>>>
>>>> For some reasons I can't "heal" the volume:
>>>> # gluster volume heal gv0
>>>> Launching heal operation to perform index self heal on volume gv0 has...
2017 Jun 18
1
Extremely slow du
...>>>>
>>>>>> I have just moved our 400 TB HPC storage from lustre to gluster. It
>>>>>> is part of a research institute and users have very small files to big
>>>>>> files ( few KB to 20GB) . Our setup consists of 5 servers, each with 96TB
>>>>>> RAID 6 disks. All servers are connected through 10G ethernet but not all
>>>>>> clients. Gluster volumes are distributed without any replication. There
>>>>>> are approximately 80 million files in file system.
>>>>>> I...
2017 Jul 11
2
Extremely slow du
...> from lustre to gluster. It is part of a
> research institute and users have very small
> files to big files ( few KB to 20GB) . Our
> setup consists of 5 servers, each with 96TB
> RAID 6 disks. All servers are connected
> through 10G ethernet but not all clients.
> Gluster volumes are distributed without any
> replication. There are approximately 80
...
2012 Apr 05
1
Better to use a single large storage server or multiple smaller for mdbox?
I'm trying to improve the setup of our Dovecot/Exim mail servers to
handle increasingly huge accounts (everybody treats their mailbox as
infinitely growing, gmail-style storage and keeps everything forever) by
changing from Maildir to mdbox, and to take advantage of offloading older
emails to alternative networked storage nodes.
The question now is whether having a
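Whichever way the server question goes, mdbox has a built-in mechanism for the offload part: an alternate storage path, so recent mail stays on fast local disk while older messages are moved to the networked volume. A rough sketch of the two pieces involved, with an assumed mount point (/altstorage) and user; these names are illustrative, not taken from this post.

# dovecot.conf: primary mdbox under the home directory, alternate
# storage on a network mount (path and layout are assumptions)
#   mail_location = mdbox:~/mdbox:ALT=/altstorage/%u/mdbox

# Periodically move already-read mail older than two weeks to ALT storage
doveadm altmove -u bob@example.org seen savedbefore 2w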