Displaying 6 results from an estimated 6 matches for "bluestor".
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
...rt Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
> On 05/16/2018 09:35 PM, Webert de Souza Lima wrote:
> > We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore
> > backend.
> > We'll have to do some work on how to simulate user traffic, for writes
> > and reads. That seems troublesome.
> I would appreciate seeing these results !
>
> > Thanks for the plugin recommendations. I'll take the chance and ask y...
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hello Jack,
yes, I imagine I'll have to do some work on tuning the block size on
cephfs. Thanks for the advice.
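As far as I know, that tuning happens through the directory layout xattrs
rather than a mount option. A rough sketch, with the path and sizes being
placeholders (layouts only apply to newly created files, and object_size
has to stay a multiple of stripe_unit):

  # inspect the current layout of the mail tree
  getfattr -n ceph.dir.layout /mnt/cephfs/mail

  # shrink the layout for new files under that directory (1 MiB here)
  setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mail
  setfattr -n ceph.dir.layout.object_size -v 1048576 /mnt/cephfs/mail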
I knew that using mdbox, messages are not removed but I thought that was
true in sdbox too. Thanks again.
We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore
backend.
We'll have to do some work on how to simulate user traffic, for writes
and reads. That seems troublesome.
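If we end up scripting that, imaptest (the stress tool shipped alongside
dovecot) might save some effort. A rough invocation, with host and
credentials obviously made up:

  # 50 concurrent fake clients for 5 minutes, appending messages from a
  # sample mbox and issuing a mix of normal IMAP read commands
  imaptest host=10.0.0.5 port=143 user=testuser%d pass=secret \
      mbox=dovecot-crlf clients=50 secs=300

It only exercises the IMAP side, though, so delivery/LMTP writes would
still need something separate.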
Thanks for the plugin recommendations. I'll take the chance and ask you:
how is the SIS status? We have used it in the past and we've had some
problems with it....
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
...nd is used for all dovecot instances
- no need for sharding domains
- dovecot is easily load balanced (with director sticking users to the
same dovecot backend)
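For reference, the director side of such a setup can be sketched with just
the two core settings (addresses are placeholders; the usual proxy/login
listener wiring is left out):

  # dovecot.conf fragment
  director_servers = 10.0.0.21 10.0.0.22
  director_mail_servers = 10.0.1.11 10.0.1.12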
In the upcoming upgrade we intend to:
- upgrade ceph to 12.X (Luminous)
- drop the SSD Cache Tier (because it's deprecated)
- use bluestore engine
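The bluestore part would presumably be done one OSD at a time. A rough
sketch with ceph-volume on Luminous, where the OSD id and device are
placeholders and you'd wait for HEALTH_OK between OSDs:

  # drain and remove the old filestore OSD
  ceph osd out 7
  # ... wait for rebalancing to finish ...
  systemctl stop ceph-osd@7
  ceph osd purge 7 --yes-i-really-mean-it

  # recreate it as bluestore on the same device
  ceph-volume lvm create --bluestore --data /dev/sdx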
I was told on freenode/#dovecot that there are many cases where SDBOX would
perform better with NFS sharing.
In case of cephfs, at first, I wouldn't think that would be true because
more files == more generated IO, but thinking about what I said in the
beginning regarding sdbox vs mdbo...
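On the config side, the sdbox/mdbox choice is just the mail_location prefix;
a minimal sketch with placeholder paths (mdbox also needs a periodic
doveadm purge to actually reclaim expunged messages):

  # one message per file
  mail_location = sdbox:/mnt/cephfs/mail/%d/%n

  # many messages per m.* storage file; expunges only reclaimed by purge
  mail_location = mdbox:/mnt/cephfs/mail/%d/%n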
2023 Oct 28
1
State of the gluster project
...emergency data recovery (for purely
replicated volumes) even in case of complete failure of the software
stack and system disks - just grab the data disks, mount on a suitable
machine and copy the data off.
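A rough sketch of that recovery path, with the device and paths made up:

  # mount the surviving data disk somewhere convenient
  mount /dev/sdb1 /mnt/brick

  # copy everything except gluster's internal .glusterfs metadata tree
  rsync -a --exclude='.glusterfs' /mnt/brick/ /srv/recovered/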
Does anyone know of a distributed FS with similarly easy emergency recovery?
(I also run Ceph, but Bluestore seems to be pretty much a black box.)
Kind regards,
Alex.
>
> /Z
>
> On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> > Well,
> >
> > After the IBM acquisition, RH discontinued their support for many projects
>...
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
...; - no need for sharding domains
> - dovecot is easily load balanced (with director sticking users to the
> same dovecot backend)
>
> In the upcoming upgrade we intend to:
> - upgrade ceph to 12.X (Luminous)
> - drop the SSD Cache Tier (because it's deprecated)
> - use bluestore engine
>
> I was told on freenode/#dovecot that there are many cases where SDBOX would
> perform better with NFS sharing.
> In case of cephfs, at first, I wouldn't think that would be true because
> more files == more generated IO, but thinking about what I said in the
> beg...
2023 Oct 28
2
State of the gluster project
I don't think it's worth it for anyone. It's been a dead project since about
9.0, if not earlier. It's time to embrace the truth and move on.
/Z
On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> Well,
>
> After the IBM acquisition, RH discontinued their support for many projects
> including GlusterFS (certification exams were removed,