Displaying 9 results from an estimated 9 matches for "25tb".
2015 May 06
2
Backup PC or other solution
On Wed, May 6, 2015 2:46 pm, m.roth at 5-cent.us wrote:
> Alessandro Baggi wrote:
>> Hi list,
>> I'm new to backup ops and I'm searching for a good system to accomplish
>> this work. I know that on CentOS there are bacula and amanda, but they
>> are too tape oriented. Another issue is that they are very powerful but
>> more complex. I...
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
...a replicated pool of SSDs, and messages stored
in a replicated pool of HDDs (under a Cache Tier with a pool of SSDs).
All using cephfs / filestore backend.
Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
(10.2.9-4).
- ~25K users from a few thousand domains per cluster
- ~25TB of email data per cluster
- ~70GB of dovecot INDEX [meta]data per cluster
- ~100MB of cephfs metadata per cluster
Our goal is to build a single ceph cluster for storage that could expand in
capacity, be highly available and perform well enough. I know, that's what
everyone wants.
Cephfs is...
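For reference on the sdbox vs mdbox choice discussed here, a minimal dovecot mail_location sketch; the cephfs path, the %d/%n layout and the rotate size below are illustrative assumptions, not settings quoted from the thread:
    # alternative A: sdbox, one message per file (path is an assumed cephfs mount)
    #mail_location = sdbox:/srv/cephfs/mail/%d/%n
    # alternative B: mdbox, several messages per storage file, rotated once a file
    # reaches the configured size (example value, not a recommendation)
    mail_location = mdbox:/srv/cephfs/mail/%d/%n
    mdbox_rotate_size = 10M
sdbox stores one file per message, while mdbox packs many messages into each storage file; fewer, larger files is the usual argument for mdbox on a backend like cephfs, which is the trade-off the thread weighs.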
2015 May 06
0
Backup PC or other solution
...drbd for
disaster recovery. I keep 12+ months of monthly full backups, and 30+
days of daily incrementals. The deduplicated and compressed backups of
all this take all of 4800GB, containing 9.1 million files and 4369
directories. The full backups WOULD have taken 68TB and the
incrementals 25TB without dedup.
I'm very happy with it.
it's a 'pull'-based backup, no agents are required for the clients... it
can use a variety of methods, I mostly use rsync-over-ssh; all you need
to configure is an ssh key so the backup server's backuppc user can
connect to the target via ss...
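The ssh key mentioned above is the only per-client setup; a minimal sketch, assuming a stock backuppc service user on the server and root logins on the clients (the hostname is a placeholder):
    # on the BackupPC server: generate a key for the backuppc user (no passphrase)
    sudo -u backuppc ssh-keygen -t ed25519 -N '' -f ~backuppc/.ssh/id_ed25519
    # install the public key on each client so the server can pull over rsync/ssh
    sudo -u backuppc ssh-copy-id -i ~backuppc/.ssh/id_ed25519.pub root@client.example.com
    # verify a non-interactive login works before adding the host to BackupPC
    sudo -u backuppc ssh root@client.example.com true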
2016 Feb 19
0
mac os x clients "error code -43"
...*.chifrator*/*.cpderksu*/*.cry/*.crypto/*.darkness/*.dyatel*/*.enc*/*.gruzin*/*.hb15/*.help*/*.kraken/*.locked/*.nalog*/*.nochance/*.oplata*/*.oshit/*.pizda*/*.relock*/*.troyancoder*/*.encrypted/*.ccc/*.xyz/*.aaa/*.abc/*.eee/*.exx/.zzz/*.mkv/*.asp/*.cgi/*.vv/*.vvv/*.MICRO/
[ELELE]
path = /media/25tb/opi/dergimac/Elele
browseable = yes
writeable = yes
valid users = @__renkayrim, @__foto_arsiv, @_mac_elele, @"Domain Admins"
write list = @_mac_elele
-------------------------------------------
For timely, secure, error-free and virus-free communication over the Internet...
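The long veto line in that share uses Samba's '/'-separated pattern list; a trimmed sketch showing the same mechanism (the pattern selection and the delete veto files line are illustrative, not taken from the original message):
    [ELELE]
        path = /media/25tb/opi/dergimac/Elele
        # each pattern is wrapped in '/' separators
        veto files = /*.locked/*.encrypted/*.crypto/
        # keep vetoed files when their containing directory is deleted
        delete veto files = no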
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
...of SSDs).
> >>> All using cephfs / filestore backend.
> >>>
> >>> Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
> >>> (10.2.9-4).
> >>> - ~25K users from a few thousand domains per cluster
> >>> - ~25TB of email data per cluster
> >>> - ~70GB of dovecot INDEX [meta]data per cluster
> >>> - ~100MB of cephfs metadata per cluster
> >>>
> >>> Our goal is to build a single ceph cluster for storage that could expand in
> >>>...
2015 May 07
2
Backup PC or other solution
...er recovery. I keep 12+ months of monthly full backups, and 30+
> days of daily incrementals. The deduplicated and compressed backups of
> all this take all of 4800GB, containing 9.1 million files and 4369
> directories. The full backups WOULD have taken 68TB and the
> incrementals 25TB without dedup.
>
> I'm very happy with it.
>
> it's a 'pull'-based backup, no agents are required for the clients... it
> can use a variety of methods, I mostly use rsync-over-ssh; all you need
> to configure is an ssh key so the backup server's backuppc user can
>...
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
...ages stored
> in a replicated pool of HDDs (under a Cache Tier with a pool of SSDs).
> All using cephfs / filestore backend.
>
> Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
> (10.2.9-4).
> - ~25K users from a few thousand domains per cluster
> - ~25TB of email data per cluster
> - ~70GB of dovecot INDEX [meta]data per cluster
> - ~100MB of cephfs metadata per cluster
>
> Our goal is to build a single ceph cluster for storage that could expand in
> capacity, be highly available and perform well enough. I know, that's what
>...
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
...ted pool of HDDs (under a Cache Tier with a pool of SSDs).
> > All using cephfs / filestore backend.
> >
> > Currently there are 3 clusters running dovecot 2.2.34 and ceph Jewel
> > (10.2.9-4).
> > - ~25K users from a few thousand domains per cluster
> > - ~25TB of email data per cluster
> > - ~70GB of dovecot INDEX [meta]data per cluster
> > - ~100MB of cephfs metadata per cluster
> >
> > Our goal is to build a single ceph cluster for storage that could expand in
> > capacity, be highly available and perform well enoug...
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello,
Thanks to "partner" on IRC who told me about this (quite big) problem.
Apparently in a distributed setup once a brick fills up you start
getting write failures. Is there a way to work around this?
I would have thought gluster would check for free space before writing
to a brick.
It's very easy to test: I created a distributed volume from 2 uneven
bricks and started to...
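A minimal way to reproduce the setup described, plus the usual mitigation; the volume and brick names are placeholders and the 10% reserve is only an example value:
    # two uneven bricks with no replica count gives a plain distributed (DHT) volume
    gluster volume create distvol server1:/bricks/small server2:/bricks/large
    gluster volume start distvol
    # stop placing new files on bricks with less than this much free space
    gluster volume set distvol cluster.min-free-disk 10%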