search for: 44tb

Displaying 6 results from an estimated 6 matches for "44tb".

2012 Oct 24
2
Ceph samba size reporting troubles
Dear development team, I want to share a massive storage volume built with Ceph, via Samba, with Windows workstations. Everything works well. My problem, though, is that in Windows the Ceph storage size statistics are wrong. Instead of seeing a 44TB hard drive I see a 176GB hard drive. Under Linux that issue doesn't show; the sizes are properly reported. I investigated and it seems that the problem lies in Windows' inability to handle block sizes over 65k, while by default the block size is 4K. I don't know how you can sol...
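A note on the numbers in this excerpt: they are consistent with a block-size truncation. If the server advertises roughly 1MiB blocks and the Windows client falls back to its default 4K once the advertised size exceeds the 65k limit mentioned above, the computed capacity shrinks by a factor of 256, which is exactly 44TB versus 176GB. A minimal sketch of that arithmetic (the 1MiB server-side block size is an assumption chosen to match the ratio, not something stated in the post):

    # Rough illustration (not from the thread): how a too-large advertised
    # block size shrinks the capacity a client computes.
    TB = 1024 ** 4
    GB = 1024 ** 3

    server_block = 1024 * 1024   # assumed block size advertised by the server
    client_block = 4 * 1024      # Windows default once the advertised size is rejected
    total_blocks = 44 * TB // server_block

    actual = total_blocks * server_block   # what the filesystem really holds
    seen = total_blocks * client_block     # what the client computes

    print(f"actual: {actual / TB:.0f} TB")  # actual: 44 TB
    print(f"seen:   {seen / GB:.0f} GB")    # seen:   176 GB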
2012 Oct 24
0
Ceph samba size reporting troubles in windows
Dear development team, I want to share a massive storage volume built with Ceph, via Samba, with Windows workstations. Everything works well. My problem, though, is that in Windows the Ceph storage size statistics are wrong. Instead of seeing a 44TB hard drive I see a 176GB hard drive. Under Linux that issue doesn't show; the sizes are properly reported. I investigated and it seems that the problem lies in Windows' inability to handle block sizes over 65k, while by default the block size is 4K. I don't know how you can sol...
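One workaround often suggested for this class of problem (an assumption here, it is not confirmed in either thread) is to take the free-space calculation away from the filesystem entirely via Samba's dfree command option in smb.conf, pointing it at a small script that reports sizes in 1K blocks. A hypothetical Python sketch of such a helper:

    #!/usr/bin/env python3
    # Hypothetical helper for Samba's "dfree command": report disk totals in
    # fixed 1 KiB blocks so the client never sees an oversized block size.
    # smbd passes the queried directory (usually "./") as the first argument.
    import os
    import sys

    BLOCK = 1024  # reporting unit, well under the 65k limit mentioned above

    path = sys.argv[1] if len(sys.argv) > 1 else "."
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize // BLOCK
    avail = st.f_bavail * st.f_frsize // BLOCK

    # Expected output: "<total blocks> <available blocks> [block size]"
    print(f"{total} {avail} {BLOCK}")

It would be hooked in with something like dfree command = /path/to/dfree.py in smb.conf; check the smb.conf(5) man page for the exact output format expected by the Samba version in use.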
2023 Oct 27
1
State of the gluster project
...mpromise lucrative GPFS licensing). We also saw more than 30 minutes for an ls on a Gluster directory containing about 50 files when we had many millions of files on the fs (with one disk per brick, which also led to many memory issues). After the last rebuild I created 5-disk RAID5 bricks (about 44TB each) and memory pressure went down drastically, but desyncs still happen even though the nodes are connected via IPoIB links that are really rock-solid (and in the worst case they could fall back to 1Gbps Ethernet connectivity). Diego On 27/10/2023 10:30, Marcus Pedersén wrote: > Hi Diego,...
2011 May 08
4
Building a Back Blaze style POD
Hi all, I am about to embark on a project that deals with information archival over time and seeing change over time as well. I can explain it a lot better, but I would certainly talk your ear off. I really don't have a lot of money to throw at the initial concept, but I have some. This device will host all of the operations for the first few months until I can afford to build a
2023 Oct 27
1
State of the gluster project
...mpromise lucrative GPFS licensing). We also saw more than 30 minutes for an ls on a Gluster directory containing about 50 files when we had many millions of files on the fs (with one disk per brick, which also led to many memory issues). After the last rebuild I created 5-disk RAID5 bricks (about 44TB each) and memory pressure went down drastically, but desyncs still happen even though the nodes are connected via IPoIB links that are really rock-solid (and in the worst case they could fall back to 1Gbps Ethernet connectivity). Diego On 27/10/2023 10:30, Marcus Pedersén wrote: > Hi Diego,...
2023 Oct 27
1
State of the gluster project
Hi Diego, I have had a look at BeeGFS and it seems more similar to Ceph than to Gluster. It requires extra management nodes, similar to Ceph, right? Second of all, there are no snapshots in BeeGFS, as I understand it. I know Ceph has snapshots, so for us this seems a better alternative. What is your experience with Ceph? I am sorry to hear about your problems with Gluster; from my experience we had