Jim Kinney
2018-Apr-13 12:02 UTC
[Gluster-users] Is the size of bricks limiting the size of files I can store?
On April 12, 2018 3:48:32 PM EDT, Andreas Davour <ante at Update.UU.SE> wrote:
>On Mon, 2 Apr 2018, Jim Kinney wrote:
>
>> On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
>>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
>>>> On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I've found something that works so weirdly I'm certain I have
>>>>> missed how gluster is supposed to be used, but I can not figure
>>>>> out how. This is my scenario.
>>>>>
>>>>> I have a volume, created from 16 nodes, each with a brick of the
>>>>> same size. The total of that volume is thus in the terabyte scale.
>>>>> It's a distributed volume with a replica count of 2.
>>>>>
>>>>> The filesystem, when mounted on the clients, is not even close to
>>>>> getting full, as displayed by 'df'.
>>>>>
>>>>> But when one of my users tries to copy a file from another network
>>>>> storage to the gluster volume, he gets a 'filesystem full' error.
>>>>> What happened? I looked at the bricks and figured out that one big
>>>>> file had ended up on a brick that was half full or so, and the big
>>>>> file did not fit in the space that was left on that brick.
>>>>
>>>> Hi,
>>>>
>>>> This is working as expected. As files are not split up (unless you
>>>> are using shards), the size of a file is restricted by the size of
>>>> the individual bricks.
>>>
>>> Thanks a lot for that definitive answer. Is there a way to manage
>>> this? Can you shard just those files, making them replicated in the
>>> process?
>>
>> I manage this by using a thin pool and thin LVM, and I add new drives
>> to the LVM across all gluster nodes to expand the user space.
>> My thinking on this is a RAID 10, with the RAID 0 in the LVM and the
>> RAID 1 handled by gluster replica 2+ :-)
>
>I'm not sure I see how that solves the problem, but as you have thought
>it through, I think you are trying to say something I should
>understand.
>
>/andreas

By adding space to a logical volume, effectively below the control of gluster, the entire space is available to users. Gluster manages replication across hosts, and LVM provides absolute space allocation on each host.

So I have 3 hosts, replica 3, and 12 bricks on each host, 1 brick for each mount point the clients see. Some bricks are a single drive, while others are 2 drives, and 1 is 5 drives. That same LVM setup is replicated on all 3 hosts.

Now a client wants more storage. They buy 3 new drives, 1 for each host. Each host gets the LVM command queued up to add the new drive to the volume for that client. Then, in parallel, all 3 hosts expand the volume along with a filesystem resize. In about 2 seconds gluster picks up the change in size.

Since this size change is at the host filesystem level, a file larger than the remaining space on the original drive can be written, as LVM will simply span the physical volumes. Gluster would choke and not span bricks.

>
>--
>"economics is a pseudoscience; the astrology of our time"
>Kim Stanley Robinson

--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
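For anyone following up on the sharding suggestion upthread: gluster's shard feature splits large files into fixed-size pieces spread across bricks, so a single file no longer has to fit on one brick. A rough sketch (the volume name `myvol` and the shard size are assumptions, and note that sharding only applies to files written after it is enabled; existing files stay whole on their brick):

```shell
# Enable the shard translator on an existing volume (assumed name: myvol).
gluster volume set myvol features.shard on

# Optionally set the shard size (64MB is the default).
gluster volume set myvol features.shard-block-size 64MB

# Confirm the option is active.
gluster volume get myvol features.shard
```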
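The per-host expansion described above can be sketched with stock LVM commands. This is only an illustration, not Jim's exact setup: the device name `/dev/sdX`, the volume group `gluster_vg`, the logical volume `client_lv`, and the brick mount point are all assumed names, and the same sequence would be run on every replica host:

```shell
# Initialize the newly added drive as an LVM physical volume.
pvcreate /dev/sdX

# Add it to the volume group backing this client's brick.
vgextend gluster_vg /dev/sdX

# Grow the logical volume by the whole new drive; -r resizes the
# filesystem in the same step by calling the appropriate resize tool.
lvextend -r -l +100%FREE gluster_vg/client_lv

# Gluster picks up the larger brick filesystem within seconds; verify:
df -h /bricks/client
```

Because the growth happens in LVM underneath the brick filesystem, LVM simply spans the physical volumes, which is why a file larger than the original drive's free space can still be written.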