Displaying 7 results from an estimated 7 matches for "dockerstore".
2018 Mar 06
2
SQLite3 on 3 node cluster FS?
...g the volume
>> performance.write-behind flag as you suggested, and simultaneously
>> disabling caching in my client side mount command.
>
>
> Good to know it worked. Can you give us the output of
> # gluster volume info
[root@node-1 /]# gluster volume info
Volume Name: dockerstore
Type: Replicate
Volume ID: fb08b9f4-0784-4534-9ed3-e01ff71a0144
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.18.0.4:/data/glusterfs/store/dockerstore
Brick2: 172.18.0.3:/data/glusterfs/store/dockerstore
Brick3: 172.18.0.2:/data/glusterfs/stor...
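For readers following the thread: the fix it converges on is disabling the volume's write-behind translator and mounting without client-side caching. A minimal sketch against the `dockerstore` volume shown above — the `performance.write-behind` option comes from the thread itself, but the exact client mount options are assumptions to verify against your Gluster version's documentation:

```shell
# Disable write-behind on the replicated volume (option named in the thread).
gluster volume set dockerstore performance.write-behind off

# Mount on the client with caching effectively disabled; these FUSE mount
# options (attribute-timeout, entry-timeout, direct-io-mode) are assumptions
# to check against your glusterfs-client docs. Server address and mount
# point are illustrative.
mount -t glusterfs \
  -o attribute-timeout=0,entry-timeout=0,direct-io-mode=enable \
  172.18.0.4:/dockerstore /mnt/dockerstore
```

These commands require a running Gluster cluster, so treat them as a checklist rather than a copy-paste recipe.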
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
...ind flag as you suggested, and simultaneously
> >> disabling caching in my client side mount command.
> >
> >
> > Good to know it worked. Can you give us the output of
> > # gluster volume info
>
> [root@node-1 /]# gluster volume info
>
> Volume Name: dockerstore
> Type: Replicate
> Volume ID: fb08b9f4-0784-4534-9ed3-e01ff71a0144
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 172.18.0.4:/data/glusterfs/store/dockerstore
> Brick2: 172.18.0.3:/data/glusterfs/store/docker...
2018 Mar 06
1
SQLite3 on 3 node cluster FS?
...taneously
>> >> disabling caching in my client side mount command.
>> >
>> >
>> > Good to know it worked. Can you give us the output of
>> > # gluster volume info
>>
>> [root@node-1 /]# gluster volume info
>>
>> Volume Name: dockerstore
>> Type: Replicate
>> Volume ID: fb08b9f4-0784-4534-9ed3-e01ff71a0144
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.18.0.4:/data/glusterfs/store/dockerstore
>> Brick2: 172...
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba.
On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha@umich.edu> wrote:
> Raghavendra,
>
> Thanks very much for your reply.
>
> I fixed our data corruption problem by disabling the volume
> performance.write-behind flag as you suggested, and simultaneously
> disabling caching in my client side mount command.
>
Good to know it worked. Can you give us the
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra,
Thanks very much for your reply.
I fixed our data corruption problem by disabling the volume
performance.write-behind flag as you suggested, and simultaneously
disabling caching in my client side mount command.
In very modest testing, the flock() case appears to me to work well -
before it would corrupt the db within a few transactions.
Testing using built in sqlite3 locks is
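The flock() result above can be illustrated with a small shell sketch: serializing concurrent writers through `flock(1)` (the same advisory-locking primitive the poster's flock() test exercises) so their writes cannot interleave. The file paths here are hypothetical examples, not from the thread:

```shell
#!/bin/sh
# Sketch: serialize three concurrent writers with an exclusive flock(1)
# on a shared lock file. Paths are hypothetical.
LOCK=/tmp/dockerstore.lock
OUT=/tmp/dockerstore.out
: > "$OUT"

for i in 1 2 3; do
  (
    # Block until we hold the exclusive lock on fd 9, then write.
    # Without the lock, concurrent appends could interleave mid-record.
    flock -x 9
    echo "txn $i" >> "$OUT"
  ) 9>> "$LOCK" &
done
wait
```

After `wait`, the output file holds one intact line per writer, in whatever order the writers won the lock.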
2018 Mar 07
0
gluster debian build repo redirection loop on apt-get update on docker
On 03/06/2018 05:50 PM, Paul Anderson wrote:
> When I follow the directions at
> http://docs.gluster.org/en/latest/Install-Guide/Install/ to install
> the latest gluster on a debian 9 docker container, I get the following
> error:
Files in the .../3.13/3.13.2 directory had the wrong owner/group
(rsync_aide). I'm not sure why; maybe an incomplete rsync? I've fixed
the owners
2018 Mar 06
2
gluster debian build repo redirection loop on apt-get update on docker
When I follow the directions at
http://docs.gluster.org/en/latest/Install-Guide/Install/ to install
the latest gluster on a debian 9 docker container, I get the following
error:
Step 6/15 : RUN echo deb [arch=amd64]
https://download.gluster.org/pub/gluster/glusterfs/3.13/9/Debian/stretch/amd64/apt/
stretch main > /etc/apt/sources.list.d/gluster.list
---> Running in 1ef386afb192
Removing
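For context, the failing Dockerfile step above is the standard Debian repo setup from the install guide. A hedged sketch of the full sequence — the repo URL is copied from the excerpt, and the signing-key path is an assumption to verify against docs.gluster.org:

```shell
# Sketch, assuming the repo layout quoted in the thread.
# The rsa.pub key location is an assumption -- confirm it in the
# current Gluster install guide before use.
wget -O - https://download.gluster.org/pub/gluster/glusterfs/3.13/rsa.pub | apt-key add -

echo "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/3.13/9/Debian/stretch/amd64/apt/ stretch main" \
  > /etc/apt/sources.list.d/gluster.list

apt-get update && apt-get install -y glusterfs-server
```

Per the follow-up reply in this thread, the redirection loop was a server-side ownership problem that has since been fixed, so the same steps should succeed on retry.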