Displaying 11 results from an estimated 11 matches for "storage3".
2012 Dec 03
1
"gluster peer status" messed up
I have three machines, all Ubuntu 12.04 running gluster 3.3.1.
storage1 192.168.6.70 on 10G, 192.168.5.70 on 1G
storage2 192.168.6.71 on 10G, 192.168.5.71 on 1G
storage3 192.168.6.72 on 10G, 192.168.5.72 on 1G
Each machine has two NICs, but on each host, /etc/hosts lists the 10G
interface on all machines.
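For reference, that /etc/hosts would look roughly like this on every node
(a sketch assembled from the addresses above; only the 10G interfaces are
listed, as described):
# /etc/hosts - peers resolve over the 10G network
192.168.6.70    storage1
192.168.6.71    storage2
192.168.6.72    storage3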
storage1 and storage3 were taken away for hardware changes, which included
swapping the boot disks. They had the O/S reinstalled.
Somehow I have gotten into a...
2009 Oct 01
1
3-layer structure and the bonnie rewrite problem
...Config1 (2-layer):
***********
[Client-Node]
      ^
      |
  GlusterFS
      |
      v
[Storage1 <-replicate-> Storage2]
***********
Config2 (3-layer):
***********
[Client-Node]
      ^
      |
  GlusterFS
      |
      v
[Re-Exporter/Proxy-Node]
      ^
      |
  GlusterFS
      |
      v
[Storage1 <-replicate-> Storage2, Storage3]
***********
The "Config2" is the targeted structure. I do require the exporter layer
as a proxy to keep the vol-files on the clients substituable and simple
and by this - most important ;) - to reduce my administrative effort.
Imagine the following:
* Having 3 webserver-client-nodes, o...
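(A rough sketch of what the Re-Exporter/Proxy-Node's vol-file could look
like, in the legacy glusterfs vol-file syntax of that era; hostnames,
subvolume names and the server re-export are assumptions for illustration,
not taken from the post:)
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host storage1
  option remote-subvolume brick
end-volume
volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host storage2
  option remote-subvolume brick
end-volume
# mirror the two storage nodes
volume mirror
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
# re-export the mirror, so client vol-files only ever name the proxy
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.mirror.allow *
  subvolumes mirror
end-volume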
2012 May 04
1
'Transport endpoint not connected'
...annot access /gluster/scratch: Transport endpoint is not connected
$ ls /gluster/scratch3
dbbuild DBS
$ sudo umount /gluster/scratch
$ sudo mount /gluster/scratch
$ ls /gluster/scratch
dbbuild
$
Note that /gluster/scratch is a distributed volume (spread across servers
'storage2' and 'storage3'), whereas /gluster/scratch3 is a single brick
(server 'storage3' only).
So *some* of the mounts do seem to reconnect automatically - not all are
affected.
But in the future, I think it would be good if the FUSE client could
automatically attempt to reconnect under whatever circumstance c...
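(Until then, a crude workaround is a periodic remount check; a
hypothetical sketch, assuming /gluster/scratch has an /etc/fstab entry:)
#!/bin/sh
# remount a gluster mount whose transport endpoint has disconnected
if ! ls /gluster/scratch >/dev/null 2>&1; then
    umount -l /gluster/scratch   # lazy-unmount the dead mount
    mount /gluster/scratch       # remount from fstab
fi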
2016 Oct 12
5
Backup Suggestion on C7
...each host pools on different logical volumes,
like:
host1 -> lv1
host2 -> lv2
host3 -> lv3
and store pools/volumes on a specified storage daemon that uses a
specified device for each different host.
host1 -> storage1 -> device_lv1
host2 -> storage2 -> device_lv2
host3 -> storage3 -> device_lv3
Unfortunately, in bacula-sd.conf I can't define multiple storage
definitions, only multiple devices. To use a different storage per host I
would have to run 3 bacula-sd daemons on the same host (can I?), or run a
bacula-sd per VM/host.
Ah, I must use only one physical server.
With one single machine an...
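(For illustration, a hedged bacula-sd.conf sketch with one file Device per
logical volume on a single storage daemon; resource names, media types and
paths are assumptions:)
Device {
  Name = device_lv1
  Media Type = File-lv1
  Archive Device = /mnt/lv1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
Device {
  Name = device_lv2
  Media Type = File-lv2
  Archive Device = /mnt/lv2
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
# device_lv3 analogous; the Director then steers each host's jobs to its
# own Device via a separate Storage resource with the matching Media Type.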
2013 Dec 03
0
Problem booting guest with more than 8 disks
...'/>
| </source>
| <target dev='vdd' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage3'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vde' bus='virti...
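(If the 8-disk ceiling comes from running out of slots on the guest's
default PCI bus - an assumption, the post doesn't say - pinning each disk
to an explicit slot is one workaround; a hypothetical fragment in the same
style, with an invented disk name and target:)
| <disk type='network' device='disk'>
|   <driver name='qemu'/>
|   <source protocol='rbd' name='libvirt-pool/arch_test2-storage4'>
|     <host name='192.168.0.35' port='6789'/>
|   </source>
|   <target dev='vdf' bus='virtio'/>
|   <!-- hypothetical: pin the device to a free PCI slot explicitly -->
|   <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
| </disk>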
2016 Oct 12
0
[SOLVED] Re: Backup Suggestion on C7
...:
>
> host1 -> lv1
> host2 -> lv2
> host3 -> lv3
>
> and store pools/volumes on a specified storage daemon that uses a
> specified device for each different host.
>
> host1 -> storage1 -> device_lv1
> host2 -> storage2 -> device_lv2
> host3 -> storage3 -> device_lv3
>
>
> Unfortunately, in bacula-sd.conf I can't define multiple storage
> definitions, only multiple devices. To use a different storage per host I
> would have to run 3 bacula-sd daemons on the same host (can I?), or run a
> bacula-sd per VM/host.
> Ah, I must use only one physical server.
>...
2016 Oct 13
0
Backup Suggestion on C7
...>
> host1 -> lv1
> host2 -> lv2
> host3 -> lv3
>
> and store pools/volumes on a specified storage daemon that uses a
> specified device for each different host.
>
> host1 -> storage1 -> device_lv1
> host2 -> storage2 -> device_lv2
> host3 -> storage3 -> device_lv3
>
>
> Unfortunately, in bacula-sd.conf I can't define multiple storage
> definitions, only multiple devices. To use a different storage per host I
> would have to run 3 bacula-sd daemons on the same host (can I?), or run a
> bacula-sd per VM/host.
> Ah, I must use only one physical server....
2016 Oct 12
1
[SOLVED] Re: Backup Suggestion on C7
...host2 -> lv2
>> host3 -> lv3
>>
>> and store pools/volumes on a specified storage daemon that uses a
>> specified device for each different host.
>>
>> host1 -> storage1 -> device_lv1
>> host2 -> storage2 -> device_lv2
>> host3 -> storage3 -> device_lv3
>>
>>
>> Unfortunately, in bacula-sd.conf I can't define multiple storage
>> definitions, only multiple devices. To use a different storage per host I
>> would have to run 3 bacula-sd daemons on the same host (can I?), or run a
>> bacula-sd per VM/host.
>> Ah, I must use only...
2010 Oct 13
0
Samba3 3.5 + OpenLDAP very slow transfer
...eable = yes
browseable = No
create mode = 0600
directory mode = 0700
[backup1]
comment = Private Backup 1
path = /share
read only = No
create mask = 0777
directory mode = 0777
force create mode = 0777
valid users = denes
invalid users = bikeclub
oplocks = false
level2 oplocks = false
[storage3]
comment = Public Storage 3
path = /share5
read only = No
create mask = 0777
directory mode = 0777
force create mode = 0777
invalid users = bikeclub
oplocks = false
level2 oplocks = false
[storage2]
comment = Public Storage 2
path = /share2
read only = No
create mask = 0777
directory mask = 0777
fo...
2010 Oct 13
0
Samba 3 + OpenLDAP very slow transfer speed (when multiple small files, probably LDAP problem)
...eable = yes
browseable = No
create mode = 0600
directory mode = 0700
[backup1]
comment = Private Backup 1
path = /share
read only = No
create mask = 0777
directory mode = 0777
force create mode = 0777
valid users = denes
invalid users = bikeclub
oplocks = false
level2 oplocks = false
[storage3]
comment = Public Storage 3
path = /share5
read only = No
create mask = 0777
directory mode = 0777
force create mode = 0777
invalid users = bikeclub
oplocks = false
level2 oplocks = false
[storage2]
comment = Public Storage 2
path = /share2
read only = No
create mask = 0777
directory mask = 0777
fo...
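(Since the subject points at LDAP lookups as the likely bottleneck, one
commonly tried smb.conf knob is sketched below; ldapsam:trusted assumes
every user and group Samba needs resolves from LDAP with full POSIX
attributes:)
[global]
	# skip expensive local-account fallbacks on each session setup
	ldapsam:trusted = yes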
2012 Nov 14
2
Avoid Split-brain and other stuff
Hi!
I just gave GlusterFS a try and experienced two problems. First some background:
- I want to set up a file server with synchronous replication between branch offices, similar to Windows DFS-Replication. The goal is _not_ high-availability or cluster-scaleout, but just having all files locally available at each branch office.
- To test GlusterFS, I installed two virtual machines
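(A hedged sketch of the 2-node synchronous replica being tested; volume
name, hostnames and brick paths are assumptions:)
# on the first VM, once both VMs can resolve each other
gluster peer probe office2
gluster volume create branchvol replica 2 \
    office1:/export/brick office2:/export/brick
gluster volume start branchvol
# each office then mounts the volume locally
mount -t glusterfs localhost:/branchvol /mnt/branch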