Displaying 15 results from an estimated 15 matches for "palantir".
2005 Aug 23
1
Problem with AUTH causes serverside lockup
...I am running Dovecot alpha1, and after about 24 hours of the server
working fine, it starts to lock up when I open Thunderbird to check mail.
Basically I can see new mail, but when I click it, it just hangs at
"Loading message...".
I get the following in my maillog:
Aug 23 09:25:28 palantir dovecot: auth(default): client in: AUTH
1 PLAIN service=IMAP lip=194.192.14.150 rip=80.197.147.147
Aug 23 09:25:28 palantir dovecot: auth(default): client out: CONT 1
Aug 23 09:25:28 palantir dovecot: auth(default): client in: AUTH
1 PLAIN service=IMAP lip=194.192...
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...I'm at a loss. Is this a known type of problem? If so, how do I fix
it? If not, what's the next step to troubleshoot it?
# gluster --version
glusterfs 3.8.8 built on Jan 11 2017 14:07:11
Repository revision: git://git.gluster.com/glusterfs.git
# gluster volume status
Status of volume: palantir
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick saruman:/var/local/brick0/data        49154     0          Y       10690
Brick gandalf:/var/local/brick0/data        49155     0          Y       1873...
2006 Nov 02
3
v1.0 plans, rc11 tomorrow
As you can probably guess from today's burst of activity, I'm no
longer extremely busy. Actually it looks like for the next 3-4 weeks I
don't have anything especially time consuming to do. So it's time to get
Dovecot v1.0 released :)
I've now read all the mails from this list again, and it looks like
pretty much the only problem with rc10 was the mbox assert crash, which
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...roblem? If so, how do I fix
> it? If not, what's the next step to troubleshoot it?
>
>
> # gluster --version
> glusterfs 3.8.8 built on Jan 11 2017 14:07:11
> Repository revision: git://git.gluster.com/glusterfs.git
>
> # gluster volume status
> Status of volume: palantir
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick saruman:/var/local/brick0/data        49154     0          Y       10690
> Brick gandalf:/var/local/brick0/data...
2018 Feb 27
2
Quorum in distributed-replicate volume
...e separate
> > nodes?
> >
> No it doesn't matter as long as the bricks of same replica subvol are not
> on the same nodes.
OK, great. So basically just install the gluster server on the new
node(s), do a peer probe to add them to the cluster, and then
gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
Or is there more to it than that?
--
Dave Sherohman
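The "install the server, peer probe, then create" steps described above can be sketched as follows. This is a minimal sketch, not from the thread itself; the node names (arb1, arb2, arb3) are hypothetical placeholders for the new arbiter hosts:

```shell
# Hypothetical new nodes being added to the trusted storage pool
# before any bricks on them can be used in a volume.
gluster peer probe arb1
gluster peer probe arb2
gluster peer probe arb3
# Confirm all probed nodes show State: Peer in Cluster (Connected)
gluster peer status
```

As the thread notes, the only placement constraint is that bricks of the same replica subvolume must not land on the same node.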
2005 May 10
13
What do you name yours
Hello list
we are installing 2 new servers (to run asterisk) shortly, for a
"stand alone" service. Ignoring our current naming convention, we'd
like to name them something.. but we are not sure what.
a consideration is that on the screens of the phones it shows
extension@hostname (eg 3001@telephony) (all extensions are numeric) so
the users will see it everyday
i'm not
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...If not, what's the next step to troubleshoot it?
> >
> >
> > # gluster --version
> > glusterfs 3.8.8 built on Jan 11 2017 14:07:11
> > Repository revision: git://git.gluster.com/glusterfs.git
> >
> > # gluster volume status
> > Status of volume: palantir
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick saruman:/var/local/brick0/data        49154     0          Y       10690
> > Brick...
2018 Feb 27
2
Quorum in distributed-replicate volume
...ume configuration it is difficult to suggest the
> configuration change,
> and since it is a live system you may end up in data unavailability or data
> loss.
> Can you give the output of "gluster volume info <volname>"
> and which brick is of what size.
Volume Name: palantir
Type: Distributed-Replicate
Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sot...
2018 Feb 27
0
Quorum in distributed-replicate volume
...> > No it doesn't matter as long as the bricks of same replica subvol are
> not
> > on the same nodes.
>
> OK, great. So basically just install the gluster server on the new
> node(s), do a peer probe to add them to the cluster, and then
>
> gluster volume create palantir replica 3 arbiter 1 [saruman brick]
> [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter
> 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
>
gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter
2> <arbiter 3>
is the co...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
> - Your data on
2018 Feb 27
0
Quorum in distributed-replicate volume
...ggest the
> > configuration change,
> > and since it is a live system you may end up in data unavailability or
> data
> > loss.
> > Can you give the output of "gluster volume info <volname>"
> > and which brick is of what size.
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
1999 Nov 12
1
[RHSA-1999:054-01] Security problems in bind (fwd)
Woops... this didn't show up here but it did on BugTraq. Questions answered!
--
Chuck Mead, CTO, MoonGroup Consulting, Inc. <http://moongroup.com>
Mail problems? Send "s-u-b-s-c-r-i-b-e mailhelp" (no quotes and no
hyphens) in the body of a message to mailhelp-request@moongroup.com.
Public key available at: wwwkeys.us.pgp.net
----------
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
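The client-quorum behavior quoted above is controlled by a volume option; a minimal sketch of checking and setting it (assuming the volume name from earlier in the thread):

```shell
# Inspect the current client-side quorum setting for the volume.
gluster volume get palantir cluster.quorum-type
# "auto" requires a majority of bricks in each replica set to be up;
# in a replica-2 set this effectively means the first brick must be up.
gluster volume set palantir cluster.quorum-type auto
```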