Displaying 20 results from an estimated 36 matches for "tyops".
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely
supported in v3.8?
Kind regards,
Mitja
On 25/02/2018 13:55, Jim Kinney wrote:
> gluster volume add-brick volname replica 3 arbiter 1
> brickhost:brickpath/to/new/arbitervol
>
> Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a
> change in command will happen so it won't count the
2018 Jan 12
0
Integration of GPU with glusterfs
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
>On 12/01/2018 3:14 AM, Darrell Budic wrote:
>> It would also add physical resource requirements to future client
>> deploys, requiring more than 1U for the server (most likely), and I'm
>
>> not likely to want to do this if I'm trying to optimize for client
>>
2018 Jan 12
3
Integration of GPU with glusterfs
On 12/01/2018 3:14 AM, Darrell Budic wrote:
> It would also add physical resource requirements to future client
> deploys, requiring more than 1U for the server (most likely), and I'm
> not likely to want to do this if I'm trying to optimize for client
> density, especially with the cost of GPUs today.
Nvidia has banned their GPUs from being used in data centers now too, I
imagine
2006 Jul 02
1
typo working only when webbrick is running?
hi.
I currently have tyop set up on a shared Rails hosting server. My problem
is that the "links" on my blog site are not active. The home page is
fine, but when you try to click on the menu, the buttons take you
nowhere. But when I start WEBrick, the site then becomes fully
functional. Any ideas?
--
Posted via http://www.ruby-forum.com/.
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol
Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica.
On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote:
>Hi!
>
>I am using GlusterFS on CentOS7 with
2018 Feb 25
2
Convert replica 2 to replica 2+1 arbiter
Hi!
I am using GlusterFS on CentOS7 with glusterfs-3.8.15 RPM version.
I currently have a replica 2 running and I would like to get rid of the
split-brain problem before it occurs. This is one of the possible solutions.
Is it possible to add an arbiter to this volume?
I have read in a thread from 2016 that this feature is planned for
version 3.8.
Is the feature available? If so, could you give
2018 Jan 15
2
[Gluster-devel] Integration of GPU with glusterfs
It is disappointing to see the limitation put in place by Nvidia on low-cost GPU usage in data centers.
https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
We thought of providing an option in glusterfs by which we can control if we want to use GPU or not.
So, the concern of gluster eating up GPUs that could be used by others can be addressed.
---
Ashish
----- Original
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link.
On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote:
>What shard corruption bug? bugzilla url? I'm running into some odd
>behavior
>in my lab with shards and RHEV/KVM data, trying to figure out if it's
>related.
>
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
Hi,
It should be there, see https://review.gluster.org/#/c/14502/
BR,
Martin
> On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihelic at arnes.si> wrote:
>
> I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8?
>
> Kind regards,
> Mitja
>
> On 25/02/2018 13:55, Jim Kinney wrote:
2018 May 30
2
shard corruption bug
What shard corruption bug? bugzilla url? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2018 Jan 15
0
[Gluster-devel] Integration of GPU with glusterfs
On Mon, Jan 15, 2018 at 12:06 AM, Ashish Pandey <aspandey at redhat.com> wrote:
>
> It is disappointing to see the limitation put in place by Nvidia on low-cost
> GPU usage in data centers.
> https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
>
> We thought of providing an option in glusterfs by which we can control if
> we want to use GPU or not.
> So, the
2015 Jan 19
3
[PATCH v3 00/16] virtio-pci: towards virtio 1.0 guest support
...g. Huh?
Also note:
[root at fedora ~]# ~kraxel/projects/pciutils/lspci -vvsa
00:0a.0 Communication controller: Red Hat, Inc Virtio console
[ ... ]
Capabilities: [64] VirtIO: Notify
BAR=2 offset=00003000 size=00400000
multiplier=00010000
[ ... ]
Why is the multiplier 64k instead of 4k? Just a tyops?
cheers,
Gerd
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-add-virtio-vendor-caps.patch
Type: text/x-patch
Size: 3354 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/virtualization/attachments/20150119/e8571cc0/attachmen...
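For context on the question above: per the virtio 1.0 spec, a queue's notify (doorbell) address is the notify capability's BAR offset plus the device-supplied queue_notify_off scaled by notify_off_multiplier. A minimal sketch of that arithmetic, using the offset and multiplier from the lspci dump above (the queue_notify_off value of 2 is a made-up example, not taken from the device):

```python
# Sketch: how a virtio 1.0 driver derives a queue's notify address from the
# notify capability shown above. Formula from the virtio 1.0 spec:
#   addr = cap.offset + queue_notify_off * notify_off_multiplier
# queue_notify_off here is a hypothetical example value.

def notify_address(cap_offset: int, multiplier: int, queue_notify_off: int) -> int:
    """Compute the offset within the BAR used to notify a given virtqueue."""
    return cap_offset + queue_notify_off * multiplier

# Values from the capability dump above: offset=0x3000, multiplier=0x10000.
addr = notify_address(0x3000, 0x10000, 2)
print(hex(addr))  # prints 0x23000
```

With a 4 KiB multiplier the same queue would land at 0x5000; the 64 KiB value in the dump gives each queue its own 64 KiB-aligned doorbell region instead.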
2017 Nov 08
9
Adding a slack for communication?
From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13
Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.
- amye
--
Amye Scavarda | amye at redhat.com | Gluster Community Lead
2018 Jul 03
0
[PATCH] gpu: drm: virito: code cleanup
On Mon, Jul 02, 2018 at 11:57:28PM +0530, Souptick Joarder wrote:
> The fault handler code is commented since v4.2.
> If there is no plan to enable the fault handler
> code in future, we can remove this dead code.
Indeed, but please without tyops in the $subject line.
cheers,
Gerd
2007 Jan 31
0
[patch] readnotes fix
Hi,
Fix tyops which break transparent gunzipping ...
please apply,
Gerd
--
Gerd Hoffmann <kraxel@suse.de>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
2002 Feb 28
1
WineHQ.com Moving
NOTICE!! NOTICE!! NOTICE!!
www.winehq.com will be moving to a new home in Minnesota. We are trying
to make this move as seamless as possible, but we can't guarantee that
there will not be any downtime. I will post another message to the
mailing lists when I have a more definite time of shutdown of the old
server.
www.winehq.org WILL be on-line for Web, FTP and CVS access. The area
that could
2018 Apr 22
0
Reconstructing files from shards
So a stock oVirt with Gluster install that uses sharding:
A. Can't safely have sharding turned off once files are in use
B. Can't be expanded with additional bricks
Ouch.
On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
>Il dom 22 apr 2018, 10:46 Alessandro Briosi <ab1 at metalit.com> ha
>scritto:
>
>> Imho
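In principle, reconstructing a sharded file comes down to concatenating the base file with its numbered shard pieces in order. A rough Python sketch under the assumption of a GlusterFS-style layout (the base file holds the first shard, and pieces named <gfid>.1, <gfid>.2, ... in a .shard directory hold the rest); the paths, naming, and the `reconstruct` helper are illustrative, not an official recovery procedure:

```python
# Rough sketch of reassembling a sharded file by concatenation, assuming a
# GlusterFS-style layout: base file first, then <gfid>.1, <gfid>.2, ... in a
# .shard directory. Illustrative only: verify against your brick layout, and
# note that missing intermediate shards (sparse regions) are not handled here.
import os

def reconstruct(base_path: str, shard_dir: str, gfid: str, out_path: str) -> None:
    with open(out_path, "wb") as out:
        with open(base_path, "rb") as f:   # shard 0 is the base file itself
            out.write(f.read())
        i = 1
        while True:                        # append numbered shards in order
            piece = os.path.join(shard_dir, f"{gfid}.{i}")
            if not os.path.exists(piece):
                break
            with open(piece, "rb") as f:
                out.write(f.read())
            i += 1
```

Usage would be `reconstruct("/brick/vm.img", "/brick/.shard", "<gfid>", "/tmp/vm-restored.img")`, run against a single brick's backend files, not the mounted volume.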
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>:
> Is it that really so?
Yes, I've opened a bug asking developers to block removal of sharding
when a volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was a extended attribute on the files created when
> sharding is enabled.
>
> Turning off
2018 May 04
2
shard corruption bug
Hi to all,
is the "famous" corruption bug when sharding is enabled fixed, or still a work
in progress?