Displaying 11 results from an estimated 11 matches for "ewen".
2006 May 30
1
sib TDT transmission/disequilibrium test
Does anyone know if the sib TDT has been implemented in R?
1. Spielman, R.S., and Ewens, W.J. (1998) A sibship test for linkage in the
presence of association: the sib transmission/disequilibrium test. Am J Hum
Genet 62, 450-458
--
Farrel Buchinsky, MD
Pediatric Otolaryngologist
Allegheny General Hospital
Pittsburgh, PA
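For reference, the statistic in that paper can be sketched in a few lines (a rough paraphrase of the published test, not a substitute for the paper):

  Y    = copies of the allele of interest carried by the affected sibs,
         summed over all informative sibships
  E(Y) = sum over sibships of a_i * t_i / s_i, where sibship i has a_i
         affected sibs out of s_i sibs carrying t_i copies in total
  z    = (Y - E(Y)) / sqrt(Var(Y)), referred to N(0,1); the paper gives
         the exact variance term and a continuity correction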
2006 Oct 05
13
Unbootable system recovery
I recently (physically) moved a system with 16 hard drives (for the array) and one OS drive; to make it light enough to lift, I had to pull the 16 drives out first.
When I plugged the drives back in, the system initially went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache.
When I try to import the pool using the zpool
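A minimal recovery sketch for this situation, assuming the pool is named "tank" (the thread does not give the pool name):

  zpool import                          # scan devices and list importable pools
  zpool import -d /dev/disk/by-id tank  # point the scan at a specific device directory
  zpool import -f tank                  # force the import if the pool looks "in use"

With zpool.cache deleted, the import has to rediscover the member disks, so scanning the right device directory matters when drives were physically reshuffled.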
2025 Apr 17
1
Gluster with ZFS
...zardry ran, does indeed create a write amplification effect as a result of the copy-on-write architecture)
Conversely, if you're looking for reliability, the more nodes you have in the Ceph cluster, the more reliable and resilient to failures the Ceph backend will be.
Thanks.
Sincerely,
Ewen
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of gagan tiwari <gagan.tiwari at mathisys-india.com>
Sent: April 17, 2025 2:14 AM
To: Alexander Schreiber <als at thangorodrim.ch>
Cc: gluster-users at gluster.org <gluster-use...
2025 Apr 17
1
Gluster with ZFS
...h the RAM state. And since the VM disk was already sitting on the shared, distributed Ceph storage, I wasn't moving the disk over 100 Gbps IB, just the RAM state.
So depending on how you've set it up, throwing more hardware at Ceph might not improve performance much.
Thanks.
Sincerely,
Ewen
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Alexander Schreiber <als at thangorodrim.ch>
Sent: April 17, 2025 7:54 AM
To: gagan tiwari <gagan.tiwari at mathisys-india.com>
Cc: gluster-users at gluster.org <gluster-use...
2002 Feb 22
1
problems with connections.tdb
Hi, I'm wondering if anyone can diagnose the following errors from my
log.smbd:
------------
smbd version 2.2.3 started.
Copyright Andrew Tridgell and the Samba Team 1992-2002
[2002/02/21 12:04:19, 0] tdb/tdbutil.c:tdb_log(475)
tdb(/usr/local/samba/var/locks/connections.tdb): tdb_oob len 1111638618 beyond eof at 8192
[2002/02/21 12:04:19, 0] smbd/connection.c:claim_connection(188)
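The usual fix here, sketched under the assumption that the tdb is simply corrupt: connections.tdb holds only transient connection state and is rebuilt when smbd starts, so it can safely be moved aside while Samba is stopped.

  /etc/init.d/samba stop    # init script name varies by system
  mv /usr/local/samba/var/locks/connections.tdb \
     /usr/local/samba/var/locks/connections.tdb.bad
  /etc/init.d/samba start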
2025 Apr 17
1
Gluster with ZFS
On Thu, Apr 17, 2025 at 02:44:28PM +0530, gagan tiwari wrote:
> Hi Alexander,
> Thanks for the update. Initially, I also
> thought of deploying Ceph, but Ceph is quite difficult to set up and manage.
> Moreover, it's also hardware demanding. I think it's most suitable for a
> very large set-up with hundreds of clients.
I strongly disagree. I
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
...oxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system.
Maybe that would work better with Proxmox?
Hope this helps.
Sorry that I wasn't able to assist with the specific problem that you are facing.
Thanks.
Sincerely,
Ewen
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of Christian Schoepplein <christian.schoepplein at linova.de>
Sent: June 1, 2023 11:42 AM
To: gluster-users at gluster.org <gluster-users at gluster.org>
Subject: [Gluster-users]...
2001 Nov 28
46
Resource temporarily unavailable
I'm trying to run this program called Thinkboxx, and when it tries to
communicate over the comm port it hangs.
Here is some output to read.
Earlier on I get:
Call kernel32.VirtualAlloc(43050000,00001000,00001000,00000004) ret=0058557c
Ret kernel32.VirtualAlloc() retval=43050000 ret=0058557c
That's from where the com port gets a virtual memory space, I think.
And in the end I have:
Call
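A hedged debugging suggestion (the flag syntax is version-dependent, and the executable name below is a guess): enabling the comm and relay debug channels usually shows where the serial I/O stalls.

  wine --debugmsg +comm,+relay thinkboxx.exe 2> trace.log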
2025 Apr 17
4
Gluster with ZFS
Hi Alexander,
Thanks for the update. Initially, I also
thought of deploying Ceph, but Ceph is quite difficult to set up and manage.
Moreover, it's also hardware demanding. I think it's most suitable for a
very large set-up with hundreds of clients.
What do you think of MooseFS? Have you or anyone else tried MooseFS? If
yes, how was its performance?
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, I think
after heavy I/O in a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly, everything is OK. I've
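A hedged suggestion, not something confirmed in this thread: for VM image workloads Gluster ships a "virt" option group with its recommended settings for qcow2-style images. The volume name "gv0" below is a placeholder, and newer releases may include sharding in the group, so check the release notes before applying it to a volume that already holds data.

  gluster volume set gv0 group virt
  gluster volume info gv0    # shows which options the group changed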
2003 Dec 01
0
No subject
...sed through bad code??
[2001/05/25 11:32:12, 0] lib/util_sock.c:write_socket_data(540)
write_socket_data: write failure. Error = Broken pipe
[2001/05/25 11:38:12, 0] lib/util_sock.c:write_socket_data(540)
write_socket_data: write failure. Error = Broken pipe
Thanks a lot
jak
Return-Path: <BSEwen@wcom.net>
Delivered-To: samba@lists.samba.org
Received: from pmesmtp02.wcom.com (pmesmtp02.wcom.com [199.249.20.2]) by
lists.samba.org (Postfix) with ESMTP id 1CEBC4547 for
<samba@lists.samba.org>; Tue, 29 May 2001 13:20:26 -0700 (PDT)
Received: from dgismtp01.wcomnet.com ([166.38.58.1...