search for: osds

Displaying 18 results from an estimated 134 matches for "osds".

2007 Nov 29
3
lustre osd implementation
hello, does lustre support the OSD T10 standard? Can it be used with IBM's/Intel's OSD initiator and target? Thanks, Ashish
2011 Feb 11
0
[PATCH 3/3]:Staging: hv: Remove osd layer
The OSD layer was a wrapper around native interfaces adding little value and was in fact buggy - refer to the osd_wait.patch for details. This patch gets rid of the OSD abstraction.
Signed-off-by: K. Y. Srinivasan <kys at microsoft.com>
Signed-off-by: Hank Janssen <hjanssen at microsoft.com>
---
 drivers/staging/hv/Makefile | 2 +-
 drivers/staging/hv/blkvsc.c | 2 +-
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: We have a bunch of homogeneous nodes that have CPU + RAM + Storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run a Ceph OSD in a domU on each to "export" the local storage space to the entire cluster, and then use RBD to store / access VM images from any of the machines.
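A minimal sketch of the RBD side of such a setup, assuming the python-rados/python-rbd bindings are installed and a pool named "rbd" exists; the config path and image name are hypothetical:

import rados
import rbd

# Connect using the cluster's ceph.conf (path is an assumption).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")  # hypothetical pool name

# Create a 10 GiB image to back one guest's disk.
rbd.RBD().create(ioctx, "xen-vm01-disk0", 10 * 1024**3)

ioctx.close()
cluster.shutdown()

Each dom0 (or another domU) could then map the image, for example with the rbd kernel client, and hand it to the guest as a block device.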
2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-Wait on the OSDs (ceph); the OSDs are running a v3.1-rc9 kernel. I also experience high IO rates, around 500 IO/s reported via iostat.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 6.80 0.00 62.40 18.35 0.04 5.29 0.00 5.29 5.29 3.60
sdb
2007 Nov 30
1
lustre-1.8 OSD
lustre-1.8 has OSD structures in place; what do I need to add to make it work with the OSD T10 standard? Could anybody point me to some docs covering lustre internals - OSTs, OSSs, OBDs - and the control flow when a read/write call is invoked by a client? Thanks.
2012 Apr 20
44
Ceph on btrfs 3.4rc
...x10
[91128.965133] [<ffffffff81070360>] ? kthread_freezable_should_stop+0x70/0x70
[91128.972913] [<ffffffff8158c220>] ? gs_change+0x13/0x13
[91128.978826] ---[ end trace b8c31966cca731fb ]---
I'm able to reproduce this with ceph on a single server with 4 disks (4 filesystems/osds) and a small test program based on librbd. It is simply writing random bytes on an rbd volume (see attachment). Is this something I should care about? Any hints on solving this would be appreciated. Thanks, Christian
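The attachment itself is not shown here; below is a hypothetical reconstruction of such a random-write exerciser, using the python-rbd bindings rather than the C librbd API - pool and image names are made up:

import os
import random

import rados
import rbd

# Open the cluster and a pre-created test image (names are hypothetical).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")

with rbd.Image(ioctx, "testimg") as image:
    size = image.size()
    # Write 4 KiB of random bytes at random offsets, as described above.
    for _ in range(10000):
        offset = random.randrange(0, size - 4096)
        image.write(os.urandom(4096), offset)

ioctx.close()
cluster.shutdown()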
2007 Aug 30
2
OSD Mystery
I recently started using compiz-fusion. After spending months looking at an anemic little rectangular on-screen volume control when I use the volume buttons on my ubuntu thinkpad T60p, all of a sudden I noticed that I was getting a nice, big, robust rounded-corner display (somewhat mac-like). Now it's gone again. I really have no clue whether this came from compiz-fusion, emerald, a plugin,
2010 Jul 14
1
[PATCH] Documentation: Cleanup virt-v2v.conf references and mention --network
virt-v2v.conf is no longer required on the command line in most circumstances. This change removes it from the command line in all examples. This change also references the optional --network parameter in all examples, as it will typically be required.
---
 v2v/virt-v2v.pl | 56 +++++++++++++++++++++++++++++++++---------------------
 1 files changed, 34 insertions(+), 22 deletions(-)
diff
2014 Nov 07
4
[Bug 10925] New: non-atomic xattr replacement in btrfs => rsync --read-batch random errors
https://bugzilla.samba.org/show_bug.cgi?id=10925
Bug ID: 10925
Summary: non-atomic xattr replacement in btrfs => rsync --read-batch random errors
Product: rsync
Version: 3.1.0
Hardware: All
URL: http://article.gmane.org/gmane.comp.file-systems.btrfs/40013
OS: All
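A minimal user-space sketch of the race this bug describes, assuming Linux with Python 3's os.setxattr/os.getxattr and a test file on the affected filesystem; on a kernel where the xattr replace is non-atomic (delete-then-set), the reader can transiently hit ENODATA:

import os
import threading

PATH = "testfile"  # assumed to live on the affected btrfs mount
os.close(os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644))
os.setxattr(PATH, "user.demo", b"initial")

stop = threading.Event()

def writer():
    # Keep replacing the same xattr with new values.
    n = 0
    while not stop.is_set():
        os.setxattr(PATH, "user.demo", str(n).encode())
        n += 1

def reader():
    # Every read should succeed; count transient ENODATA windows.
    misses = 0
    for _ in range(100000):
        try:
            os.getxattr(PATH, "user.demo")
        except OSError:
            misses += 1
    stop.set()
    print("transient ENODATA reads:", misses)

t = threading.Thread(target=writer)
t.start()
reader()
t.join()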
2014 Sep 04
2
PXE booting WinPE with UEFI architecture
This got me closer, but it got to "Encapsulating winpe.wim..." and never made it the rest of the way. Ultimately PXELINUX apparently timed out and the machine rebooted. Here is the relevant portion of pxelinux:
LABEL SCCM OSD Boot
  MENU LABEL ^2. SCCM OSD Boot
  com32 linux.c32
  append wimboot initrdfile=bootmgr.exe,BCD,boot.sdi,winpe.wim
TEXT HELP
2012 Dec 13
22
[PATCH] Btrfs: fix a deadlock on chunk mutex
A user reported that he has hit an annoying deadlock while playing with ceph based on btrfs. Currently, updating the device tree requires space from a METADATA chunk, so we -may- need to do a recursive chunk allocation when adding/updating a dev extent; that is where the deadlock comes from. If we use SYSTEM metadata to update the device tree, we can avoid the recursion. Reported-by: Jim Schutt
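A toy illustration of the recursion described above (not btrfs code; names are made up), using a non-reentrant lock in place of the chunk mutex:

import threading

chunk_mutex = threading.Lock()  # non-reentrant, like a kernel mutex

def update_dev_extent():
    # Updating the device tree may find the METADATA chunk full...
    with chunk_mutex:
        allocate_chunk()  # ...and re-enter chunk allocation

def allocate_chunk():
    with chunk_mutex:  # second acquire by the same thread: deadlock
        pass

# Calling update_dev_extent() would hang here; the fix described above
# sidesteps the re-entry by drawing from SYSTEM metadata instead.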
2014 Aug 29
0
PXE booting WinPE with UEFI architecture
On 28/08/14 19:38, Jason Jones wrote:
> Anyone have luck with pxechn32 and bootmgfw.efi?
>
> I'm getting the "Unable to retrieve first package" issue as reported by others.
>
> Really, any advice for UEFI booting into a winpe environment off of pxelinux 6.03 would be beneficial.
As it happens, I released a new version of wimboot with support for UEFI yesterday. The
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>:
> On Wed, 26 Oct 2011, Christian Brunner wrote:
>> >> > Christian, have you tweaked those settings in your ceph.conf? It would be
>> >> > something like 'journal dio = false'. If not, can you verify that
>> >> > directio shows true when the journal is initialized from your osd log?
2010 Mar 31
1
[PATCH] Documentation: Update virt-v2v pod for RHEV export and IDE default
---
 v2v/virt-v2v.pl | 140 ++++++++++++++++++++++++++++++++++++++++---------------
 1 files changed, 102 insertions(+), 38 deletions(-)
diff --git a/v2v/virt-v2v.pl b/v2v/virt-v2v.pl
index c1a4728..3559738 100755
--- a/v2v/virt-v2v.pl
+++ b/v2v/virt-v2v.pl
@@ -21,7 +21,6 @@ use strict;
 use Pod::Usage;
 use Getopt::Long;
-#use Data::Dumper;
 use File::Spec;
 use File::stat;
@@ -50,16 +49,19
2023 Dec 14
2
Gluster -> Ceph
...lusters.
> Here are my observations but I am far from an expert in either Ceph or Gluster.
>
> Gluster works very well with 2 servers containing 2 big RAID disk arrays.
>
> Ceph on the other hand has MON, MGR, MDS...? that can run on multiple servers, and should be for redundancy, but the OSDs should be lots of small servers with very few disks attached.
>
> It kind of seems that the perfect OSD would be a disk with a raspberry pi attached and a 2.5Gb nic. Something really cheap and replaceable.
>
> So putting Ceph on 2 big servers with RAID arrays is likely a very bad idea...
2023 Dec 14
2
Gluster -> Ceph
Hi all, I am looking into ceph and cephfs, and in my head I am comparing with gluster. The way I have been running gluster over the years is as either replicated or replicated-distributed clusters. The small setup we have had has been a replicated cluster with one arbiter and two fileservers. These fileservers have been configured with RAID6, and that raid has been used as the brick. If disaster
2013 Nov 06
0
[PATCH] Btrfs: fix lockdep error in async commit
Lockdep complains about btrfs's async commit:
[ 2372.462171] [ BUG: bad unlock balance detected! ]
[ 2372.462191] 3.12.0+ #32 Tainted: G W
[ 2372.462209] -------------------------------------
[ 2372.462228] ceph-osd/14048 is trying to release lock (sb_internal) at:
[ 2372.462275] [<ffffffffa022cb10>] btrfs_commit_transaction_async+0x1b0/0x2a0 [btrfs]
[ 2372.462305] but there
2010 Nov 01
5
[PATCH 03/10] staging: hv: Convert camel cased struct fields in hv.h to lower cases
From: Haiyang Zhang <haiyangz at microsoft.com>
Convert camel cased struct fields in hv.h to lower cases
Signed-off-by: Haiyang Zhang <haiyangz at microsoft.com>
Signed-off-by: Hank Janssen <hjanssen at microsoft.com>
---
 drivers/staging/hv/hv.c | 95 +++++++++++++++++++++++---------------------
 drivers/staging/hv/hv.h | 20 +++++-----
 drivers/staging/hv/vmbus.c |