Displaying 20 results from an estimated 134 matches for "osd".
2007 Nov 29
3
lustre osd implementation
hello,
Does Lustre support the OSD T10 standard? Can it be used with IBM's/Intel's OSD
initiator and target?
Thanks,
Ashish
2011 Feb 11
0
[PATCH 3/3]:Staging: hv: Remove osd layer
The OSD layer was a wrapper around native interfaces
adding little value and was in fact buggy -
refer to the osd_wait.patch for details.
This patch gets rid of the OSD abstraction.
Signed-off-by: K. Y. Srinivasan <kys at microsoft.com>
Signed-off-by: Hank Janssen <hjanssen at microsoft.com>...
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi,
A bit of explanation of what I'm trying to achieve:
We have a bunch of homogeneous nodes that have CPU + RAM + Storage and
we want to use that as some generic cluster. The idea is to have Xen
on all of these and run Ceph OSD in a domU on each to "export" the
local storage space to the entire cluster. And then use RBD to store /
access VM images from any of the machines.
We did set up a working Ceph cluster, and RBD works well as long as we
don't access it from a dom0 that runs a VM hosting an OSD.
When...
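As an aside, a minimal sketch of the RBD workflow described above - create an image in a pool and map it on a client - with pool and image names that are purely illustrative, not taken from the thread:

  ceph osd pool create rbdpool 128            # pool for VM images, 128 placement groups
  rbd create rbdpool/vm-image --size 10240    # 10 GiB image to hold a VM disk
  rbd map rbdpool/vm-image                    # exposes the image as a /dev/rbd* block device

The mapped block device is what a dom0 or domU would then hand to a guest as its disk.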
2011 Oct 09
1
Btrfs High IO-Wait
Hi,
I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9
kernel.
I also see high I/O rates, around 500 IO/s, as reported via iostat.
Device:         rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda               0.00    0.00   0.00   6.80    0.00   62.40    18.35...
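For reference, a sketch of the iostat invocation that produces extended per-device statistics like the columns quoted above:

  iostat -x 1    # extended device statistics, refreshed every second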
2007 Nov 30
1
lustre-1.8 OSD
lustre-1.8 has OSD structures in place; what do I need to add to make it
work with the OSD T10 standard? Could anybody point me to some docs describing
Lustre internals - OSTs, OSSs, OBDs, and the control flow when a read/write call
is invoked by a client? Thanks.
2012 Apr 20
44
Ceph on btrfs 3.4rc
...x10
[91128.965133] [<ffffffff81070360>] ? kthread_freezable_should_stop+0x70/0x70
[91128.972913] [<ffffffff8158c220>] ? gs_change+0x13/0x13
[91128.978826] ---[ end trace b8c31966cca731fb ]---
I'm able to reproduce this with ceph on a single server with 4 disks
(4 filesystems/OSDs) and a small test program based on librbd. It is
simply writing random bytes to an RBD volume (see attachment).
Is this something I should care about? Any hints on solving this
would be appreciated.
Thanks,
Christian
2007 Aug 30
2
OSD Mystery
I recently started using compiz-fusion. After spending months looking
at an anemic little rectangular on-screen volume control when I use
the volume buttons on my ubuntu thinkpad T60p, all of a sudden I
noticed that I was getting a nice, big, robust rounded-corner display
(somewhat mac-like). Now it's gone again.
I really have no clue whether this came from compiz-fusion, emerald, a
plugin,
2010 Jul 14
1
[PATCH] Documentation: Cleanup virt-v2v.conf references and mention --network
...xml
+ virt-v2v -i libvirtxml -op imported --network default guest-domain.xml
- virt-v2v -f virt-v2v.conf -ic esx://esx.server/ -op transfer esx_guest
+ virt-v2v -ic esx://esx.server/ -op imported --network default esx_guest
- virt-v2v -f virt-v2v.conf -ic esx://esx.server/ \
- -o rhev -osd rhev.nfs.storage:/export_domain guest esx_guest
+ virt-v2v -ic esx://esx.server/ \
+ -o rhev -osd rhev.nfs.storage:/export_domain --network rhevm \
+ esx_guest
=head1 DESCRIPTION
@@ -517,22 +518,29 @@ storage referred to in the domain XML is available locally at the same path...
2014 Nov 07
4
[Bug 10925] New: non-atomic xattr replacement in btrfs => rsync --read-batch random errors
...s, but the latter
bounced because I'm not subscribed, so here's a copy of the post there, with
additional info at the end that I forgot to put in the initial message but
posted to the former as a follow-up:
A few days ago, I started using rsync batches to archive old copies of ceph OSD
snapshots for certain kinds of disaster recovery. This seems to exercise an
unexpected race condition in rsync, which happens to expose what appears to be
a race condition in btrfs, causing random scary but harmless errors when
replaying the rsync batches.
strace has revealed that the two rsync p...
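For context, a minimal sketch of the rsync batch workflow described above; the paths and batch file name are assumptions, not the reporter's actual setup:

  # record the delta needed to bring a mirror up to date into a batch file
  rsync -a --write-batch=osd-snap.batch /snapshots/current/ /mirror/current/
  # later, replay that recorded batch against another identical copy
  rsync -a --read-batch=osd-snap.batch /archive/current/

Replaying such batches is the step that triggers the xattr-related errors described in the report.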
2014 Sep 04
2
PXE booting WinPE with UEFI architecture
This got me closer, but it stopped at "Encapsulating winpe.wim..." and never got the rest of the way through. Ultimately PXELINUX apparently timed out and the machine rebooted.
Here is the relevant portion of the pxelinux config:
LABEL SCCM OSD Boot
MENU LABEL ^2. SCCM OSD Boot
com32 linux.c32
append wimboot initrdfile=bootmgr.exe,BCD,boot.sdi,winpe.wim
TEXT HELP
Deployment of Operating System via SCCM.
Jason Jones
Sr. Associate, Network Services | End User Computing
3947 N Oak St Ext | Valdosta, GA 31605...
2012 Dec 13
22
[PATCH] Btrfs: fix a deadlock on chunk mutex
A user reported that he hit an annoying deadlock while playing with
Ceph on top of btrfs.
Currently, updating the device tree requires space from a METADATA chunk,
so we -may- need to do a recursive chunk allocation when adding/updating a
dev extent; that is where the deadlock comes from.
If we use SYSTEM metadata to update the device tree, we can avoid the
recursive allocation.
Reported-by: Jim Schutt
2014 Aug 29
0
PXE booting WinPE with UEFI architecture
On 28/08/14 19:38, Jason Jones wrote:
> Anyone have luck with pxechn32 and bootmgfw.efi?
>
> I'm getting the "Unable to retrieve first package" issue as reported by others.
>
> Really, any advice for UEFI booting into a winpe environment off of pxelinux 6.03 would be beneficial.
As it happens, I released a new version of wimboot with support for UEFI
yesterday. The
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
...r wrote:
>> >> > Christian, have you tweaked those settings in your ceph.conf? It would be
>> >> > something like 'journal dio = false'. If not, can you verify that
>> >> > directio shows true when the journal is initialized from your osd log?
>> >> > E.g.,
>> >> >
>> >> > 2011-10-21 15:21:02.026789 7ff7e5c54720 journal _open dev/osd0.journal fd 14: 104857600 bytes, block size 4096 bytes, directio = 1
>> >> >
>> >> > If directio = 1 for you, something else f...
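As a side note, a rough sketch of how one might check the two things mentioned above - the ceph.conf setting and the directio value logged when the journal is opened; the config path, log path and OSD id are assumptions:

  grep -A5 '^\[osd\]' /etc/ceph/ceph.conf | grep 'journal dio'   # is 'journal dio' overridden?
  grep 'journal _open' /var/log/ceph/osd.0.log                   # shows '... directio = 0|1'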
2010 Mar 31
1
[PATCH] Documentation: Update virt-v2v pod for RHEV export and IDE default
...ml
+ virt-v2v -f virt-v2v.conf -i libvirtxml -op transfer guest-domain.xml
- virt-v2v -f virt-v2v.conf -ic esx://esx.server/ -op transfer guest-domain
+ virt-v2v -f virt-v2v.conf -ic esx://esx.server/ -op transfer esx_guest
+
+ virt-v2v -f virt-v2v.conf -ic esx://esx.server/ \
+ -o rhev -osd rhev.nfs.storage:/export_domain guest esx_guest
=head1 DESCRIPTION
virt-v2v converts guests from a foreign hypervisor to run on KVM, managed by
-libvirt. It can currently convert Red Hat Enterprise Linux and Fedora guests
-running on Xen and VMware ESX. It will enable VirtIO drivers in the co...
2023 Dec 14
2
Gluster -> Ceph
...lusters.
>Here are my observations but I am far from an expert in either Ceph or Gluster.
>
>Gluster works very well with 2 servers containing 2 big RAID disk arrays.
>
>Ceph, on the other hand, has MON, MGR, MDS and so on that can run on multiple servers, and should for redundancy, but the OSDs should be lots of small servers with very few disks attached.
>
>It kind of seems that the perfect OSD would be a disk with a raspberry pi attached and a 2.5Gb nic.
>Something really cheap and replaceable.
>
>So putting Ceph on 2 big servers with RAID arrays is likely a very bad ide...
2023 Dec 14
2
Gluster -> Ceph
...fileserver and, as it is RAIDed,
I can lose two disks on this machine before I
start to lose data.
.... thinking ceph and similar setup ....
The idea is to have one "admin" node and two fileservers.
The admin node will run mon, mgr and mds.
The storage nodes will run mon, mgr, mds and 8x osd (8 disks),
with replication = 2.
The problem is that I cannot get my head around how
to think about this when disaster strikes.
Say one fileserver burns up; there is still the other
fileserver, and from my understanding the Ceph system
will start to replicate the files on the same fileserver
and when this is...
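As an aside, the placement behaviour being worried about here is controlled by the pool's replica count and its CRUSH failure domain; a minimal sketch, with pool and rule names that are illustrative only:

  # replicate across hosts rather than across OSDs within one host
  ceph osd crush rule create-replicated rep-by-host default host
  ceph osd pool create cephfs_data 128 128 replicated rep-by-host
  ceph osd pool set cephfs_data size 2    # two copies, as in the proposed setup

With the failure domain set to host, the two replicas land on different fileservers, so losing one server still leaves a complete copy on the survivor.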
2013 Nov 06
0
[PATCH] Btrfs: fix lockdep error in async commit
Lockdep complains about btrfs's async commit:
[ 2372.462171] [ BUG: bad unlock balance detected! ]
[ 2372.462191] 3.12.0+ #32 Tainted: G W
[ 2372.462209] -------------------------------------
[ 2372.462228] ceph-osd/14048 is trying to release lock (sb_internal) at:
[ 2372.462275] [<ffffffffa022cb10>] btrfs_commit_transaction_async+0x1b0/0x2a0 [btrfs]
[ 2372.462305] but there are no more locks to release!
[ 2372.462324]
[ 2372.462324] other info that might help us debug this:
[ 2372.462349] no locks held...
2010 Nov 01
5
[PATCH 03/10] staging: hv: Convert camel cased struct fields in hv.h to lower cases
..._MSR_SIEFP msr set to: %llx", siefp.as_uint64);
@@ -449,15 +454,15 @@ void HvSynicInit(void *irqarg)
wrmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
- gHvContext.SynICInitialized = true;
+ g_hv_context.synic_initialized = true;
return;
Cleanup:
- if (gHvContext.synICEventPage[cpu])
- osd_PageFree(gHvContext.synICEventPage[cpu], 1);
+ if (g_hv_context.synic_event_page[cpu])
+ osd_PageFree(g_hv_context.synic_event_page[cpu], 1);
- if (gHvContext.synICMessagePage[cpu])
- osd_PageFree(gHvContext.synICMessagePage[cpu], 1);
+ if (g_hv_context.synic_message_page[cpu])
+ osd_PageFree(...