Displaying 9 results from an estimated 9 matches for "lvm_volum".
2010 Feb 25
1
[PATCH] fix storage problem.
...@ class TaskOmatic
lvm_storage_volume = StorageVolume.factory(lvm_db_pool.get_type_label)
existing_vol = StorageVolume.find(:first, :conditions =>
["storage_pool_id = ? AND key = ?",
- lvm_db_pool.id, lvm_volume.key])
+ lvm_db_pool.id, lvm_volume.get_attr('key')])
if not existing_vol
add_volume_to_db(lvm_db_pool, lvm_volume, "0744", "0744", "0744");
else
- @logger.info "volume #{lv...
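For context, the one-line fix above swaps the direct lvm_volume.key accessor for lvm_volume.get_attr('key'), which suggests the QMF volume proxy only exposes its key through a generic attribute getter. Below is a minimal plain-Ruby sketch of that lookup-before-insert pattern; the FakeQmfVolume class and the in-memory db_volumes list are stand-ins invented here, not the real ovirt-server StorageVolume model or QMF API.

# Stand-in for a QMF volume proxy: attributes live in a hash and are
# read through get_attr rather than per-attribute accessor methods.
class FakeQmfVolume
  def initialize(attrs)
    @attrs = attrs
  end

  def get_attr(name)
    @attrs[name]
  end
end

# Hypothetical in-memory substitute for the StorageVolume table.
db_volumes = [
  { storage_pool_id: 7, key: 'aaaa-1111' },
]

lvm_volume = FakeQmfVolume.new('key' => 'bbbb-2222', 'path' => '/dev/vg0/lv1')

# Same shape as the patched lookup: match on pool id plus volume key.
existing_vol = db_volumes.find do |v|
  v[:storage_pool_id] == 7 && v[:key] == lvm_volume.get_attr('key')
end

if existing_vol.nil?
  puts "volume #{lvm_volume.get_attr('path')} not in the DB yet; would add it"
else
  puts "volume already present, skipping"
end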
2009 Jul 13
1
[PATCH] Use volume key instead of path to identify volume.
...lvm_storage_volume = StorageVolume.factory(lvm_db_pool.get_type_label)
existing_vol = StorageVolume.find(:first, :conditions =>
["storage_pool_id = ? AND #{lvm_storage_volume.volume_name} = ?",
- lvm_db_pool.id, lvm_volume.name])
+ lvm_db_pool.id, lvm_volume.key])
if not existing_vol
add_volume_to_db(lvm_db_pool, lvm_volume, "0744", "0744", "0744");
else
- @logger.info "volume #{lvm_volume.name} alre...
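The point of the patch above is that a volume's name (or path) is not a dependable identifier, whereas the libvirt volume key is meant to be unique, so the database lookup should match on the key. A small plain-Ruby illustration of the difference, using made-up records rather than the real StorageVolume table:

# Two hypothetical volumes that share a name but live in different pools;
# only the key tells them apart reliably.
volumes = [
  { storage_pool_id: 1, name: 'data', key: 'aaaa-1111' },
  { storage_pool_id: 2, name: 'data', key: 'bbbb-2222' },
]

reported = { name: 'data', key: 'bbbb-2222' }  # what libvirt just reported

by_name = volumes.find { |v| v[:name] == reported[:name] }
by_key  = volumes.find { |v| v[:key]  == reported[:key] }

puts "match by name picks pool #{by_name[:storage_pool_id]}"  # => 1, the wrong pool
puts "match by key picks pool #{by_key[:storage_pool_id]}"    # => 2, the intended one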
2007 Oct 08
2
OCFS2 and LVM
Does anybody know if there is a certified procedure to
back up a RAC DB 10.2.0.3 based on OCFS2
via split mirror or snapshot technology?
Using Linux LVM and OCFS2, does anybody know if it is
possible to dynamically extend an OCFS2 filesystem
once the underlying LVM volume has been extended?
Thanks in advance
Riccardo Paganini
2014 Apr 18
2
Many orphaned inodes after resize2fs
...blem with my ext3 filesystem:
- I had an ext3 filesystem of a few TB with a journal. I correctly
unmounted it and it was marked clean.
- I then ran fsck.ext3 -f on it and it did not find any problem.
- After increasing the size of its LVM volume by 1.5 TB I resized the
filesystem with resize2fs lvm_volume and it finished without problem.
- But fsck.ext3 -f immediately after that showed "Inodes that were part of
a corrupted orphan linked list found." and many thousands of "Inode XXX was
part of the orphaned inode list." I did not accept the fix. According to
debugfs all the inodes...
2014 Apr 18
0
Re: Many orphaned inodes after resize2fs
...- I had an ext3 filesystem of a few TB with a journal. I correctly
> unmounted it and it was marked clean.
>
> - I then ran fsck.ext3 -f on it and it did not find any problem.
>
> - After increasing the size of its LVM volume by 1.5 TB I resized the
> filesystem with resize2fs lvm_volume and it finished without problem.
>
> - But fsck.ext3 -f immediately after that showed "Inodes that were part of
> a corrupted orphan linked list found." and many thousands of "Inode XXX was
> part of the orphaned inode list." I did not accept the fix. According to
> ...
2014 Apr 18
3
Re: Many orphaned inodes after resize2fs
...a few TB with a journal. I correctly
> > unmounted it and it was marked clean.
> >
> > - I then ran fsck.ext3 -f on it and it did not find any problem.
> >
> > - After increasing the size of its LVM volume by 1.5 TB I resized the
> > filesystem with resize2fs lvm_volume and it finished without problem.
> >
> > - But fsck.ext3 -f immediately after that showed "Inodes that were part of
> > a corrupted orphan linked list found." and many thousands of "Inode XXX was
> > part of the orphaned inode list." I did not...
2014 Apr 18
0
Many orphaned inodes after resize2fs
...blem with my ext3 filesystem:
- I had an ext3 filesystem of a few TB with a journal. I correctly
unmounted it and it was marked clean.
- I then ran fsck.ext3 -f on it and it did not find any problem.
- After increasing the size of its LVM volume by 1.5 TB I resized the
filesystem with resize2fs lvm_volume and it finished without problem.
- But fsck.ext3 -f immediately after that showed "Inodes that were part of
a corrupted orphan linked list found." and many thousands of "Inode XXX was
part of the orphaned inode list." I did not accept the fix. According to
debugfs all the inodes...
2009 May 29
0
[PATCH server] Add more debugging to storage tasks
...ass TaskOmatic
physical_vol.lvm_pool_id = lvm_db_pool.id
physical_vol.save!
- lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool)
+ lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool, @logger)
lvm_libvirt_pool.connect(@session, node)
lvm_volumes = @session.objects(:class => 'volume',
@@ -725,7 +725,7 @@ class TaskOmatic
end
begin
- libvirt_pool = LibvirtPool.factory(db_pool)
+ libvirt_pool = LibvirtPool.factory(db_pool, @logger)
begin
libvirt_pool.connect(@session, node)
@@ -...
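The hunks above only thread a logger into LibvirtPool.factory so pool handling can emit debug output. A rough sketch of that pattern in plain Ruby, using the stdlib Logger and a hypothetical FakeLibvirtPool in place of the real class hierarchy:

require 'logger'

# Hypothetical stand-in for the LibvirtPool hierarchy: the factory now
# takes the caller's logger so connect/refresh steps can be traced.
class FakeLibvirtPool
  def self.factory(db_pool, logger)
    new(db_pool, logger)
  end

  def initialize(db_pool, logger)
    @db_pool = db_pool
    @logger  = logger
  end

  def connect(session, node)
    @logger.debug "connecting pool #{@db_pool[:label]} on node #{node}"
    # the real class would define/start the corresponding libvirt pool here
  end
end

logger = Logger.new($stdout)
logger.level = Logger::DEBUG

pool = FakeLibvirtPool.factory({ label: 'lvm-pool' }, logger)
pool.connect(nil, 'node3.example.com')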
2009 Nov 04
4
[PATCH server] Update daemons to use new QMF.
...eVolume.factory(db_pool_phys.get_type_label)
@@ -696,9 +701,9 @@ class TaskOmatic
physical_vol.save!
lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool, @logger)
- lvm_libvirt_pool.connect(@session, node)
+ lvm_libvirt_pool.connect(@qmfc, node)
- lvm_volumes = @session.objects(:class => 'volume',
+ lvm_volumes = @qmfc.objects(:class => 'volume',
'storagePool' => lvm_libvirt_pool.remote_pool.object_id)
lvm_volumes.each do |lvm_volume|
@@ -733,16 +738,16 @@ class...
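This last patch is mostly a rename of the broker handle (@session becomes @qmfc) while keeping the same objects(:class => ..., 'storagePool' => ...) query shape. A plain-Ruby sketch of that filter-by-class-and-property pattern, with a hypothetical FakeQmfConsole standing in for the QMF client; only the 'storagePool' property name is taken from the excerpt above.

# Hypothetical stand-in for the QMF console: objects() filters a fixed
# in-memory list by class name and by property values.
class FakeQmfConsole
  def initialize(objects)
    @objects = objects
  end

  def objects(filters)
    klass = filters[:class]
    props = filters.reject { |k, _| k == :class }
    @objects.select do |o|
      o[:class] == klass && props.all? { |k, v| o[k] == v }
    end
  end
end

qmfc = FakeQmfConsole.new([
  { :class => 'volume', 'storagePool' => 42, 'name' => 'lv_data' },
  { :class => 'volume', 'storagePool' => 99, 'name' => 'lv_other' },
])

remote_pool_id = 42
lvm_volumes = qmfc.objects(:class => 'volume', 'storagePool' => remote_pool_id)
lvm_volumes.each { |lvm_volume| puts "found volume #{lvm_volume['name']}" }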