Displaying 20 results from an estimated 20 matches for "disk6".
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
...s:/ws/disk3/ws_brick
Brick10: glusterfs1sds:/ws/disk4/ws_brick
Brick11: glusterfs2sds:/ws/disk4/ws_brick
Brick12: glusterfs3sds:/ws/disk4/ws_brick
Brick13: glusterfs1sds:/ws/disk5/ws_brick
Brick14: glusterfs2sds:/ws/disk5/ws_brick
Brick15: glusterfs3sds:/ws/disk5/ws_brick
Brick16: glusterfs1sds:/ws/disk6/ws_brick
Brick17: glusterfs2sds:/ws/disk6/ws_brick
Brick18: glusterfs3sds:/ws/disk6/ws_brick
Brick19: glusterfs1sds:/ws/disk7/ws_brick
Brick20: glusterfs2sds:/ws/disk7/ws_brick
Brick21: glusterfs3sds:/ws/disk7/ws_brick
Brick22: glusterfs1sds:/ws/disk8/ws_brick
Brick23: glusterfs2sds:/ws/disk8/ws_br...
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
...ck
> Brick11: glusterfs2sds:/ws/disk4/ws_brick
> Brick12: glusterfs3sds:/ws/disk4/ws_brick
> Brick13: glusterfs1sds:/ws/disk5/ws_brick
> Brick14: glusterfs2sds:/ws/disk5/ws_brick
> Brick15: glusterfs3sds:/ws/disk5/ws_brick
> Brick16: glusterfs1sds:/ws/disk6/ws_brick
> Brick17: glusterfs2sds:/ws/disk6/ws_brick
> Brick18: glusterfs3sds:/ws/disk6/ws_brick
> Brick19: glusterfs1sds:/ws/disk7/ws_brick
> Brick20: glusterfs2sds:/ws/disk7/ws_brick
> Brick21: glusterfs3sds:/ws/disk7/ws_brick
> Brick22: glusterfs1s...
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy@commvault.com>
wrote:
> 3.7.19
>
These are the only callers of removexattr, and only _posix_remove_xattr has
the potential to remove it, because posix_removexattr already makes sure
the key is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr
is called only from the healing code of AFR/EC. And this can only happen
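For context (not a step taken in this thread): these are the two on-disk extended attributes the subject refers to, and they can be inspected and, if lost, restored on a brick root. A minimal sketch, assuming the brick path /ws/disk6/ws_brick from the volume listings above; the hex value is a placeholder that would have to be copied from a healthy brick of the same volume.
# Dump the trusted.* xattrs (including trusted.gfid and
# trusted.glusterfs.volume-id) on a brick root, hex-encoded. Read-only.
getfattr -d -m . -e hex /ws/disk6/ws_brick
# Restore volume-id on a brick that lost it; 0x1234... below is a
# placeholder, use the value printed by getfattr on a healthy brick.
setfattr -n trusted.glusterfs.volume-id -v 0x123456789abcdef0123456789abcdef0 /ws/disk6/ws_brick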
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
...disk4/ws_brick
>> Brick12: glusterfs3sds:/ws/disk4/ws_brick
>> Brick13: glusterfs1sds:/ws/disk5/ws_brick
>> Brick14: glusterfs2sds:/ws/disk5/ws_brick
>> Brick15: glusterfs3sds:/ws/disk5/ws_brick
>> Brick16: glusterfs1sds:/ws/disk6/ws_brick
>> Brick17: glusterfs2sds:/ws/disk6/ws_brick
>> Brick18: glusterfs3sds:/ws/disk6/ws_brick
>> Brick19: glusterfs1sds:/ws/disk7/ws_brick
>> Brick20: glusterfs2sds:/ws/disk7/ws_brick
>> Brick21: glusterfs3sds:/ws/disk7...
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
...Brick12: glusterfs3sds:/ws/disk4/ws_brick
>>> Brick13: glusterfs1sds:/ws/disk5/ws_brick
>>> Brick14: glusterfs2sds:/ws/disk5/ws_brick
>>> Brick15: glusterfs3sds:/ws/disk5/ws_brick
>>> Brick16: glusterfs1sds:/ws/disk6/ws_brick
>>> Brick17: glusterfs2sds:/ws/disk6/ws_brick
>>> Brick18: glusterfs3sds:/ws/disk6/ws_brick
>>> Brick19: glusterfs1sds:/ws/disk7/ws_brick
>>> Brick20: glusterfs2sds:/ws/disk7/ws_brick
>>> ...
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
...4/ws_brick
>>>> Brick13: glusterfs1sds:/ws/disk5/ws_brick
>>>> Brick14: glusterfs2sds:/ws/disk5/ws_brick
>>>> Brick15: glusterfs3sds:/ws/disk5/ws_brick
>>>> Brick16: glusterfs1sds:/ws/disk6/ws_brick
>>>> Brick17: glusterfs2sds:/ws/disk6/ws_brick
>>>> Brick18: glusterfs3sds:/ws/disk6/ws_brick
>>>> Brick19: glusterfs1sds:/ws/disk7/ws_brick
>>>> Brick20: glusterfs2sds:/ws/d...
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
...s:/ws/disk3/ws_brick
Brick10: glusterfs1sds:/ws/disk4/ws_brick
Brick11: glusterfs2sds:/ws/disk4/ws_brick
Brick12: glusterfs3sds:/ws/disk4/ws_brick
Brick13: glusterfs1sds:/ws/disk5/ws_brick
Brick14: glusterfs2sds:/ws/disk5/ws_brick
Brick15: glusterfs3sds:/ws/disk5/ws_brick
Brick16: glusterfs1sds:/ws/disk6/ws_brick
Brick17: glusterfs2sds:/ws/disk6/ws_brick
Brick18: glusterfs3sds:/ws/disk6/ws_brick
Brick19: glusterfs1sds:/ws/disk7/ws_brick
Brick20: glusterfs2sds:/ws/disk7/ws_brick
Brick21: glusterfs3sds:/ws/disk7/ws_brick
Brick22: glusterfs1sds:/ws/disk8/ws_brick
Brick23: glusterfs2sds:/ws/disk8/ws_br...
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
...ck
> Brick11: glusterfs2sds:/ws/disk4/ws_brick
> Brick12: glusterfs3sds:/ws/disk4/ws_brick
> Brick13: glusterfs1sds:/ws/disk5/ws_brick
> Brick14: glusterfs2sds:/ws/disk5/ws_brick
> Brick15: glusterfs3sds:/ws/disk5/ws_brick
> Brick16: glusterfs1sds:/ws/disk6/ws_brick
> Brick17: glusterfs2sds:/ws/disk6/ws_brick
> Brick18: glusterfs3sds:/ws/disk6/ws_brick
> Brick19: glusterfs1sds:/ws/disk7/ws_brick
> Brick20: glusterfs2sds:/ws/disk7/ws_brick
> Brick21: glusterfs3sds:/ws/disk7/ws_brick
> Brick22: glusterfs1s...
2019 Aug 01
1
guestmount mounts get corrupted somehow? [iscsi lvm guestmount windows filesystem rsync]
...re visible on
all servers.
On the backup server I have the following running:
# guestmount --version
guestmount 1.40.2
# guestmount --ro \
    -a /dev/lvm1-vol/sr8-disk1a -a /dev/lvm1-vol/sr8-disk2 \
    -a /dev/lvm1-vol/sr8-disk3 -a /dev/lvm1-vol/sr8-disk4 \
    -a /dev/lvm1-vol/sr8-disk5 -a /dev/lvm1-vol/sr8-disk6 \
    -m /dev/sdb2 /mnt/sr8-sdb2
# rsync --archive --delete --partial --progress --recursive --no-links \
    --no-devices --quiet /mnt/sr8-sdb2/ \
    /srv/storage/backups/libvirt-filesystems/sr8-sdb2
This used to work fine for many years (we helped with the development of
NTFS deduplication support).
Now one...
2012 Jun 05
2
best practises for mail systems
...ed through 3x2TB disks into it.
In guest VMs on top of these disks we created XFS filesystems and set up GlusterFS with distributed replicated volumes for our mail storage.
so it looks like this:
vm1    replicate    vm2
disk1 ------------> disk4
disk2 ------------> disk5
disk3 ------------> disk6
In each VM we mounted GlusterFS and pointed Dovecot at that directory for mail delivery (as LMTP) and IMAP4 user access.
We also use Exim as SMTP.
So, with GlusterFS as the mail storage, we can use LVS for load balancing of Exim and Dovecot.
So whenever one of the host systems (hence one of the mail VMs...
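A minimal sketch of how such a layout might be created; the volume name mailvol, the brick sub-paths, and the mount point are assumptions, and only the vm1/vm2 disk pairing comes from the post above.
# 2-way replicated, distributed GlusterFS volume across the described pairs
# (disk1<->disk4, disk2<->disk5, disk3<->disk6). Names are hypothetical.
gluster volume create mailvol replica 2 \
    vm1:/disk1/brick vm2:/disk4/brick \
    vm1:/disk2/brick vm2:/disk5/brick \
    vm1:/disk3/brick vm2:/disk6/brick
gluster volume start mailvol
# Mount it where Dovecot expects the mail directory (path is hypothetical).
mount -t glusterfs vm1:/mailvol /var/mail-gluster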
2015 Aug 12
1
[PATCH 1/2] disk-create: Allow preallocation off/metadata/full.
...6K preallocation:off
disk-create disk2.img raw 256K preallocation:sparse
disk-create disk3.img raw 256K preallocation:full
disk-create disk4.img qcow2 256K
disk-create disk5.img qcow2 256K preallocation:off
+ disk-create disk5.img qcow2 256K preallocation:sparse
disk-create disk6.img qcow2 256K preallocation:metadata
+ disk-create disk6.img qcow2 256K preallocation:full
disk-create disk7.img qcow2 256K compat:1.1
disk-create disk8.img qcow2 256K clustersize:128K
disk-create disk9.img qcow2 -1 backingfile:disk1.img compat:1.1
--
2.5.0
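The same calls can also be exercised interactively; a small sketch assuming guestfish is installed, with hypothetical file names mirroring the test listing above.
# Create blank disks with explicit preallocation modes via guestfish's
# disk-create command (no appliance needs to be launched for this).
guestfish <<'EOF'
disk-create /tmp/blank-sparse.img raw   256K preallocation:sparse
disk-create /tmp/blank-full.img   raw   256K preallocation:full
disk-create /tmp/blank.qcow2      qcow2 256K preallocation:metadata
EOF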
2024 Oct 17
0
Bricks with different sizes.
.../disco3/vms3 force
And mounted it on /vms like this:
gluster1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
Later on, I add 3 more HDDs, like:
mkfs.xfs /dev/sdf
mkfs.xfs /dev/sdg
mkfs.xfs /dev/sdh
mount /dev/sdf /disk4
mount /dev/sdg /disk5
mount /dev/sdh /disk6
And then, I add the bricks:
gluster vol add-brick VMS replica 2 gluster1:/disco4/vms4
gluster1:/disco4/vms5 gluster1:/disco5/vms5 gluster2:/disco4/vms4
gluster2:/disco4/vms5 gluster2:/disco5/vms5 force
The mounted /vms grows normally.
So my question: is there anything dangerous about doing it this way?...
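Not part of the question above, but after an add-brick like this a rebalance is the usual next step so that existing data spreads onto the new bricks; a hedged sketch using the VMS volume name from the post.
# Spread existing files onto the newly added bricks (run on any gluster node).
gluster volume rebalance VMS start
# Check progress until the status reports completed.
gluster volume rebalance VMS status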
2018 Mar 28
0
Re: [PATCH FOR DISCUSSION ONLY v2] v2v: Add -o kubevirt output mode.
...arting qemu, unclear why.
export SKIP_TEST_PARALLEL_MOUNT_LOCAL=1
# Skip vfs-minimum-size test which requires btrfs-progs >= 4.2
export SKIP_TEST_VFS_MINIMUM_SIZE_2=1
# This test fails if the available memory is ~ 4 GB, as it is
# on the test machine.
export SKIP_TEST_BIG_HEAP=1
# qemu-img: ./disk6.img: Could not preallocate data for the new file: Bad file descriptor
# https://bugzilla.redhat.com/1265196
export SKIP_TEST_DISK_CREATE_SH=1
export SKIP_TEST_V2V_OA_OPTION_SH=1
# Skip tests which fail because discard does not work on NFS.
export SKIP_TEST_BLKDISCARD_PL=1
export SKIP_TEST_FSTRIM_P...
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone,
I'm playing with Gluster3.3b2, and everything is working fine when
uploading stuff through swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files, most
times I can't.
I'm mounting the partitions with the acl flag, I've tried wiping out
everything and starting from scratch, same result. As soon as I
2018 Mar 28
2
Re: [PATCH FOR DISCUSSION ONLY v2] v2v: Add -o kubevirt output mode.
On Wed, Mar 28, 2018 at 1:01 PM, Richard W.M. Jones <rjones@redhat.com>
wrote:
> On Wed, Mar 28, 2018 at 12:33:56PM +0200, Piotr Kliczewski wrote:
> > configure: error: Package requirements (jansson >= 2.7) were not met:
>
> You need to install jansson-devel.
OK, in addition I had to install ocaml-hivex-devel (it failed during make).
> Rich.
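For reference, a one-line sketch of installing both build dependencies named in the exchange above; it assumes a Fedora/RHEL-style host with dnf, which is not stated in the thread.
# Install the packages mentioned above before re-running configure/make.
sudo dnf install -y jansson-devel ocaml-hivex-devel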
2013 Dec 03
0
Problem booting guest with more than 8 disks
...' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdg' bus='virtio'/>
| <alias name='virtio-disk6'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0c'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protoco...
2012 Jun 05
1
[ Re: best practises for mail systems]
...e.
>> > so it looks like this:
>> >
>> > vm1    replicate    vm2
>> > disk1 ------------> disk4
>> > disk2 ------------> disk5
>> > disk3 ------------> disk6
>> >
>> > in each vm we mounted glusterfs and pointed dovecot to that dir for
>> > mail creation (as ltmp) and imap4 user access.
>> > also we use exim as smtp.
>> > ...
2006 May 09
3
Possible corruption after disk hiccups...
...43 sol gda: [ID 107833 kern.notice] Sense Key: aborted command
May 9 16:47:43 sol gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x3
May 9 16:47:43 sol gda: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0/cmdk@1,0 (Disk6):
May 9 16:47:43 sol Error for command 'write sector' Error Level: Informational
May 9 16:47:43 sol gda: [ID 107833 kern.notice] Sense Key: aborted command
May 9 16:47:43 sol gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0...
2014 Jan 28
11
[PATCH 00/10] New API: disk-create for creating blank disks.
A lot of code runs 'qemu-img create' or 'truncate' to create blank
disk images.
In the past I resisted adding an API to do this, since it essentially
duplicates what you can already do using other tools (i.e. qemu-img).
However, this does simplify calling code quite a lot, since qemu-img is
somewhat error-prone to use (e.g. don't try to create a disk called
"foo:bar").
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
raid5's striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
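The command line asked about at the end is short; a hedged sketch, where the pool name tank and the Solaris-style device names are placeholders and the second vdev uses seven disks to match the first.
# Add a second 7-disk raidz1 vdev to the existing pool; ZFS then stripes
# writes across the two raidz1 vdevs (roughly "two RAID5s striped").
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
# Optional, as mentioned in the post: upgrade the pool format afterwards.
zpool upgrade tank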