Displaying 20 results from an estimated 5547 matches for "mnt".
2010 Apr 30
1
gluster-volgen - syntax for mirroring/distributing across 6 nodes
NOTE: posted this to gluster-devel when I meant to post it to gluster-users
01 | 02 mirrored --|
03 | 04 mirrored --| distributed
05 | 06 mirrored --|
1) Would this command work for that?
glusterfs-volgen --name repstore1 --raid 1 clustr-01:/mnt/data01
clustr-02:/mnt/data01 --raid 1 clustr-03:/mnt/data01
clustr-04:/mnt/data01 --raid 1 clustr-05:/mnt/data01
clustr-06:/mnt/data01
So the 'repstore1' is the distributed part, and within that are 3 sets
of mirrored nodes.
2) Then, since we're running 24 drives in JBOD mode, we...
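For comparison (assuming the same hosts and brick paths as above; a sketch, not a tested recipe): the newer gluster CLI expresses this 3 x 2 layout in one command, pairing bricks into replica sets in the order they are listed:
  # sketch only: three replica-2 pairs distributed into one volume
  gluster volume create repstore1 replica 2 transport tcp \
      clustr-01:/mnt/data01 clustr-02:/mnt/data01 \
      clustr-03:/mnt/data01 clustr-04:/mnt/data01 \
      clustr-05:/mnt/data01 clustr-06:/mnt/data01
  gluster volume start repstore1
  gluster volume info repstore1    # Type should read Distributed-Replicate, 3 x 2 = 6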
2017 Sep 22
1
vfs_fruit and extended attributes
...AME>`.
(Apologies for the length of this?)
root at mfs-01 ~]#gluster volume info mfs1
Volume Name: mfs1
Type: Distributed-Disperse
Volume ID: 2fa02e5d-95b4-4aaa-b16c-5de90e0b11b2
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (8 + 4) = 72
Transport-type: tcp
Bricks:
Brick1: mfs-b01:/mnt/gfs001/data
Brick2: mfs-b01:/mnt/gfs002/data
Brick3: mfs-b01:/mnt/gfs003/data
Brick4: mfs-b02:/mnt/gfs019/data
Brick5: mfs-b02:/mnt/gfs020/data
Brick6: mfs-b02:/mnt/gfs021/data
Brick7: mfs-b03:/mnt/gfs037/data
Brick8: mfs-b03:/mnt/gfs038/data
Brick9: mfs-b03:/mnt/gfs039/data
Brick10: mfs-b04:/mnt/g...
2013 Mar 05
1
memory leak in 3.3.1 rebalance?
...he only references to rebalance
memory leaks I could find were related to 3.2.x, not 3.3.1.
gluster volume info:
Volume Name: bigdata
Type: Distributed-Replicate
Volume ID: 56498956-7b4b-4ee3-9d2b-4c8cfce26051
Status: Started
Number of Bricks: 25 x 2 = 50
Transport-type: tcp
Bricks:
Brick1: ml43:/mnt/donottouch/localb
Brick2: ml44:/mnt/donottouch/localb
Brick3: ml43:/mnt/donottouch/localc
Brick4: ml44:/mnt/donottouch/localc
Brick5: ml45:/mnt/donottouch/localb
Brick6: ml46:/mnt/donottouch/localb
Brick7: ml45:/mnt/donottouch/localc
Brick8: ml46:/mnt/donottouch/localc
Brick9: ml47:/mnt/donottouch/...
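Not from this thread, but a minimal way to watch the rebalance and the rebalance daemon's memory while reproducing such a report (the daemon is assumed to be the standard glusterfs process whose command line mentions rebalance):
  gluster volume rebalance bigdata status                # scanned/failed counts and run time per node
  ps -o pid,rss,etime,cmd -C glusterfs | grep rebalance  # RSS that only ever grows points to a leak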
2002 Dec 12
5
Display error
Hi,
I use Mandrake 9.0 with kernel 2.4.19 on i686 and have tried to run Wine several
times with different versions (the latest Wine version is
wine-cvs-opengl.121102.i386.rpm). My display driver is NVIDIA 1.0-3123.
The XFree86 version is 4.2.1. I did what is said in the troubleshooting section on
the Wine website. Every time the same error message came up. I edited ld.so.config
and the profile files for the Wine library path, but it wasn't
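For reference only (paths below are examples, not taken from the report): the usual way to make an extra Wine library directory visible is either system-wide via the loader cache or per session via the profile:
  echo /usr/local/lib/wine >> /etc/ld.so.conf    # example install path
  ldconfig                                       # rebuild the dynamic linker cache
  export LD_LIBRARY_PATH=/usr/local/lib/wine:$LD_LIBRARY_PATH   # per-session alternative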
2017 Sep 22
0
vfs_fruit and extended attributes
On Thu, 2017-09-21 at 10:35 -0600, Terry McGuire wrote:
> Hello list. I'm attempting to improve how Samba shares directories on our Gluster volume to Mac
> users by using the vfs_fruit module.
What versions of GlusterFS and Samba have you installed? And which platform/distro are you using as
Samba server?
Please paste the output of `gluster volume info <VOLNAME>`.
> This module
2020 Aug 23
2
MultiDatabase shard count limitations
....92% script/public-i libxapian.so.30.8.0 [.] GlassTable::get_exact_entry
3.19% script/public-i libc-2.28.so [.] __memcpy_ssse3
2.65% script/public-i libpthread-2.28.so [.] __libc_pread64
2.27% script/public-i [unknown] [k] 0xffffffff81800000
2.18% /mnt/btr/public perl [.] Perl_yyparse
2.11% /mnt/btr/public perl [.] Perl_yylex
1.92% script/public-i libxapian.so.30.8.0 [.] GlassPostList::move_forward_in_chunk_to_at_least
1.76% script/public-i libxapian.so.30.8.0 [.] GlassPostListTable::get_fre...
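The figures above look like ordinary perf report output; a profile like this is typically captured along these lines (PID and duration are placeholders):
  perf record -F 99 -g -p <PID> -- sleep 30   # sample the query process for 30 seconds
  perf report --stdio | head -40              # overhead by command, shared object and symbol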
2023 Aug 04
2
print only first level directory name when copying files
Hello,
I am copying /mnt/foo to /mnt/bar/
rsync --info=name1,del2 -rl /mnt/foo /mnt/bar/
/mnt/foo contains a deep directory structure, i.e.:
/mnt/foo/aaa/
/mnt/foo/aaa/somestuff/
/mnt/foo/aaa/somestuff/file1
/mnt/foo/bbb/
/mnt/foo/bbb/someotherstuff/
/mnt/foo/bbb/someotherstuff/file2
I am not interest...
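One possible post-processing approach (a sketch, not necessarily what the list recommended): keep rsync's per-file name output and trim it to the first directory level afterwards:
  # names are printed relative to the destination, e.g. foo/aaa/somestuff/file1
  rsync --info=name1,del2 -rl /mnt/foo /mnt/bar/ | cut -d/ -f1-2 | sort -u
  # prints each first-level entry (foo/aaa, foo/bbb, ...) once, in sorted order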
2020 Aug 21
2
MultiDatabase shard count limitations
Going back to the "prioritizing aggregated DBs" thread from
February 2020, I've got 390 Xapian shards for 130 public inboxes
I want to search against (*). There's more on the horizon (we're
expecting tens of thousands of public inboxes).
After bumping RLIMIT_NOFILE and running ->add_database a bunch,
the actual queries seem to be taking ~30s (not good :x).
Now I'm
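For context on the RLIMIT_NOFILE bump (not from the thread): each glass shard keeps several table files open, so hundreds of shards can exhaust the default descriptor limit. A shell-level sketch before starting the search process:
  ulimit -Sn        # current soft limit, often 1024
  ulimit -Hn        # hard ceiling the soft limit may be raised to
  ulimit -n 65536   # raise the soft limit for this shell and its children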
2024 Feb 15
1
tests for clone-dest
...usr/bin/btrfs-search-metadata || test_skipped "Can't find btrfs-search-metadata from python3-btrfs (only available on Linux)"
# make a btrfs filesystem and mount it
truncate -s 115M $scratchdir/btrfs.image
/sbin/mkfs.btrfs $scratchdir/btrfs.image > /dev/null
mkdir -p $scratchdir/mnt/
mount -o loop $scratchdir/btrfs.image $scratchdir/mnt/ || test_skipped "Can't mount btrfs image file, try running as root"
# set up some test files and rsync them
mkdir $scratchdir/mnt/1 $scratchdir/mnt/2 $scratchdir/mnt/3
# files should be at least 4K in size so they fill an extent...
2005 Nov 18
0
WIne 0.9x crash on make
...r about a month or so, but the 0.9 version won't compile on
my system. It conks out with an error in the loader subdir (actual data
below).
The following is the output I get when I run make:
-------------------------------------------------------------------------------------
bas@pairadocs /mnt/downloads/Winestuff/wine-0.9.1 $ make
(lotsa stuff going fine)
make[1]: Entering directory `/mnt/downloads/Winestuff/wine-0.9.1/loader'
gcc -m32 -o wine-preloader -static -nostartfiles -nodefaultlibs
-Wl,-Ttext=0x7c000000 preloader.o -L../libs/port -lwine_port
preloader.o(.text+0x14): In func...
2017 Oct 19
0
vfs_fruit and extended attributes
...oot at mfs-01 ~]#gluster volume info mfs1
>
> Volume Name: mfs1
> Type: Distributed-Disperse
> Volume ID: 2fa02e5d-95b4-4aaa-b16c-5de90e0b11b2
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 6 x (8 + 4) = 72
> Transport-type: tcp
> Bricks:
> Brick1: mfs-b01:/mnt/gfs001/data
> Brick2: mfs-b01:/mnt/gfs002/data
> Brick3: mfs-b01:/mnt/gfs003/data
> Brick4: mfs-b02:/mnt/gfs019/data
> Brick5: mfs-b02:/mnt/gfs020/data
> Brick6: mfs-b02:/mnt/gfs021/data
> Brick7: mfs-b03:/mnt/gfs037/data
> Brick8: mfs-b03:/mnt/gfs038/data
> Brick9: mfs-b03:...
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
101T 3,3T 97T 4% /volumedisk0
stor1data:/volumedisk1
197T 61T 136T 31% /volumedisk1
[root at stor2 ~]# df -h
Filesystem Size Used...
2017 Nov 09
2
GlusterFS healing questions
....
What kind of config tweaks are recommended for this kind of EC volume?
$ gluster volume info
Volume Name: test-ec-100g
Type: Disperse
Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-100/brick
Brick2: dn-305:/mnt/test-ec-100/brick
Brick3: dn-306:/mnt/test-ec-100/brick
Brick4: dn-307:/mnt/test-ec-100/brick
Brick5: dn-308:/mnt/test-ec-100/brick
Brick6: dn-309:/mnt/test-ec-100/brick
Brick7: dn-310:/mnt/test-ec-100/brick
Brick8: dn-311:/mnt/test-ec-2/brick
Brick9: dn-312:/m...
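Not an answer from the thread, but the usual starting points when observing and tuning self-heal on a disperse volume like this one (option names should be checked against gluster volume set help on the installed release):
  gluster volume heal test-ec-100g info                        # entries still pending heal, per brick
  gluster volume set test-ec-100g disperse.shd-max-threads 4   # parallelise EC self-heal
  gluster volume set test-ec-100g disperse.shd-wait-qlength 2048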
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
.............
run.node1.X.rd
run.node2.X.rd
( X ranging from 0000 to infinite )
Curiously stor1data and stor2data maintain similar ratios in bytes:
Filesystem 1K-blocks Used Available
Use% Mounted on
/dev/sdc1 52737613824 17079174264 35658439560 33%
/mnt/glusterfs/vol1 -> stor1data
/dev/sdc1 52737613824 17118810848 35618802976 33%
/mnt/glusterfs/vol1 -> stor2data
However, the ratio on stor3data differs too much (1TB):
Filesystem 1K-blocks Used Available
Use% Mounted on
/dev/sdc1 527...
2013 Mar 18
1
OST0006 : inactive device
...shows:
[code]
[root at MDS ~]# lctl list_nids
10.94.214.185@tcp
[root at MDS ~]#
[/code]
On Lustre Client1:
[code]
[root at lustreclient1 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:0]
lustre-OST0001_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G 276....
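Not from the thread, but the usual client-side checks when lfs df reports an OST such as OST0006 as inactive (device patterns assume the fsname "lustre" shown above):
  lctl dl                                      # list configured devices and their up/down state
  lctl get_param osc.lustre-OST0006-*.active   # 0 means the OSC is deactivated on this client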
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...r at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
> 101T 3,3T 97T 4% /volumedisk0
> stor1data:/volumedisk1
> 197T 61T 136T 31% /volumedisk1
>
>
> [root at stor2 ~]# d...
2013 Mar 18
1
lustre showing inactive devices
...de]
[root at MDS ~]# lctl list_nids
10.94.214.185@tcp
[root at MDS ~]#
[/code]
On Lustre Client1:
[code]
[root at lustreclient1 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6%
/mnt/lustre[MDT:0]
lustre-OST0000_UUID 5.9G 276.1M 5.3G 5%
/mnt/lustre[OST:0]
lustre-OST0001_UUID 5.9G 276.1M 5.3G 5%
/mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5%
/mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G...
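As a sketch (not from the thread), a quick way to confirm from the client whether every OST is actually reachable:
  lfs check osts   # per-OST RPC check; unreachable targets report an error
  lfs df -h        # inactive targets show up as "inactive device" in the listing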
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...gt;
> ( X ranging from 0000 to infinite )
>
> Curiously stor1data and stor2data maintain similar ratios in bytes:
>
> Filesystem 1K-blocks Used Available
> Use% Mounted on
> /dev/sdc1 52737613824 17079174264 35658439560 33%
> /mnt/glusterfs/vol1 -> stor1data
> /dev/sdc1 52737613824 17118810848 35618802976 33%
> /mnt/glusterfs/vol1 -> stor2data
>
> However, the ratio on stor3data differs too much (1TB):
> Filesystem 1K-blocks Used Available
> Use% Mounted...
2017 Sep 21
2
vfs_fruit and extended attributes
Hello list. I'm attempting to improve how Samba shares directories on our Gluster volume to Mac users by using the vfs_fruit module. This module does wonders for speeding listings and downloads of directories with large numbers of files in the Finder, but it kills uploads dead. Finder gives an error:
The Finder can't complete the operation because some data in “[filename]” can't be read or
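A generic check (not from the thread) for whether the metadata written by vfs_fruit and streams_xattr is actually landing in extended attributes on the Gluster-backed path; the path below is illustrative:
  # run against a test file inside the Samba-exported Gluster mount
  getfattr -d -m - -e hex /path/to/share/testfile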
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
...1475983, failures: 0,
skipped: 0
Checking my logs the new stor3node and the rebalance task was executed on
2018-02-10. From that date until now I have been storing new files.
The sequence of commands to add the node was:
gluster peer probe stor3data
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
gluster volume add-brick volumedisk0 stor3data:/mnt/disk_b1/glusterfs/vol0
2018-03-01 6:32 GMT+01:00 Nithya Balachandran <nbalacha at redhat.com>:
> Hi Jose,
>
> On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
>
>...
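For completeness, and as a sketch against the volume names above rather than the poster's exact procedure: the usual follow-up to add-brick is a rebalance, after which df on the mount should reflect the added brick (the sizing bug discussed in this thread aside):
  gluster volume rebalance volumedisk0 start
  gluster volume rebalance volumedisk0 status   # scanned/rebalanced counts and failures per node
  df -h /volumedisk0                            # on a healthy volume this includes the new brick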