Displaying 20 results from an estimated 3000 matches similar to: "Extremely slow du"
2017 Jun 12
2
Extremely slow du
Hi Vijay
I have enabled client profiling and used this script
https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh
to extract data. I am attaching output files. I don't have any reference
data to compare with my output. Hopefully you can make some sense out of
it.
On Sat, Jun 10, 2017 at 10:47 AM, Vijay Bellur <vbellur at redhat.com> wrote:
> Would it be
2017 Jun 18
1
Extremely slow du
Hi Mohammad,
A lot of time is being spent servicing metadata calls, as expected. Could
you consider testing 3.11 with the md-cache [1] and readdirp [2]
improvements?
Adding Poornima and Raghavendra who worked on these enhancements to help
out further.
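If it helps, the md-cache and readdir options can be toggled per volume; a
rough sketch with the volume name from this thread (option names as I recall
them, so please verify with 'gluster volume set help' on your build):

gluster volume set atlasglust features.cache-invalidation on
gluster volume set atlasglust features.cache-invalidation-timeout 600
gluster volume set atlasglust performance.stat-prefetch on
gluster volume set atlasglust performance.md-cache-timeout 600
gluster volume set atlasglust performance.parallel-readdir on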
Thanks,
Vijay
[1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
[2] https://github.com/gluster/glusterfs/issues/166
On
2017 Jun 16
0
Extremely slow du
Hi Vijay
Did you manage to look into the gluster profile logs ?
Thanks
Kashif
On Mon, Jun 12, 2017 at 11:40 AM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi Vijay
>
> I have enabled client profiling and used this script
> https://github.com/bengland2/gluster-profile-analysis/blob/master/gvp-client.sh
> to extract data. I am attaching output files. I
>
2017 Jun 09
2
Extremely slow du
Hi Vijay
Thanks for your quick response. I am using Gluster 3.8.11
(glusterfs-3.8.11-1.el7.x86_64) on CentOS 7 servers.
Clients are CentOS 6, but I tested with a CentOS 7 client as well and the
results didn't change.

gluster volume info

Volume Name: atlasglust
Type: Distribute
Volume ID: fbf0ebb8-deab-4388-9d8a-f722618a624b
Status: Started
Snapshot Count: 0
Number of Bricks: 5
Transport-type: tcp
2017 Jun 10
0
Extremely slow du
Would it be possible for you to turn on client profiling and then run du?
Instructions for turning on client profiling can be found at [1]. Providing
the client profile information can help us figure out where the latency
could be stemming from.
Regards,
Vijay
[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling
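In short, the client-side profiling described there amounts to roughly the
following (mount point and filenames are illustrative; double-check against
the linked doc):

gluster volume set atlasglust diagnostics.latency-measurement on
gluster volume set atlasglust diagnostics.count-fop-hits on
# run the slow du on a client, then dump io-stats from the mount point:
setfattr -n trusted.io-stats-dump -v /tmp/io-stats-post.txt /mnt/atlasglust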
On Fri, Jun 9, 2017 at
2017 Jun 09
0
Extremely slow du
Can you please provide more details about your volume configuration and the
version of gluster that you are using?
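Something like the following, run on any server node, would cover it:

gluster --version
gluster volume info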
Regards,
Vijay
On Fri, Jun 9, 2017 at 5:35 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi
>
> I have just moved our 400 TB HPC storage from Lustre to Gluster. It is
> part of a research institute, and users have files ranging from very small
> to big (a few
2017 Jun 09
2
Extremely slow du
Hi
I have just moved our 400 TB HPC storage from Lustre to Gluster. It is part
of a research institute, and users have files ranging from very small to big
(a few KB to 20 GB). Our setup consists of 5 servers, each with 96 TB of
RAID 6 disks. All servers are connected through 10G Ethernet, but not all
clients are. Gluster volumes are distributed without any replication. There
are approximately 80 million files in
2018 May 01
0
Usage monitoring per user
Hi,
There are several programs that will take the output of your scans and
store the results in a database. If you size the database appropriately,
querying that database will be much quicker than querying the filesystem,
though of course the results will be slightly outdated.
One such project is Robinhood: https://github.com/cea-hpc/robinhood/wiki
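The core idea, as a minimal sketch (assuming GNU find and the sqlite3 shell
are available; all paths illustrative):

# one slow scan of the mount, recording owner and size for every file
find /mnt/gluster -type f -printf '%u|%s\n' > /tmp/scan.psv

# load the scan into a database; per-user totals can then be queried
# repeatedly without touching the filesystem again
sqlite3 /tmp/usage.db <<'EOF'
CREATE TABLE files(user TEXT, bytes INTEGER);
.mode list
.separator |
.import /tmp/scan.psv files
SELECT user, SUM(bytes)/1e9 AS gb FROM files GROUP BY user ORDER BY gb DESC;
EOF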
A simpler way might be to
2018 May 02
1
Usage monitoring per user
I rather like agedu; it probably does what you want.
But, as Mohammad says, you do have to traverse your filesystem.
https://www.chiark.greenend.org.uk/~sgtatham/agedu/
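A minimal run is something like this (flags as I remember them; see the
agedu man page):

agedu -s /mnt/gluster    # scan once and build an index file
agedu -w                 # then serve a browsable usage report over HTTP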
2017 Dec 13
0
Online Rebalancing
On 13 December 2017 at 17:34, mohammad kashif <kashif.alig at gmail.com> wrote:
> Hi
>
> I have a five node 300 TB distributed gluster volume with zero
> replication. I am planning to add two more servers which will add around
> 120 TB. After fixing the layout, can I rebalance the volume while clients
> are online and accessing the data?
>
>
Hi,
Yes, you can. Are
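For reference, the usual sequence is roughly the following, with VOLNAME
standing in for your volume:

gluster volume rebalance VOLNAME fix-layout start
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status    # progress while clients stay online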
2018 May 01
2
Usage monitoring per user
Hi
Is there any easy way to find usage per user in Gluster? We have 300 TB of
storage with almost 100 million files, and running du takes too much time.
Is anyone aware of another tool which can be used to break down storage per
user?
Thanks
Kashif
2017 Dec 13
2
Online Rebalancing
Hi
I have a five node 300 TB distributed gluster volume with zero
replication. I am planning to add two more servers which will add around
120 TB. After fixing the layout, can I rebalance the volume while clients
are online and accessing the data?
Thanks
Kashif
2017 Aug 30
2
Gluster status fails
Hi
I am running a 400 TB, five-node, purely distributed Gluster setup. I am
troubleshooting an issue where file creation sometimes fails. I found that
volume status is not working:
gluster volume status
Another transaction is in progress for atlasglust. Please try again after
sometime.
When I tried from another node, it seemed that two nodes have a locking issue:
gluster volume status
Locking failed
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some tests, I can confirm that ESXi can't use a Striped-Replicate
volume over GlusterFS's NFS, but it does succeed with Distributed-Replicate.
Does anyone know how or why?
2013/9/5 higkoohk <higkoohk at gmail.com>
> Thanks Vijay !
>
> It ran successfully after 'volume set images-stripe nfs.nlm off'.
>
> Now I can use Esxi with Glusterfs's nfs export .
>
> Many
2013 Sep 13
1
glusterfs-3.4.1qa2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1qa2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1qa2.tar.gz
This release is made off jenkins-release-42
-- Gluster Build System
2013 Aug 21
1
FileSize changing in GlusterNodes
Hi,
When I upload files into the gluster volume, it replicates all the files to
both gluster nodes, but the file size varies slightly (by 4-10 KB), which
changes the md5sum of the file. The command used to check file size is
du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4.
This is creating an inconsistency between the files on the two bricks. What
is the reason for this changed file size, and how can
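One way to separate logical size from allocated space when comparing bricks
(GNU coreutils assumed; 'somefile' is illustrative):

stat -c 'logical=%s bytes, allocated blocks=%b' somefile
du -k somefile     # reports allocated space, not logical size
md5sum somefile    # hashes file content only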
2017 Aug 31
0
Gluster status fails
Thank you for the acknowledgement.
On Thu, Aug 31, 2017 at 8:30 PM, mohammad kashif <kashif.alig at gmail.com>
wrote:
> Hi Atin
>
> Thanks, I was not running any script or gluster command. But now the
> gluster status command has started working. CPU usage also came down, and
> looking at the Ganglia graph, CPU usage is strongly correlated with
> network activity.
>
> It may be
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All,
3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0
can be downloaded from [1]
and release notes are available at [2]. Upgrade instructions can be
found at [3].
If you would like to propose bug fix candidates or minor features for
inclusion in 3.4.1, please add them at [4].
3.3.2 packages can be downloaded from [5].
A big note of thanks to everyone who helped in
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts,
Recently we encountered a self-heal daemon crash after rebalancing a
volume. Crash stack below:
+------------------------------------------------------------------------------+
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-03-14 16:33:50
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread
2017 Nov 09
1
Adding a slack for communication?
@Amye +1 for this great idea, I am 100% for it.
@Vijay, for archiving purposes maybe it would be possible to use a free service such as https://slackarchive.io/
BR,
Martin
> On 9 Nov 2017, at 00:09, Vijay Bellur <vbellur at redhat.com> wrote:
>
>
>
> On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda <amye at redhat.com>