similar to: Numbers behind "df" and "tune2fs"

Displaying 20 results from an estimated 2000 matches similar to: "Numbers behind "df" and "tune2fs""

2013 Sep 16
2
Re: Numbers behind "df" and "tune2fs"
Thanks for your help. I also tried adding some other information as you suggested. I can also take into account: - "Reserved block count: XXXXXXX" from tune2fs, which gives me the number of blocks reserved for root - Reserved GDT blocks: XXX But I didn't think about the FS journal. How can I gather information about it (its size and any other information)? 2013/9/16
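One way to get at the journal's size, as asked above, is via the journal superblock or the journal inode; a minimal sketch, assuming a placeholder device /dev/sdXN:
# dumpe2fs -h /dev/sdXN | grep -i journal
# debugfs -R "stat <8>" /dev/sdXN | grep -i size
dumpe2fs -h reports fields such as "Journal size" on reasonably recent e2fsprogs, and inode 8 is the reserved journal inode on ext3/ext4, so its size is the space the journal occupies.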
2013 Sep 16
0
Re: Numbers behind "df" and "tune2fs"
On 9/16/13 5:16 AM, Nicolas Michel wrote: > Hello guys, > > I have some difficulty understanding what the numbers > behind "df" and tune2fs really are. You'll find the output of tune2fs and df > below, on which my maths are based. > > Here are my maths: > > A tune2fs on an ext3 FS tells me the FS size is 3284992 blocks. It > also tells me that
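A minimal sketch of the conversion behind those maths, assuming a placeholder device /dev/sdXN and a 4096-byte block size (4096 is an assumption; 3284992 is the block count quoted in the thread):
# tune2fs -l /dev/sdXN | egrep -i '^(block count|block size)'
# echo $(( 3284992 * 4096 / 1024 ))   # raw filesystem size expressed in the 1K units df prints
13139968
df's "1K-blocks" total normally comes out lower than this raw figure, because the kernel subtracts an estimate of the metadata overhead (bitmaps, inode tables, group descriptors) before reporting it.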
2013 Sep 16
0
Re: Numbers behind "df" and "tune2fs"
On 9/16/13 9:44 AM, Nicolas Michel wrote: > Thanks for your help. I also tried adding some other information as you suggested: > I can also take into account: > - "Reserved block count: XXXXXXX" from tune2fs, which gives me the > number of blocks reserved for root > - Reserved GDT blocks: XXX > > But I didn't think about the FS journal. How can I gather
2013 Sep 17
2
Re: Numbers behind "df" and "tune2fs"
OK. Thanks for the journal information. I thought tune2fs -l and dumpe2fs were the same. In reality they're almost the same, but not entirely ^^ I hear you about all the internal mechanisms that make the FS work or give it some features, and I do understand that they take some space on the disk. However, what I don't understand is why the number given in the "available" column is
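A hedged way to see where the "available" column comes from (device and mount point are placeholders): GNU df's Avail is the space a non-root user can still allocate, i.e. roughly free blocks minus the root-reserved blocks:
# tune2fs -l /dev/sdXN | egrep -i 'free blocks|reserved block count'
# df -B 4K /mountpoint
With a 4K block size, Avail in the df output should land close to (Free blocks - Reserved block count) from tune2fs.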
2013 Sep 17
0
Re: Numbers behind "df" and "tune2fs"
In fact, what I really want to achieve is to find the values and the algorithm that let me reproduce the percentage given by df (and to understand deeply what it means). Why do I need it? Because I'm trying to write a script to do capacity planning and space-problem forecasting. Currently I don't really know which values I should use to do it. (I could use the
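For such a script, a minimal sketch of how GNU df's Use% can be reproduced from the statvfs numbers (the mount point /mountpoint is a placeholder; %b, %f and %a are GNU stat's filesystem format sequences for total, free and non-root-available blocks):
# t=$(stat -f -c %b /mountpoint); f=$(stat -f -c %f /mountpoint); a=$(stat -f -c %a /mountpoint)
# u=$(( t - f ))
# echo "Use% = $(( (u * 100 + u + a - 1) / (u + a) ))"   # used / (used + avail), rounded up like df
The key point is that df's denominator is used + available, which excludes the root reserve and metadata overhead, so the percentage never quite matches a naive used/total calculation based on the raw tune2fs block count.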
2014 Aug 17
2
What uses these 50 GB?
Hello everybody, first of all thank you for the development of Ext2/3/4. It works like a charm and makes it possible to base applications on it. However, this is the first time I need more information to understand the behaviour of an ext4 installation on a 480 GB hard disk. It holds a database with a size of 355 GB, as reported by "du -m": ... 355263 /opt/ssd However,
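A hedged checklist for a du-versus-df gap like this, with /dev/sdXN and the /opt/ssd mount as placeholders: the usual suspects are the root reserve (5% by default, roughly 24 GB on a 480 GB filesystem), the journal, filesystem metadata, and files that were deleted but are still held open by the database:
# tune2fs -l /dev/sdXN | egrep -i 'reserved block count|block size'
# dumpe2fs -h /dev/sdXN | grep -i 'journal size'
# lsof +L1 /opt/ssd   # open-but-unlinked files, which df counts but du cannot see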
2016 Apr 30
1
tune2fs: Filesystem has unsupported feature(s) while trying to open
Not in my testing, especially around the time of 6.4. On Apr 22, 2016 5:16 PM, "Gordon Messmer" <gordon.messmer at gmail.com> wrote: > On 04/22/2016 01:33 AM, Rob Townley wrote: > >> tune2fs against an LVM (albeit formatted with ext4) is not the same as >> tune2fs against ext4. >> > > tune2fs operates on the content of a block device. A logical volume
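A small sketch of that point, with hypothetical device names: tune2fs only looks at the ext2/3/4 superblock it finds on whatever block device it is given, so an LV and a plain partition are handled identically:
# tune2fs -l /dev/VolGroup/LogVol00 | head
# tune2fs -l /dev/sda2 | head
Both print the same kind of superblock listing; the only difference is which block device the superblock is read from.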
2011 Jul 07
4
Is it safe to run tune2fs -c -1 -i 0 /dev/sda2 on mounted file system
Hi, Is it safe to run tune2fs -c -1 -i 0 /dev/sda2 on a mounted file system? Basically, this is a command to disable fsck based on the mount count & the time since the last fsck. -- Regards, Sherin
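A quick way to verify what that command changed (device name as in the post), since both settings live in the superblock:
# tune2fs -l /dev/sda2 | egrep -i 'maximum mount count|check interval'
After -c -1 -i 0, the maximum mount count should read -1 and the check interval 0 (<none>), which is what disables the boot-time fsck triggers.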
2001 Nov 23
3
core dumped messages from tune2fs
I decided to start using ext3. My kernel is 2.4.15p9. I downloaded and built util-linux-2.11m and e2fsprogs 1.25. I compiled ext3 into the kernel. I started converting my filesystems and things went OK for the first few. I then started getting the following on each additional filesystem. [root@joker /root]# tune2fs -j /dev/hdc4 tune2fs 1.25 (20-Sep-2001) Creating journal inode: done This filesystem
2016 Apr 19
2
tune2fs: Filesystem has unsupported feature(s) while trying to open
I have an ext4 filesystem for which I'm trying to use "tune2fs -l". Here is the listing of the filesystem from the "mount" command: # mount | grep share /dev/mapper/VolGroup_Share-LogVol_Share on /share type ext4 (rw,noatime,nodiratime,usrjquota=aquota.user,jqfmt=vfsv0,data=writeback,nobh,barrier=0) When I try to run "tune2fs" on it, I get the following error:
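When tune2fs reports unsupported features, a hedged first check is whether the installed e2fsprogs is older than the feature set on the filesystem (the device path is the one from the mount output above):
# rpm -q e2fsprogs
# dumpe2fs -f -h /dev/mapper/VolGroup_Share-LogVol_Share | grep -i 'filesystem features'
dumpe2fs's -f flag asks it to dump the superblock even when it sees feature flags it does not understand, so the features line can be compared against what the installed tools support.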
2016 Apr 22
4
tune2fs: Filesystem has unsupported feature(s) while trying to open
tune2fs against an LVM (albeit formatted with ext4) is not the same as tune2fs against ext4. Could this possibly be a machine where uptime has outlived its usefulness? On Thu, Apr 21, 2016 at 10:02 PM, Chris Murphy <lists at colorremedies.com> wrote: > On Tue, Apr 19, 2016 at 10:51 AM, Matt Garman <matthew.garman at gmail.com> > wrote: > > > ># rpm -qf `which
2003 Oct 29
1
tune2fs -j on mounted FS
Just now I ran tune2fs -j on the root filesystem of a box running 2.6.0-test8. Then I edited /etc/fstab and changed the FS type from ext2 to ext3, saved the file, and invoked vim on the file again. A few moments after this, the box hung. Unfortunately X was running at the time, so I don't have any messages to cite. Is this a known problem?
2011 Nov 09
6
[PATCH] Add tune2fs support to libguestfs.
At the moment OpenStack uses kpartx and nbd to resize filesystems and inject files into guests. I sincerely hope they don't allow untrusted users to upload guest images / AMIs :-( To fix this I'm looking into adding libguestfs support as an optional backend in OpenStack. The only missing feature in libguestfs is the ability to call tune2fs on a filesystem. This patch series adds tune2fs
2007 Mar 29
3
tune2fs -l stale info
Hello, I just noticed that 'tune2fs -l' did not return live, updated information for the free inode count (it looks like it's always correct after unmounting). It became surprising after an online resizing operation, where the total inode count was immediately updated (grown in my case) but the free inode count stayed the same: one could deduce that suddenly a lot of
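One way to observe the effect being described, with a placeholder device and mount point: the kernel's live counters come back through statfs, while tune2fs -l reads the on-disk superblock copy, which is only written back periodically while the filesystem is mounted:
# df -i /mountpoint
# tune2fs -l /dev/sdXN | grep -i 'free inodes'
While the filesystem is mounted and busy the two free-inode figures can differ; after a clean unmount they should agree.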
2011 Nov 10
5
[PATCH v2] Add tune2fs command.
The changes since the previous patch: - safe ADD_ARG macro for adding arguments to a fixed-size stack array - support for testing functions that return RHashtable, i.e. tune2fs-l. - add tests that set (tune2fs) and get (tune2fs-l) various parameters. - only one 'intervalbetweenchecks' parameter (in seconds) Rich.
2016 Apr 30
3
tune2fs: Filesystem has unsupported feature(s) while trying to open
On Sat, April 30, 2016 8:54 am, William Warren wrote: > uptime=insecurity. This sounds like an MS Windows admin's statement. Are there any Unix admins still left around who remember systems with a kernel that doesn't need [security] patching for a few years? And a libc that does not need security patches often. I almost said glibc, but on those Unixes it was libc; glibc, however, wasn't
2003 Mar 20
1
Is it safe to run "tune2fs -j" on a mounted filesystem?
All -- I'm curious whether it is safe or even wise to run the following command on a mounted filesystem, namely root (/)? tune2fs -j /dev/hda1 What about if someone goes into single-user mode and runs this first? mount -o remount,ro / And then, to enable it, runs this? mount -t ext2 -o remount,rw / I assume it is not safe to do so, but one user in my LUG assumes otherwise. Just curious,
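Whatever the safety answer, a hedged way to check the outcome afterwards (device name from the question) is to look for the has_journal feature flag:
# tune2fs -l /dev/hda1 | grep -i 'has_journal'
If the features line comes back with has_journal, the journal was created; when this is done on a mounted filesystem, the tune2fs documentation says the journal shows up as a visible, immutable .journal file in the filesystem's top-level directory.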
2005 Oct 31
2
What is the history of CONFIG_EXT{2,3}_CHECK?
Can anyone tell me the history of CONFIG_EXT{2,3}_CHECK? There is code for a "check" option for mount if these options are enabled, but there's no way to enable them. TIA Adrian -- "Is there not promise of rain?" Ling Tan asked suddenly out of the darkness. There had been need of rain for many days. "Only a promise," Lao Er said.
2008 Nov 05
2
RE: RedHat DomU hanging
Hello, Maybe someone can help me.... I have a guest XEN image that ran well until today. I use an LVM partition to host the guest, and today on Dom0 I added another LV to be made available to this domU. These were all the commands I issued: On the hypervisor: lvcreate -n lintra02data -L 30G rootvg vi /etc/xen/lintra02 and added the volume to the file like this: disk = [
2012 Dec 04
2
Xen dom0 load affecting domUs
Hello folks, I have a Xen server running on CentOS 6.2 with 3.6.6-1.el6xen.x86_64 as the Dom0 kernel, and I have dedicated 2 CPUs and 4 GB of RAM to the Dom0. I have close to 4-5 virtual machines running on the Xen server. The problem is that, for some reason, carrying out any CPU/disk-intensive task on the Dom0 seems to be affecting the DomUs adversely. For example, I noticed that my DomUs become extremely