Displaying 20 results from an estimated 3000 matches similar to: "CF Card wear optimalisation for ext4"
2014 Oct 10
0
Re: CF Card wear optimalisation for ext4
On Oct 8, 2014, at 10:28 AM, Jelle de Jong <jelledejong at powercraft.nl> wrote:
> Hello everyone,
>
> I have been using CF cards for more than 7 years now with the ext
> file-systems without any major problems on ALIX boards.
>
> Last year I took 30 other systems into production with ext4, and the CF
> cards have been dropping out pretty fast; it may have been a bad batch, but
2014 Oct 16
2
Re: CF Card wear optimalisation for ext4
* Andreas Dilger <adilger@dilger.ca> hat geschrieben:
> The "lifetime writes" value has not been around forever, so if the
> filesystem was originally created and populated on an older kernel
> (e.g. using ext3) it would not contain a record of those writes.
It was created as stable ext4 in the first place. So only if there was a
stable ext4 release which didn't
2014 Oct 17
0
Re: CF Card wear optimalisation for ext4
On Thu, Oct 16, 2014 at 11:01:35PM +0200, Bodo Thiesen wrote:
>
> Since it never gets updated unless the file system is unmounted, it can
> only be used for a 24-hour test by mounting the file system now,
> unmounting it 24 hours from now, and then taking the difference.
It also gets updated when the syncfs(2) or sync(2) system call is
issued against the file system. But if you crash, any writes since
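For anyone who wants to try the 24-hour measurement described in the quoted message, a rough sketch follows; the device name /dev/sda1 is only an example, and the "Lifetime writes" field only appears with a reasonably recent e2fsprogs on an ext4 filesystem that records it:
# dumpe2fs -h /dev/sda1 | grep -i 'lifetime writes'
# sync                      # push the counter out to the on-disk superblock
  ... let the workload run for 24 hours ...
# sync
# dumpe2fs -h /dev/sda1 | grep -i 'lifetime writes'
Subtracting the first reading from the second gives an approximate write volume for the interval.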
2014 Oct 16
2
Re: CF Card wear optimalisation for ext4
* Andreas Dilger <adilger@dilger.ca> hat geschrieben:
> You can see in the ext4 superblock the amount of data that has been
> written to a filesystem over its lifetime:
>
> Note that this number isn't wholly accurate, but rather a guideline.
It is more like a completely bogus value at best:
# LANG=C df -h / | grep root
/dev/root 3.7T 3.6T 73G 99% /
# grep [0-9]
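For comparison, on kernels that export it the same counter can also be read at runtime through sysfs (values are in kilobytes; the device name below is only an illustration):
# cat /sys/fs/ext4/sda1/lifetime_write_kbytes
# cat /sys/fs/ext4/sda1/session_write_kbytes
The session value counts only writes since the current mount, which makes it handy for short tests.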
2014 Oct 16
0
Re: CF Card wear optimalisation for ext4
On Oct 16, 2014, at 10:25 AM, Bodo Thiesen <bothie@gmx.de> wrote:
> * Andreas Dilger <adilger@dilger.ca> hat geschrieben:
>
>> You can see in the ext4 superblock the amount of data that has been
>> written to a filesystem over its lifetime:
>>
>> Note that this number isn't wholly accurate, but rather a guideline.
>
> It is more like a
2014 Oct 11
2
Re: CF Card wear optimalisation for ext4
Something else that you might want to do is count the number of
journal commits that are taking place, via a command like this:
perf stat -e jbd2:jbd2_start_commit -a sleep 3600
This will count the number of jbd2 commits that are executed in 3600
seconds --- i.e., an hour.
If you are running some workload which is constantly calling fsync(2),
that will be forcing journal commits, and those turn into
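On kernels built with jbd2 statistics, similar information is also exposed under /proc and can be read without perf; the device name below is only an example:
# cat /proc/fs/jbd2/sda1-8/info
The output includes the number of transactions committed and average commit times, which makes workloads that force frequent fsync(2)-driven commits easy to spot.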
2008 May 27
2
needs help, root inode gone after usb bus reset on sata disks
Hello everybody,
I am new to this list, so hello to everybody.
In the last two weeks I had two hard-disk crashes with my ext2 file system.
This is roughly what happened with both of the disks:
I plugged my USB-to-SATA converter into my hard disk that has an ext2
filesystem. I mounted the partition and went to a directory that had a DVD
image. I mounted the DVD image in the same directory and started
watching the
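In a case like this, one recovery step that is often suggested first is pointing e2fsck at a backup superblock; the device name and block number below are only illustrative, and mke2fs -n merely simulates a format so it can print where the backups would sit (it must be given the same parameters the filesystem was created with):
# mke2fs -n /dev/sdb1          # -n: do NOT format, just show backup superblock locations
# e2fsck -b 32768 /dev/sdb1    # retry the check using a backup superblock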
2009 Apr 26
1
ext4 mount fails with "resize inode not valid" after a reboot
With kernel 2.6.30-rc2-git6 and prior I am having problems mounting
ext4 partitions after reboot.
A successful mount looks like this:
/dev/cciss/c0d0p8 on /squid-cache0 type ext4 (rw,noexec,nodev,noatime,data=writeback,errors=panic)
/dev/cciss/c0d0p9 on /squid-cache1 type ext4 (rw,noexec,nodev,noatime,data=writeback,errors=panic)
/dev/cciss/c0d0p10 on /squid-data type ext4
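When the mount fails with "resize inode not valid", the usual advice is to let e2fsck rebuild the resize inode on the unmounted device before trying again; a minimal sketch using one of the devices from the mount listing above:
# umount /squid-cache0
# e2fsck -f /dev/cciss/c0d0p8   # offers to recreate the resize inode if it is found corrupt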
2013 Mar 12
2
ext4 and extremely slow filesystem traversal
Hello list,
I am having trouble with the daily backup of a modest filesystem, which
tends to take more than 10 hours. I have ext4 all over the place on ~200
servers and never ran into such a problem.
The filesystem capacity is 300 GB (19,6M inodes) with 196 GB (9,3M
inodes) used. It's mounted 'defaults,noatime'. It sits on a hardware
RAID array through plain LVM slices. The RAID array is
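One way to narrow this down is to time a cold-cache, metadata-only walk separately from the data read; if the stat() pass alone already takes hours, the traversal is seek-bound on inodes and directories rather than limited by throughput. A rough sketch, with the mount point purely illustrative:
# echo 3 > /proc/sys/vm/drop_caches      # start from a cold cache
# time find /srv/data -xdev -type f | wc -l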
2014 May 10
1
location of file-system information on ext4
Hi,
I zero-filled the first 10 MiB of my SSD (dd if=/dev/zero of=/dev/sda bs=10M
count=1). As expected, this wiped my primary GPT header and first
partition. Before the wipe, the GPT was the following:
Disk /dev/sda: 250069680 sectors, 119.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2EFD285D-F8E6-4262-B380-232E866AF15C
Partition table holds up to 128 entries
First usable sector is 34, last
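Both layers involved here keep redundant copies: GPT writes a secondary header and table at the end of the disk, which a wipe of the first 10 MiB does not touch, and ext4 stores backup superblocks at the start of later block groups. A rough recovery sketch, where the backup-superblock location and block size are typical values rather than ones verified for this disk:
# gdisk /dev/sda     # use the 'r' recovery menu to rebuild the main GPT from the backup at the end of the disk
# dumpe2fs -o superblock=32768 -o blocksize=4096 /dev/sda1 | head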
2016 Mar 11
0
/etc/msg.sock folder questions regarding nvram/wear leveling.
On 11/03/16 12:08, Andy Walsh wrote:
> Hi,
>
> I am trying to create an openWRT Samba 4.3 package and stumbled across the fact
> that Samba 4.3 will create those message sockets inside the private dir. That
> results in entries being created inside /etc/samba/msg.sock.
>
> On openWRT /var is a tmpfs in RAM, so anything there is not a problem
> regarding NVRAM and wear leveling. Yet the
2016 Mar 11
2
/etc/msg.sock folder questions regarding nvram/wear leveling.
Hi,
I am trying to create an openWRT Samba 4.3 package and stumbled across the fact
that Samba 4.3 will create those message sockets inside the private dir. That
results in entries being created inside /etc/samba/msg.sock.
On openWRT /var is a tmpfs in RAM, so anything there is not a problem
regarding NVRAM and wear leveling. Yet the root uses a jffs2 overlay. So
while those message sockets have no size, jffs2
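If the only concern is flash wear from that directory, one possible (untested) workaround is to keep the sockets off the jffs2 overlay by mounting a tiny tmpfs over just the msg.sock path, leaving the rest of the private dir (secrets.tdb and friends) on persistent storage:
# mkdir -p /etc/samba/msg.sock
# mount -t tmpfs -o size=64k,mode=700 tmpfs /etc/samba/msg.sock
Whether Samba is happy to recreate its sockets in an empty directory after every reboot is something to verify on the target build.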
2016 Mar 11
1
/etc/msg.sock folder questions regarding nvram/wear leveling.
Rowland penny <rpenny <at> samba.org> writes:
>
> On 11/03/16 12:08, Andy Walsh wrote:
> > Hi,
> >
> > I am trying to create an openWRT Samba 4.3 package and stumbled across the fact
> > that Samba 4.3 will create those message sockets inside the private dir. That
> > results in entries being created inside /etc/samba/msg.sock.
> >
> > On openWRT
2007 Dec 05
6
SCSI bad block table display
Hi All:
Is there a utility available that will allow for the dump/display of
the bad track table of a SCSI drive? We had this capability on SCO
OSR5, but I have not been able to locate anything similar for Linux.
The closest I have found is the badblocks utility that is part of the
e2fsprogs package, but this appears to only test for bad blocks, not
display the current bad block table contents.
I
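For SCSI disks specifically, the drive's own primary and grown defect lists can usually be queried over the SCSI protocol instead of being rediscovered by a surface scan. One reasonably safe check, assuming smartmontools is installed and with an example device name; sg3_utils' sginfo can reportedly dump the full defect list as well, though the exact option is worth checking in sginfo(8):
# smartctl -a /dev/sda | grep -i defect   # SCSI drives report "Elements in grown defect list"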
2012 May 22
3
SSD erase state and reducing SSD wear
I've got two recent examples of SSDs. Their pristine state from the
manufacturer shows:
Device Model: OCZ-VERTEX3
# hexdump -C /dev/sdd
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
1bf2976000
Device Model: OCZ VERTEX PLUS
(OCZ VERTEX 2E)
# hexdump -C /dev/sdd
00000000  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  |................|
*
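If the goal is to return blocks to the erased state on a drive that supports TRIM, rather than relying on the factory state, the util-linux tools can do it explicitly; a short sketch (blkdiscard operates on the whole device and destroys all data, fstrim is the non-destructive option for a mounted filesystem):
# blkdiscard /dev/sdd     # discard every block on the device -- DESTROYS ALL DATA
# fstrim -v /             # trim only the free space of a mounted filesystem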
2013 Sep 17
2
Re: Numbers behind "df" and "tune2fs"
OK. Thanks for the journal information. I thought tune2fs -l and
dumpe2fs were the same. In reality they are almost the same, but not
entirely ^^
I hear you about all the internal mechanisms that make the FS work
or give it some features, and I do understand that they take some space
on the disk. However, what I don't understand is why the number given
in the "available" column is
2010 Feb 27
1
e2fsprogs Help.
Hello,
Hope you will forgive me for asking some very simple questions about
e2fsprogs. I am very new to the kernel as
well as file system programming.
My task is to collect superblock, inode, and bitmap (or free list) information
from ext2/ext3 filesystems. After searching
Google, I came to know about e2fsprogs, which I was able to install and
use, at least the "dumpe2fs" utility. This
2008 Jan 14
3
Spot the cyclical relationship
I got the following error, but there's no "cycle". I commented out
File["/dev/sdb3"] and it works, but of course it would choke if I ran it
and the requirement were not met:
err: Could not apply complete catalog: Found cycles in the following
relationships: File[/dev/sdb1] => Exec[echo -e "0,290\n,290\n," | sfdisk
/dev/sdb]
Here's the node:
node
2005 Feb 22
2
ext3 compatibility between 2.4 and 2.6 kernels
Hello--
We have a system where a central server formats removable hard disks,
which are then booted in an embedded system running a highly modified
RH9. The removable disks themselves contain boot, root, and data
filesystems.
The problem we've encountered after upgrading to FC3 / kernel 2.6 on
the central server is that the 2.4 kernel in the embedded system
cannot read the root filesystem,
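A common cause of this is that the newer mke2fs on the central server enables filesystem features the old 2.4 ext3 driver does not understand; comparing the feature flags on both systems is a quick way to confirm, and formatting with a trimmed feature set is one possible workaround (the feature names below are examples, not a verified list):
# dumpe2fs -h /dev/sda2 | grep -i features
# mkfs.ext3 -I 128 -O ^dir_index,^resize_inode /dev/sdb1   # example: restrict features for an old kernel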
2009 Dec 08
3
botched RAID, now e2fsck or what?
Hi all,
Somehow I managed to mess up a RAID array containing an ext3 partition.
As an aside, if it matters: I physically disconnected a drive while
the array was online. Next thing, I lost the right order of the drives
in the array. While trying to re-create it, I overwrote the RAID
superblocks. Luckily, the array was a degraded RAID5, so whenever I
re-created it, it didn't go into sync;
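When the original member order is unknown, one approach that occasionally gets suggested (entirely at your own risk, and only with the same metadata version and chunk size as the original array) is to re-create the array with --assume-clean for each candidate order, keeping one slot as "missing" so nothing resyncs, and let a read-only fsck judge the result; device names and order below are purely illustrative:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean /dev/sdb1 /dev/sdc1 missing
# e2fsck -n /dev/md0      # read-only check; a mostly clean result suggests the order is right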