Fragmentation of modules on NTFS hard drive

Technical issues/questions of an intermediate or advanced nature.
Rapha_
Shogun
Posts: 238
Joined: 12 Jun 2021, 21:59
Distribution: Xfce 4.12 - 5.rc3 - x86_64
Location: France

Fragmentation of modules on NTFS hard drive

Post#16 by Rapha_ » 09 Oct 2023, 01:01

^

Have you defragmented your ISOs and modules saved on NTFS disks and USB sticks (FAT32)?

My latest backups were badly fragmented (between 60 and 80 fragments for the worst of them) on disks that were 75% full.
This doesn't come from Windows, since it's the Linux applications (ntfs-3g?) that fragment these files on NTFS / FAT32.


A very good free tool to defragment a single file on Windows is Defraggler:
https://www.ccleaner.com/defraggler

Rava
Contributor
Posts: 5416
Joined: 11 Jan 2011, 02:46
Distribution: XFCE 5.01 x86_64 + 4.0 i586
Location: Forests of Germany

Fragmentation of modules on NTFS hard drive

Post#17 by Rava » 10 Oct 2023, 02:34

Just as a heads-up on e4defrag:
I found this article https://howtoforge.com/tutorial/linux-f ... em-defrag/
Have a partial read:
Since many users nowadays use SSDs and not HDDs, it is important to note that the defragmentation procedure is only beneficial for the latter. If you own an SSD, there is simply no point in worrying about fragmented files, as those disks can access their storage randomly, whereas HDDs access it sequentially. Defragging your SSD will only increase the read/write count and thus reduce the lifetime of your disk. SSD owners should look into the TRIM function instead, which is not covered in this tutorial.
(highlighting by me) :)
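Since the quote mentions TRIM: a rough sketch of trimming an SSD manually, assuming util-linux's fstrim is available and the mounted filesystem supports discard (this is my example, not from that tutorial):

Code: Select all

# trim unused blocks on the filesystem mounted at / (run as root)
fstrim -v /
# or trim every mounted filesystem that supports it
fstrim -av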

Added in 9 minutes 27 seconds:
How to Repair and Defragment Linux System Partitions and Directories
https://www.tecmint.com/defragment-linu ... rectories/

Do ext4 filesystems need to be defragmented?
https://superuser.com/questions/536788/ ... fragmented

How to defrag an ext4 filesystem
https://askubuntu.com/questions/221079/ ... filesystem
Among the info on e4defrag I found this part on e2fsck enlightening:

Code: Select all

e2fsck -D /dev/sda1
e2fsck -C 0 -c -c -D /dev/sda1

-D optimizes directories by sorting and reindexing them, which usually speeds things up. It might not be considered a full filesystem defrag, but not all systems have e4defrag.

Sorting directories so that directories and files are in alphabetical order speeds up reading, because files are then read together physically as well as logically.

I include the following options when running e2fsck: -C 0 prints progress to stdout; -c -c runs a non-destructive read-write test using badblocks.

Note that it does not work on disks that are in use (mounted).
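A minimal sketch of how that looks in practice, with /dev/sdXY as a placeholder for an ext2/3/4 partition that is not mounted:

Code: Select all

# the filesystem must not be mounted
umount /dev/sdXY
# -f forces the check, -C 0 prints progress to stdout, -D sorts and reindexes directories
e2fsck -f -C 0 -D /dev/sdXY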
Added in 26 minutes 12 seconds:
https://www.howtogeek.com/115229/htg-ex ... agmenting/
If you do have problems with fragmentation on Linux, you probably need a larger hard disk. If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition. The file system will intelligently allocate the files as you copy them back onto the disk.

You can measure the fragmentation of a Linux file system with the fsck command -- look for "non-contiguous inodes" in the output.
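A rough sketch of both steps, with /dev/sdXY, the mount points and the backup path as placeholders, and assuming the backup location has enough free space:

Code: Select all

# 1) measure fragmentation on the unmounted partition; on ext filesystems
#    look at the "non-contiguous" percentage in the summary line
umount /dev/sdXY
fsck -n /dev/sdXY

# 2) if it is really bad: copy everything off, wipe, copy back
mount /dev/sdXY /mnt/sdXY
rsync -aH /mnt/sdXY/ /mnt/backup/
find /mnt/sdXY -mindepth 1 -delete    # only after verifying the backup!
rsync -aH /mnt/backup/ /mnt/sdXY/
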
Cheers!
Yours Rava

Rapha_
Shogun
Posts: 238
Joined: 12 Jun 2021, 21:59
Distribution: Xfce 4.12 - 5.rc3 - x86_64
Location: France

Fragmentation of modules on NTFS hard drive

Post#18 by Rapha_ » 01 Nov 2023, 00:34

^

After testing the write speed of a 10 MB file on a disk (NTFS, FAT32, ...):

Code: Select all

dd if=/dev/zero of=/mnt/sda1/tempfile bs=1M count=10 conv=fdatasync

You can then see whether the file 'tempfile' is fragmented with this command; each line of the table corresponds to one fragment:

Code: Select all

root@porteus:/home/guest# hdparm --fibmap /mnt/sda1/tempfile

/mnt/sda1/tempfile:
filesystem blocksize 4096, begins at LBA 63; assuming 512 byte sectors.
 byte_offset  begin_LBA    end_LBA    sectors
           0   19335879   19336710        832
      425984   19375047   19375614        568
      716800   19564103   19564622        520
      983040   19769223   19769726        504
     1241088   19860799   19861078        280
     1384448   19907191   19907358        168
     1470464   19975255   19975278         24
     1482752   20324535   20324542          8
     1486848   20604703   20605718       1016
     2007040   20658095   20658886        792
     2412544   20866591   20867262        672
     --------  ---------  --------   ----

By counting the rows, you obtain the number of fragments (in this case, subtract the 4 unnecessary rows from the result):

Code: Select all

root@porteus:/home/guest# hdparm --fibmap /mnt/sda1/tempfile |wc -l
60
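An alternative sketch, assuming e2fsprogs' filefrag is installed and the filesystem driver supports the FIEMAP/FIBMAP ioctls it relies on; it prints the extent count directly (something like "/mnt/sda1/tempfile: N extents found"):

Code: Select all

root@porteus:/home/guest# filefrag /mnt/sda1/tempfile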

Rava
Contributor
Posts: 5416
Joined: 11 Jan 2011, 02:46
Distribution: XFCE 5.01 x86_64 + 4.0 i586
Location: Forests of Germany

Fragmentation of modules on NTFS hard drive

Post#19 by Rava » 01 Nov 2023, 05:07

^
Thanks for that helpful information.

Just as a heads-up: hdparm is not usable by a non-root user. While you can create empty files with the dd command as in Rapha_'s example above, as long as you have write permission in the target folder, you cannot run the hdparm command unless you are in a root shell.
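For instance (just a sketch, assuming the root password is known; a configured sudo would work as well):

Code: Select all

guest@porteus:~$ su -c 'hdparm --fibmap /mnt/sdXY/filename'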

The dd command might be known to some, but the

Code: Select all

 hdparm --fibmap /mnt/sdXY/filename
most likely wasn't. Especially this part:
Rapha_ wrote:
01 Nov 2023, 00:34
By counting the rows, you obtain the number of fragments (in this case, subtract the 4 unnecessary rows from the result):
Aren't there 5 unnecessary rows?
At the top:

Code: Select all


/mnt/sda1/tempfile:
filesystem blocksize 4096, begins at LBA 63; assuming 512 byte sectors.
 byte_offset  begin_LBA    end_LBA    sectors
and at the bottom:

Code: Select all

     --------  ---------  --------   ----
Or is the empty line at the top a glitch due to Dr. Copy and Mr. Paste?

Added in 5 minutes 55 seconds:
Nope, the empty line at the beginning of the output is not a glitch. See my own example:
sdb is an external drive, connected via eSATA.
dd via guest:

Code: Select all

guest@rava:/mnt/sdb3/test$ dd if=/dev/zero of=tempfile bs=1M count=10 conv=fdatasync
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.54419 s, 19.3 MB/s
hdparm via root:

Code: Select all

root@rava:/mnt/sdb3/test# hdparm  --fibmap tempfile

tempfile:
 filesystem blocksize 4096, begins at LBA 9437184; assuming 512 byte sectors.
 byte_offset  begin_LBA    end_LBA    sectors
           0 1277378560 1277399039      20480
root@rava:/mnt/sdb3/test# 
There is no

Code: Select all

     --------  ---------  --------   ----
line at the bottom in my example.
Rapha_, was the "-------- --------- -------- ----" line copied verbatim or was it meant as a

Code: Select all

[…]
placeholder?
If it was not part of hdparm's output, then there are indeed only 4 lines to be subtracted.

Added in 6 minutes 32 seconds:

Code: Select all

root@rava:/mnt/sdb3/test# hdparm  --fibmap tempfile |wc -l
5
And let's have the shell do the arithmetic as well:

Code: Select all

root@rava:/mnt/sdb3/test# echo "segments in file: ""$(($(hdparm  --fibmap tempfile |wc -l)-4))"
segments in file: 1
root@rava:/mnt/sdb3/test#
:)
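That one-liner could also be wrapped in a tiny helper, e.g. (just a sketch; "frags" is a made-up name, and the -4 assumes the usual blank line plus three header lines and no trailing separator):

Code: Select all

# hypothetical helper: print the fragment count of a file (needs root)
frags() { echo "$(( $(hdparm --fibmap "$1" | wc -l) - 4 ))"; }
frags /mnt/sdb3/test/tempfile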
Cheers!
Yours Rava

Ed_P
Contributor
Posts: 8374
Joined: 06 Feb 2013, 22:12
Distribution: Cinnamon 5.01 ISO
Location: Western NY, USA

Fragmentation of modules on NTFS hard drive

Post#20 by Ed_P » 01 Nov 2023, 18:48

Rapha_ wrote:
01 Nov 2023, 00:34

Code: Select all

root@porteus:/home/guest# hdparm --fibmap /mnt/sda1/tempfile

/mnt/sda1/tempfile:
filesystem blocksize 4096, begins at LBA 63; assuming 512 byte sectors.
 byte_offset  begin_LBA    end_LBA    sectors
           0   19335879   19336710        832
      425984   19375047   19375614        568
      716800   19564103   19564622        520
      983040   19769223   19769726        504
     1241088   19860799   19861078        280
     1384448   19907191   19907358        168
     1470464   19975255   19975278         24
     1482752   20324535   20324542          8
     1486848   20604703   20605718       1016
     2007040   20658095   20658886        792
     2412544   20866591   20867262        672
     --------  ---------  --------   ----
By counting the rows, you obtain the number of fragments (in this case, subtract the 4 unnecessary rows from the result):

Code: Select all

root@porteus:/home/guest# hdparm --fibmap /mnt/sda1/tempfile |wc -l
60
I count 11 fragments.

Code: Select all

1.           0   19335879   19336710        832
2.      425984   19375047   19375614        568
3.      716800   19564103   19564622        520
4.      983040   19769223   19769726        504
5.     1241088   19860799   19861078        280
6.     1384448   19907191   19907358        168
7.     1470464   19975255   19975278         24
8.     1482752   20324535   20324542          8
9.     1486848   20604703   20605718       1016
10.    2007040   20658095   20658886        792
11.    2412544   20866591   20867262        672
Ed

Rava
Contributor
Posts: 5416
Joined: 11 Jan 2011, 02:46
Distribution: XFCE 5.01 x86_64 + 4.0 i586
Location: Forests of Germany

Fragmentation of modules on NTFS hard drive

Post#21 by Rava » 01 Nov 2023, 18:51

Ed_P wrote:
01 Nov 2023, 18:48
I count 11 fragments.
So did I, but I presume the "-------- --------- -------- ----" line at the bottom should mean

Code: Select all

[…]
Cheers!
Yours Rava

Rapha_
Shogun
Posts: 238
Joined: 12 Jun 2021, 21:59
Distribution: Xfce 4.12 - 5.rc3 - x86_64
Location: France

Fragmentation of modules on NTFS hard drive

Post#22 by Rapha_ » 01 Nov 2023, 22:11

Rava wrote:
01 Nov 2023, 18:51
Ed_P wrote:
01 Nov 2023, 18:48
I count 11 fragments.
So did I, but I presume the "-------- --------- -------- ----" line at the bottom should mean

Code: Select all

[…]
Yes, it was meant as a placeholder.

Rapha_
Shogun
Posts: 238
Joined: 12 Jun 2021, 21:59
Distribution: Xfce 4.12 - 5.rc3 - x86_64
Location: France

Fragmentation of modules on NTFS hard drive

Post#23 by Rapha_ » 01 Nov 2023, 22:19

An interesting article on fragmentation and USB flash drives:

Flash Memory Fragmentation – Myths and Facts
https://web.archive.org/web/20170614154 ... and_facts/

Rava
Contributor
Posts: 5416
Joined: 11 Jan 2011, 02:46
Distribution: XFCE 5.01 x86_64 + 4.0 i586
Location: Forests of Germany

Fragmentation of modules on NTFS hard drive

Post#24 by Rava » 01 Nov 2023, 22:32

Rapha_ wrote:
01 Nov 2023, 22:19
Flash Memory Fragmentation – Myths and Facts
https://web.archive.org/web/20170614154 ... and_facts/
Quite enlightening.

But in one part it contradicts itself.
Myth: Backing up the data of a flash card, formatting and then restoring it again will produce a file system free of fragmentation.
[…]
All chances of data recovery are lost. If there were any deleted files that could be restored or file system errors that could be fixed they won't be recoverable at all after the backup/restore solution.
vs
Data reliability on fragmented vs. non-fragmented file systems
[…]
Conclusion: Keeping a file system free of fragmentation significantly increases the chances of data recovery.
First it says chances of data recovery are lost when you use defragmentation, and then it says keeping a file system free of fragmentation significantly increases the chances of data recovery. How can both things be true at the same time, or am I missing something here?
Cheers!
Yours Rava

Rapha_
Shogun
Posts: 238
Joined: 12 Jun 2021, 21:59
Distribution: Xfce 4.12 - 5.rc3 - x86_64
Location: France

Fragmentation of modules on NTFS hard drive

Post#25 by Rapha_ » 01 Nov 2023, 23:06

Those are two different things:

In the first case ("Backing up the data of a flash card, formatting and then restoring"), all the data on the flash drive is erased and then rewritten, so all the previous data locations (including any previously deleted files) are overwritten and lost.

In the second case, if unfragmented files left on a USB flash drive are erased by accident, they can be recovered in their entirety by recovery software (not if they were fragmented).
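As an illustration of that second case (a sketch, assuming TestDisk/PhotoRec is installed; /dev/sdX1 is a placeholder): PhotoRec carves deleted files by their signatures, which is exactly why contiguous (unfragmented) files can be recovered whole while fragmented ones generally cannot.

Code: Select all

# launches PhotoRec's interactive recovery on the given device/partition
photorec /dev/sdX1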
