
[sug] avoiding temporary moves


Recommended Posts

When defragging the disk, an analysis is first done to get the current file table, and then the files are defragmented one by one according to the chosen strategy.

When there is no free space available, the file chunks occupying the target clusters are moved out of the way to a temporary place.
This physical movement costs a lot of time.

Instead, after the analysis has produced the current file (chunk) locations, make a copy of that table in memory and perform the defrag on that table only, without moving any files physically. This step may take some time, but it will be orders of magnitude faster than physically moving chunks to temporary locations.
The next step would be to go through each file whose chunks have to be relocated and move those chunks that can go straight to their final location. Repeat for each chunk not yet moved, looping until done.
Only if there is no free final location available for a chunk, move the chunks of the one file you are currently working on to a temporary location and update them accordingly in the in-memory source location table. Those temporarily moved chunks free up final locations that other chunks can be moved into directly, so the loop can continue.
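
To make the idea concrete, here is a minimal sketch of the "defrag the table, not the disk" step. The table layout and names are assumptions for illustration only, not Defraggler's actual data structures; nothing here touches the drive.

```python
# The in-memory allocation table is modelled as: cluster number -> (file name, chunk index).
def virtual_defrag(table):
    """Return a rearranged copy of the table in which every file's chunks are contiguous.

    This is the purely virtual step: it only computes where chunks ought to end up;
    no data is read from or written to the disk.
    """
    planned = {}
    next_cluster = 0                                  # naive packing: files front-to-back
    for name in sorted({f for f, _ in table.values()}):
        for idx in sorted(i for f, i in table.values() if f == name):
            planned[next_cluster] = (name, idx)
            next_cluster += 1
    return planned

# Example: file A is split across clusters 2, 9 and 40; file B across 5 and 7.
current = {2: ("A", 0), 9: ("A", 1), 40: ("A", 2), 5: ("B", 0), 7: ("B", 1)}
print(virtual_defrag(current))
# {0: ('A', 0), 1: ('A', 1), 2: ('A', 2), 3: ('B', 0), 4: ('B', 1)}
```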

 


  • Moderators

Ah, so [sug] is short for suggestion.

took me a while to get that.

so if I read that right, your suggestion boils down to using memory more to allow physical relocation to be used less, is that right?


  • Moderators

Memory would be infinitely faster; however, the slow way of doing it is infinitely safer in the event of a power failure/outage, a BSOD, or the defrag software crashing or freezing, any of which could cause file loss or file corruption along the way.

It sounds unsafe, with the potential for disaster, and it also sounds outside the specification of the Microsoft defrag API, which to my knowledge all defrag tools on Windows utilize. If it were possible (it probably could be, but with huge caveats), I'd imagine Microsoft would have already implemented it.
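
For context, the defragmentation interface Windows exposes works one extent at a time: a tool asks the filesystem to relocate a run of clusters via the FSCTL_MOVE_FILE control code, and the filesystem is responsible for keeping its own metadata consistent during the move. A rough ctypes sketch of that single call (constants and struct layout as documented in the Windows SDK; illustrative only, and it needs administrator rights):

```python
import ctypes
from ctypes import wintypes

# FSCTL_MOVE_FILE = CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 29, METHOD_BUFFERED, FILE_ANY_ACCESS)
FSCTL_MOVE_FILE = 0x00090074

class MOVE_FILE_DATA(ctypes.Structure):
    _fields_ = [
        ("FileHandle", wintypes.HANDLE),        # handle to the file whose clusters move
        ("StartingVcn", ctypes.c_longlong),     # first virtual cluster of the extent
        ("StartingLcn", ctypes.c_longlong),     # target logical cluster (must be free)
        ("ClusterCount", wintypes.DWORD),       # how many clusters to relocate
    ]

def move_extent(volume_handle, file_handle, vcn, target_lcn, count):
    """Relocate one extent; the filesystem updates its on-disk metadata itself."""
    data = MOVE_FILE_DATA(file_handle, vcn, target_lcn, count)
    returned = wintypes.DWORD(0)
    ok = ctypes.windll.kernel32.DeviceIoControl(
        volume_handle, FSCTL_MOVE_FILE,
        ctypes.byref(data), ctypes.sizeof(data),
        None, 0, ctypes.byref(returned), None)
    if not ok:
        raise ctypes.WinError()

# volume_handle would come from CreateFileW(r"\\.\C:", ...); one call moves one extent,
# which is why a crash mid-defrag loses at most the move in flight, not the whole filesystem.
```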


Not really. It's a totally different strategy.

Today:
1. Got this file A; it has these fragmented chunks: A1 there, A2 there and A3 there.
2. I have to place these chunks at L1-L3... oh wait, it's occupied, gotta free up some space first.
3. Moving the chunks C1-C3 that occupy the space L1-L3 out of the way to a temporary location TL1-TL3 on the drive.
4. Now moving A1-A3 to their final location L1-L3.
5. Next file B; it has the chunks B1-B4 located at L5, TL1, TL3 and L7.
6. Gotta move it to L4-L7; L4 is free, great... oh wait, L5 is occupied, gotta free up space again.
7. Moving L5-L7 to TL4-TL6.
8. Now moving TL4 (was L5) to L4, TL1 to L5, TL3 to L6 and TL6 (was L7) to L7.

There are a lot of physical relocations which are not necessary at all and which cost a lot of time.
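
In pseudo-Python, that pattern looks roughly like this; move(src, dst) stands in for one physical cluster relocation, and everything else is a made-up illustration, not Defraggler's code:

```python
# Today's pattern: a blocking chunk gets moved twice (once out of the way, once again later).
# Clusters are plain integers.
def place_file(chunk_locations, target_locations, occupied, free, move):
    for src, dst in zip(chunk_locations, target_locations):
        if src == dst:
            continue                        # chunk already sits where it belongs
        if dst in occupied:                 # target cluster holds someone else's data...
            temp = free.pop()               # ...so evict it to any free cluster first
            move(dst, temp)                 # physical move #1: pure overhead
            occupied.discard(dst)
            occupied.add(temp)
        move(src, dst)                      # physical move #2: the useful one
        occupied.discard(src); occupied.add(dst)
        free.discard(dst); free.add(src)

# File A's chunks sit at 10-12 and must go to 0-2, but cluster 0 is occupied:
moves = []
place_file([10, 11, 12], [0, 1, 2], occupied={0, 10, 11, 12}, free={5, 6, 7},
           move=lambda s, d: moves.append((s, d)))
print(moves)   # four physical moves for three chunks - evicting cluster 0 is the extra one
```
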
My idea is to perform a virtual defrag on a copy of the file chunk allocation table which Defraggler gets anyway from its analysis.

1. Get the file chunk allocation table (T1) by analysing the drive and display the chunks in the GUI (as is already done today).
2. Make an internal copy (T2) of that table (T1) and perform a virtual defrag on T2 only, without moving any file at all, just to get the final layout where all chunks ought to be after the defrag.
3. Now look for a free cluster in T1 and move the chunk that belongs there (according to T2) into it, then update T1.
4. Repeat until no free cluster is available anymore.
5. If all clusters in T1 match T2, it's finished; otherwise:
6. Pick a file which still has to be defragmented and move all of its chunks to a temporary location; this frees up some clusters.
7. Restart the loop at step 3.

I'll bet the time it takes to virtually defrag T2 will be recouped many times over, because it avoids so many unnecessary physical cluster moves to temporary locations and back to their final locations, especially on HDDs.
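
If it helps, here is a minimal sketch of steps 3-7, under the assumption that T1 and T2 are simple mappings from cluster number to chunk id and that move(src, dst) performs one physical chunk move and commits the on-disk metadata for it. The names are mine, not anything from Defraggler:

```python
def execute_plan(t1, t2, free_clusters, move):
    """Drive the real layout T1 toward the virtually defragged layout T2.

    Chunks whose target cluster is already free are moved there directly;
    only when nothing can move directly is one misplaced chunk evicted to a
    temporary cluster (steps 6-7), which unblocks further direct moves.
    """
    where = {chunk: cluster for cluster, chunk in t1.items()}   # chunk id -> current cluster
    while t1 != t2:
        progress = False
        for dst, chunk in t2.items():                 # step 3: fill free target clusters
            src = where.get(chunk)
            if src is None or src == dst or dst in t1:
                continue                              # already placed, or target still occupied
            move(src, dst)                            # one direct physical move
            del t1[src]; t1[dst] = chunk; where[chunk] = dst
            free_clusters.discard(dst); free_clusters.add(src)
            progress = True
        if t1 == t2:
            break                                     # step 5: layouts match, finished
        if not progress:                              # step 6: everything is blocked...
            src, chunk = next((c, f) for c, f in t1.items() if t2.get(c) != f)
            temp = free_clusters.pop()                # ...evict one misplaced chunk to a temp cluster
            move(src, temp)                           # (assumes at least one free cluster as scratch)
            del t1[src]; t1[temp] = chunk; where[chunk] = temp
            free_clusters.add(src)                    # step 7: loop again

# Example: three chunks in a cycle, one free cluster as scratch space.
t1 = {0: "B1", 1: "A1", 2: "A2"}
t2 = {0: "A1", 1: "A2", 2: "B1"}
execute_plan(t1, t2, free_clusters={3}, move=lambda s, d: print(f"move cluster {s} -> {d}"))
# prints one temporary move plus three direct moves
```

The temp cluster is only touched when a pass makes no direct progress, which is the "temporary location only as a last resort" behaviour of steps 6 and 7.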

 


  • Moderators

defragging has been around for as long as hard drives have been made.
implemented by many companies offering numerous software packages.
you'd have to assume that far smarter people than us would have already thought of, and rejected, such an obvious process for whatever reason.


  • Moderators
3 minutes ago, AndyK70 said:

Not really. It's a totally different strategy.

Today:
1. Got this file A; it has these fragmented chunks: A1 there, A2 there and A3 there.
2. I have to place these chunks at L1-L3... oh wait, it's occupied, gotta free up some space first.

all depends on what sort of defrag you are doing and what outcomes you are chasing.

if you simply want File A to be defragged, the chunks at L1-L3 don't even come into it; DF will simply pick up A1, A2 and A3 and find a spot on the drive where they can all be contiguous.
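
For that single-file case, the core of the job is just finding a long-enough run of free clusters; a minimal sketch over a free/used bitmap (assumed representation, not DF's internals):

```python
def find_contiguous_run(free_bitmap, needed):
    """Return the first index of `needed` consecutive free clusters, or None if no such run exists."""
    run_start, run_len = 0, 0
    for lcn, is_free in enumerate(free_bitmap):
        if is_free:
            if run_len == 0:
                run_start = lcn
            run_len += 1
            if run_len == needed:
                return run_start
        else:
            run_len = 0
    return None

# File A has three chunks, so it needs three contiguous free clusters:
print(find_contiguous_run([False, True, False, True, True, True, False], needed=3))   # -> 3
```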


2 minutes ago, mta said:

all depends on what sort of defrag you are doing and what outcomes you are chasing.

Sorry, I didn't mention it: a complete defrag.

 

Well, I did mention it in a way:

Quote

When defragging the disk,

 


27 minutes ago, mta said:

you'd have to assume that far smarter people than us would have already thought of, and rejected, such an obvious process for whatever reason.

Please don't get me wrong. If my idea has a flaw which leads to risks like losing data, I'd like to know it. What I don't like is the argument that it's not been done this way for decades, so there must be a reason.

Edited by AndyK70
was missing a 'not'

  • Moderators

no malice was intended Andy. :D

simply stating that if you had thought of it, I'd have reasoned others would have too.
and yes you are right - someone has to be the first.

but in this case, I seem to recall that was actually how it was first done and due to memory constraints and data loss, it went to the method they all now use.


6 hours ago, mta said:

no malice was intended Andy. :D

Appreciated ^_^

6 hours ago, mta said:

I seem to recall that was actually how it was first done and due to memory constraints and data loss, it went to the method they all now use.

If you have a source, I'd really like to read through it. Regarding the memory constraints: that was my first thought when this strategy came to my mind years ago, and back then it truly was an issue. As you may have noticed, this idea is not something that came to me just a day ago in the shower.
Nowadays, on the other hand, most users have plenty of RAM in their rigs, enough that one could even suggest reading a bunch of clusters into memory before writing them back to the drive. That's not my intent, though, because there I see the risk of losing data due to bad RAM, memory corruption, power loss, and so on.
In my idea, the only use of extra RAM is to hold a second copy of the file/cluster allocation table, which is used as the target reference only.
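
As a back-of-the-envelope illustration (all figures below are assumptions, not measurements), holding a second copy of the allocation table is tiny compared with caching cluster contents:

```python
# Back-of-the-envelope sizing (assumed figures, for illustration only).
extents = 1_000_000                     # fragments on a badly fragmented volume
bytes_per_entry = 32                    # roughly: file id + VCN + LCN + run length
table_copy_mib = extents * bytes_per_entry / 2**20
print(f"second copy of the allocation table: ~{table_copy_mib:.0f} MiB")   # ~31 MiB
# ...whereas buffering even 1 GiB worth of cluster contents would dwarf that,
# which is exactly the kind of RAM use the idea avoids.
```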

The actual physical move of each file chunk would be the same as now: one at a time, updating the in-memory table and the physical file table (on the drive) after each write. I don't see any way of avoiding that, for data-safety reasons.

I was thinking of complete HDD defragmentation, but IMHO, since my strategy avoids unnecessary writes to temporary locations, SSDs could benefit even more, as their cells wear out a lot faster than the platters of an HDD. So a (complete) SSD defragmentation would benefit not only from the time saved but also from writing far fewer cells than today's defrag does.
That said, personally I wouldn't use a defrag tool to manually defrag an SSD at all, because of cell wear. Windows 10 has methods and strategies built in to keep file fragmentation on SSDs within bounds, and from time to time Windows' own defrag kicks in and tidies up in the background as needed. But that's another story/thread; sorry for going off topic.

 

 

Edited by AndyK70
typo

  • Moderators

no source Andy, certainly nothing I could put my hands on.
more a little memory bell that was ringing.

