CCleaner Community Forums

AndyK70


  1. Appreciated. If you have some source, I'd really like to read through it. Regarding the memory constraints: that was my first thought when this strategy came to my mind years ago, and back then it was truly an issue. As you may have noticed, this idea is nothing that came to me just a day ago while having a shower. Nowadays, on the other hand, most users have plenty of RAM in their rigs, and some even suggest reading a bunch of clusters into memory before writing them back to the drive. That's not my intent, because there I see the risk of losing data due to bad RAM, memory corruption, power loss, and so on. In my idea, the only extra RAM is used to hold a second copy of the file/cluster allocation table, which serves as a target reference only. The actual physical move of each file chunk would be the same as now: one at a time, updating the table and the physical file table (on the drive) after each write. I don't see any way of avoiding that, for data-safety reasons. I was thinking of complete HDD defragmentation, but IMHO SSDs could benefit even more from a strategy that avoids unnecessary writes to temporary locations, since their cells wear down a lot faster than the platters of an HDD. A (complete) SSD defragmentation would then not only save time but also write far fewer cells than today's defrag does. That said, personally I wouldn't use a defrag tool to manually defrag an SSD at all, due to cell wear. Windows 10 has methods and strategies built in to keep file fragmentation on SSDs within bounds, and from time to time Windows defrag kicks in and tidies up in the background where needed. But that's another story/thread. Sorry for the off-topic.
  2. Please don't get me wrong. If my thoughts have a flaw which leads to safety risks, like losing data, I'd like to know about it. What I don't like is the argument "it hasn't been done this way for decades, and that's not without reason."
  3. Sorry, I didn't mention it: complete defrag. Well, I did mention it, in a way:
  4. Not really. It's a totally different strategy.

     Today:
     1. Got file A; it is fragmented into chunks A1, A2 and A3 scattered across the drive.
     2. I have to place these chunks at L1-L3... oh wait, that range is occupied, so I have to free up some space first.
     3. Move the chunks C1-C3 occupying L1-L3 out of the way to a temporary location TL1-TL3 on the drive.
     4. Now move A1-A3 to the final location L1-L3.
     5. Next file B; its chunks B1-B4 are located at L5, TL1, TL3 and L7.
     6. I have to move it to L4-L7. L4 is free, great... oh wait, L5 is occupied, so I have to free up space again.
     7. Move L5-L7 to TL4-TL6.
     8. Now move TL4 (was L5) to L4, TL1 to L5, TL3 to L6 and TL6 (was L7) to L7.

     There are a lot of physical relocations which are not necessary at all and which cost a lot of time. My idea is to perform a virtual defrag on a copy of the file chunk allocation table that Defraggler gets anyway from its analysis:
     1. Get the file chunk allocation table (T1) by analysing the drive, and display the chunks in the GUI (as today).
     2. Make an internal copy (T2) of that table (T1) and perform a virtual defrag on T2 only, without moving any file at all, just to get the final layout where all chunks ought to be after a defrag.
     3. Now look for a free cluster in T1, move the chunk that belongs there (according to T2) into it, and update T1.
     4. Repeat until no suitable free cluster is available anymore.
     5. If all clusters in T1 equal T2, it's finished; else:
     6. Pick a file which still has to be defragmented and move all its chunks to a temporary location; this frees up some clusters.
     7. Restart the loop at 3.

     I'll bet the time it takes to virtually defrag T2 will be saved many times over, because it avoids so many unnecessary physical cluster moves to temporary locations and back to their final locations. Especially when we consider HDDs.
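The two-table idea in the steps above can be sketched in Python. This is a hypothetical toy model, not Defraggler's actual behaviour: the disk is a flat list of clusters, each holding a `(file, chunk)` tuple or `None` for free, and the packing order in `virtual_layout` (files sorted by name) is an illustrative choice of mine.

```python
def virtual_layout(t1):
    """Build T2: a virtual layout where every file's chunks are packed
    contiguously at the front of the drive, files in name order."""
    files = sorted({c[0] for c in t1 if c is not None})
    t2 = [None] * len(t1)
    pos = 0
    for f in files:
        n = sum(1 for c in t1 if c is not None and c[0] == f)
        for i in range(n):
            t2[pos] = (f, i)
            pos += 1
    return t2

def defrag(t1, t2):
    """Physically move chunks one at a time (mutating T1 in place)
    until T1 equals the precomputed target table T2."""
    moves = 0
    while t1 != t2:
        progress = False
        # Steps 3/4: fill each free cluster with the chunk T2 wants there.
        for dst, want in enumerate(t2):
            if t1[dst] is None and want is not None and want in t1:
                src = t1.index(want)
                t1[dst], t1[src] = want, None   # one physical move
                moves += 1
                progress = True
        if t1 == t2:
            break                                # step 5: finished
        if not progress:
            # Step 6: deadlock — evict one misplaced chunk to a free
            # cluster (assumes at least one free cluster exists).
            for dst, want in enumerate(t2):
                if t1[dst] is not None and t1[dst] != want:
                    free = t1.index(None)
                    t1[free], t1[dst] = t1[dst], None
                    moves += 1
                    break
    return moves

t1 = [("B", 0), ("A", 0), None, ("A", 1)]   # B0 | A0 | free | A1
t2 = virtual_layout(t1)                      # [A0, A1, B0, free]
print(defrag(t1, t2), t1 == t2)              # → 3 True
```

Because every placement into a free target cluster is final, no chunk is ever written twice except when the eviction fallback fires.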
  5. When defragging the disk today, an analysis is done first to get the actual file table, and then the files are defragmented one by one according to the chosen strategy. When there is no space available, the file chunks occupying the target clusters are moved out of the way to a temporary place. This physical movement costs a lot of time.

     Instead, after the analysis has produced the actual file (chunk) locations, make a copy of that table in memory and perform the defrag on that in-memory table only, without moving the files physically. This step may take some time, but it will be vastly faster than moving chunks physically to temporary locations. The next step would be to go through each file whose chunks have to be relocated and move those chunks which can go directly to their final location. Repeat for each chunk not yet moved, and repeat the loop until done. Only if there is no free final location available for a chunk, move the chunks of the one file you are currently working on to a temporary location and update them accordingly in the in-memory source file location table. Those temporarily moved chunks free up final locations for other chunks to be moved there directly, so continue repeating the loops.
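To put a number on the claimed savings, here is a hypothetical move counter comparing the evict-to-temp strategy with direct placement into free target clusters. Everything here is my own simplification: the flat cluster model, the choice of the highest free cluster as the "temporary location", and both counting functions.

```python
def count_naive(t1, t2):
    """Evict-to-temp strategy: the occupant of a wanted cluster is first
    parked in the highest free cluster, then the wanted chunk moves in.
    Assumes a free cluster is always available."""
    disk, moves = list(t1), 0
    for dst, want in enumerate(t2):
        if want is None or disk[dst] == want:
            continue
        if disk[dst] is not None:
            temp = len(disk) - 1 - disk[::-1].index(None)
            disk[temp], disk[dst] = disk[dst], None   # park occupant
            moves += 1
        src = disk.index(want)
        disk[dst], disk[src] = want, None             # place wanted chunk
        moves += 1
    return moves

def count_direct(t1, t2):
    """Direct-placement strategy: repeatedly fill free clusters with the
    chunk the virtual table T2 assigns there; evict only on deadlock."""
    disk, moves = list(t1), 0
    while disk != t2:
        progress = False
        for dst, want in enumerate(t2):
            if disk[dst] is None and want is not None and want in disk:
                src = disk.index(want)
                disk[dst], disk[src] = want, None
                moves += 1
                progress = True
        if disk != t2 and not progress:
            for dst, want in enumerate(t2):
                if disk[dst] is not None and disk[dst] != want:
                    free = disk.index(None)           # evict to first free
                    disk[free], disk[dst] = disk[dst], None
                    moves += 1
                    break
    return moves

t1 = [("B", 0), None, ("A", 0), None]     # B0 | free | A0 | free
t2 = [("A", 0), ("B", 0), None, None]     # target: A0 | B0 | free | free
print(count_naive(t1, t2), count_direct(t1, t2))   # → 3 2
```

Even in this tiny example the naive strategy writes B0 twice (once to the temp cluster, once to its final home), while direct placement moves it straight to where T2 says it belongs; on real, heavily fragmented drives the gap would be far larger.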