azadian

Members · 2 posts
  1. In my case, the data is not changing. Small files have no impact, since I'm assuming we generate a list of disk blocks to move. Whether those blocks comprise a lot of small files or one huge file makes little difference. OK, depending on where the block pointers live, it might require more writes to the index, but we're talking a difference of a factor of 2 or 3 at most. In my case, I believe there are few files less than 1MB.

     If you can generate the entire list of block moves in stage 2, then you also know where the blocks are, and can easily compensate for those which are on the slower side. Even without any such compensation, the total error would be far less than one order of magnitude.

     Apparently the algorithm used does not create the entire list of required moves in stage 2. I'd be pleased to have my ignorance relieved as to why the task is more complex than I have suggested. I'm guessing it has to do with the fact that the disk is mounted, arguably requiring a more dynamic approach.
  2. Last night I stopped the defrag that claimed it had only one minute to go. Today I restarted it, and so far it has been running for about 12 hours, on a disk that is basically unused. That's just one example of how bad the estimates are.

     It seems to me that it should be easy to produce a quite accurate estimate. The way I see it, defrag can be done in 3 stages:

     1) Build a list of files and blocks
     2) Decide which blocks need to be moved where
     3) Move the blocks

     After stage 2, you know exactly how many blocks need to be moved, as well as how much fragmentation will remain. At the very least you could show a progress indicator that simply counts the blocks remaining to be moved. Once stage 3 has run for a few minutes, you've moved at least hundreds of blocks, and you have a pretty good idea of the average time per block. Easy peasy.
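The estimator described in the second post is simple enough to sketch. This is a minimal, hypothetical illustration (the class name and structure are mine, not from any real defrag tool): after stage 2 the move list is known, so the remaining count is exact, and the ETA is just a running average of observed per-move times.

```python
class DefragProgress:
    """Progress/ETA tracker for stage 3, given the stage-2 move list."""

    def __init__(self, move_list):
        self.total = len(move_list)   # known exactly once stage 2 finishes
        self.done = 0
        self.elapsed = 0.0            # total seconds spent moving blocks

    def record_move(self, seconds):
        """Call after each block move with the time it took."""
        self.done += 1
        self.elapsed += seconds

    @property
    def remaining(self):
        return self.total - self.done

    def eta_seconds(self):
        """Estimated time left: average time per move * moves remaining."""
        if self.done == 0:
            return None               # no timing data yet
        return (self.elapsed / self.done) * self.remaining
```

After a few minutes of stage 3 the average stabilizes over hundreds of moves, so the ETA should never drift the way the "one minute to go" estimate above did.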
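The "compensate for blocks on the slower side" idea from the first post can also be sketched. This is a made-up illustration, not a real tool's logic: the zone boundary and speed factor below are assumptions standing in for what a quick benchmark of the drive would measure.

```python
def weighted_move_count(moves, disk_blocks):
    """Return an effective move count for ETA purposes, weighting
    moves whose destination lies in the slower (inner) half of the
    disk more heavily.

    moves: list of (src_block, dst_block) pairs from stage 2
    disk_blocks: total number of blocks on the disk
    """
    total = 0.0
    for src, dst in moves:
        # Assumption: the outer half of the platter transfers roughly
        # twice as fast; real factors would come from benchmarking.
        factor = 1.0 if dst < disk_blocks // 2 else 2.0
        total += factor
    return total
```

Even this crude two-zone weighting keeps the estimate well within the "far less than one order of magnitude" error bound argued above.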