Thomas L

Members

  • Posts: 2
  • Joined
  • Last visited

Reputation: 0 Neutral

Profile Information

  • Gender: Male
  • Location: Allershausen, Germany
  1. I've seen this too. Currently (for two days now, actually), DF has been working on a server's 2 TB HDD that is 68% fragmented, with 5.3 million fragments and many, many small files. There is almost no disk activity, so something seems to be conceptually wrong in DF. +1 for this one.
  2. These are some suggestions after using DF for some months on several machines, both servers and workstations. Most of them are based on what I've heard (disk seeks) and seen (whatever Windows reports) during defrag, and on experience with other defrag tools on Windows and Macintosh. So some suggestions may be plain wrong, since they may rest on false assumptions.

     Buffer size
     I don't know how block copying works now (one can't guess from the asynchronous disk map display), but using a configurable copy buffer would greatly improve throughput by minimizing disk seeks. I know, the bigger the buffer, the bigger the chance of data loss. (A minimal sketch follows after this post.)

     Progress display
     Currently the display is asynchronous and gets stuck frequently, especially while defragging big disks. If a copy buffer is used (as mentioned above), reading or writing a chunk becomes a "lengthy" operation (more than a few ms) and could be used to provide visual feedback.

     The Big Plan (perhaps in 2.0?)
     Currently it seems you just walk from sector to sector and defragment the file at each sector if needed. If you had "The Big Plan" of how the disk should look after defragmentation, you could move some data directly to where it will reside once the disk is fully defragged. Right now, especially with less than 20% free space, data is moved around at least twice, and sometimes even more often. (See the second sketch below.)

     Optimize disk seeks
     When defragging starts, free space for blocks that need to be moved away is searched from the end of the disk. If you changed that to search for a larger free span starting halfway between the current block and the end of the disk, probing in both directions, you might end up with faster seeks. (See the third sketch below.) Another idea I had was to analyze seek times once per disk, to guess the disk's internal platter/head structure, and use those findings to optimize the free-space search - but I guess that would be a bit too much.

     I'm sure there were more ideas, some of them already mentioned here... Thanks for considering. Thomas
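To make the buffer-size and progress-display suggestions concrete, here is a minimal sketch in C, not DF's actual code: the "disk" is simulated as an in-memory array so the sketch runs as-is, and CLUSTER_SIZE, read_clusters()/write_clusters(), and all other names are invented stand-ins for whatever DF really uses.

```c
/* Sketch: move clusters through one large, configurable buffer, with a
 * per-chunk progress callback. Everything here is illustrative; a real
 * defragmenter would issue volume I/O, not memcpy. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define CLUSTER_SIZE  4096          /* typical NTFS cluster     */
#define DISK_CLUSTERS 4096          /* simulated 16 MB "disk"   */

static unsigned char disk[DISK_CLUSTERS * CLUSTER_SIZE];

typedef void (*progress_cb)(uint64_t done, uint64_t total);

/* Hypothetical stand-ins for real volume I/O. */
static void read_clusters(uint64_t lcn, void *buf, size_t n)
{ memcpy(buf, disk + lcn * CLUSTER_SIZE, n * CLUSTER_SIZE); }

static void write_clusters(uint64_t lcn, const void *buf, size_t n)
{ memcpy(disk + lcn * CLUSTER_SIZE, buf, n * CLUSTER_SIZE); }

/* Move 'count' clusters from 'src' to 'dst' through one large buffer:
 * one read seek plus one write seek per chunk instead of per cluster,
 * and a progress callback per chunk to keep the display alive. */
static void buffered_move(uint64_t src, uint64_t dst, uint64_t count,
                          size_t buffer_clusters, progress_cb progress)
{
    void *buf = malloc(buffer_clusters * CLUSTER_SIZE);
    if (!buf) return;
    for (uint64_t done = 0; done < count; ) {
        size_t chunk = (count - done < buffer_clusters)
                     ? (size_t)(count - done) : buffer_clusters;
        read_clusters(src + done, buf, chunk);
        write_clusters(dst + done, buf, chunk);
        done += chunk;
        if (progress) progress(done, count);  /* per-chunk UI update */
    }
    free(buf);
}

static void show(uint64_t done, uint64_t total)
{ printf("moved %llu/%llu clusters\n",
         (unsigned long long)done, (unsigned long long)total); }

int main(void)
{
    /* Move 1000 clusters with a 256-cluster (1 MB) buffer. */
    buffered_move(0, 2000, 1000, 256, show);
    return 0;
}
```

With a 1 MB buffer on a 4 KB-cluster volume, the head seeks once per 256 clusters instead of once per cluster, and the per-chunk callback fires often enough to keep the display in step with the actual work.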
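And here is what "The Big Plan" could look like in code, again as a hedged sketch: the plan_entry record and the pack-from-the-front policy are invented for illustration; a real plan would reserve the MFT zone, leave gaps for growing files, and so on.

```c
/* Sketch of "The Big Plan": assign every file its final position first,
 * so each one can be moved straight there. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const char *name;         /* for the demo printout only          */
    uint64_t size_clusters;   /* clusters the file occupies          */
    uint64_t target_lcn;      /* where it will live when we're done  */
} plan_entry;

/* Simplest policy: pack files back-to-back from the start of the disk. */
static void build_plan(plan_entry *files, size_t nfiles)
{
    uint64_t next = 0;
    for (size_t i = 0; i < nfiles; i++) {
        files[i].target_lcn = next;
        next += files[i].size_clusters;
    }
}

int main(void)
{
    plan_entry files[] = {
        { "pagefile.sys", 4096, 0 },
        { "report.doc",     12, 0 },
        { "video.avi",    9000, 0 },
    };
    build_plan(files, 3);
    for (size_t i = 0; i < 3; i++)
        printf("%-12s -> LCN %llu\n", files[i].name,
               (unsigned long long)files[i].target_lcn);
    return 0;
}
```

With a plan in hand, a cluster only needs a temporary home when its target range is still occupied by another file's data, so in the common case each cluster moves exactly once instead of twice or more.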
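Finally, a sketch of the midpoint free-space search, with a plain byte array standing in for the real volume bitmap (is_free_run() and the sizes are illustrative assumptions):

```c
/* Sketch: instead of always scanning from the end of the disk, start
 * halfway between the current block and the end, and probe outward in
 * both directions. */
#include <stdio.h>
#include <stdint.h>

#define TOTAL 64                      /* tiny simulated disk */
static unsigned char used[TOTAL];     /* 1 = cluster in use  */

static int is_free_run(uint64_t lcn, uint64_t len)
{
    for (uint64_t i = 0; i < len; i++)
        if (used[lcn + i]) return 0;
    return 1;
}

/* First LCN of a free run of 'len' clusters near the midpoint of
 * [current, end), or UINT64_MAX if there is none. */
static uint64_t find_free_near_mid(uint64_t current, uint64_t end,
                                   uint64_t len)
{
    uint64_t mid = current + (end - current) / 2;
    for (uint64_t off = 0; ; off++) {
        int in_range = 0;
        if (mid + off + len <= end) {               /* toward the end   */
            in_range = 1;
            if (is_free_run(mid + off, len)) return mid + off;
        }
        if (off > 0 && off <= mid - current) {      /* toward 'current' */
            in_range = 1;
            if (is_free_run(mid - off, len)) return mid - off;
        }
        if (!in_range) return UINT64_MAX;           /* nothing anywhere */
    }
}

int main(void)
{
    for (int i = 0; i < 40; i++) used[i] = 1;   /* front is occupied */
    uint64_t hit = find_free_near_mid(0, TOTAL, 8);
    printf("free run of 8 at LCN %llu\n", (unsigned long long)hit);
    return 0;
}
```

A run found this way lies at most half the remaining span from the current block, instead of always at the far end of the disk, which is what should shorten the seeks.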