CCleaner Community Forums

Rob Defraggle

Experienced Members
  • Content count

    98
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Rob Defraggle

  • Rank
    Advanced Member
  1. Deletion of System Restore files

    It generally doesn't. A frequently asked question is "why is my disk filled with large fragmented files?", and the answer is almost always the System Volume Information folder used for System Restore. Was your disk fairly full, and not previously defragged with Defraggler? It tends to do more defrag work, and compacts files, unlike the Windows defragmenter.
  2. It looks like it's defragging a file in C: drive, not G:
  3. Defraggler Defrag Can Seriously Fragment $Mft

    Thank you for the explanation and helpful interest, by the way. For interest, I'll republish the listings for both the Win7 and Vista system disks for comparison purposes. I never ran Contig on the Win7 system partition, as I haven't had any anomaly with its $MFT which seemed to need fixing. The 0xC0000 offset must be what romanov meant by the "first cluster" of $MFT, so Defraggler's Full Defrag layout policy sounds seriously compromised if it is filling in very many small holes at the beginning of the Full Defrag (as it appears to be doing).

    Win7 64-bit upgrade - fresh NTFS made during installation:
    Filename: $MFT
    Path: C:\
    Size: 150 MB (157,286,400)
    State: Not deleted
    Creation time: 06/11/2009 01:05
    Last modification time: 06/11/2009 01:05
    Last access time: 06/11/2009 01:05
    Comment: No overwritten clusters detected.
    38400 cluster(s) allocated at offset 786432
    6 cluster(s) allocated at offset 4085789

    Vista 32-bit:
    Filename: $MFT
    Path: V:\
    Size: 178 MB (187,105,280)
    State: Not deleted
    Creation time: 19/10/2007 23:20
    Last modification time: 19/10/2007 23:20
    Last access time: 19/10/2007 23:20
    Comment: No overwritten clusters detected.
    4 cluster(s) allocated at offset 786432
    1 cluster(s) allocated at offset 1233057
    1 cluster(s) allocated at offset 1224516
    6 cluster(s) allocated at offset 4458165

    Although the Win7 $MFT is on a later, slower part of the disk, the scan was noticeably fast compared to that of V:. I have of course tried multiple passes of full defrag and freespace defrag, and runs of Contig, hoping to get a reasonable $MFT that Defraggler likes, rather than it relocating the $MFT on every Full Defrag run.
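For reference, the 0xC0000 value is just the first extent's cluster offset written in hex. A minimal sketch of the arithmetic, assuming the usual 4 KB NTFS cluster size (an assumption on my part; the listing above doesn't state the cluster size):

```python
# Sketch: relate the decimal cluster offsets in the listing above to the
# 0xC0000 "first cluster" value romanov mentioned.
# Assumption: 4096-byte clusters (the common NTFS default; not confirmed here).

CLUSTER_SIZE = 4096  # bytes per cluster (assumed)

# From "38400 cluster(s) allocated at offset 786432" in the Win7 listing.
first_extent_offset = 786432  # in clusters

# The hex form of the cluster offset matches the quoted 0xC0000.
print(hex(first_extent_offset))            # -> 0xc0000

# Byte address of that extent, under the 4 KB-cluster assumption:
print(first_extent_offset * CLUSTER_SIZE)  # -> 3221225472, i.e. 3 GiB in
```

So under that assumption the $MFT's first extent starts 3 GiB into the volume, which fits the "later part of disk" observation.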
  4. Defraggler Defrag Can Seriously Fragment $Mft

    Ah OK, sneaky, I missed the 'not deleted' file option.

    Filename: $MFT
    Path: V:\
    Size: 178 MB (187,105,280)
    State: Not deleted
    Creation time: 19/10/2007 23:20
    Last modification time: 19/10/2007 23:20
    Last access time: 19/10/2007 23:20
    Comment: No overwritten clusters detected.
    4 cluster(s) allocated at offset 786432
    1 cluster(s) allocated at offset 1233057
    1 cluster(s) allocated at offset 1224516
    6 cluster(s) allocated at offset 4458165

    I didn't save the actual output of the last Contig run to fix V:\$MFT; it was similar to the previous one:

    Processing V:\$Mft:
    Scanning file...
    Scanning disk...
    File is 45679 physical clusters in length.
    File is in 1699 fragments.
    Moving 45679 clusters at file offset cluster 4 to disk cluster 13894015

    My thinking was that I could engineer a free spot for the cluster allocation, either by making and then deleting a file that occupied roughly the right place, or, once it's tidy, subtracting a good number of clusters from the $Bitmap allocation found, or something similar. Those files are in goodly free space after the Full Defrag runs; I had a good clean-up of this partition V:\. If the 4 clusters allocated at offset 786432 are $MFT, and the 1 cluster allocated at offset 1233057 is $Bitmap, as shown by the Defraggler drive map, then the numbers seem plausible. A shame I didn't log my last Contig run; it seems to fit romanov's explanation anyway. It's confusing if Contig is lying somewhat in its output, so that you see $MFT & $MFT::$Bitmap as 1 fragment each, while Defraggler sees 2 fragments for a supposedly defragged file. Suffice to say, I am probably best off ditching V: and re-initialising the filesystem before re-using it. Though it's a real weakness in the $MFT allocation during Full Defrag, if this happens after plenty of effort to free up space and defrag the files and free list.
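Whether two tools should count the same extents as 1 fragment or several can be checked mechanically: consecutive extents are one fragment only if each starts exactly where the previous one ends. A small sketch using the V: extents quoted above (with the caveat, as discussed, that the listing may mix the $MFT data run with its ::$Bitmap attribute):

```python
# Sketch: count fragments from an (offset, length) extent list, as quoted
# in the V:\$MFT listing above. Two extents merge into one fragment only
# when one starts exactly where the previous one ends.

extents = [          # (cluster offset, length in clusters), from the post
    (786432, 4),
    (1233057, 1),
    (1224516, 1),
    (4458165, 6),
]

def count_fragments(extents):
    fragments = 0
    prev_end = None
    for offset, length in extents:
        if offset != prev_end:   # gap or backwards jump => new fragment
            fragments += 1
        prev_end = offset + length
    return fragments

print(count_fragments(extents))  # -> 4: none of these extents are adjacent
```

By this counting rule the quoted extents are 4 separate fragments, so a tool reporting "1 fragment" there would have to be excluding some of them from what it considers the file.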
  5. Defraggler Defrag Can Seriously Fragment $Mft

    Still present with 2.03.282. According to the Contig tool the $MFT is in 1 piece and was moved (maybe it is fibbing and discounting the first cluster of $MFT), but at least it allocates the rest in 1 piece, though I'd prefer it nearer the main file blocks, with $Bitmap next to it. I've installed and tried to use Recuva, but it just seems to want to look for deleted files. If it can report on $MFT cluster allocation, how to do it is not obvious to me, even after looking through the program's feature documentation. Could you explain how, please? If I knew which clusters are free, I think Contig could defrag to a specific location if manually set on the command line, but I've no safe way to know where to try to relocate it at the moment, so I let it decide automatically.
  6. They have sped up the defrag a lot in 2.03 on my FAT filesystem partition; for me it's been the best release so far, but perhaps you have discovered a regression. I suspect it may be slower when compacting many small files during a full defrag, after I did some software uninstalling on an NTFS partition, but I don't think I could easily reproduce that with an older version. A bug I found and reported in the Bug Report section of the forum caused massive fragmentation of the $Mft. To provide more info, I was asked to create a debug logfile and submit it in the Bug Reporting part of the forum.
  7. Very likely a non-zero "large file size" threshold turns the feature on, and the default of zero turns it off; re-using a value this way to mean two slightly different things is quite common. With Windows 7, and as far as I can see with Vista, it isn't done that way any more. A pagefile, when used heavily, is likely to need fast random access rather than maximum sustained sequential read/write speed; similarly the $Mft. Rather than locating the pagefile on the fastest part of the disk, they seem to place it around the centre of free space, after most system & user files, until the disk fills up with rubbish from YouTube & iTunes. If you can, it's probably faster to do a full backup to another disk and remake & restore the NTFS volume fresh, rather than move files on the same disk twice.
  8. Defrag

    Use the drive map feature to discover what these clusters contain, and then an answer will likely become clear.
  9. Long time to defrag disc - is this normal

    Good point! A "Quick Defrag" would suffice to catch any small, heavily fragmented files, covering the cases where defragging really helps a lot.
  10. hibernate and suspend after defragmentation

    Because with hibernation the machine really is turned off, but with the state saved so it can be restored on reboot. This is faster for most people than having the desktop restarted and all those services restarted. Sane people don't want a 25 s boot, only to then spend a minute on mouse clicks reorganising their applications. They'd rather have it all just come back the way they left it, which happens with Hibernate, and it allows a full power off, including the PC power supply.
  11. Defrag .. when to stop?

    Is this really significant? The most popular UNIX-style filesystems have traditionally spread files around as a deliberate strategy, so that large files can be stored in contiguous chunks and small files stored near their containing directories (folders), unless the disk is very full. By doing this they greatly reduce the need to run defragmentation programs. Whilst compaction would gain some speed-up in benchmarks of sustained sequential read speed, I'm not at all convinced any really perceivable difference would be noticed by the end user in daily desktop use, once the small fragments are removed.
  12. hibernate and suspend after defragmentation

    Hibernate works reliably with solid drivers; I hardly ever shut down Windows 7, because the regular OS updates mean frequent reboots anyway. Not only does it turn your machine off and reboot to a usable desktop faster, but Hibernate restores the desktop application setup to exactly where you left off, so you can resume the next morning without the pain of re-logging in and restarting applications. A Sleep feature isn't necessary for me, as I have it kick in automatically on idle. What would be useful is if Defraggler were able to disable all power-saving suspends until it finished, so you can leave the system unattended without altering the Power Management policy to allow time for the defrag to finish.
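For what it's worth, a Windows program can do exactly that while it runs, without touching the power plan, via the Win32 SetThreadExecutionState API. A minimal sketch of the idea (Python via ctypes; the platform guard means it does nothing on other systems, and the constants are the real values from WinBase.h):

```python
# Sketch: keep the system awake for the duration of a long task (e.g. a
# defrag) without editing the Power Management policy. Uses the Win32
# SetThreadExecutionState API; on non-Windows systems this is a no-op.
import sys

# Win32 execution-state flags (values from WinBase.h):
ES_CONTINUOUS      = 0x80000000  # state applies until explicitly cleared
ES_SYSTEM_REQUIRED = 0x00000001  # prevent automatic sleep/suspend

def prevent_sleep(enable):
    """Ask Windows not to suspend while our task runs; pass False to restore."""
    if sys.platform != "win32":
        return  # Windows-specific API; do nothing elsewhere
    import ctypes
    flags = ES_CONTINUOUS | (ES_SYSTEM_REQUIRED if enable else 0)
    ctypes.windll.kernel32.SetThreadExecutionState(flags)

prevent_sleep(True)
# ... long-running defrag work would go here ...
prevent_sleep(False)  # restore normal power-saving behaviour
```

A defragger built this way would hold off idle suspend only while working, then hand control back to whatever power policy the user already has.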
  13. hibernate and suspend after defragmentation

    Yes, I would like an option to Hibernate rather than Shutdown. I don't tend to use the "Shutdown" feature though, as when I did try it, on powering on Windows would start and then shut down again, requiring an additional reboot for some reason.
  14. I regularly use "Defrag Freespace" and it is not doing that for me (I think someone else reported something similar), on Windows 7 64-bit. The point is, though, that compaction is already done by a full defrag; there's an option to "Move Large Files" to the end of the disk by a selection criterion, but there's no analogous way of selecting files to be preferentially moved to the beginning of the disk, ahead of all the mass after compaction. I can imagine some people wanting maximum sequential read speed on some large files, for example, or localising a folder and its small files close together on disk via such a feature. As it stands, you'd need to reuse a separate partition on the early part of the disk, and it might be very difficult to create such a space if the Windows C: drive is installed in the most common way.
  15. If it's fairly simple to do, based on code already existing for the defrags, I would find it useful on occasion to be able to multi-select drives for analysis and then view the drive maps, rather than having to select a drive, choose Analyse, and then look at the map. It would be more orthogonal if the UI worked the same way for all drive actions, allowing queuing.