
Really shrinking the $MFT




Guest Keatah

I typically use CCleaner's wipe-MFT capability from time to time. This last run left me with 317,000 files named .z.z.z....z.z.zzz......zz.z.... and they're resident in the $MFT itself. How do I get rid of those?

 

I have a pro-level tool that does the job. But what about consumer, layperson-level tools?

 

An alternative is to format the disk and then copy everything back, thus beginning again with a fresh $MFT. But that's not really practical.

 

I'm also looking at the "problem" as a speed-and-tidiness issue, not a security-related one. Having the CPU and disk subsystem load and sort through all that is a lot of unnecessary busywork.

 

So far I've investigated a number of utilities, but no go.

Link to comment
Share on other sites

The MFT is something special to the NTFS file system. The MFT doesn't contain ordinary files; it's more or less an extension of the directory file. The directory file can't hold much info about files, and therefore the MFT holds the rest of the info about those files. (Very small files can, however, be stored resident inside their MFT records, which is what those .z entries are.)

http://msdn.microsof...0(v=vs.85).aspx
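
For anyone who wants to watch the MFT without third-party tools, here's a minimal Python sketch (my own illustration, not from the thread) that shells out to Windows' built-in fsutil and prints the MFT-related figures it reports. It assumes an elevated prompt, since fsutil needs admin rights, and the exact label text varies a little between Windows versions.

    import subprocess

    # Dump the MFT-related fields Windows itself reports for a volume.
    # Needs an elevated prompt; label text varies a little by version.
    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", "C:"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        # e.g. "Mft Valid Data Length", "Mft Zone Start/End"
        if "mft" in line.lower():
            print(line.strip())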

 

When one uses CC's free-space wipe feature, it first cleans the MFT and then wipes the free drive space. And that takes A LOT of time.

 

When I want to REALLY clean the MFT and the free space on a disk, I use "Cleandisk Security". This program does the opposite of what CC does: first it cleans the free space on the disk (by writing lots of files to that disk). As a result, the MFT shrinks in size, and then "cleaning" the MFT takes (much) less time.

http://www.theabsolu...re/clndisk.html

I personally favor "Cleandisk Security" over CC.



Guest Keatah

@winapp2.ini And that will "smash down" the MFT? By virtue of the $MFT having to sacrifice its unused entries?

 

@willy2 I'm not interested in actually cleaning or changing the information in unused MFT entries. I want to get rid of them and reduce the $MFT's size. Having 200,000+ (and more) entries is too many. There is notable performance degradation.


Yes, it will reduce the size of the MFT (temporarily).

 

But there's another problem with this solution. As a result of the method used, both CC and Cleandisk Security wipe the System Restore Points (SRPs). Does your PRO tool wipe these SRPs as well? I would assume it does, because, as far as I know, there's no "reduce MFT size" or "clean MFT" API; one has to use a workaround to produce the desired outcome.



Guest Keatah

Yes. You'd have to kill the SRPs in order to maintain consistency. You're correct: there are no APIs that I know of that support this operation. Reducing the number of $MFT records has to be done offline.


Both CC and CleanDisk Security (CDS) write a lot of large files to disk in order to "clean" the free space, effectively overwriting all the old files. Windows monitors that process, notices that the disk is filling up, and at some point decides to kill (one or more) SRPs and to shrink the MFT as much as possible in order to make room for the new files. Reducing the size of the MFT and killing the SRPs are effectively unintended consequences of the way these two programs work.

 

When CC and CDS notice that the disk is nearly completely full (the user then gets a "disk full" warning from Windows), they delete all those temporary files. The end result is that the drive is "cleaned", but one's SRPs are gone as well.
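
To make the mechanism concrete, here's a rough Python sketch of that fill-then-delete cycle (my own illustration of the behaviour described above, not CC's or CDS's actual code; the E:\wipe_tmp path is a hypothetical scratch location). Filling a volume this way is precisely what pressures Windows into discarding restore points, so don't run it anywhere you care about SRPs.

    import os
    import uuid

    TARGET = r"E:\wipe_tmp"              # hypothetical scratch directory
    os.makedirs(TARGET, exist_ok=True)

    CHUNK = b"\x00" * (4 * 1024 * 1024)  # 4 MB of filler per write
    PER_FILE = 128                       # 128 chunks = 512 MB per junk file
    created = []
    try:
        while True:                      # one junk file per iteration
            path = os.path.join(TARGET, uuid.uuid4().hex + ".tmp")
            created.append(path)
            with open(path, "wb") as f:
                for _ in range(PER_FILE):
                    f.write(CHUNK)       # overwrite free clusters
    except OSError:                      # "disk full" ends the loop
        pass
    finally:
        for path in created:             # delete the filler again
            try:
                os.remove(path)
            except OSError:
                pass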

 

That's why I am curious whether or not your PRO tool operates the same way.



  • Moderators

You can't reduce the number of entries in the MFT. All records have a unique sequence number which is used as part of the indexing between records, so removing one would invalidate all that follow. Maybe you could chop off the records after the last live record, but those aren't used anyway, so they wouldn't hinder access speed. And I doubt whether any deleted MFT records noticeably hinder a binary search anyway.
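
To see why, here's a short Python sketch of the header at the front of every 1 KB MFT record, following the commonly documented NTFS 3.1 on-disk layout (my own illustration; the offsets aren't from any official API). Each record carries its own record number plus a sequence number, and file references elsewhere embed both, which is what pins deleted slots in place.

    import struct

    # FILE record header, first 0x30 bytes (documented NTFS 3.1 layout).
    FILE_RECORD_HEADER = struct.Struct("<4sHHQHHHHIIQHHI")

    def describe_record(buf: bytes) -> str:
        (sig, usa_ofs, usa_len, lsn, seq_no, links, attr_ofs, flags,
         used, alloc, base_ref, next_attr, _align, rec_no) = \
            FILE_RECORD_HEADER.unpack_from(buf)
        if sig != b"FILE":
            return "not a FILE record"
        in_use = bool(flags & 0x01)      # flag 0x01 = in use, 0x02 = directory
        return (f"record #{rec_no}, sequence {seq_no}, "
                f"{'live' if in_use else 'deleted (slot kept)'}")

    # Self-test with a synthetic deleted record: NTFS clears the in-use
    # flag and later bumps the sequence number on reuse, so stale
    # references can be detected -- the slot itself never moves.
    fake = bytearray(48)
    fake[0:4] = b"FILE"
    struct.pack_into("<H", fake, 0x10, 7)    # sequence number 7
    struct.pack_into("<I", fake, 0x2C, 123)  # record number 123
    print(describe_record(bytes(fake)))      # record #123, sequence 7, deleted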

 

The best way, as Keatah says, is to unload, format, and restore. So what's your magical pro tool that removes deleted entries?

 

Willy, loading lots of files doesn't shrink the MFT, just the MFT zone. It doesn't even stop MFT expansion.


Guest Keatah

My uber-pro kit is internal and in-house. It's both a hardware and software solution. Its name isn't speakable in this forum without getting a reprimand, as is true of many of our custom-made utilities.

 

I can tell you it shadows all of the existing metafiles into RAM. Then it simulates a format by erasing the existing on-disk table, and repopulates the table from the top down, sometimes moving a file, sometimes adjusting a record. It isn't all that efficient and consumes a lot of time, hence its unspeakable name. There's nothing really proprietary about it other than the time needed to code it, I suppose. I classify it as brute force.

 

There was discussion on the MYDEFRAG (JkDefrag) forums a while back about this, and the author had cited lack of documentation as the reason for not developing the feature. I guess the whole forum and website is a goner nowadays.

 

I'm investigating something much more elegant, from Paragon. I wanted to play with it a while ago but couldn't find the time until now. It's Hard Disk Manager Professional, and I consider it semi-pro level: the cost is about $100.00 for a toolkit that does a bunch of partitioning things and more.

 

But it has a seemingly magic bullet that truncates the table and pushes all the MFT records to the top of it.

 

In my preliminary test I had an MFT that was 63 MB in size, and after running HDMpro on it, the MFT was down to 37 MB. Recuva had found 24,363 unused entries, ripe for overwriting; after HDMpro, Recuva came up with 20 records. I assume those 20 records were created during the reboot afterwards. So apparently HDMpro had squished out the unused entries and gained back 26 MB of space. Better thought of: your CPU and file-system code have 26 MB less data to sort through.


  • Moderators

I'm wondering what sort of time/space gain shrinking the MFT will give.

It seems the space gain is marginal, so does this maintenance exercise give a general performance boost?

 

Hopefully that doesn't sound derogatory; I'm genuinely interested.



Guest Keatah

Not at all, not at all.

 

I can guess and say fractions of a second on a USB disk if you eliminate 500,000 entries.

 

I suspect, but will need to verify, that a properly compacted and de-slacked MFT will improve defrag operations, not only during the initial reading of the map but throughout the job. I think we could gain several minutes.


  • Moderators

Again - no malice - but doesn't it seem like the effort is not worth the reward?

I mean, you can run CC or DF, as examples, and you tend to see some benefit; obviously this is relatively short-lived and directly related to how often you perform those tasks.

I think I'm just not understanding how 'cleaning' it will affect my PC experience.



Guest Keatah

If I'm pushing around thousands of small files quickly, it does indeed make a difference. I'll need to benchmark to see what the exact results are.
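
Since the claim is testable, here's a crude before/after benchmark sketch in Python (mine, with arbitrary numbers; the E:\ path is a hypothetical stand-in for the volume whose MFT gets compacted - run it once before and once after, and compare):

    import os
    import shutil
    import tempfile
    import time

    N = 10_000                           # arbitrary small-file count

    def small_file_churn(root: str) -> float:
        """Create, enumerate, then delete N tiny files; return seconds."""
        start = time.perf_counter()
        for i in range(N):
            with open(os.path.join(root, f"f{i:06d}.dat"), "wb") as f:
                f.write(b"x" * 64)
        list(os.scandir(root))           # force a full enumeration
        for i in range(N):
            os.remove(os.path.join(root, f"f{i:06d}.dat"))
        return time.perf_counter() - start

    work = tempfile.mkdtemp(prefix="mftbench_", dir="E:\\")  # volume under test
    try:
        print(f"{N} create/enum/delete cycles: {small_file_churn(work):.2f}s")
    finally:
        shutil.rmtree(work, ignore_errors=True)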

 

The effort is rather small. With Paragon doing the compaction, a roughly half-size reduction takes about 1-2 minutes on a 750,000-record MFT. If it's a system disk you need to go offline; if it's a data disk, you can do it from within Windows.

 

And the slower the interface (USB, for example), the bigger the improvement.

 

I'm going to guess that working with a smaller $MFT is beneficial on an SSD too.

 

At the moment I'm "developing" an exact procedure to do it all from a one-click script.


  • Moderators

Looking forward to the outcome.



Guest Keatah

There may be none, I don't know. It may only manifest itself in certain cases. One of my systems had 1,800,000 orphaned entries, or some even bigger number. With a mechanical disk there's either seek delay or just plain latency as the CPU works through those, especially if some are at the bottom of the tree or way out on a branch. Lots of empty holes to jump over.

 

With today's SSDs, garbage collection is still pretty dumb, and the disk is going to happily keep those orphaned entries safe and sound. If they're updated frequently, one "spot" or branch is going to bring all the nearby branches with it when wear leveling kicks in at the block level.

 

Take, for example, the analogy of a 2 MB text file in which you update only one line. On an SSD you still have to push the other 1.9999 MB around; on a spinner you can precisely pluck and place a single entry at will. You can thank the block-writing methods inherent in flash memory for that.
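
Taking that analogy at face value, the arithmetic looks like this (my numbers, purely illustrative; real SSDs rewrite at erase-block granularity rather than whole files):

    # One edited ~80-byte line in a 2 MB file, per the analogy above.
    file_size = 2 * 1024 * 1024        # the 2 MB text file
    changed = 80                       # bytes actually modified
    ssd_moved = file_size              # whole file pushed around on flash
    hdd_moved = 4096                   # one in-place cluster on a spinner

    print(f"SSD: {ssd_moved / changed:.0f}x the changed bytes rewritten")
    print(f"HDD: {hdd_moved / changed:.0f}x the changed bytes rewritten")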

 

Having fewer dead spots in the MFT lets more of it fit in the SSD's cache RAM, and it's smaller to boot. And fewer blocks are erased by metafile activity.

 

Hope that makes some sense. It is truly late here and it's bedtime.


Perhaps someone should build a tool that rebuilds the $MFT from itself, by reading the individual links, pruning the dead ones, and then reconnecting the live links. I would think it would only be useful for people who deal with many, many thousands of files, but it might be interesting to run something like that on an older machine and see the effects.
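
As a toy model of why that rebuild is hairy, here's a self-contained Python simulation (not real NTFS, just an illustration of the pruning idea): records reference each other by slot number, so pruning the dead ones renumbers everything, and every surviving link has to be rewritten.

    # Toy table: live entries point at a parent by slot number.
    records = [
        {"id": 0, "live": True,  "parent": None},
        {"id": 1, "live": False, "parent": 0},   # dead, pruned below
        {"id": 2, "live": True,  "parent": 0},
        {"id": 3, "live": False, "parent": 2},   # dead, pruned below
        {"id": 4, "live": True,  "parent": 2},
    ]

    live = [r for r in records if r["live"]]
    remap = {r["id"]: new for new, r in enumerate(live)}  # old slot -> new slot

    for new, r in enumerate(live):       # rebuild with fixed-up links
        r["id"] = new
        if r["parent"] is not None:
            r["parent"] = remap[r["parent"]]   # every link must be rewritten

    print(live)   # 3 records renumbered 0..2, parents remapped to match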


  • Moderators

Whilst I have no experience of the 'MFT compactors' mentioned (I might try the trial version of Paragon's s/w), I very much doubt that these dead records in the MFT can be removed. There are just too many critical internal links in each record to be changed safely. Of course, compact has more than one definition: a defrag of the extents, plus a chop of the slack space at the end of the file, is one, and quite realisable too.

 

In any event, file access is done, in the main, by file name, and that means reading through the folder-to-file chain. No more records are accessed to get to a file if the MFT has a million deleted records in it than if it has none. Only software such as Recuva reads all the MFT records sequentially. The point is that although it is very nice to have a clean MFT, there would be an immeasurably small increase in access speed. What software manufacturer would risk working on a proprietary, undocumented system, on the most vital of system files, for the smallest of gains? I would say you can count Piriform out.


Guest Keatah

Paragon killed all the orphaned links. I set up a 2 GB test partition, filled it to 1.8 GB, and added and deleted 500 small files.

 

Recuva found all 500 entries and marked them as recoverable. After running HDMpro, Recuva couldn't find any entries. Repeating the test with 500,000 files, I could see the MFT expand and then contract back.

 

Granted, I'll need to look through everything to be sure, but it seems to do what it claims to do. As far as I know, no other company is making that claim.


Guest Keatah

I'm satisfied with the performance gains on a mechanical disk. On older systems the Start Menu populates much faster, and operations with small files are noticeably faster, especially in my case where that's what I've been working with recently. I didn't need a stopwatch to give it a thumbs up.

 

And cutting it down to size, from 420 to 250 MB, is a good thing. Regardless of how the database is structured, the smaller the better: less data to transfer over the bus, less searching, less skipping over empty holes, less processing. More of the table can stay in memory. System housekeeping tasks seem just a little snappier.

 

Taking the MFT down to size is one of the hidden performance benefits of reformatting. Now I can appreciate, a little, why some people are so keen on that activity. I guess.

 

Indeed, I asked my colleague to look into how it all works. Entries in the database that don't point to any valid file on the disk are indeed removed; the table is rewritten sans those entries.

 

Understand that this is different from defragging the MFT. Defragging simply brings the separate parts (strewn across the disk) together into one area; getting rid of the dead entries is a whole other animal. As far as we can tell, the Paragon product is the only utility (usable by non-technical people) that can do this.

 

I'm tired of seeing 700,000 entries in Recuva! I'm done looking for utilities to compact the MFT!


  • Moderators

I agree, it would be nice to see a clean MFT without all those spurious entries. However, a compacted MFT would force all new file allocations - all those temp files that are created and deleted continuously - to the end of the MFT, so there would be quite a lot of skipping to and fro anyway, starting at the root directory in logical record 5.

 

I've downloaded a trial of Paragon Disk Manager Pro but it won't install under Sandboxie, and I don't want to do a proper install/uninstall with all the dregs left behind (clogging up the MFT). I'll try to install it on an older XP box later.

 

I guess my point is that the conceptual approach to compacting the MFT involves moving one record into another slot, and that is too complex and dangerous. Perhaps there's a different and smarter way to do it, such as moving a file to a different partition and then moving it back again, repeated ad infinitum. That would close up the MFT, and NTFS would do all the work safely. If your source is willing to talk, I'd be interested.
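
For what it's worth, a minimal Python sketch of that bounce idea (untested as an MFT compactor, and whether NTFS actually hands back low-numbered slots first is exactly the open question; the D:\data and E:\bounce paths are hypothetical):

    import pathlib
    import shutil

    SRC = pathlib.Path(r"D:\data")      # volume whose MFT you want compacted
    STAGE = pathlib.Path(r"E:\bounce")  # scratch area on a different volume
    STAGE.mkdir(exist_ok=True)

    for item in SRC.iterdir():          # shallow pass; recurse as needed
        parked = STAGE / item.name
        shutil.move(str(item), str(parked))  # out: the old record is freed
        shutil.move(str(parked), str(item))  # back: NTFS allocates a fresh one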

 

By the way, the link above is wrong in so many ways, starting with the description. No wonder the patent hasn't been granted.


Guest Keatah

After removing 290,000 records from another system and letting it sit quiescent for a few minutes, the amount of memory in use was 12 MB less than it was prior.

 

The Paragon tool seems to do a lot of work in memory, because over a 2-minute period there isn't a lot of disk access. And the whole job is done in about 3 minutes or so.

 

I don't know how much more I'm going to look into this. But I'll be happy to pass along any questions and see what answers come back.


  • Moderators

Well, I installed Paragon Drive Manager Pro on the XP box and ran a defrag of the MFT to test. Then I requested a Compact MFT, and to my great annoyance, after it had accepted the request, a message said that it wasn't available in the trial version. So I can't test it.

 

I'm just curious how it does it. That's the question.

