Augeas

Posts posted by Augeas

  1. The point of Recuva is to recover previously deleted files, and the point of NTFS, which I assume we're dealing with here, is to ensure the integrity of live metadata and live user files. The two don't always go together.

    Recuva reads the MFT and lists all records for files that have the deleted flag set. It doesn't select or exclude any files apart from those options chosen by the user. What you see is what there is in the MFT. If any deleted record has been reused by NTFS then the deleted file's information has gone and can't be shown.
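    That MFT walk can be sketched in a few lines. This is an illustrative model only, not Recuva's actual code, assuming the published FILE record layout (the magic 'FILE' at offset 0, and a 2-byte flags field at offset 0x16 where bit 0 means 'in use' and bit 1 means 'directory'); `mft_record_state` is a made-up name:

```python
import struct

def mft_record_state(record: bytes) -> str:
    """Classify a 1 KB MFT FILE record by its header flags.
    Offsets follow the published NTFS FILE record layout:
    magic 'FILE' at offset 0, 2-byte flags at offset 0x16
    (bit 0 = record in use, bit 1 = directory)."""
    if record[:4] != b"FILE":
        return "not a FILE record"
    flags = struct.unpack_from("<H", record, 0x16)[0]
    in_use = bool(flags & 0x0001)
    is_dir = bool(flags & 0x0002)
    kind = "directory" if is_dir else "file"
    return f"live {kind}" if in_use else f"deleted {kind}"

# Synthetic records (only magic and flags populated, rest zeroed):
live = bytearray(1024)
live[:4] = b"FILE"
struct.pack_into("<H", live, 0x16, 0x0001)
deleted = bytearray(1024)
deleted[:4] = b"FILE"

print(mft_record_state(bytes(live)))     # live file
print(mft_record_state(bytes(deleted)))  # deleted file
```

    A deleted file is simply a record whose in-use bit is clear; nothing else distinguishes it, which is why a record reused by NTFS takes the old file's information with it.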

    With an SSD the process is the same but the outcome is different. Although the deleted file list is still shown very little data is recoverable, due to the way the SSD's controller handles deleted pages. Recovered files will contain zeroes, so using Recuva or any file recovery software on an SSD is likely to be futile.

    However I have noticed a difference between running Recuva when I had an HDD and when I moved to an SSD. On the HDD the list of files included many recognisable user files, and there was a good chance of recovering many of them. With the SSD I see a large list of what appear to be system files, and a very short list (fewer than 20) of user files. This is puzzling, but correlation isn't necessarily causation.

    The only quick explanation I can see is that there are a larger number of dynamic file allocations and deletions taking place that are reusing the deleted user file records in the MFT, wiping out user deletions. When I moved to an SSD I also moved from Win 8 to Win 10. I don't believe that the SSD is relevant here, as it knows nothing of either NTFS or the MFT. NTFS version 3.1 has been the same on disk since Windows XP, so is Win 10 (or Firefox, or both of them) now upping dynamic file allocation? Is it Win updates? I don't know.

    The point is that Recuva is doing what it always has done, reading the MFT and listing the deleted file records. That it isn't showing what the user might want it to is frustrating, but just how it is.

    (Put as far as I know before every sentence.)

     

    Because - probably - a deep scan runs a normal scan first, and a normal scan reads the MFT, where the file names are held. The file names are listed, but the clusters which held the file's data should contain zeroes, or more correctly a read request will return zeroes (who knows what the clusters actually contain; they are inaccessible).

    It is quite usual for recently deleted files not to be found. A file deletion leaves the MFT record marked as available for reuse by any activity, and even opening Recuva writes a few files. I have a sneaking suspicion that NTFS reuses available MFT records held in memory first, so a recently deleted file's record is very exposed to reuse.

    As you say, running a deep scan on an SSD is pretty much pointless, there's next to nothing to return.

  3. Nobody can say whether you can, or will, recover any deleted files. All you can do is try.

    A deep scan runs a normal scan first, so when you chopped the deep scan you would have seen the results from the normal scan. This scans the MFT which is very fast. Running a deep scan on the recycler is not feasible, as the directory information is held in the MFT not at file level, and a deep scan looks for clusters containing files, not directories.

    Files sent to the recycler are renamed to $Ixxx.ext and $Rxxx.ext. The data part is held in the $R file. You could run a normal scan with $R in the filename box, or just look for $R files. A deep scan will not list the files under this or any name, as filenames are held in the MFT. (I have seen files deleted from the recycler return to their original names; I don't really know what rules the recycler follows.)
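    For the curious, the $I half of that pair can be decoded by hand. A minimal sketch, assuming the Windows 10 era $I layout (8-byte version, 8-byte original size, 8-byte deletion time as a FILETIME, 4-byte path length, then the UTF-16 path); the data here is synthetic, not a real recycler file:

```python
import struct
from datetime import datetime, timezone

def parse_recycle_i(data: bytes):
    """Parse a Windows 10 era $I metadata file from $Recycle.Bin.
    Layout (version 2): 8-byte version, 8-byte original file size,
    8-byte deletion time (FILETIME), 4-byte path length in UTF-16
    characters, then the original path."""
    version, size, filetime, path_len = struct.unpack_from("<QQQI", data, 0)
    path = data[28:28 + path_len * 2].decode("utf-16-le").rstrip("\x00")
    # FILETIME counts 100 ns ticks since 1601-01-01 UTC
    epoch_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc).timestamp()
    when = datetime.fromtimestamp(epoch_1601 + filetime / 10_000_000,
                                  tz=timezone.utc)
    return size, when, path

# Synthetic $I record: version 2, 1234-byte file, deleted 2020-01-01 UTC
raw_path = "C:\\test.txt\x00"
ft = 132223104000000000  # FILETIME ticks for 2020-01-01 00:00:00 UTC
blob = struct.pack("<QQQI", 2, 1234, ft, len(raw_path)) + raw_path.encode("utf-16-le")
size, when, original = parse_recycle_i(blob)
print(size, when.date(), original)  # 1234 2020-01-01 C:\test.txt
```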

  4. FAT32 is a beefed-up version of FAT16. However it needs four bytes to hold the first cluster number (in the FAT tables) instead of two, so it uses two additional bytes from elsewhere (the actual address of the start of the file is held in two separate halves). When a file is deleted the additional two bytes of the address, the high end, are wiped by the file system for some reason, and as a result the address of the file is corrupted. This is why you get the overwritten file message, Recuva is looking in the wrong place. It isn't possible to find the right place, except by guessing.
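    The split address can be shown with a couple of lines of Python. This is just an illustration of the arithmetic, with made-up numbers; the field names come from Microsoft's FAT specification (DIR_FstClusHI at offset 20 and DIR_FstClusLO at offset 26 of the 32-byte directory entry):

```python
def start_cluster(hi: int, lo: int) -> int:
    """Rebuild a FAT32 file's first cluster number from the two
    16-bit halves stored in its directory entry."""
    return (hi << 16) | lo

print(hex(start_cluster(0x0003, 0x1A2B)))  # 0x31a2b - the true address
# On deletion Windows zeroes the high word, so only the low half
# survives and any recovery tool is pointed at the wrong cluster:
print(hex(start_cluster(0x0000, 0x1A2B)))  # 0x1a2b
```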

    A deep scan looks for clusters with a recognisable file signature, so can be useful in cases such as this. However a text file has no file signature, so is not identified by Recuva.

    It may be possible to find this file with a hex editor, but as it's only a test it isn't worth bothering.

  5. 4 hours ago, Andavari said:

    The way NTFS works (in Windows 10 at least) shocked me the other day, the MFT size is over 511MB on my C:\ drive SSD.

    That's nothing to worry about, it's only 500,000 live and deleted files. On my 120 gb C drive SSD my MFT is 472 mb, and I am only using 36 gb. Win 10 install allocates and deletes a lot, a very large lot, of files. Remember that large files, and large directories, will use multiple MFT records so the total file count is probably under 500k.
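    The arithmetic is simple enough to sketch, assuming the standard 1 KB NTFS record size:

```python
MFT_RECORD_SIZE = 1024  # bytes; the standard NTFS FILE record size

def approx_record_count(mft_bytes: int) -> int:
    """Rough count of MFT records (live and deleted) from the MFT's size."""
    return mft_bytes // MFT_RECORD_SIZE

# The 472 MB MFT mentioned above:
print(approx_record_count(472 * 1024 * 1024))  # 483328 records
```

    The file count will be lower than the record count, since large files and large directories take more than one record each.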

    WFS will not touch the MFT, unless it's an entire disk erase. Windows does not reduce the size of the MFT, nothing does, apart from a reformat. When a disk is nearly full then NTFS will allocate files within the MFT Zone, which is not the same as reducing the size of the MFT.

    You could read through http://kcall.co.uk/ntfs/index.html although it is heavy going. The part headed MFT Records, or MFT Extension Records, describes the index clusters (called Folder Entry in Defraggler) for a file; the principle is the same for a folder.

    Microsoft sometimes calls directories indexes, and we call them folders. It is confusing.

    The MFT is a file, which holds one or more 1k records for every file on the drive, including itself. A folder, or directory, consists of one or more records in the MFT. Large files, or large folders, may have separate index clusters allocated which hold the addresses of the many MFT records used by the file or folder.

    Yes, I have noted your signature, and agree entirely. We're a long way from Recuva Suggestions, but it's fun, sort of.

  7. 3 hours ago, Willy2 said:

    Quote:

    "A directory is a record, or number of records, in the MFT. NTFS will reduce the size of the directory on file deletion, shuffling the live entries up to overwrite the deleted entry. There are no references to deleted files in an NTFS directory. The MFT is heavily protected and any writes to it with, for instance, a hex editor are backed out in seconds. There is no concept of a 'vacant' directory entry."

    No, when a file is deleted then NTFS doesn't reduce the size of the directory & doesn't overwrite the deleted entry & doesn't shuffle all live entries up.

    Yes it does. Use a hex editor to find a directory record in the MFT. You will see that file names are held in ascending name order in the $Index_Root attribute. Delete one of those files and the remaining file names are moved up to fill the gap, and an EOF marker overwrites what was the last file name. Larger directories will have MFT extension records, and one or more index clusters. The principle is the same. This is easy to observe with a small directory, and I have. There is no process I know of that can flag a file within a directory as deleted. Show me an MFT directory record containing a deleted file.
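    A toy model makes the point. This is a simplified illustration of the behaviour described above, not NTFS code: the index keeps names sorted and packed together, and deletion leaves no tombstone behind.

```python
def delete_from_index(entries: list[str], name: str) -> list[str]:
    """Toy model of an NTFS directory index on file deletion:
    the surviving names stay in ascending order with no gap, and
    nothing marks where the deleted name used to be."""
    return sorted(e for e in entries if e != name)

index = ["alpha.txt", "beta.txt", "gamma.txt"]
print(delete_from_index(index, "beta.txt"))  # ['alpha.txt', 'gamma.txt']
```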

    Quote

     If that was the case then RECUVA wouldn't be able to find & recover those "deleted files", would it ?

    Yes it would. Recuva doesn't look at the directory records, so it doesn't matter what's in them. Recuva looks for deleted FILE records in the MFT, and lists those. Deleted files are not listed in directory records in the MFT. Directory information for a deleted file is found by following the directory chain-back address in the deleted file's record. (As far as I understand from my experience using Recuva.) If you run Recuva you will see many deleted files listed that have no directory information - the directory records no longer exist, but Recuva finds the files. 

    Quote

    It wouldn't make sense either in another way. Imagine a (VERY) large MFT with A LOT OF entries. When I delete one file that has an entry at the very beginning of the MFT, then it would take NTFS A LOT OF / way too much time to "shuffle all live entries" to overwrite only one deleted entry. And that undermines all your 3 other replies as well.

    This is a misunderstanding of the structure of the MFT and what has been said here. No MFT records are shuffled anywhere, nor has that claim been made. The shuffling is of file names and associated info within the MFT records for the directory, not MFT records themselves. My three other replies stand.

    Quote

    In other words: There are entries that are "vacant" (or as Microsoft puts it: "unallocated").

    These are unallocated MFT records. They are not the same as file names held within a MFT directory record.

    I'm beginning to wonder whether the terms 'MFT' and 'Directory' are being confused?

  8. You're quite welcome to disagree. An NTFS directory is a record in the MFT. A large directory may have several MFT records. An exceptionally large directory may have a separate index cluster allocated. I don't know whether we are using the same definition for directory.

    By the way I have just edited my big response, please use the later version.

    More by the way, no, I have never used Defraggler.

  9. 3 hours ago, Willy2 said:

    - And here is a bug that need to be fixed in the next version of RECUVA:

    - When RECUVA is busy analyzing drives then something very odd shows up. The program shows a ridiculous high %  and a very high negative %. Very, very odd. See the 2 pictures attached.

    - Question: Do compressed files make these ridiculous high % show up in the program ? Like they do in CCleaner ?

     

    I don't know, I've never seen this. All I get is an increasing percentage to 100% when the scan is finished.

  10. 3 hours ago, Willy2 said:

    - Add an option/ options to hide all files that are "Unrecoverable" / "Poorly recoverable" / etc. Present these options to the user before RECUVA analyzes a drive.

    Yes, this could be an Option in Options/Actions. The scan would still take as long as before, as all records in the MFT have to be scanned to see whether they are recoverable or not.

    Quote

    - Add one or more options to overwrite entries in a directory file. Or include "overwrite directory file entries" program code in the "overwrite selected file(s)" subroutines that are already available/used in the current version. As far as I know it's impossible to directly write to/ erase information from the Master File Table (MFT). That's why the program must overwrite entries in the directory files  itself. Perhaps it's possible to overwrite a directory entry with e.g. zeros (or another character) that would signal to the NTFS that such a directory entry is "vacant / empty" ?

    A directory is a record, or number of records, in the MFT. NTFS will reduce the size of the directory on file deletion, shuffling the live entries up to overwrite the deleted entry. There are no references to deleted files in an NTFS directory. The MFT is heavily protected and any writes to it with, for instance, a hex editor are backed out in seconds. There is no concept of a 'vacant' directory entry.

    Quote

    - The developers of Recuva could take this one step further. Perhaps it's possible to even reduce the size of a directory file itself. So, if a directory file contains say 100 entries of which say 30 refer to files that don't exist anymore or are "securely overwritten" then perhaps it's possible to reduce the size of that directory file to the remaining 70 entries.

    See above. Directory size is maintained by NTFS.

    Quote

    The reduction of the size of the directory file could happen under a separate option or included in the "Securely overwrite files" subroutines.

    See above.

    Quote

    - Then the "Securely overwrite files" program code could contain the following things/subroutines :

    1)) Securely overwrite files

    2)) Overwrite "empty" directory file entries.

    3)) Reduce the size of directory files

    See lots of above.

    Quote

    - In the past I had a program called "Clean Disk Security" ( http://www.theabsolute.net/sware/clndisk.html CDS) that was able to reduce the amount of directory entries in a directory file. Several times in the past I ran RECUVA before and after running CDS. As a result of running CDS I noticed that the amount of e.g. "unrecoverable files" (as reported by Recuva) shrank (very) dramatically. Unfortunately I don't have that version (v7.xx) of that program (CDS) anymore.

    I couldn't say what that was doing. Some older software did things that are not legit or possible today.

    (All the above is 'generally speaking'. There can be exceptions.)

  11. Well, this is heresy, but here goes:

    Remove the multiple pass overwrite option on secure file deletion and Drive Wiper, leaving one zero-byte pass only, saving decades of lost time and tons of CO2 in wasted energy. Surprisingly for a tech and science dominated field, the multipass myth has achieved unquestionable god-like status. It was thoroughly debunked over twenty years ago.
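    For what it's worth, a single zero pass is trivial to express. A hedged sketch, not CCleaner's implementation, and with the usual caveat that on SSDs and journalling file systems an in-place overwrite is not guaranteed to hit the original blocks:

```python
import os
import tempfile

def zero_wipe(path: str) -> None:
    """Single-pass wipe: overwrite the file's contents with
    zeroes, force them to disk, then delete the file."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file:
fd, victim = tempfile.mkstemp()
os.write(fd, b"secret data")
os.close(fd)
zero_wipe(victim)
print(os.path.exists(victim))  # False
```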

    Chances of being adopted - close to zero.

    The Ignored Files count includes live files, zero-byte files, system files etc., which you would not normally want to recover. Also, if there is any file name or path in the aptly named Filename or Path box then this will restrict the results, possibly down to zero. The free and paid versions have the same recovery facilities.

  13. If by 'Exited the list' you mean closed the program, yes, you will have to do a rerun. If you mean that you have, for example, typed a search word in the File/Path box, then clearing that should restore the original list. Scan results are held in memory to avoid overwriting data, so when the program closes the results are gone. If your storage device is an SSD then post back here first.

    Good to hear that copies come to the rescue. As for disabling TRIM, that advice sounds as if you disable TRIM after discovering that files have been accidentally deleted. This would be ineffective, as TRIM is an asynchronous command issued on file deletion, and disabling it afterwards is shutting the stable door when the nag has well and truly galloped off down the road. For this method to work you'd have to have TRIM disabled permanently, which is possible if not recommended (although I'm not fully convinced of its worth these days).
