
Posts posted by Augeas

  1. It appears that the deleted file is called sandvich.png somewhere in c:\users. The overwriting file is also called sandvich.png in c:\users. If you follow the path of the overwriting file you will find a live copy of sandvich.png. As the deleted file and the overwriting file are the same image, when you recover the deleted file you will get the live file's contents. This has caused the confusion. I don't know if you want to get rid of sandvich, but if you do then delete the live copy. You will then be able to overwrite your deleted file.

    This may have been caused by sandvich being deleted and then recreated more or less immediately afterwards. The new allocation has used a different record in the Master File Table to hold the file name etc., but the same clusters have been used to hold the new file. Thus the live file and the deleted file entries in the MFT point to the same clusters, and coincidentally it is the same image as before.

  2. The entry for the deleted file in the Master File Table contains the addresses of the clusters it used. If these clusters have been reused then another live file is using them - they have been overwritten. The cluster addresses in both the deleted file and the live file point to the same clusters. If you recover an overwritten file you will be recovering (copying) what's in the live file. The data from the deleted file has gone forever. You cannot securely overwrite the deleted file's clusters because they are being used by the live file. Recuva is working fine, this is how NTFS works.
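    As a toy illustration only (this is not NTFS's real on-disk format; the record layout and file names below are invented for the example), here is roughly why 'recovering' an overwritten file just hands back the overwriting file's data:

# Toy model of cluster reuse. Each "MFT record" is just a name plus the list
# of cluster numbers the file's data occupies; the cluster pool is shared.
clusters = {}  # cluster number -> bytes currently stored there

def write_file(record, chunks):
    # Write the file's data into the clusters its record points to.
    for n, chunk in zip(record["clusters"], chunks):
        clusters[n] = chunk

def recover(record):
    # "Recovery" is simply copying whatever is in the record's clusters now.
    return [clusters.get(n) for n in record["clusters"]]

deleted = {"name": "report.doc", "clusters": [10, 11]}   # the deleted file's record
write_file(deleted, [b"old data 1", b"old data 2"])

# NTFS frees those clusters on deletion; a later live file is given the same ones.
live = {"name": "photo.jpg", "clusters": [10, 11]}
write_file(live, [b"new data 1", b"new data 2"])

print(recover(deleted))  # [b'new data 1', b'new data 2'] - the live file's contents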

  3. Recuva is working fine. Overwritten by another file means that the deleted file's clusters have been reused by another live file. You can 'recover' the original file as a recovery is simply copying what's in the file's clusters. What you will get however is the data from the overwriting file, the original data has gone forever. You cannot overwrite an overwritten file as you would be overwriting the data from the new file, which is not what you want to do.

  4. I have to disagree, Nukecad: shift/del is not, and has never been, a secure delete. The file's content is unchanged. It's true that small files can be, and probably are, held in their MFT record. In some cases - long file names etc. - there is not enough room for even a small file, and a separate cluster is used to hold the data.

    Why is the O/P seeing 00's? I guess he or she is using an SSD, in which case kiss goodbye to recovering any of that data.

  5. If you have an SSD with TRIM enabled on both the device and the O/S - which I assume you have - then file recovery is, with a few exceptions, impossible. The TRIM'd data has gone forever and neither Recuva nor anything else can retrieve it.

    Small files (under 700 bytes) will be stored in the MFT, so they will be recoverable.
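    If you want to check the O/S half of that assumption, Windows exposes the delete-notification (TRIM) setting through fsutil. The snippet below is just a convenience wrapper around that standard command - it is not something Recuva provides, it may need an elevated prompt, and the exact output text varies by Windows version - but a reported value of 0 means the O/S is issuing TRIM.

# Query Windows' DisableDeleteNotify setting: 0 = TRIM notifications enabled,
# 1 = disabled. Windows only; run from an (ideally elevated) command prompt.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
if "= 0" in result.stdout:
    print("The O/S is sending TRIM, so deleted data on the SSD is almost certainly gone.")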

  6. The ignored file count covers non-deleted files, zero-length files, etc. If you are using an SSD then a deep scan is likely to find nothing. If you have some search criteria in the File/Path box then it is possible that no file matches those criteria.

  7. Microsoft does update its documents. Despite my earlier post (does anyone actually read what I post?) Trium's extensive cut and paste concerns XP and Server 2003, and Eddy's link harks back to 2004. If you really want to know about the Set MftZone parameter of Fsutil in Windows 10 then look at https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil-behavior
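    For what it's worth, the same fsutil behavior command that page describes can be driven from a script. This is only a hedged sketch (Windows only, changing the value needs an elevated prompt, and the exact output text varies by Windows version):

# Sketch: read (and, if you really want, change) the MftZone setting with fsutil.
# It only alters how much space is reserved for MFT growth, not the MFT itself.
import subprocess

print(subprocess.run(
    ["fsutil", "behavior", "query", "mftzone"],
    capture_output=True, text=True,
).stdout.strip())

# To reserve a larger zone, run something like this from an elevated prompt:
# subprocess.run(["fsutil", "behavior", "set", "mftzone", "2"], check=True)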

     But all this is a distraction. The MFT Zone is a mythical, OK logical, area of disk space which is kept clear for MFT expansion. I can't see what it has to do with wipe free space speed. The MFT remains unchanged, the same bits in the same position, whatever the Zone parameter is.

    An MFT wipe is a rather crude 'fill the empty records with files and then delete them' process. I would hope that CC would calculate the number of empty MFT records, allocate the same number of small files (I think it's about 650 bytes) and then delete them. That file size is used as it does not require a cluster allocation, and it is large enough to ensure that the previous contents of the 1024-byte record are overwritten. The process is simple: CC allocates a number of small files; NTFS scans the MFT and reuses the free records (a rough sketch of the idea follows at the end of this post).

    There is some directory maintenance - those new files have to go into a directory somewhere - and this could be complex if the number of free records is considerable. That number is likely to be in the tens of thousands. Does CC use one directory or multiple directories? I don't know. Furthermore, directory names are held in ascending order, so if the new files are named in anything but an ascending sequence, then for each file allocation all the directory entries will have to be moved down one place to fit the new filename in. An ascending file name will go on the end of the long name chain with minimum disruption. A descending file name will be a killer, with extensive updates through a chain of thousands of files.

    Does this cause the slowing down of WFS? Quite possibly, I really don't know.
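    Here is a minimal sketch of that general 'fill the free records with tiny files, then delete them' idea. It assumes the record count and target folder are supplied by the caller (finding the real number of free MFT records is a separate job), uses a nominal 700-byte payload so the data stays resident in the 1024-byte record, and is emphatically not CCleaner's actual implementation:

# Sketch of an MFT-wipe style pass: create many small files so NTFS reuses free
# MFT records (overwriting their old contents), then delete the files again.
# The count, folder and 700-byte size are assumptions for the example.
import os

def fill_and_delete_mft_records(folder, count, size=700):
    os.makedirs(folder, exist_ok=True)
    names = []
    for i in range(count):
        # Ascending names, so each new entry goes at the end of the directory index.
        name = os.path.join(folder, f"wipe{i:08d}.tmp")
        with open(name, "wb") as f:
            f.write(b"\x00" * size)  # small enough to stay resident in the MFT record
        names.append(name)
    for name in names:               # now free the records again
        os.remove(name)
    os.rmdir(folder)

# Example (on an NTFS volume): fill_and_delete_mft_records(r"D:\mftwipe", 10000)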


  8. The Restore Folder Structure option applies to recovery. You can rerun the scan (if you have closed Recuva), check the Restore Folder Structure box, and then recover whatever you select to a separate drive. The recovered files will have the folder structure as far as can be established. You can then copy what you want back to your C drive.
