Piriform Community Forums

Augeas

Posts posted by Augeas


  1. I'm a bit wary of this piece of thread necromancy, but here goes:

    If you're talking about the FAT, as in FAT32, then the FAT entries are zeroed on file deletion, so there's no need or even sense in 'cleaning the FAT'. Deleted file names are held in the various directories/folders in the system and are more difficult to remove.
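
    Just to illustrate (a minimal, hypothetical Python sketch of what a FAT32 driver effectively does on delete, not CCleaner's or Windows' actual code): the cluster chain in the FAT is zeroed, but the directory entry keeps most of the old name, with only its first byte overwritten.

        # Hypothetical sketch of a FAT32 delete, for illustration only.
        # 'fat' maps cluster -> next cluster (0 = free); 'dir_entry' is the
        # 32-byte directory record for the file, held here as a bytearray.
        END_OF_CHAIN = 0x0FFFFFFF

        def fat32_delete(fat, dir_entry, first_cluster):
            # Walk the cluster chain and zero each FAT entry (mark clusters free).
            cluster = first_cluster
            while cluster != END_OF_CHAIN:
                next_cluster = fat[cluster]
                fat[cluster] = 0                  # FAT entry zeroed on deletion
                cluster = next_cluster

            # The directory entry is NOT wiped: only the first byte of the name
            # is replaced with 0xE5 to flag the slot as deleted. The rest of the
            # old file name stays behind, which is why names linger in the folders.
            dir_entry[0] = 0xE5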

    If you mean NTFS then run a Drive Wipe to clear the MFT.


  2. I have no idea what you're talking about. You mentioned a cookie, said that you knew 'whose cookie it is and why it is there, I also know why it is not removed', but seem unwilling to divulge any of that information. I don't think that your graceless apology will encourage me to extend this thread with any further posts.


  3. I think that you have some peculiarly elevated idea of my capabilities. I am not CCleaner, I didn't write, or see a byte of, the code, and I have no idea how to get rid of it (as it doesn't occur on my machine). If you know where the cookie came from and why it isn't being removed then you know more than me, and it might help others trying to answer your question if you divulged that information.

    As for me, I'll get back to clearing up spammers and editing out obscenities.


  4. By deleting unused space I assume you mean wipe free space?

    Don't run Wipe Free Space or Secure File Deletion on an SSD. If TRIM is enabled (i.e. you didn't get your PC from a museum) then it will perform the same function as WFS and secure deletion. Occasionally run a Defrag Optimise to clear up any TRIM commands that have not been executed.

    You can run the normal CC functions - deleting unwanted files etc - as usual.


  5. I’ve removed your link to a competitor product as this is an official Piriform forum: if anyone is interested they will be able to find it easily enough.
     
    And whilst we’re talking about the competition, that product is only 291 KB. Oh for the days when CC was well under 1 MB, long gone now. CC has always done more than file deletion, but it is a bit of a porker now (it is a bit fat, in other words).
     
    It all depends on the mechanics of the deletion. A folder with 100,000 entries is huge, with hundreds, if not thousands, of MFT records and many index clusters. For secure deletion CC must:
     
    Open each file
    Overwrite the data
    Save the file
    Rename the file
    Delete the file
     
    Every time a file is renamed its position in the huge list of files in the folder is altered. Files are stored in the folder in alphabetic sequence, and when the file is renamed (to a variant of ZZZ.ZZZ) the sequence must be shuffled up or down to accommodate this. If CC overwrites, renames and deletes the files in reverse alphabetic order (last file first) then only a few MFT records have to be rewritten every time a single file is deleted. If it processes the files in a different order, or a different manner, then there’s a huge amount of work to be done.
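     
    A rough sketch of the per-file loop above (hypothetical Python, not CCleaner’s actual code) shows where the cost sits: it is the rename step that moves each entry within the folder’s sorted index, and the processing order decides how much of that index has to be shuffled.

        import os
        import secrets

        def secure_delete(path):
            # Hypothetical per-file secure delete, for illustration only.
            size = os.path.getsize(path)

            # 1-3. Open the file, overwrite the data, force it to disk.
            with open(path, "r+b") as f:
                f.write(secrets.token_bytes(size))    # overwrite with random data
                f.flush()
                os.fsync(f.fileno())

            # 4. Rename to a meaningless ZZZ-style name. This is the step that
            #    moves the entry within the folder's alphabetic index.
            new_path = os.path.join(os.path.dirname(path), "ZZZZZZZZ.ZZZ")
            os.rename(path, new_path)

            # 5. Finally remove the renamed entry.
            os.remove(new_path)

        # Working last-file-first keeps the index shuffling cheap; any other
        # order forces far more of the folder's sorted entries to be rewritten.
        folder = "huge_folder"
        for name in sorted(os.listdir(folder), reverse=True):
            secure_delete(os.path.join(folder, name))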
     
    On the other hand, finding the last file in a huge folder is onerous. Every entry for each file has to be read until the EOF is found. And if the last file is edited, renamed and deleted, is its position lost so that the last file has to be sought from the start again? It would be easier to delete the first file first, but the renaming would be a nightmare for performance. Is this what is happening?
     
    We can add to all this writing entries to the log journal, updating the MFT index clusters, altering the cluster and MFT bitmaps for every file deleted, and syncing the disk every few seconds. And things I haven’t thought of.
     
    It’s a wonder really that the process ever finishes.
     
    I don’t know what the opposing software does, but this list (apart from the editing and renaming) is what NTFS has to do to delete files from a folder, whatever software is used. Perhaps CC doesn’t use the most efficient way of securely deleting the files, and it is only exposed on folders with a very, very large number of files.
     
    You could be better off doing a normal delete of the folder and then running a wipe free space.
     
    The overwriting pattern doesn’t matter by the way, it’s all randomised before being written to the disk.

  6. Maybe it isn't that dumb. Some 30 years ago, when I was a sysprog on IBM mainframes, the space search algorithm was (wherever the write heads were) same track, same cylinder, one cylinder either side, three cylinders either side, give up and go to the start. I would not be at all surprised if NTFS does something similar. If there were no algorithm then files would just be written from the start of the device, or somewhere similar. Maybe they are.
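
    That search order can be sketched roughly like this (a hypothetical illustration of the idea, not actual mainframe or NTFS code):

        def candidate_search_order(current_cylinder, max_cylinder):
            # Hypothetical sketch of the proximity-based free-space search
            # described above: try where the heads already are, widen outwards,
            # then give up and restart from the beginning of the device.
            yield ("same track / same cylinder", current_cylinder)
            for distance in (1, 3):                        # cylinders either side
                for cyl in (current_cylinder - distance, current_cylinder + distance):
                    if 0 <= cyl <= max_cylinder:
                        yield (f"{distance} cylinder(s) away", cyl)
            yield ("give up, go back to the start", 0)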

    As for moving the MFT, that is onerous, and would need a rewrite of the VBR, MFT internal records, and the MFT mirror, for no great advantage. The separate zones of the MFT (it is allocated in blocks of 200 MB) can be defragged into one contiguous extent, but I don't think that the position of the first records of the MFT can be moved. My memory is getting rusty on this though.


  7. The peculiar thing about fragmentation, and defraggers, is that you never actually look at the storage device in any detail.

    The cluster map is created entirely from the Master File Table and the Cluster Bitmap (and perhaps a few other bits of metadata) - it's a logical construct created by defraggers, which, after loading those two files, don't need to access the drive at all.

    Fragmentation, or the lack of it, is defined in the MFT, not at the disk level. It's another logical construct created by the Windows file system, NTFS. Files too are logical constructs, existing solely in the mind of the MFT. The storage device knows nothing of files, folders, fragmentation or free space.

    The disk doesn't defrag itself. It can't, for the reasons given above. If it did then you would never be able to access a file again through the MFT.
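
    As a rough illustration (hypothetical data structures, not the real MFT layout): given the extent runs recorded in the MFT and the cluster bitmap, a defragger can build its whole map, and decide what is fragmented, without reading any file data from the drive at all.

        # Hypothetical sketch: the cluster map and fragment counts come purely
        # from the extent runs recorded in the MFT, plus the cluster bitmap for
        # free space. No file data on the drive is read to work this out.
        TOTAL_CLUSTERS = 8192                              # made-up volume size

        mft_extents = {
            # file name -> list of (start_cluster, length) runs, as in the MFT
            "report.docx": [(1000, 8), (5000, 4)],         # two runs -> fragmented
            "photo.jpg":   [(2000, 64)],                   # one run  -> contiguous
        }

        # The cluster bitmap says which clusters are allocated at all.
        allocated = {c for runs in mft_extents.values()
                       for start, length in runs
                       for c in range(start, start + length)}

        for name, runs in mft_extents.items():
            state = "fragmented" if len(runs) > 1 else "contiguous"
            print(f"{name}: {len(runs)} extent(s), {state}")

        print("free clusters:", TOTAL_CLUSTERS - len(allocated))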


  8. I don't think that you are, or could be, physically prevented from doing what you like whilst a deep scan is running, but as the scan is very heavy on I/O any response might be so slow that you would get the impression that the device is locked. You are also altering what Recuva is scanning, with the possibility of damaging what you are attempting to recover, so it's not usual practice.


  9. This message usually comes up when the deleted file is large, in excess of 4 GB. When files of that size are deleted NTFS zaps the cluster addresses, so the data can't be found.

    Although the data may still exist on the disk it will be extremely difficult to recover. A deep scan should find the first extent, under a generated numerical name, but the other extents can't be identified without professional help.


  10. It won't make Recuva run any faster (it still has to scan every cluster on the disk) but 15 hours is horrendous.

    After running a WFS a deep scan should find very few files. It will still show all the files in the MFT as these can't be removed. They should have invalid names though.

    If Recuva finds 175,000 deleted files then this is the number of deleted files in the MFT. If this is the number reported as ignored then they are undeleted (live) files from the MFT.


  11. I think that running in advanced mode gives more flexibility and control, but it won't give greater search or recovery facilities.

    Recuva will, at the end of a scan, show files found and files ignored. The ignored files are live, zero length, or system files etc. You can show these by switching to advanced mode and then checking the relevant boxes in Options/Settings.
