Why overwriting is not enough to erase data

Overwriting your data, even several times over, is not a completely secure solution.

An important client is visiting your office. To impress them, your ace executive has prepared a thorough PowerPoint to showcase your company's strengths. In the morning before their arrival, however, a crisis arises. Somehow part of your ace's presentation got deleted. Worse than that, a portion of the data was overwritten.

Not to fear: there are software solutions – quite a few, in fact – that can restore your pitch to working order. Companies like Secure Data Recovery tout their ability to restore data, even data that has been partially overwritten, to its original state. While this capability is a brilliant bailout for the hypothetical above, it also points to a potential cybersecurity risk.

When companies decide to destroy data, some begin and end at overwriting. While overwriting data is a solid first step toward protecting corporate information, it should never be the final precaution taken. There are several reasons why overwriting your data, even several times, is not a completely secure solution.

Storage Systems
Storage systems are increasingly designed to prevent data loss. While this makes them great at saving files from total destruction, it also means that overwriting is no longer as thorough a process as it once was. The more advanced the storage system, the more likely it is that overwriting will not erase all the information.

The Linux Information Project defines a journaling file system as "a filesystem that maintains a special file called a journal that is used to repair any inconsistencies that occur as the result of an improper shutdown of a computer." In layman's terms, a journaling file system records pending changes in the journal before committing them to their final location, which means copies of your data can end up in more than one place on the drive. This is great news for a company trying to prevent data corruption from unexpected shutdowns. What it also means, however, is that deleting and overwriting a file in its primary location does not necessarily remove the copies left behind in the journal.
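
To make the idea concrete, here is a deliberately simplified Python sketch of how a journaled write leaves a second copy behind. The class and file names are hypothetical, and real file systems such as ext4 or NTFS are vastly more sophisticated, but the principle is the same: the journal records the write before the file itself does, and scrubbing the file does not scrub the journal.

```python
# Toy model of journaled writes (illustrative only; not how any real
# filesystem is implemented).
import json

class ToyJournalingStore:
    def __init__(self, data_path, journal_path):
        self.data_path = data_path
        self.journal_path = journal_path

    def write(self, content: str):
        # 1. Record the intended change in the journal first...
        with open(self.journal_path, "a") as j:
            j.write(json.dumps({"op": "write", "content": content}) + "\n")
        # 2. ...then commit it to the file's "real" location.
        with open(self.data_path, "w") as f:
            f.write(content)

store = ToyJournalingStore("pitch.txt", "journal.log")
store.write("CONFIDENTIAL: client pricing")
store.write("X" * 28)   # "overwrite" the file in place

# The live file now holds only X's, but the journal still contains
# the original confidential content:
print(open("journal.log").read())
```

Overwriting pitch.txt in place never touches journal.log, and that leftover copy is exactly the kind of material a recovery tool can harvest.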

Journaling file systems are not the only mechanisms that create multiple copies of data in different locations. Techniques like shadow paging and source control also create duplicates. Shadow paging writes a modified page to a new location rather than updating the original in place, so the previous version lingers on disk until it is eventually reclaimed. Source control is similar in the sense that it keeps earlier versions of a changing project so you can revisit any stage of its development. While these methods are all helpful when generating digital content, they create a security issue: deleting, and even overwriting, the current copy does nothing to the older copies.
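
The sketch below, again a hypothetical Python toy rather than any real database or file system, shows the shadow-paging idea: each "overwrite" lands in a fresh location and simply repoints the page table, so the previous version stays on disk untouched.

```python
# Minimal copy-on-write ("shadow paging") sketch; all names are made up.
storage = {}     # slot number -> bytes actually on "disk"
page_table = {}  # page id     -> current slot number
next_slot = 0

def write_page(page_id, data: bytes):
    """Write a page by allocating a fresh slot and repointing the page table."""
    global next_slot
    slot = next_slot
    next_slot += 1
    storage[slot] = data        # the new version goes to a new location
    page_table[page_id] = slot  # the old slot is abandoned, not erased

write_page("budget", b"original confidential figures")
write_page("budget", b"0" * 30)   # the "overwrite"

print(storage[page_table["budget"]])  # only zeros, as expected...
print(storage[0])                     # ...but the first version is still on disk
```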

Some consumers may feel that, if all the data is overwritten, the potential risk is averted. Bad sectors in traditional hard drives, as well as the growing prevalence of solid state drives, mean this is not so.

Many computer systems ensure that data is stored at multiple locations.

Bad sectors can be inaccessible to data overwrites
A bad sector is essentially a spot on a traditional hard drive's platter or within an SSD's flash memory that has become unusable – typically through physical damage or failing flash memory cells. Once a sector goes bad, the drive or operating system marks it as off-limits and maps it out of normal use; most operating systems know enough to stop writing to a spot that is no longer reliably accessible. HowToGeek notes that bad sectors can also be caused by software faults, although those are easier to fix.
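
The following toy model, with made-up sector numbers and no relation to real drive firmware, illustrates why a full overwrite pass can miss a remapped bad sector: the overwrite is addressed to logical sectors, and the logical address of a bad sector no longer points at the damaged physical spot.

```python
# Hypothetical sketch of why an overwrite pass can miss a remapped bad sector.
platter = {0: b"public data", 1: b"confidential data", 2: b"more data"}
remap = {1: 3}        # sector 1 went bad and was remapped to spare sector 3
platter[3] = b""      # the spare sector now serves reads and writes for "1"

def write_sector(logical, data):
    physical = remap.get(logical, logical)  # the OS never sees the bad sector again
    platter[physical] = data

# A full "overwrite every sector with zeros" pass:
for logical in range(3):
    write_sector(logical, b"\x00" * 16)

print(platter[1])   # b'confidential data': the bad sector itself was never touched
```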

Just as with partially overwritten data, bad sectors of any kind have the potential to be at least partially recovered. Providers like EaseUS offer tools that repair a range of bad sectors. This software works best with newer operating systems, so companies that upgrade their equipment can find themselves exposed to new risk without being aware of it. Any abandoned hard drive with a bad sector could therefore be harboring confidential data. Because these areas are sealed off once they are discovered, overwriting the drive may not erase files trapped in bad sectors; they linger as pockets of data remanence.

"SSDs are not always sanitized of data, even after multiple wipes."

Solid state drives
A 2011 study from the Center for Magnetic Recording and Research at the University of California, San Diego, found that solid state drives were not always fully sanitized of data, even after two full overwrite passes. The study also found that individual file sanitization methods that work on traditional hard drives did not work on SSDs. In the years since, nothing has indicated that these problems have been fully resolved.

SSDs are designed differently from traditional hard drives: the data storage structure is different, as are the algorithms used to manage the data. An SSD keeps two addresses for each piece of data – the logical address the operating system uses to access it, and the physical address the drive's controller uses internally to track where the data is actually stored. Because flash memory cannot be rewritten in place, an "overwrite" goes to a fresh physical location and only the mapping is updated. This optimizes performance and wear, but it quietly leaves old copies behind without informing the user.
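
Here is a minimal, purely illustrative Python model of that two-address scheme, often called a flash translation layer. Real SSD controllers implement this in firmware along with wear levelling and garbage collection, but the sketch shows why an overwrite issued by the operating system does not necessarily touch the physical page holding the old data.

```python
# Toy flash translation layer (FTL) sketch; all structures are made up.
flash = {}   # physical page  -> bytes actually stored in the NAND
l2p = {}     # logical block  -> current physical page
free_pages = list(range(100))

def ssd_write(logical_block, data: bytes):
    """Flash pages cannot be rewritten in place, so every write goes to a fresh page."""
    page = free_pages.pop(0)
    flash[page] = data
    l2p[logical_block] = page   # the old page becomes stale but is not erased

ssd_write(0, b"secret contract terms")
ssd_write(0, b"\x00" * 21)      # the operating system's "overwrite"

print(flash[l2p[0]])            # zeros, as the OS expects
print(flash[0])                 # b'secret contract terms' still sits in the NAND
```

The stale page is only reclaimed later, when the controller's garbage collection or a TRIM command gets around to it – which leads to the next problem.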

While the TRIM command eventually tells the drive it can erase data that is no longer needed, the actual cleanup happens on the drive's own schedule and can take time. Some older operating systems also do not properly support TRIM, which means companies that have not updated their technology recently are the most exposed to this security risk.

Overwriting data is a strong first step toward keeping confidential information confidential. However, companies should not stop there – hard drives should also be degaussed and destroyed to completely remove the risk of a cybersecurity breach. NSA-approved devices like the Proton T-4 hard drive degausser and the PDS-100 HDD destroyer keep the operation secure and in-house, giving your business the best level of protection.
