Chaotic evil is encrypting, compressing, then encrypting again.
Then decompress after. Let fear be your cypher.
When playing football, to keep the socks from riding down our legs, we used to put loose rubber bands over them, near the top of the sock. Then, to keep the rubber bands from riding up above the sock line, we would fold the sock down over them. Then, to keep the fold from coming undone during play, another rubber band had to go on top of the folded part.
Sounds similar to this. Just thought it was notable.
So, when your foot turned purple from the multitude of rubber bands, did it make you play any better?
Nope. It was uncomfortable and I’d argue we played worse because of the discomfort. We were also pretty bad at the game so I think we wanted the socks with rubber bands as a scapegoat instead of accepting we were shit.
Why didn’t you guys just buy good socks? Or those sock suspenders?
That’s like md5(sha512(“somefile.blah”))
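For anyone who wants to see the joke in action, here's a sketch in Python (the file contents are made up); note that chaining hashes isn't encryption at all: no key, no way back.

```python
import hashlib

# Hash-of-a-hash "encryption": both steps are one-way digests,
# so the data isn't protected, it's destroyed.
data = b"contents of somefile.blah"  # made-up contents

inner = hashlib.sha512(data).hexdigest()         # 128 hex chars
outer = hashlib.md5(inner.encode()).hexdigest()  # 32 hex chars
print(outer)
```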
The encryption: base64 encoding
Nah, you just XOR the data with itself and it becomes uncrackable.
Also, after encryption like this, the result can be compressed down to 4 bytes as long as the data is no larger than around 4 GB, or 8 bytes if you need more.
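If anyone doubts the math, the scheme really does deliver. A sketch (the "plaintext" is invented):

```python
import zlib

data = b"super secret message" * 100  # made-up plaintext, 2000 bytes

# "Encrypt": XOR every byte with itself. The output is all zeros, so it's
# uncrackable by brute force... because the plaintext is gone.
encrypted = bytes(b ^ b for b in data)

# All-zero data compresses to almost nothing; effectively only the
# length needs storing, which fits in 4 bytes for anything under 4 GB.
compressed = zlib.compress(encrypted)
print(len(data), "->", len(compressed))
```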
My god, that is absolute perfect encryption (completely uncrackable by brute force) and compression. This is genius and I’m gonna switch all my data to this encryption scheme. Now I just need somewhere to store the decryption keys…
You are truly a mastermind.
What an excellent username for such a chain of comments
SHA-256
The real question is do you encrypt-and-sign or sign-and-encrypt?
Encrypt then sign. Always authenticate before any other operations like decryption. Don’t violate the cryptographic doom principle.
Encrypt then sign. Verification is often much faster than (or at worst as fast as) decryption. Signature can also be verified without decryption key, making it possible to verify the data along the way.
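To make the ordering concrete, here's a toy encrypt-then-MAC sketch in Python. An HMAC stands in for the signature, and the "cipher" is a throwaway XOR keystream (a stand-in, not a real cipher; use AES-GCM or similar in practice). The point is only the order of operations: MAC the ciphertext, and verify before ever touching it.

```python
import hashlib
import hmac
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Throwaway XOR keystream derived by chaining SHA-256; a stand-in
    # for a real cipher. XOR is its own inverse, so this also decrypts.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ciphertext = toy_encrypt(enc_key, plaintext)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return tag + ciphertext  # the MAC covers the ciphertext, not the plaintext

def open_(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    tag, ciphertext = sealed[:32], sealed[32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # Verify FIRST, in constant time, before any decryption.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC -- refuse to decrypt")
    return toy_encrypt(enc_key, ciphertext)

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
sealed = seal(enc_key, mac_key, b"attack at dawn")
print(open_(enc_key, mac_key, sealed))  # round-trips to the plaintext
```

Note this also shows the other point above: verifying the tag needs only `mac_key`, so an intermediary could check integrity without being able to decrypt.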
Don’t compress encrypted data since it opens you up to attacks like CRIME, unless it’s at rest and static data.
If that’s true, what’s to stop someone else from just compressing it themselves and opening the same attack vector?
Compressing what themselves? Compress-then-encrypt leaks information about the data being encrypted if an adversary can affect some part of that data. If the data is at rest and no repeated encryptions are happening, then this isn't a concern.
Compress the encrypted data. You’re talking about encrypting compressed data, this was talking about compressing encrypted data.
Technically you would be fine to compress the encrypted data, but encrypted data doesn’t compress well so it’s not really worth your time
Depends on whether you're using lossless or lossy compression. Lossless compression will usually make it bigger, because it relies entirely on the data containing common patterns or elements that can be described with fewer parts. Like, an OK compression algorithm for a book written in English and stored as Unicode would be to convert it to ASCII, with a marker to denote Unicode for anything that can't convert. An encrypted version of that book would look indistinguishable from random characters, so compressing it at that point would just put that Unicode marker before every single character, making the book end up taking more space.
The problem is that when you compress before you encrypt, the ciphertext size becomes a source of information about the contents. If an attacker controls part of the data, say a query string, they can repeatedly add things to your data and watch how the size changes as a result.
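A minimal sketch of that leak, with zlib standing in for the protocol's compression and a made-up secret; a length-preserving cipher would expose exactly these lengths to the attacker:

```python
import zlib

SECRET = b"session=hunter2"  # hypothetical secret the attacker wants

def oracle(attacker_controlled: bytes) -> int:
    # The attacker only sees ciphertext length. With a length-preserving
    # cipher, that equals the compressed length -- that's the leak.
    return len(zlib.compress(SECRET + attacker_controlled))

right = oracle(b"session=hunter2")  # guess matches the secret
wrong = oracle(b"session=qwerty7")  # same length, wrong guess
print(right, wrong)                 # the matching guess compresses better
```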
So it sounds like compression before encryption should only be done in specific circumstances, because it can be a security issue depending on the use case, while encryption before compression should never be done, because it will almost always increase the size of the file.
Don’t know about gz but zip files can be encrypted using passwords
Doesn’t actually matter
Encrypted data compresses much worse than non-encrypted data, so it does matter in terms of size.
It really does. Apparently-random data can’t be compressed at all, by the pigeonhole principle.
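Easy to check empirically, e.g. with zlib:

```python
import os
import zlib

patterned = b"AB" * 5_000        # 10 KB of obvious structure
random_ish = os.urandom(10_000)  # 10 KB that looks like ciphertext

print(len(zlib.compress(patterned)))   # tiny
print(len(zlib.compress(random_ish)))  # a few bytes LARGER than the input
```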
I mean, you could run it through whatever algorithm for fun, but it won’t accomplish anything.