Note: Someone commented on the “limited shelf-life” of ransomware and why this doesn’t hurt other victims. They deleted their comment but I’m posting my response.
You are incorrect. What is limited is the number of attacks that can be used to help victims recover their files. If you think the author is the only person who was using this attack to recover files, you are incorrect again. I'd recommend checking out the book The Ransomware Hunting Team. It's an interesting book about what happens behind the scenes when helping victims recover their files.
Anyone know why they are using timestamps instead of /dev/random?
Don't get me wrong, I'm glad they don't; it's just kind of surprising, as it seems like such a rookie mistake. Is there something I'm missing here, or is it more a case of people who know what they're doing not choosing a life of crime?
afaik the majority of ransomware does manage to use cryptography securely, so we only hear about decrypting like this when they fuck up. I don't think there's any good reason beyond the fact that they evidently don't know what they're doing.
My unqualified hunch: if they did that, then a mitigation against such malware could be for the OS to serve completely deterministic data from /dev/random for all but a select few processes which are a priori defined.
You can do the same with time though, just return a predefined sequence of timestamps.
And from a "defensive" perspective, if you don't trust any single entropy source, the paranoid solution is to combine multiple sources together rather than to switch to another source.
If it were me, I'd combine urandom (or equivalent), high-res timestamps, and clock jitter (sample the LSB of a fast clock at "fixed" intervals where the interval is a few orders of magnitude slower than the clock resolution), and hash them all together.
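A minimal sketch of that kind of mixing (Python; the sample count, interval, and SHA-256 as the combiner are my own choices for illustration, not a vetted design):

```python
import hashlib
import os
import time

def clock_jitter(samples=256, interval=0.001):
    """Sample the LSB of a fast clock at 'fixed' intervals a few orders of
    magnitude slower than the clock resolution; scheduling noise makes the
    collected bits hard to predict."""
    bits = 0
    for _ in range(samples):
        time.sleep(interval)
        bits = (bits << 1) | (time.perf_counter_ns() & 1)
    return bits.to_bytes((samples + 7) // 8, "big")

def mixed_random(length=32):
    """Hash several independent entropy sources together; an attacker must
    predict all of them, not just one, to recover the output."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                      # OS CSPRNG (urandom or equivalent)
    h.update(time.time_ns().to_bytes(8, "big"))   # high-resolution timestamp
    h.update(clock_jitter())                      # clock-jitter samples
    return h.digest()[:length]
```

The point of combining is that the output is at least as hard to predict as the strongest source, so adding timestamps or jitter on top of urandom can't make things worse.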
Even if the attackers used encryption that has been fully broken since the 1980s, how many organizations have the expertise to dissect it?
I assume that threat detection maintains big fingerprint databases of tools associated with malware. Rolling your own tooling, rather than importing a known library, gives one less heuristic to trip detection.
They used this with the IVs mucked with: https://www.gnupg.org/software/libgcrypt/index.html
Charitably: use of system-level randomness primitives can be audited by antivirus/EDR.
I wonder at what point would the antivirus kick in. It doesn't require reading /dev/urandom for too long.
Rolling your own crypto is still a thing.
If it works (reasonably well), it works, and it throws wrenches into the gears of security researchers when the code isn't the usual, immediately recognizable S-boxes and other patterns or library calls.
Might be a bit of paranoia about backdoors in official crypto libs, too.
In case the tool is used against them.
This was a great read and had just the right amount of detail to satisfy my curiosity about the process without being annoying to read.
Huge props to the author for coming up with this whole process and providing such fascinating details
Ransomware would be less of a problem if applications were sandboxed by default.
Sandboxed how? Applications generally are used to edit files, and those are the valuable files to a user.
Ransomware wouldn't be a problem at all if copy-on-write snapshotting filesystems were the default.
Sandbox: the user specifies access to certain files (like limiting access to certain gallery items on Android).
Then changes made to files should be stored as deltas to the original.
But realistically a good readonly/write new backup solution is needed, you never know when something bad might happen.
Okay so you give the sandboxed app access to ~/Documents and those get encrypted…
I think most people care less about their system directories than about their data?
Backups and onedrive for enterprises, yes. :)
Obviously if you give all sandboxed processes access to /, that doesn't improve anything.
The idea is that you'd notice that your new git binary is trying to get access to /var/postgres, and you'd deny it, because it has no reason to want that.
Feels like a case where ZFS would help mitigate?
Like Android and iOS. The user manually has to grant access to files.
Which doesn't scale to office workstations or workplaces with network drives, where users needing to search and update hundreds of files at a time is the norm.
Developers with 1 project open have potentially hundreds to thousands of open, quite valuable files.
Now of course, we generally expect developers to have backups via VCS but that's exactly the point: snapshotting filesystems with append semantics for common use cases is an actual, practical defense.
About the two-buttons thing in factories: the reason is so you don't have a hand in the machine. So it's not just two buttons, it's two buttons spaced far enough apart that you have to use both hands. And usually one of the two buttons has to be held in a middle position; if you push it too far, it also doesn't work.
Something else: how many times, because of a bad mousepad, have whole directories been moved somewhere? Often you don't even know what you moved, so you can't even search for it. Especially at my last company, we had a "surprise" like that in our data at least once a month.
Again: define "explicit"? Does clicking a file count? Asking for code reformatting across the project? How long does access last? How is it revoked?
If the user runs "reformat project" once, then gets a new version, are they going to get any warning that "reformat project" is about to encrypt every file it touches?
Of course, there are many things a company can do to be a bit more assured it can access its data: CoW snapshots, backups on read-only medium (e.g. DVD or BluRay discs), HDDs/SSDs offline on shelves, and certainly many other things could help companies.
That's not incompatible with sandboxing applications to limit the damage a malware can do.
Even on a regular user's "workstation" there's no need for every single app to access every single directory / every single network drive with rw permission etc.
P.S.: FWIW, the backup procedure I put in place doesn't just encrypt/compress/deduplicate the backups, it also compares each backup to previous backups (comparing sizes gives an idea, for example), and then verifies that the backup can be decrypted, using a variety of checks (for example, if a Git repo is found after decrypting and decompressing the backup, it runs "git fsck" on it; if a file with a checksum is found, it verifies that file's checksum, etc.). This already helped us catch not a malware but a... bitflip! I figured that if a procedure can help detect a single bitflip, it can probably help detect malware-encrypted data too. I'm not saying it's 100% foolproof: all I'm saying is there's a difference between "we're sandboxing stuff and running some checks" and "we allow every single application to access everything on all our machines because we need users to access files".
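For illustration only, a rough sketch of that kind of post-restore verification (Python; the directory layout, the SHA256SUMS convention, and the size tolerance are hypothetical placeholders, not the actual procedure described above):

```python
import hashlib
import subprocess
from pathlib import Path

def verify_restored_backup(restore_dir: Path, previous_size: int, tolerance: float = 0.2):
    """Sanity-check a decrypted/decompressed backup before trusting it."""
    # 1. Compare total size against the previous backup; a sudden jump or drop
    #    hints at truncation, corruption, or mass re-encryption.
    size = sum(f.stat().st_size for f in restore_dir.rglob("*") if f.is_file())
    if abs(size - previous_size) > tolerance * previous_size:
        raise RuntimeError(f"backup size changed suspiciously: {size} vs {previous_size}")

    # 2. Run "git fsck" on any Git repository found inside the backup.
    for git_dir in restore_dir.rglob(".git"):
        subprocess.run(["git", "-C", str(git_dir.parent), "fsck"], check=True)

    # 3. Re-hash any file listed in a checksum manifest and compare.
    for sums in restore_dir.rglob("SHA256SUMS"):
        for line in sums.read_text().splitlines():
            if not line.strip():
                continue
            expected, name = line.split(maxsplit=1)
            actual = hashlib.sha256((sums.parent / name).read_bytes()).hexdigest()
            if actual != expected:
                raise RuntimeError(f"checksum mismatch for {name}")
```

Ransomware-encrypted data fails these checks the same way corrupted data does, which is why a procedure tuned to catch bitflips also flags mass encryption.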
Or if non-trusted/signed apps only had COW disk access.
Or if people backed up more often.
"On my mini PC CPU, I estimated a processing speed of 100,000 timestamp to random bytes calculations per second (utilizing all cores)."
Would like more details on the mini PC: processor, RAM, price. Is it fanless?
What could explain encrypting the first 65k with KCipher2 and the rest with something else? Seems odd.
[flagged]
Why don't you do the legwork instead of asking rhetorical questions?
Legwork of what? Companies already have done the legwork to make it easy for strangers to send you money.
Companies that "do the legwork" of decrypting ransomware for the most part just pay the ransom on your behalf.
How would they get their data back if someone theoretically knows how to decrypt but never tells anyone?
It was already disclosed to the bad guys that someone managed to break their encryption, when they didn't get paid and they saw that the customer had somehow managed to recover their data. That probably meant they might go looking for weaknesses, or modify their encryption, even without this note.
Other victims whose data were encrypted by the same malware (before any updates) could benefit from this disclosure to try to recover their data.
> why publish this?
New versions of Akira and any other ransomware are constantly being developed. This code is specific to a certain version of the malware.
As noted in the article, it also requires:
1. An extremely capable sysadmin
2. A bunch of GPU capacity
3. That the timestamps be brute-forced separately
So it's not exactly a turn-key defeat of Akira.
once your files are encrypted by ransomware, does the encryption change if the malware gets updated? if not, then anyone currently infected with this version can now possibly recover.
If they don't release their code, then what's the point of having the code? They accomplished their task, and now here it is for someone else who might have the same need. Otherwise: don't get infected by a new version.
How would it be better, unless it's widely known to be breakable? And at that point, wouldn't the hackers know that too?
What use is a counterattack if it's inaccessible, whether due to cost or because it's only known by a few experts?
This feels like a net win.
What use is a counterattack if it's immediately fixed? Then absolutely nobody can use it, not even a few experts.
You're making a lot of assumptions about its capability to reconnect and patch/update itself. Preface the fix with "keep your machine offline from here on out" and we're back to fixing it for everyone before that point.
I'm confused: are you saying that you think building a method for anyone to break/brute-force the ransomware is bad?
They're saying that publicly disclosing the vulnerability is bad because now it will be fixed.
This is a game of cat and mouse, like it has always been. Cannot rely on security by obscurity.
You have to read my comment in context of the immediate parent which I replied to, not the OP.
The immediate parent comment says that if the vulnerability is publicly declared, attackers can easily patch it.
Paraphrasing my response: not publicly declaring the vulnerability is security by obscurity, which does not work.
Don't attack a strawman.