> "This research resulted in 4 0day bugs (CVE-2021-30861, CVE-2021-30975, and two without CVEs), 2 of which were used in the camera hack. I reported this chain to Apple and was awarded $100,500 as a bounty."
Writing a secure browser for today's web appears to be a technological challenge comparable to a level 5 self-driving car. It has not been shown to be feasible. So such cars are not permitted to be deployed on the world's roads. Today's web sites and browsers should similarly not be deployed on the world's infobahns.
I’m a fan of OpenBSD where I run ‘ps -ax’ and get a list of about 10 processes, all of whose purpose is obvious.
On macOS I spend the first few days disabling several dozen junk processes I didn't ask for and don't want. This includes classroom tools (??) and all kinds of syncing/sharing daemons I have no use for.
This exploit reinforces what we already know: computers are impossible to secure, so you should reduce the attack surface where possible. If you get a little privacy and performance out of it, all the better.
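On macOS, much of that disabling goes through launchd; a minimal sketch, assuming hypothetical per-user service labels (check `launchctl list` for the real ones on your machine, and note that SIP protects some system services):

```shell
# Hypothetical labels for illustration; list what's actually running
# with `launchctl list`. This only prints the commands (dry run);
# drop the `echo` to apply them for real.
uid=$(id -u)
for label in com.apple.photoanalysisd com.apple.mediaanalysisd; do
    echo launchctl disable "gui/$uid/$label"
done
```

`launchctl disable` persists across reboots; `launchctl enable` on the same target undoes it if something breaks.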
The problem with disabling non-obvious processes is that, well, they are non-obvious.
I suppose if it's your own machine, and you know what you're doing, that's OK, but as soon as something breaks you will have no idea whether it's because you turned something critical off, or whether it's actually a bug.
Please don't do this to other people's Macs.
> Please don't do this to other people's Macs.
Thanks for the advice and all, but where is this coming from? It seems kind of out of left field, as the parent comment did not suggest in any way that they would ever mess with someone else's machine like that. Is it not obvious that such an act would be just plain rude (and stupid)?
I think it's more a teenage mentality of trying to be helpful: disabling system protections and otherwise tweaking the heck out of things because they believe it gives the best experience.
It's not an issue here, so the warning isn't too helpful as a post, but there are plenty of folks out there telling their grandparents to delete the temp folder every week to "optimize" the computer, and other nonsense they heard was helpful in some forum.
Whenever I install a fresh Windows, I go through all the settings I can find and disable everything. I used to also look up scripts to disable parts of Windows I didn't care about (delete Cortana), as well as the bloatware that notebooks come with. More often than not I'd run into some issue a year down the line which I knew was caused by "something being disabled", but I couldn't google what might be causing it and thus never resolved it. Now I just let it rot as I got it; at least everything works.
Point is, disabling junk helps until it doesn't, and it's much harder to re-enable one piece of junk after I've forgotten everything I did in the first place. It would be nice if disabling Windows Search didn't make the first attempt to open the Start menu fail after each restart, and if re-enabling Search didn't fix it. But everything is so interwoven that disabling "stuff I don't need" is shooting myself in the foot with a really long gun.
I literally just use Safari and Excel. Is there a guide on how to disable the rest of the junk, such as the classroom tools?
Please don't turn off random stuff and expect your computer to still work/be supported/anyone to ever fix your crash reports.
I suspect this is 10% of the reason why System Integrity Protection exists. Yes, it's great to stop malware, but it's even better to save your frontline support from having to deal with people who decided that it'd be a good idea to delete the dyld shared cache because it's 15GB and ""doesn't look useful"" https://discussions.apple.com/thread/250117852
For those not aware, that file is the equivalent of all of /usr/lib on Linux. It is, quite literally, most of the core OS above the kernel.
That sounds like an excellent opportunity to name it better.
Generally agree, but I'll be damned if the culprit is ever not one of photoanalysisd (Photos unused, disabled, library empty, yet still scanning), AirPlayXPCHelper (often stuck at high CPU, offering my TV, which it shouldn't be able to see, and my neighbour's, as sound output options; yup, weirded out by this tbh), or Spotlight (the system deletes a system file, and Spotlight keeps attempting to read it anyway and crashes).
These programs came bundled; I don't use them and have disabled what I could from the GUI. It's always these causing the problems, hardly ever the ones I actually want and use.
Not so, unfortunately. com.apple.CloudPhotosConfiguration also stays up (iCloud sync is disabled, so I'm not sure what cloud it's hoping to configure).
Similarly, "ScreenTimeAgent" is there even with the feature disabled. The AppleMusic and AppleTV daemons scan for caches throughout the day; I've never opened either. There's also "gamecontrollerd", "studentd", some iMessage and FaceTime related stuff, etc. etc.
And on a system where you've disabled all analytics/tracking, turned off Siri in all places [1], disabled dictation and all other voice input related options, you even keep your microphone muted when it's not in use, even then, "corespeechd" will send out several MBs of data to Apple on every single boot.
Most of this is nothing more than very unexpected and strange behaviour. The exceptions are those three processes that keep tripping over themselves; those actually destroy my battery and fans. A little less unintentional obscurity, maybe some official descriptions for these processes, just some communication, would go a long way.
[1] Siri settings are scattered throughout, though a list exists: open Preferences > click the "Siri Suggestions & Privacy" button > click the "About Siri & Privacy" button > click the bu... eh wait, click the heading "Siri Suggestions" (yes, that black print) > then, finally, all the places are listed right under where it says "You have choice and control".
Been there, done that, and now I know better than to do it the way you described, but it still makes me uncomfortable to run a modern OS on a modern system.
> While this bug does require the victim to click "open" on a popup from my website, it results in more than just multimedia permission hijacking.
That's why I'm so wary of browsers (well, a certain browser) adding more and more APIs that hide behind permission popups. People will blindly click them.
And I fully agree with a sibling comment: "Writing a secure browser for today's web appears to be a technological challenge comparable to a level 5 self-driving car", https://news.ycombinator.com/item?id=30078738
I assume you're referring to chrome?
Yup :)
Apple: "Safari is the most locked down and secure browser that never runs anything without user's permission."
Also Apple: "We have built in a long list of exceptions for Apple services, because it's impossible for an Apple service to have an exploit."
New https://support.apple.com article incoming:
> "We advise users not to store sensitive info and files on their computers, to prevent nefarious actors from being able to steal these sensitive items"
A $100,500 bounty seems pretty cheap compared to the severity of the issue, or is it common?
It's the most that Apple's paid out for a bug bounty, as far as I know. The previous highest was $500 less ($100,000).
It's also an interaction-required bug, apparently.
Now imagine how much money they saved by not researching those bugs themselves.
Now imagine how much the researcher gave up by not selling it to Cellebrite.
You mean to say someone like NSO Group, not Cellebrite. But you should know that it's possible that driving up the price of bugs helps companies like NSO rather than hurting them. They're middlemen, taking a cut of the value of transactions between exploit developers and downstream customers. Those downstream customers, for shops like NSO, are overwhelmingly government agencies that aren't especially price-sensitive to the cost of individual bugs.
I assume NSO group operates in their own best interest. If them buying a bug and reselling it hurts them, then I think they won't do it.
Although I guess one reason they might buy a bug that would lead to financial harm is to prevent a competitor from getting it, which might be an even worse financial harm.
Cellebrite doesn't really have a use for a browser vulnerability.
This doesn't relate to just the "webcam" at all? It allows injecting any script into any origin, which seems scarier than "just" hacking a webcam?
Also it makes me reconsider using Safari, seeing all these "special cases" of iCloud and iPhoto URLs being allowed.
Bugs of this nature exist in all major browsers, not just Safari.
Reading articles like that always blows my mind. I can't even imagine how people can come up with exploit chains like that. Congratulations, well deserved bounty!
This is incredible and terrifying. Well done.
Congrats, Ryan! Well deserved.
Lucky for me my MacBook camera isn’t being detected.
congrats :) I've always suspected you could use iCloud Sharing as a great one-click vector but I never quite cracked it. I wonder if apple will ever kill the webarchive UXSS—it's been public for almost five or six years at this point and it violates so many assumptions LOL.
Such a good write up, well done!
Absolutely! Was really enjoyable to read! Thanks for this!
congrats
Why does this keep happening to Apple?
To be fair, a level 5 self-driving car failing is much more catastrophic than a browser being hijacked. But I generally agree with your sentiment.
Unfortunately the only way to find these modes of failure is to have them actually fail. It's impossible to design and release an error-free system without real-world usage by real people.
It doesn't mean we should just give up and go back to HTML1 though. It just means exploits should be fixed as soon as possible to minimize damage.
And tbh, our browsers today are remarkably secure compared to before. It used to be common advice to never open weird emails or click weird links, not because they would try to trick you into handing over info, but because it was realistic that simply clicking the link or opening the email would immediately rootkit your computer. These days, unless you are on a government's most-wanted list or using a very old system, you are pretty safe.
We've gone from school kids coming up with highly privileged attacks on systems to exploits being something top minds spend months on and get paid six figures per discovery for.
Excellent perspective, thanks for the positive and factual viewpoint
There is no need to fall back to HTML1. Before HTML5 and even before HTML4, there was a web markup language that was much more powerful than HTML1, was widely deployed and used, did everything we needed, and worked fine. It was called HTML3 and it was great. That is where we should be right now.
You might be wearing rose colored glasses here...
The old web I remember had exploits from Flash, Java applets, ActiveX, Shockwave, and other sketchy plugins people willingly downloaded to access sites, plus poorly sandboxed JavaScript that could take control of your browser window to resize, move, and spam as many popups as it wanted, among other, worse things. And downloaded "toolbars", HTTPS being rarely used, etc.
Even ad blockers and other "power user" extensions (if your browser even offered that) were extremely primitive. etc.
There were websites that could execute userland code through exploits just by your visiting them, depending on your browser. And that wasn't uncommon.
It would be completely insane if that was still the case today. But we fixed those issues and evolved.
OP's post shows a major issue, obviously. But if I had the power, I would absolutely turn off the entire API with a forced update until it was fixed.
But these sorts of exploits happened back in the day to a worse degree with HTML3 + 4 (can't speak for 1 or 2 though, maybe someone can chime in).
Viewing the bigger picture, the web is way more secure now than back then. And exploits like these are much more rare now.
Maybe if vendors were more open to allowing third-party applications on their platforms, people wouldn't be so motivated to keep increasing the capabilities of the web. Instead, app distribution has turned into a living nightmare, so it's unsurprising to see the web evolve into the monster it is today.
I think what you have said is in many ways very true: a modern web browser is comparable to an operating system, and even more so with Chrome's push into new APIs that do things like access the local file system.
It seems that we need a better story for when code inevitably can escape the sandboxing that browsers provide because right now it's a straight up disaster scenario. That means you are going to have to rethink the OS underlying it and none of today's options were built with that kind of a threat model in mind.
There is Fuchsia on the horizon, which sounds like a MAJOR leap forward, but it also seems several years away from ever reaching a desktop, and it's really unclear what kinds of attacks will be possible against it in the real world.
> awarded $100,500 as a bounty
How do the economics of a bug bounty program work out for the company? $100,500 is probably not much for Apple, but it's still some engineer's salary. Does the responsible engineer get a pay cut for this bug? Or are these kinds of 0day bugs not bugs at all, but secret features left open from the beginning?
> This time, the bug gives the attacker full access to every website ever visited by the victim. That means in addition to turning on your camera, my bug can also hack your iCloud, PayPal, Facebook, Gmail, etc. accounts too.
Honestly, $100,000 for this is too low.
Agreed. Of all the places you could have sold that information, the official channel pays by FAR the worst (Apple in particular is a notoriously terrible company to deal with on security problems, though by no means the only one), which is a huge problem that people should be making a lot more noise about.
I kind of shudder to think of how many of these bugs have been out there for years and only ever ended up in the hands of governments and shady groups like NSO.
I found a bug in Cash App that exposes SSNs, but their max bug bounty is $8000, so fuck 'em. The other option for people finding bugs is to sell it to "the other side" if they pay more money.
https://bugcrowd.com/cashapp
No. "Fuck 'em" would be actually selling it, not throwing a hissy fit and publicly outing yourself as the potential source of the discovery. They DGAF about that bug; their only concern is the very low probability of negative consequences in case of a data leak with full attribution, and a Bugcrowd presence gives the CISO full ass coverage.
This bug would, in the wrong hands, let someone steal a lot more than $100k.
It should be at least 10x higher; this bug affects a billion users.
I have not heard of companies who operate bug bounties punishing engineers, and Apple certainly does not do this either. Most companies do the math and decide that it's cheaper to pay a researcher than to deal with the possible fallout of an in-the-wild exploit, or with researchers just dumping zero-days on Twitter.
A bounty incentivizes people — bad actors included — to responsibly disclose critical bugs. It has the side effect of luring in good actors to find them, but the cost remains negligible compared to the damage they might cause otherwise.
This is overhead; cost of doing business. Finding out you have a bug so you can fix it is much cheaper than not finding out until your customers have been attacked. Trying to get those customers back or get new customers is much much more expensive.
There are certainly bugs that cost far more than $100k. Think of what it would cost if Apple itself had been hacked due to a bug; the remediation would be very expensive.
A bug in Knight Capital's stock trader lost $460M in 45 minutes.
A bug in the Ariane 5 rocket caused it to crash, a loss of $370M.
A bug in the Therac-25 radiation machine killed 3 people.
> secret features left open from the beginning?
Are you saying Apple engineers are inserting backdoors? What's the motivation? Security bugs are very easy to accidentally introduce when you have complex interacting systems (in this case a browser, a complicated URL parsing syntax, some ancient barely-known file types, and a file sharing application). Occam's razor (and Hanlon's) says it's an accident.
Apple’s Q4 2022 net profit was 20 billion US dollars.
They can afford the bounty.
The frustrating thing is that we know how to write absolutely secure browsers (or other executors/renderers/etc. of untrusted data) using formal methods. Unfortunately, because most of the negative externality falls on users who don't know any better, the economics don't really work out.
I'm not sure you can actually prove a TLS implementation is correct. It depends on the system clock being right, the OCSP server being online, etc.
Even the encryption has to be free of side channels like timing attacks which are left out of what most people think of as a proof.
You can prove a TLS implementation correct up to defined assumptions (e.g. clock consistency). There are already some projects working on this, eg https://project-everest.github.io/
Unlikely... these bugs are all logic bugs. They won't be mitigated by using formal methods, because most of these bugs adhere to pre-defined specs. The problem with browsers is that these specs are often in conflict with each other for new and/or obscure features.
Logic bugs are the primary category of bugs targeted by heavyweight formal methods.
Most JS features don't even have a formal spec which is part of the problem.
"Writing a web browser for today's web appears to be a technological challenge comparable to a level 5 self-driving car."
Only when the "constraints" are that the browser must support all "standards" produced by some committee where most if not all members are employees of the few browser vendors or other "tech" companies heavily invested in the web. Perhaps that is what is meant by "today's web".
I use a netcat and similar TCP clients and a text-only browser on today's web and this works quite well for mine own purposes. Non-commercial use. Basic tasks like sending GET and POST requests, downloading pages and other resources and reading them. Using the web as an information source. I would be willing to bet these programs are "more secure" than the "modern web browser". They are certainly less complex.
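For the curious, the "sending GET requests" part really is this small; a sketch (example.com is a placeholder, and HTTPS sites need a TLS client such as `openssl s_client` in place of plain nc):

```shell
# Build a minimal HTTP/1.1 request by hand. HTTP/1.1 requires a Host
# header; "Connection: close" makes the server hang up after one response.
request() {
    printf 'GET /%s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$1" "$2"
}

request "" example.com                        # inspect the raw bytes
# request "" example.com | nc example.com 80  # actually send it (plain HTTP)
```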
> I use a netcat and similar TCP clients and a text-only browser on today's web and this works quite well for mine own purposes.
For 99% of what I do I use Firefox + NoScript and I think I'm relatively secure. My solution probably provides more functionality than yours, but it's still a hassle to use some sites. Other sites simply don't exist w/o JavaScript. The situation isn't getting better.
Any chance you could disclose the ones that "simply don't exist w/o Javascript"? (Assuming they are information sources, not online games or otherwise purely interactive.) I have had reasonably consistent success finding workarounds for sites that gratuitously use Javascript to present information.
MSN
Hmmm, I got the full text of the article, immediately readable with links(1), no problem using netcat or, e.g., curl.
NB. Unless required, I never send a user-agent header. Here, as is the case with 99% of the sites I visit, it was not required.
To watch the video, I used ffmpeg to save to an MP4 file:
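Something along these lines; the stream URL below is a placeholder (the real one comes from the page source), and `-c copy` remuxes the streams into MP4 without re-encoding:

```shell
# Placeholder URL; substitute the stream address found in the page source.
url="https://example.com/video/stream.m3u8"
cmd="ffmpeg -i $url -c copy video.mp4"   # -c copy: remux, no re-encode
echo "$cmd"                              # dry run; execute it to download
```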
However, when I checked archive.org I noticed their archived page was different. It appeared to require Javascript.
YouTube
I use YouTube without any JS. I search from the command line using a tiny shell script. I download from the command line using a tiny shell script. With few exceptions, I do not use youtube-dl.^1
1. The only exceptions are videos with a dynamic sig value in their googlevideo.com URL; most videos I watch have static sig values. Alternatively, I can use a "modern browser" to get the googlevideo.com URLs for these videos, with all the tracking and other non-essential URLs blocked (no ads), instead of youtube-dl.
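The "tiny shell script" approach works because YouTube embeds its result data as JSON in the initial HTML; that is an assumption about page structure (not a stable API), sketched here against a canned snippet rather than a live fetch:

```shell
# Canned fragment of the JSON YouTube embeds in its search-results HTML
# (illustrative; in practice, curl the results page and pipe it through
# the same grep).
html='{"videoRenderer":{"videoId":"dQw4w9WgXcQ","title":{"runs":[{"text":"demo"}]}}}'
printf '%s\n' "$html" | grep -o '"videoId":"[^"]*"' | sort -u
# -> "videoId":"dQw4w9WgXcQ"
```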
Twitter
I read Twitter without any JS. It currently works fine for me using links(1). I have a tiny shell script to download an entire feed if I need to find some historical tweet. Twitter does make changes from time to time. Previously I had to use the GraphQL API. Not anymore.
I agree 100% that the web of gratuitous JS sucks, but even with TCP clients and a text-only browser that do not run Javascript, I encounter few problems reading the textual information and/or downloading the binary files (resources) that websites provide.
For example, of the 43 open working groups at W3C, "Google LLC" has representatives participating in 36. Of the 89 closed former working groups, "Google LLC" formerly had representatives participating in 54. Source: https://www.w3.org/groups/wg/
This is such an unreasonable and paranoid set of trade offs.
You could have gotten basically the same (I would even argue better) level of security with a virtual machine and an otherwise full browsing experience.
You have assumed wrong.
I do not use this method for security. I use it for speed, reliability and aesthetics. The complexity fetish of too many software developers has driven me to appreciate minimalism.
Also, the computers I use are often older, underpowered and/or have limited resources. Running VMs is usually not an option. You cannot assume that everyone has the free overhead available to run VMs.
The large, graphical browsers controlled by corporations are slow, unwieldy and overkill for making simple HTTP requests. The "full browsing experience" is highly unsatisfactory for me. I am not "trading off" anything. I like fast, text-only information retrieval.
That said, the statement I made still stands. Using TCP clients and a text-only HTML reader is probably "more secure" than using one of the corporate graphical browsers that allow users to do more dangerous things. A side benefit.
To each their own.