Thread at the time https://news.ycombinator.com/item?id=38783112
That thread also has the benefit of linking to the original source article from Kaspersky, which is worth a read. This blog post notably disagrees with Kaspersky's own conclusions about whether it's an intentional backdoor, instead citing Steve Gibson, who has clearly learned nothing from his previous "WMF backdoor" debacle.
This is a year old, does anyone have an article with updates?
https://nvd.nist.gov/vuln/detail/cve-2023-38606
Compared to other CVEs, the description for this one looks very different.
Does anyone know why it reads so apologetic?
No updates, because it's just a dude freaking out about his incredible jump to conclusions. The bug was fixed and everyone kept living their lives.
Anyone who is paranoid about hardware backdoors might enjoy this:
https://www.contrib.andrew.cmu.edu/~somlo/BTCP/
Thanks for sharing. But I think there are easier solutions if one works on sensitive projects, say as an Apple hardware designer or a Google Android kernel programmer:
1. Complete separation of work and personal computers and cellphones.
2. Company cellphones only stay in the facility and are checked for vulnerabilities from time to time.
3. No bragging in public about your project, and no affairs behind one's partner's back, so one does not expose personal vulnerabilities.
I read the original Kaspersky analysis and found it very weird that a cybersecurity company that works closely with the Russian government still allowed US-made phones onto its networks as late as December 2023.
> that works closely with the Russian government
There's never been any real, substantial evidence for this, and much to the contrary: they've moved a lot of their infrastructure out of Russia and have for ages been early to report on malware originating from Russia and its allies.
It's one of those things where, if American media writes about it long enough, people somehow just assume it's true.
Be that as it may, I strongly suspect it is true, judging by the effort the other party spent to make this work: multiple zero-days, plus maybe some string-pulling inside Apple.
Your options are iPhone or Android if you want a reasonably usable phone in 2025. And iPhone is considerably more secure than Android against both script kiddies and nation state attackers.
Correction, non-US nation state attackers. (this comment is mostly a sarcastic joke)
If they really need the security (and considering the trouble the other party went to in hacking their phones, this is probably true), then they should not allow any smartphone into the facility.
This has been done many times before by other companies. Huawei used to do a lot of closed-door development: everyone on the team lives in a hotel for a few months, without phones, and cannot leave. If your adversary burned so many zero-days and maybe also pulled some strings to hack you, you absolutely should do this.
It's possible someone wants to hack you more than you want to defend against it.
Or it's possible you are using your development processes more like a honeypot to trap the attackers. I suspect that was the case here - it's awfully hard to analyze a modern exploit unless you manage to get it to install on a phone you are already monitoring.
(All new exploits are 'single install', i.e. the exploit retrieves most of its code from a server that will only send the data once, and immediately after use the exploit code is deleted. That makes recording the exploit hard.)
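The 'single install' delivery pattern is easy to sketch: the server keeps each payload under a single-use token and destroys it on first retrieval, so any later attempt (say, an analyst replaying captured traffic) gets nothing. This is a toy illustration only; the class and method names are invented, and no real exploit-kit code is referenced.

```python
import secrets

class OneShotPayloadStore:
    """Toy model of 'single install' delivery: each payload is handed
    out at most once, then destroyed. Purely illustrative."""

    def __init__(self):
        self._payloads = {}  # single-use token -> payload bytes

    def stage(self, payload: bytes) -> str:
        """Register a payload; return the one-time token that fetches it."""
        token = secrets.token_hex(16)
        self._payloads[token] = payload
        return token

    def fetch(self, token: str):
        """First call returns the payload and deletes it; later calls
        (or unknown tokens) get None -- the data is simply gone."""
        return self._payloads.pop(token, None)

# First retrieval succeeds; a second attempt finds nothing to record.
store = OneShotPayloadStore()
token = store.stage(b"\x90\x90\xcc")   # stand-in for an exploit stage
first = store.fetch(token)             # returns the payload bytes
second = store.fetch(token)            # None: already destroyed
```

This is exactly why monitoring a phone you control, before the exploit lands, is about the only way to capture the full chain.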
Just make sure nobody ever sends you an SMS or 'iMessage', as those have a wild history of enabling remote 'zero-click' takeovers. If you doubt this, just search for 'imessage vulnerability' or 'imessage cve'. Android has far fewer of these problems, partly because it is a more diverse ecosystem in which any single vulnerability is less likely to apply to all Android installs. Of course, this diversity also means there are more chances to find problems, but the reach of each problem is smaller.
> And iPhone is considerably more secure than Android against both script kiddies and nation state attackers.
Posting this in a thread about a hardware backdoor in the iPhone seems strange. And there are also a lot of zero-click exploits in the Apple ecosystem: NSO comes to mind.
My main issue with Apple is that, internally, they do not do any security research. They just close the holes if, and after, they are discovered.
According to this blog it has been patched. But it really does raise the question of how much we should trust Apple, Google, and other large tech companies.
Read the blog date. It's over a year old. This is old news.
Old news or not, the fact that my hardware could have backdoors in it concerns me, and I use a Pixel phone. These things are the ideal spying devices.
Yeah, if the good guys have a backdoor, sooner or later the bad guys will too.
Maybe not everybody is as stupid as "the good guys". /s
I always assumed (not having worked at Apple, but judging from the observed functionality and the fact that they could patch it) that this was a debug backdoor that didn't get kill-switched before release builds, and that they then decided killing it after the fact would only draw attention to it.
Smells like CIA stuff. Impressive.
You have to wonder if the only reason the iPhone 16 isn’t included in this article is that the article was written before the iPhone 16 existed.
It's because Apple fixed the issue on all affected devices with OS updates released in July 2023.
Has anyone disassembled that update to figure out how they patched this?
If it is some device sitting on the memory bus, how did they disable it in a way that couldn't be re-enabled by the OS kernel? Most hardware that sits on a CPU bus doesn't have such an ability.
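One plausible answer (a sketch only, not Apple's confirmed implementation) is that you don't disable the hardware at all: the kernel simply refuses to map any physical MMIO range that the device tree doesn't declare, so the undocumented registers become unreachable from software. The allowlist entries below are invented for illustration; the denied address echoes one of the unknown register ranges Kaspersky reported, but everything else is hypothetical.

```python
# Sketch of a device-tree-driven MMIO allowlist. Hypothetical entries;
# this is a guess at the general mechanism, not Apple's actual fix.

# (physical base, size) pairs the device tree declares as valid
ALLOWED_MMIO = [
    (0x2_0000_0000, 0x10_0000),  # hypothetical peripheral block A
    (0x2_3B70_0000, 0x1_0000),   # hypothetical peripheral block B
]

def mmio_map_allowed(base: int, size: int) -> bool:
    """Permit a mapping only if it lies entirely inside a known range."""
    return any(
        lo <= base and base + size <= lo + span
        for lo, span in ALLOWED_MMIO
    )

# A declared range maps; an undocumented register range is refused,
# which neutralizes the feature without touching the hardware itself.
mmio_map_allowed(0x2_0000_0000, 0x1000)   # True: inside block A
mmio_map_allowed(0x2_0604_0000, 0x1000)   # False: unknown range, denied
```

Under this scheme the registers are still physically present; they just can't be written from the kernel or anything above it, which is consistent with a pure software update closing the hole.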
The iPhone 16 shipped with iOS 18. The vulnerability in question (CVE-2023-38606) was patched in iOS 16.6, released in July 2023, months before Kaspersky's write-up that prompted this blog post. There, now you don't have to wonder any more.
Thanks for making this clearer. This is the problem with reading old posts.
I wonder if something like this is behind the push from Microsoft to obsolete a lot of hardware with the Windows 11 release. The NSA pushed them to require a hardware upgrade so people replace devices bearing old processors with new ones featuring the latest bleeding-edge backdoors.
What if your comment is a part of a psyop to keep paranoid people (NSA's true target) on their old devices which are even easier to breach?
I do notice that a lot of enemies of the state seem to use poorly secured platforms: everything from Hezbollah using pagers, to widespread use of unencrypted Telegram groups and Discord, to the ANOM sting with a non-e2e app.
Yet platforms with apparently secure e2e messaging (e.g. WhatsApp) never seem to be used by criminals.
I wonder if this is just selection bias in the criminals caught, or if there is some forcing factor persuading criminals to make poor security choices.
> Yet platforms with apparently secure e2e messaging (e.g. WhatsApp)
Do you have the keys to your WhatsApp messages? Are you sure that they only reach their intended recipient?
There are various third-party re-implementations of the WhatsApp client. It uses the open-source Signal protocol, and the keys certainly appear to be kept secret from everyone but the right people.
It's always possible there is some secret command that makes the closed-source client leak the keys, but I imagine an audit of the disassembly of the client-side app would discover that.
Wow, this is terrible.