> Espressif will document all Vendor-specific HCI commands to ensure transparency of what functionality is available at the HCI layer
I'm very glad they're going to be more transparent rather than try to lock things down more. I've been impressed with Espressif over the years and I'm glad they're continuing to impress.
It is a relatively small company catering to the DIY market. To build any substantial user base for industrial applications, I guess their way forward is to continue to be open and enable people to build DIY stuff; a tiny fraction of those people will one day build products with ESP chips inside.
They are in a -lot- of edge devices. They became popular with DIY because they were a very cheap and capable option with the ESP8266, which hackers converted from ultra-cheap UART-to-WiFi modules into general-purpose MCUs. They became the DIY default because instead of ignoring this, they embraced it: they published specs and translated datasheets. Their DIY-friendly approach has garnered them a ton of software support and publicity.
They have responded very openly by publishing the vendor specific diagnostic commands.
This whole dog and pony show is very unfortunate, really, since the irresponsible security firm used the term backdoor in a completely inappropriate and misleading way, and the blogosphere and tech press just parroted their claim without bothering to read the disclosure, apparently.
The headlines should have been:
“Security firm Tarlogic makes specious ‘backdoor’ claim at RootedCON about popular Bluetooth/WiFi microcontroller deployed in billions of devices.”
I've in no way surveyed the whole WiFi MCU market. But wasn't the "beginning" of Espressif in the Western world that it was a cheap WiFi MCU, very popular at the time in cheap Chinese WiFi-enabled products?
The DIY market (at least in the west) came later. I still remember reading the hackaday article about them being discovered back in the day - https://hackaday.com/2014/08/26/new-chip-alert-the-esp8266-w... and back then all the documentation was in Chinese.
I remember a ton of cheap Chinese wifi enabled stuff would come with an esp for a good while, until even cheaper wifi chips hit the market.
The biggest factor (IMO) is that during the pandemic chip shortages, about the only WiFi/BT module in stock anywhere was espressif. They consistently had stock when no one else did, so a lot of home hackers got into it.
As our industry is so fond of forgetting and rediscovering, if your product is easy and accessible to home hackers, it will be used in those hackers' real jobs.
But also they're so goddamn cheap. Espressif somehow scaled production super fast and got the price way below any of the competition.
> It is a relatively small company catering to the DIY market.
Is that really all it is? I’ve got multiple IoT devices from major vendors that use the ESP32. I’ve also seen multiple EV chargers that leverage the platform.
Yeah, according to most of the articles reporting on this there are over a billion ESP32s out there. I suspect that the 1 billion number is actually counting the ESP8266 as well, but still there are a lot of them and I see them used in a lot of "real" products.
> Espressif will provide a fix that removes access to these HCI debug commands through a software patch for currently supported ESP-IDF versions
ESP-IDF is the "SDK" for these chips – I read that as "you'll get a new function that you call to disable these commands until you reset the chip, to limit attack surface for the rest of your app".
The researcher-or-tinkerer who wants to play with these commands can just not upgrade to the newer ESP-IDF. Or update to a newer ESP-IDF, and revert the patch. It's open source. https://github.com/espressif/esp-idf
I don't know the exact standard. I was told this by one of the many people I worked with trying to get my CE. If I recall correctly it may have to do with the new IoT Cyber Resilience Act stuff.
I guess this is more about "if your product has bluetooth functionality, this section of compliance also applies to your product" than "we have regulation on switching off bluetooth"
I find it odd that the CE marking doesn't come with anything that uniquely identifies the source company, product, or SKU the way an FCC ID does; it means it's impossible to determine whether a CE mark is real or not.
Well, it was very clear from the beginning that these were a set of debug commands. A backdoor would probably have door-knocking semantics and a very limited apparent surface. These were direct memory read and write commands, available through a standardized protocol.
I would think the real issue was whether they were locked down enough in real-world designs to avoid a security issue through these debug means.
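For the curious, the "standardized protocol" part is just ordinary HCI framing over the wired host-controller transport. The sketch below shows how any vendor-specific HCI command is serialized; the H4 packet layout and the vendor-reserved OGF 0x3F come from the Bluetooth spec, but the OCF 0x0001 and the "read this address" parameter are made-up illustrations, not the actual ESP32 debug opcodes.

```python
import struct

def hci_cmd(ogf: int, ocf: int, params: bytes = b"") -> bytes:
    """Serialize an HCI command for the UART (H4) transport:
    0x01 packet-type indicator, 16-bit little-endian opcode
    (6-bit OGF | 10-bit OCF), parameter length, then parameters."""
    opcode = (ogf << 10) | ocf
    return bytes([0x01]) + struct.pack("<HB", opcode, len(params)) + params

# OGF 0x3F is reserved by the Bluetooth spec for vendor-specific commands.
# The OCF and payload are hypothetical stand-ins for a "read memory" command.
pkt = hci_cmd(0x3F, 0x0001, struct.pack("<I", 0x3FF00000))
print(pkt.hex())  # -> 0101fc040000f03f
```

Note what this implies: the packet has to be written into the controller by whatever is driving the HCI transport, i.e. code already running on (or wired to) the device, which is why "backdoor" was such a stretch.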
It was both, tbh. These commands would be usable in some attacks against a “trusted” esp32 implementation.
It’s something they probably should allow developers to disable but it’s also being WILDLY overstated by clout seekers.
You can, from some angles, call this a security flaw or issue. Anyone calling it a backdoor - a term with a specific meaning: intentional secret access - is being irresponsible, imho.
We are talking about a programmable device, used as a programmable device by hobbyists and device manufacturers, to make things that you need to program to do something. It’s no different to claiming a PC has a security problem because you can load software onto it to do bad things.
Fwiw I’m referring to “trusted” as a specific use case using specific features. Most users won’t use those features. For those that do, this is a big game over. Although some do this, the platform was never well suited to it anyways.
I think we both agree, this “vulnerability” is overstated.
There is a disconnect between the people who do actual work on the ground in the real world and the internet.
People in the real world report things - people on the internet read 3 out of 4 of the words in the headline and screech hysterically and run around spouting prophecies about the end of the world.
It kind of was, but I’m happy that it happened because the way it was done (and still is with other vendors) is bad. This resolved that and in my most optimistic dreams other vendors would now follow.
If the researchers had written articles and held talks about undocumented HCI commands to gain direct access to the chip, and hammered on the security impact of having such undocumented commands, they could've probably achieved the same thing without resorting to lying.
> Researchers Miguel Tarascó Acuña and Antonio Vázquez Blanco of Tarlogic Security, who presented their findings yesterday at RootedCON in Madrid.
> "Tarlogic Security has detected a backdoor in the ESP32, a microcontroller that enables WiFi and Bluetooth connection and is present in millions of mass-market IoT devices
I'd like to believe they didn't have ulterior motives for crying "backdoor" while knowing that's not the case. That would be a huge indictment of the researchers' characters.
They wanted the spotlight and calling something a "backdoor in a chip used by a billion devices" does the job.
> And espressif should have closed that gap without external pressure.
There is a possibility it did not cross their minds that those instructions could be an issue.
In their blog post they were very open about what those commands do, and it looks like some of their clients that do advanced things with their chips were also aware of the instructions.
Probably the idea that the company is Chinese was enough to not bother asking the manufacturer for documentation about the instructions.
There is a narrow set of circumstances where the previously undocumented commands could be exploited by malware running on a host machine or on compromised firmware. It’s kind of a nothing burger because compromise at this level is already pwned and the undocumented commands really don’t add much to that.
How can you think always-open debug is NOT a security issue?
The blog post is a "nothing to see here". You really fell for it? Do we still do car analogies? Here's this car with just an ignition button and no keys, the keys to your home in the glovebox, and your address already in the GPS... but the thief would have to break into the car first!
There's nothing to "fall for". Do you have specific evidence that refutes Espressif's rationale for why they believe it's not a security issue? No, a flawed car analogy doesn't count.
If you can break into the application running on an ESP32, you already have full access to RAM etc. The debug HCI commands will not give you any extra access.
Yes, security researchers are incentivized to make issues seem like more of a problem than they are, and vendors are incentivized to minimize them. In this case, though, the reality is much closer to Espressif's version.
Yet, a significant number of stock investors still suffer heavy losses because the market gets rattled by these exaggerated, attention-grabbing headlines. While the consensus here rightly labels this a non-issue—debug commands, not a backdoor, with no remote exploit possible—the sensationalism still has real-world impact. It’s frustrating to see how security hype, often just for internet clout or CVEs as pointed out, can overshadow Espressif’s solid response and transparency efforts, leaving retail investors to bear the brunt of the volatility.
At first I thought this smelled like a targeted PR drive-by, leveraging distrust of China to intensify anti-China sentiment for political traction. I’ve been quite cynical about this ever since I was exposed to shady PR companies and saw their lists of offerings and prices…. But…
Nope. This looks like an irresponsible, hype seeking disclosure by a security firm trying to make a name for itself. Or maybe a really, really good PR firm and a PR savvy security firm, if I put my tinfoil hat on.
> Espressif will provide a fix that removes access to these HCI debug commands through a software patch for currently supported ESP-IDF versions
> Espressif will document all Vendor-specific HCI commands to ensure transparency of what functionality is available at the HCI layer
This is great. While this practice may be common and it may not be considered a backdoor, undocumented functionality is definitely a risk. I’m glad that people didn’t just take it as “well that’s how it’s done” but instead are pushing for a better way.
First and foremost, I have no affiliation with any of the authors previously mentioned. However, I would like to pose a question to the community:
Is it feasible to exploit these undocumented HCI commands to develop malicious firmware for the ESP32? Such firmware could potentially be designed to respond to over-the-air (OTA) signals, activating these hidden commands to perform unauthorized actions like memory manipulation or device impersonation.
However, considering that deploying malicious firmware already implies a significant level of system compromise, how does this scenario differ from traditional malware attacks targeting x86 architectures to gain low-level access to servers?
It is feasible to develop malicious firmware for the ESP32 even without these HCI commands. The existence of these undocumented commands doesn't change anything.
As the article states, "These undocumented HCI commands cannot be triggered by Bluetooth, radio signals, or over the Internet, unless there is a vulnerability in the application itself or the radio protocols." Hence I don't think there is any security risk here, assuming the application and radio stack are safe.
It differs in that the person must have physical access to the device to flash firmware, I believe. On x86, as you describe, the attacker could do it with just a connection to the device/machine.
I agree, hence my comment about malicious firmware… For me, the open question is: can one still write malicious firmware for the ESP32 without the undocumented opcodes?
Yes. You can write whatever malicious firmware you like on hardware you have physical access to, with or without the undocumented opcodes. Not OTA, though, unless there's a bug in the radio stack. It's not an open question.
HCI is an interface for the low level parts of the Bluetooth stack to exchange information with the higher levels. If you assume that higher level code is malicious, an OTA vulnerability is straightforward.
What would be the purpose of such firmware? The ESP32 is a complete SoC, the “firmware”, “OS”, and “application” are all the same binary.
So yes, you could write a malicious “firmware” without using the undocumented commands. But what would be the point? Said firmware already has complete execution privileges on the device, with the ability to read any memory it wants, by virtue of being literally all the software running on the device and owning all of the memory.
This would require you to have root access to the thing, at that point you might as well write literally any code you like and not bother with the HCI commands.
It is literally just a debug port exposed over the wired HCI interface.
This gives you absolutely nothing at all that you can't get with a normal UART debug port or JTAG. Everything in the HCI commands already exists in the normal bootloader. If you can get a device into bootloader mode, you can peek and poke flash and memory, along with everything else.
There is absolutely nothing here.
You can create malicious firmware, sure, but it has nothing to do with this HCI thing.
It was feasible even without these commands. If you already had code execution on the host then you could've already done what you wanted with the device
Storm in a teacup. But it raises an interesting question. Why are these HCI commands "undocumented"?
A phrase I hear around security nowadays is "Shadow API" [0, 1]. A shadow API is an undocumented one, and it tends to set off massive alarm bells.
The researchers were clearly too keen to make a splash, and that reflects a sad state of wannabeism and hustling for crumbs these days.
What exacerbates it is proprietary software - stuff that's hidden, obscured, opaque, deceptive, distributed as mysterious "blobs".... This entire cluster of behaviour raises red flags and gets people looking for things. And when we go looking for things that aren't there... we find them (assume the worst).
That is to say, the researchers are reacting emotionally (but with good justification) to discovering undocumented features, aka a "shadow API".
This could have been avoided if the device maker had just documented them. Why didn't they?
One fairly common reason to not document something is not wanting to commit to supporting it or to keeping it around in future releases. Document it and customers will come to depend on it and then you might be stuck with it.
I was looking up vacuum tubes recently. Apparently there are things called "getter" pins in them that connect to an internal material-deposition device, used to remove the last bits of air from the tube on the manufacturing line. The exact behaviors and operating parameters for those pins are corporate secrets and not documented. Datasheets only mention them as "No Connect" pins that do nothing and have to be disconnected from everything for safe and correct operation of the device. BACKDOOR!!! in an analog vacuum tube.
... everything ever has always had factory-use-only interfaces. Manufacturers don't document every bit of everything on public datasheets. Granted, the dynamics change if those interfaces can be used as backdoors, and then they might have to be either disclosed or securely disabled, but "why do they exist in the first place" is just a nonsensical question; they exist for necessary internal purposes.
That question should be "how should factory use interfaces be handled going forward".
Great example, and I'm old enough to have also wondered why those "NC" pins on valves and ICs clearly did do something.
As you frame it, it's "what the eye doesn't see, the heart doesn't grieve over". Definitely apropos the making of burgers and sausages.
Now we have a different discussion, about "internal" versus "external" legibility.
We used to live in a world of high trust in vendors. Those days are gone. Vendor malware is a massive and growing problem, and supply-chain legibility is a hot topic.
FWIW I'm arguing the fully open position. Unless you've got desperate trade secrets, there is no "internal". And if you've got a 16-bit register, that's 65536 entries, most of them sparsely documented as "no op". And if a researcher finds one of them does something... it's suspicious by default.
This "no user serviceable parts" stance is an old schism in technology. Today we have almost blanket rights to repair, and openness is the new security model.
All complex devices, like CPUs, GPUs, microcontrollers, have tons of undocumented debug features, without exception.
The only difference between devices is whether such debug features remain accessible to users or they are completely disabled after production.
In this specific case, as well explained in TFA, the debug commands do not really provide any additional exploitable capabilities for a malicious programmer, especially because the Bluetooth controller is hosted on the same CPU as the potentially malicious application: if the application could use the debug commands, it could already do anything it wants, like sending arbitrary Bluetooth packets. The Bluetooth driver can likewise already do anything it wants on the shared CPU, so even if it had bugs or backdoors triggerable by received Bluetooth packets, the existence of these debug commands would change nothing.
Unlike this case, where there is nothing to criticize in Espressif's reply or in what they have done, an example of atrocious handling of undocumented debug features was provided by Apple, whose devices had for several years, until the end of 2023, a backdoor created by non-disabled debug registers. These allowed a total bypass of memory protection, enabling (in conjunction with some bugs in Apple system libraries) complete remote control of an iPhone, and they were exploited for the remote and undetectable spying of some iPhone owners (discussed on HN at the time the CVEs were publicly revealed).
All of this higher-level HCI and API shiz rests (in the embedded world) on a set of registers which are usually documented in the datasheet. By manipulating these registers, the developer can make the IC do anything it is capable of doing: it is the ultimate API, and there is almost no means in code of making the hardware do anything except by this route.
So any 'shadow' API is not really of any consequence in embedded, because it all boils down to these registers, which we can perfectly well view or manipulate.
Espressif, like all other manufacturers of such BLE-enabled SoCs, provides at least some parts of its RF stack as a precompiled binary.
However, Espressif took the decision to deliberately not publish the registers used at the lowest level to configure and command the RF peripheral on the IC. All the other manufacturers I am aware of do publish these registers.
Ultimately this minor obfuscation offers no additional security benefit, nor does it create any security loophole above any that exist on any other chipset.
This is an absolute non-issue, as others have commented.
Even without domain-specific knowledge, we can see this is all nonsense: 'Allow attackers to carry out impersonation attacks' - the same thing that can be done with any number of legitimate and useful applications designed for developing with BLE, available on Android marketplace :*)
On any embedded platform which allows low-level access to a radio transceiver of any kind, it is possible to exercise all of its functionality, which will include various possible attacks. The only thing which prevents this in the industry at present is obfuscation, which most manufacturers (other than Espressif) don't bother with.
Take Nordic for example, who make probably the best and most widely used/respected BLE chipset: the full functionality of the transceiver can be used via a relatively small set of registers, which are fully documented. Nordic supplies a BLE stack, but you can perfectly well write your own instead, meaning that you can deliberately or accidentally abuse all aspects of the protocol.
Even at physical level, one can easily conduct simple jamming attacks, by broadcasting full power carrier signal on the BLE advertising channels.
None of this naughtiness is a security flaw. BLE and the other wireless protocols in common use are well designed to be resistant to jamming, and the physical implementations of the radios on currently available SoCs are limited: you can't put out 10 watts of hash over all 82 channels at once, because the silicon doesn't support it.
You can't even 'sniff' BLE traffic in a real sense, because the silicon requires a significant number of 'address' bytes to be matched in order to capture any meaningful packet data out of ambient noise.
Here is C code, as a fun example, to broadcast full power carrier on any channel, from Nordic NRF52. This is absolutely documented and useful for testing (for example emissions).
// set channel to a number between 0 and 100 (FREQUENCY encodes 2400 + channel MHz)
int channel = 0;

nrf_radio_shorts_set(0);                          // no automatic shortcuts
nrf_radio_int_disable(~0);                        // mask all radio interrupts
nrf_radio_event_clear(NRF_RADIO_EVENT_DISABLED);
nrf_radio_task_trigger(NRF_RADIO_TASK_DISABLE);   // force the radio to a known idle state
while (!nrf_radio_event_check(NRF_RADIO_EVENT_DISABLED)) {}
nrf_radio_event_clear(NRF_RADIO_EVENT_DISABLED);

// set power and frequency, then ramp the transmitter up; left in TXIDLE
// (no START task), the radio emits an unmodulated full-power carrier
NRF_RADIO->TXPOWER   = (RADIO_TXPOWER_TXPOWER_Pos4dBm << RADIO_TXPOWER_TXPOWER_Pos);
NRF_RADIO->FREQUENCY = channel;
nrf_radio_task_trigger(NRF_RADIO_TASK_TXEN);
tl;dr whichever system using an ESP32 as a bluetooth adapter may also just run arbitrary code on the ESP32 itself over the same interface. Commands have to be issued from the host system, not from the air.
This sounds like... a good feature? There are indeed some scenarios where doing so poses a security risk. But most of the time I do want to be able to run arbitrary code on e.g. my WiFi dongle when I'm in control. I know the FCC is not a fan of this idea, though.
I agree! Your system is already heavily compromised if this is a problem for you.
I think the real problem lies in a lack of visibility into the state of the device. A compromised dongle could easily be transferred between machines. What we need is to make obvious what the machine/device is doing.
Both HN discussion [1] and blogs [2] mentioned that it was a nothingburger.
tl;dr: there are undocumented debugging interfaces on the original ESP32 chip that can only be accessed by code running on the microcontroller or via another computer connected via the physical UART interface. No remote exploit possible. Espressif will now provide a patch to disable these debugging interfaces and document all yet-undocumented interfaces.
> The ESP32 series of chips support open-source NimBLE and Bluedroid as Bluetooth host stacks.
The undocumented commands in question (which implement the debug interface) are about to become documented, and Espressif was both transparent in their publication, and pledged for more transparency moving forward.
I think they handled this particular issue well. It was more of a rant about the general state of things in the industry. I worked with Nordic chips, and they also supply an enormous 120 KB proprietary blob called SoftDevice which you're supposed to put into your firmware and let own all the interrupts. This is a terrible experience.
I understand your objections, but I don't think I have to explain that anything directly touching wireless is already walking the edge of the regulatory abyss. But yeah, some vendors make it easier, others seem to just want developers to suffer.