I know the guy that heads up the team that did this work -- he and I spent 2+ years fighting Broadcom's old, god-awful bluetooth code. Our whole team used to play what-if games about replacing the thing while massive code dumps came in from vendors, making the task ever larger.
Zach, if you're reading this, HUGE kudos for holding the line in replacing that, and double kudos for doing it in a verifiable, sane language!
Fuchsia's network stack is also being rewritten from Go to Rust [1], and it follows a functional core/imperative shell pattern, something very unusual in network stacks [2]:
[1] https://fuchsia.dev/fuchsia-src/contribute/contributing_to_n...
[2] https://cs.opensource.google/fuchsia/fuchsia/+/master:src/co...
Interesting; makes you wonder about the potential for code sharing between that and Android, or even Linux as well. The Linux kernel developers are open to allowing some Rust usage for things like drivers at this point. So the time seems right for this kind of thing. I guess there is also some decent code from e.g. Redox OS that might be adapted. Licensing (MIT) should not be an obstacle for this as far as I understand it. Likewise, Fuchsia also uses GPLv2-compatible licenses (MIT, Apache 2, BSD). So, there's some potential to avoid reinventing a few wheels across these platforms.
At face value, protocol stacks such as TCP/IP or Bluetooth are great use cases for a language like Rust in a resource-constrained environment (battery, CPU) where you are looking to get high throughput, low latency, etc., combined with decent zero-cost abstractions so you can make some strong guarantees about e.g. correctness. A good high-level implementation might be usable across different OS kernels. Of course these kernels have very different designs, so it might make less sense at the code level than I imagine.
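To make the zero-cost abstraction point concrete, here's a minimal Rust sketch (all names are mine, nothing from a real stack) of the typestate pattern protocol code benefits from: connection state lives in the type system, so "send before connect" is a compile error rather than a runtime bug, and the state markers are zero-sized so there's no runtime cost.

    use std::marker::PhantomData;

    // Hypothetical protocol states encoded as zero-sized marker types.
    struct Disconnected;
    struct Connected;

    struct Channel<State> {
        _state: PhantomData<State>,
    }

    impl Channel<Disconnected> {
        fn new() -> Self {
            Channel { _state: PhantomData }
        }
        // Connecting consumes the disconnected channel, returning a connected one.
        fn connect(self) -> Channel<Connected> {
            Channel { _state: PhantomData }
        }
    }

    impl Channel<Connected> {
        // `send` only exists on connected channels; calling it on a
        // Channel<Disconnected> is rejected at compile time.
        fn send(&self, _payload: &[u8]) {}
    }

    fn main() {
        let ch = Channel::new().connect();
        ch.send(b"hello");
        // Channel::new().send(b"nope"); // does not compile
    }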
I do wonder about where Google is headed with Fuchsia. Do they have a plan at this point for getting OEMs to want to switch to that? I imagine e.g. Samsung might have some hesitations about being even more dependent on Google as a supplier.
> I imagine e.g. Samsung might have some hesitations about being even more dependent on Google as a supplier.
Does it make them more dependent? Google is effectively the only upstream for Android, and Fuchsia is open source, so it seems like it should be the same?
One of them is based on mobile Linux, which Samsung also uses for things like Bada and which has a lot of hardware vendors supporting it with drivers, including Samsung itself of course. Bada actually originated out of Nokia's MeeGo and Maemo mobile OS. That predates Android, and early Android versions ran pretty much the same Linux kernels. The first devices running Android were actually Nokia N800s before they did the first Nexus.
Fuchsia is open source but closed-source friendly (because of the license). I suspect that's actually the main non-technical argument for Google to be doing this: Android is too open and they've been trying to fix that for years. Apple has a similar benefit with the BSD internals of iOS and macOS. Still OSS in part, but mostly not, and Apple has not bothered with supporting separate Darwin releases for a long time.
So, like with Android, I'd expect a fair amount of closed source secret sauce is going to be needed to run Fuchsia. More rather than less. I doubt Google-free versions of Fuchsia are going to be a thing like they are with Android. Google is doing this to take more control of the end user experience, just like Apple does with iOS. Letting Samsung (or anyone) bastardize that is not really what they want here.
I'm guessing Samsung actually wants less of that Google secret sauce at this point rather than more. They are trying to differentiate with exclusive features, their own UX, their own apps and services, etc. I'm expecting a lot of OEMs are going to have a similar mindset. Especially the Chinese ones currently shipping flavors of Android without a Google license for the Play services, on the wrong side of the current trade wars (like Huawei). Google has their work cut out for them trying to get OEMs like that to swallow Fuchsia. I think Google is going to end up supporting Android for a long time because of this, even if they eventually launch Fuchsia (which in my opinion is not a certainty yet). The simple reason for this is that the alternative would be walking away from a chunk of the mobile market that they currently exploit via their Play store. I don't see them surrendering that, and I don't think they would want third parties to continue releasing Android without Google either. So, the only way to prevent that would be a long-term Android development strategy, regardless of whether they actually release Fuchsia or not.
So, reusing code across Android and Fuchsia makes a lot of sense.
Bada doesn't have anything to do with Linux, at all. It was an OS using a stack similar to Symbian's.
You mean Tizen.
Which, just like Android, shares nothing with Linux except the Linux kernel; it now has its own C++ stack and .NET Core/Xamarin, and there are still some Enlightenment leftovers.
> Fuchsia's network stack is also being rewritten from Go to Rust
This Android Bluetooth stack is mostly in C++. Only part of it (about 4K lines) is written in Rust.
The headline is misleading.
This is great news. Hopefully it fixes a lot of the jank that I've always experienced with Android Bluetooth. I don't think I've ever had a smooth experience - even with a supposedly flagship device (Pixel 4a!) I've encountered all of the following problems:
* Devices getting stuck at max volume (thankfully not headphones)
* Devices getting stuck at really low volumes
* Devices randomly switching between absolute volume and relative volume (not really sure how to describe this, but sometimes changing the volume on the phone changes the volume on the receiver, and sometimes it changes the mix volume on the phone only (like an aux output would behave) and keeps the volume on the receiver)
* Needing to enable Developer Settings to change the Bluetooth protocol version and other wacky stuff that I just shouldn't have to do [0]
* Headphones cutting in and out when exercising, like the phone can't figure out it needs to stay in the higher of two radio power profiles that it's switching between, as the receiving antenna on my workout band moves 2-3 inches away from the phone and back again
[0]: https://www.reddit.com/r/GooglePixel/comments/8hbcuu/the_100...
Bluetooth has been awful on Android for a long time. I've never not had to futz with it to get it to work. I hope this is a move toward making it as seamless as it should be. I couldn't imagine trying to figure all this out as a non-technical user.
That's bizarre. I have a Samsung phone with Samsung Bluetooth earbuds and I've only ever experienced a single connection issue in the whole year I've owned them and they are a daily driver for me. I wonder what the difference is.
Samsung phones actually use their own Bluetooth stack not the one from AOSP Android. This is why the Bluetooth feature set sometimes doesn't match with the others.
Well that solves that. It seems much more reliable.
Generally, Samsung's Bluetooth implementation has worked better than any other Android's I've used. On top of that, Bluetooth seems to work better if both devices are from the same vendor.
Samsung bluetooth is different from Android, and it has been much better for many years now. In fact I use Galaxy Buds with my S10 and it is an amazing seamless experience. No lag, no disconnections, great audio quality, and I don't even need to have bluetooth enabled for them to connect, just need to open the buds' case. I don't know how that last part works.
I have not had any issues with Bluetooth on standard Android or Samsung devices.
What I _do_ have problems with is the stupid accompanying apps on Samsung phones (Wear and Buds) that are reinstalled automatically every time I reconnect.
Dear Samsung, how many times do I have to refuse the ToS and delete the app before you get my point?
(I don't use the apps because I don't want any of the "smart" functionality and I don't like Samsung sharing my data with unspecified third parties)
I believe the user should not have to choose between not getting firmware updates and signing away their personal information.
Let me explain: on the first run the app demands access to lots of personal data, including contacts and location data, while at the same time stating they may share this data with partners and third parties. If you don't agree to _all_ of these, the app will not start (but it continues to nag you).
If Samsung wants to make the app a requirement, they should remove the analytics and also let the user choose whether to give the app all these permissions.
Same, S10 + Galaxy Buds+ (also, Sony's XM4), and I never had a single issue.
I get these same issues on a Pixel 3a! The most common ones (~daily) are:
- volume stuck low or high (on headphones :/) - "fixed" with a quick bluetooth off/on cycle
- volume okay but not responsive to changes in system volume
I've become used to these issues but I'd love a new driver that made them go away!
All my Bluetooth woes from iPhone finally went away when I got a Pixel last year. The iPhone is still here and being used, but not for exercising due to the connectivity issues.
I think it’s just Bluetooth itself being cursed.
What’s interesting is the iPhone Bluetooth experience with Apple products is excellent. I remember my partner used to have an Android wear watch for her Android phone and it was constantly disconnecting and having to be disconnected/reconnected to fix issues. When I got my Apple Watch, I couldn’t believe how smooth the experience was. To the point where if you didn’t know it ran off Bluetooth you would think it is some new protocol. Similar experience with AirPods as well.
I’ve had some issues with AirPods and AirPods Pro, but they are much rarer than other bluetooth devices I’ve used. I use AirPods Pro several times a day and for hours a day so even with a very low failure rate I’d expect things to break occasionally.
re:Watch, it is likely using WiFi at least some of the time. My Watch shows up on my WiFi and conveniently connects using the same network as my phone, so it apparently knows the WiFi credentials from the phone, at least on a personal network. I never explicitly set that up, and getting Watch to stay off WiFi isn’t something I persisted at - mainly because it does help create a more perfect connection experience than is possible with Bluetooth.
I discovered this initially when I was charging my Watch, while well outside of bluetooth range but just barely in range of WiFi.
> I think it’s just Bluetooth itself being cursed.
Hence my desire for a headphone jack.
Hah, I've experienced the absolute/relative volume bug. Sony headphones sometimes end up connecting with independent (relative?) volume controls and then switching the volume control from independent to absolute when you use a track skip gesture. The volume goes to the max instantly as a result. One of the reasons I'm staying away from wireless Sony headphones.
Yep, that happens to me every now and then.
Hopefully they fix the Bluetooth issue with YouTube and Chromecast. It is such an odd issue for me. Chromecast will pause the video because YouTube demands to have the Bluetooth audio back in the app instead of the TV (I have a specialized Bluetooth audio transceiver hooked up to my TV because of my disability). YouTube keeps forcing the Bluetooth audio back to my phone instead of the TV a few seconds after casting to my TV. This causes the cast to pause the video, because my phone detects that the audio switched to a different source (YouTube). The only way to stop that from happening is to swipe away the YouTube casting notification after casting the video from my phone; then YouTube will stop doing that.
Honestly, I think the issue is Bluetooth itself; it's an amazing but extremely fragile technology. My Windows 10 laptop craps the bed with its internal Bluetooth, and it craps the bed with a USB Bluetooth transceiver as well. Microsoft has been having issues with Bluetooth. The same goes for macOS; Apple has its issues with it too. They recently released an update to fix the Bluetooth issue in Big Sur on M1 and that didn't fix the issue.
Wow, that's awful. I've had a large number of Pixel phones and bluetooth has been basically flawless (other than an old Pixel 1 where I think the hardware actually failed). I guess I have been lucky with the devices I used with the phones.
Why would changing code from C++ to Rust fix those issues? It looks more like bugs in logic rather than the language.
Of course I can only speculate, but I see two primary reasons to suspect improvement:
* It's a purposeful rewrite: not just because it sounded like a fun idea, but because there are reasons to redo the stack. Memory safety is a main one, given the choice of language, but presumably cleaning up the logic and making it better overall is another.
* Broken-windows theory of software development: Writing in a sloppy, old, foot-gun language encourages bad code that just barely works for the happy path. Writing in a language that is far stricter and requires intention and design makes one think a little more critically about the logic.
Shit like that is why I always wondered where all the claims of Android being superior to iOS came from.
One isn't better than the other. This thread itself is full of people saying the same shit about both android and iOS.
That would be Google's second (at least!) bluetooth stack in Rust, the first one being in Fuchsia.
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/master...
The headline is misleading. Most of this code is C++. They included some Rust, but the core isn’t Rust.
Most of the code is in Rust -- more than 75% of the entire stack. One component in the stack, the bt-host driver, is currently still in C++ as the oldest component. This part of the code only deals with low-level connection management and speaking HCI with the hardware, as well as hosting the GATT implementation, but everything is handed off to other components that are all written in Rust. Parts of that remaining component have already been migrated to Rust over time as well, including GAP. All of the profiles and upper layers are entirely in Rust as well.
The Android IPC driver (binder) is also being rewritten in Rust. It takes advantage of the upcoming Rust kernel driver support in Linux. It is an obvious choice for a memory-safe rewrite since all Android processes (including sandboxed ones) have access to binder.
Hello,
I'm actually one of the main binder userspace maintainers in my day job there (opinions are my own), and I haven't heard about this. Do you have a reference? What has happened is that there is a userspace shim over libbinder called libbinder_rs which provides Rust support for binder, but AFAIK, the kernel driver and main userspace lib is remaining in C++. Still, would be cool.
Your Friend, Steven Moreland
Hi! Not a Google employee, but I'm volunteering on the Rust-for-Linux project which is bringing Rust bindings to the upstream linux kernel. A Google engineer is working on porting binder to Rust and is contributing as he implements it. See his comments here[0] and here[1].
[0] https://github.com/Rust-for-Linux/linux/pull/145 [1] https://github.com/Rust-for-Linux/linux/pull/130
Ouch! Thanks for the refs.
Thanks! I searched for this rewritten version and couldn't find anything.
Now we need safer NDK APIs, even C++ bindings would be better than nothing.
Additionally, exposing some NDK-only APIs to ART would also be welcome from a security point of view.
And since we are at it, support Rust on the NDK LLVM toolchain.
This is a big f'n deal for Android, IMHO
In the "big deal" sense, i'm always curious in what way. Eg is it a source of constant problems? Where not only a rewrite, but specifically a rewrite in Rust, would prevent a lot of issues?
Or is it more of a "What if" thing? Ie there's not many problems currently, but the liability is a huge deal?
To be clear, I work in Rust, use it for all my projects, etc. -- I'm a fanboy, but I also recognize there's a lot of hype. I'm always keeping an eye out for the Rewrite It In Rust (RIIR?) meme vs actual needs.
Which isn't to say that I think people _need_ to have a reason to use Rust; I use it for everything because I (and my team) prefer it -- but I think the meme is destructive.. so I'm always looking for it, heh.
Binder has been the source of a number of Android security vulnerabilities.
Fun fact: Google's from-scratch Bluetooth stack for Fuchsia has been written primarily in Rust since its conception a few years ago.
https://fuchsia.googlesource.com/fuchsia/+/master/src/connec...
It seems like Rust is really catching on.
To be honest, I didn't pay much attention to it for a while -- it felt like it might have simply been that day's "flavor of the day", destined to sink once the next flavor became popular.
Now, there's a real problem to be solved. But I thought a simpler approach would be needed (e.g., Zig or something like it). I guess that may still happen, but it seems more and more like Rust is here to stay.
I suspect that Zig will still end up eclipsing Rust - not because it's a "more powerful" language, but because it's much easier to learn. Lower barriers of entry seem to matter more than mere "power" - see Common Lisp's multi-decade feature lead being ignored, and Python leapfrogging other more capable languages (Common Lisp) or equally capable ones that came first (Ruby) due to ease-of-use.
Zig is too uninteresting to eclipse anything. It's the same problem as D's -betterC option. It's not better-enough to motivate switching away from C in places where C is still used, and it's far too stripped down & basic to attract the majority that's gone to C++, Rust, Go, ObjC, or Swift (or even "full" D). It doesn't offer much to justify switching costs. If C++11 had never happened then maybe Zig would attract the C++ crowd, but today?
Zig also seems far too opinionated & even self-contradictory. Like no hidden control flow means that you can't have operator overloading because apparently the + operator is invisible to people or something. And if it could be overridden it could call a function (gasp!) or throw an exception (double gasp!), followed immediately by a big section about how the + operator internally is hidden control flow and can throw an exception (overflow checking).
It also constantly claims to be small & simple, but it has compile-time reflection & evaluation for templated functions - which is widely considered the main source of C++'s complexity. I think Zig is better overall for having this feature, and I love constexpr & friends in C++, but compile-time code generation & evaluation over generic types is neither "simple" nor "small".
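For what it's worth, Rust makes the same trade-off visible rather than pretending it away: a plain + panics on overflow in debug builds (and wraps in release unless overflow checks are enabled), while the checked variant lifts the failure into the type. A tiny illustration:

    fn main() {
        let a: u8 = 250;
        let b: u8 = 10;

        // In a debug build this line would panic with
        // "attempt to add with overflow":
        // let c = a + b;

        // The checked variant makes the control flow explicit:
        match a.checked_add(b) {
            Some(c) => println!("sum = {}", c),
            None => println!("overflow!"),
        }
    }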
> Zig is too uninteresting to eclipse anything.
As someone who knows C and not Zig, Zig is very interesting. It has incremental compilation, in-place binary patching, the ability to use code before declaration in a file, compile-time code execution, and extremely low compile times. Rust itself doesn't have most of those.
Also, as Python illustrated, a language doesn't have to be interesting to be popular.
As Python also illustrated, a language can be opinionated and popular. I'm growing more and more convinced that useful languages have to be opinionated - opinions have to be somewhere, and if they're not in the language (where the entire community is forced to use them), then they'll have to be in individual code-bases, where you'll get a bunch of fragmentation (which I think was one of the things that killed Lisp).
Now, Zig is very imperfect - no hidden control flow/operator overloading, no REPL, and a weak type system, most notably - but it's better than C in almost every way, easier to learn than Rust, and way more conceptually scalable than FORTH (which has a more "beautiful" design).
"Perl but readable" and "batteries included" are true but leave out some equally important things: The explicit reference / dereference business in Perl makes handling nested data structures pretty annoying and strongly guides programmers to avoid those, which has a lot of compounding effects to structure of prgorams. Also Python embraced multiplatform early on, and had eg excellent Windows support, whereas Perl was always Unix first. Also the culture is important. Python was always beginner-friendly whereas Perl people were less likely to suffer fools gladly.
Rust's "compile-time code execution" is a horrendously complex joke, and it's compile times are so atrociously long that the lack of those two things you mentioned is even worse than in a language like C (with somewhat-sane compilation speed).
> no REPL
Personally I don't find it necessary, but the proposal for REPL has been accepted: https://github.com/ziglang/zig/issues/596
A good example of what happens when the language is not sufficiently opinionated is the modern-day JavaScript/Node.js ecosystem.
> Incremental compilation, edit-and-continue are available in Visual C and C++.
In a particular toolchain not available cross-platform - not comparable to Zig having it available in the reference implementation, which is open-source and cross-platform.
> REPLs do exist for C and C++.
Hacky, nonstandard ones with limitations and altered semantics that aren't included in any of the major IDEs. Not remotely comparable to what's provided with SLIME.
> It's the same problem as D's -betterC option. It's not better-enough to motivate switching away from C in places where C is still used, and it's far too stripped down & basic to attract the majority that's gone to C++, Rust, Go, ObjC, or Swift (or even "full" D).
I've been interested in D since the early days (back when it had two competing standard libraries). I think you're kind of misrepresenting why D never caught on: it wasn't that people weren't interested in a better C++, it's that D was unsuitable for a lot of scenarios where C++ was used, because they decided to include GC in the language and it needed a runtime that supports it. This put D more in the C#-alternative camp than the C++-alternative one, and it was harder to port (I don't know if D still has working iOS or Android ports; it didn't have them a few years ago when I last checked). And as a C# alternative it's straight-out worse across the board (C# has quite strong native interop so D doesn't really win much, and the tooling ecosystem and support are levels above).
If someone came up with something à la D (strong C/C++ interop, fast compile times, good metaprogramming, modules, package manager) without the GC and LLVM-based (so it's super portable out of the box), I'm sure it would gain traction. The problem is that's a huge undertaking and I don't see who would be motivated to fund such a thing.
Rust exists because it solves a real problem, and places like this BT stack seem like a perfect use case due to the security aspects - but the security aspect also adds a lot of complexity, and there are domains that really don't care about it - they just want sane modules, modern package management, and fast compile times.
Go has its own niche in application/network programming.
Seems like modern system languages get purpose built and general purpose ones are higher up the stack.
This happened recently (in D's timeframe) and is still a hack that basically creates a different language that's a subset of D - which is the dirtiest way to fix an initial design mistake. If D had come out as betterC from the get-go, I guarantee you the adoption would have been much better; this way you're stuck with the legacy decisions of a language that was never really that popular to begin with (unlike the C/C++ story).
Also, back when it launched, LLVM wasn't really a thing (GCC was probably the closest thing to a portable backend, but it wasn't nearly as clear-cut, especially with the licensing and all that anti-extensibility attitude), and D having its own custom backend was also an issue.
I applaud the effort, but at this point I think it will never get mainstream popularity like Rust. I'm sure it's useful to people, but it had its time in the spotlight, and I don't see where the momentum shift would come from.
> It's not better-enough to motivate switching away from C in places where C is still used
I suspect zig will eventually fill this niche. Proper arrays and strings and better compile time execution support while still being a small, simple, explicit language are quite significant improvements on C.
> Zig isn't actually a small or simple language, after all, that's just marketing bullshit. Manual, mostly unassisted memory management isn't simple, and compile time evaluation & reflection isn't small. It keeps making claims about being simpler because it doesn't have macros or metaprogramming, but that's incredibly deceptive at best since it does have compile-time code generation (aka, metaprogramming).
As VP of marketing bullshit, I recommend you double-check your source of information, as we never claimed that Zig has no metaprogramming; in fact, comptime is mentioned on the front page of the official Zig website, and the code sample to the right of it shows comptime metaprogramming too.
That said, if you need something stable today, then Rust is undoubtedly the better choice.
I think Zig makes quite a bit of sense for microcontrollers, where you need really precise control of things, where you avoid lots of bugs simply by having all memory statically allocated, and where you need to do a lot of unsafe bit twiddling anyway to interact with hardware.
Zig won't disallow any correct programs, Zig would compile faster, and Zig does appear to be simpler, even though in principle you could come across specific Zig programs that aren't simple due to doing crazy metaprogramming.
If I imagine a world where Zig is 1.0, and has the same tooling/ecosystem as Rust, and I want to make a single player game from scratch, I would probably pick Zig over Rust, and Zig over C or C++.
I think Zig is really interesting and has a lot of great ideas, some of which I hope Rust takes inspiration from. They've also done some difficult engineering work to make parts of the development experience feel amazing, like bundling clang and libc implementations for a variety of platforms so that cross-compiling works out-of-the-box. The Zig cross-compiling story is the best I've ever seen of any language, bar none.
That being said, I do think Rust's memory safety story is a game-changer for systems programming. We seem to be in a programming language boom, with lots of new and interesting languages being developed. I hope some of them iterate on the concepts that Rust has developed so we can do even better in the future! I don't think anyone involved in Rust would claim that it's the best we can do, or it solves every problem perfectly.
Zig's lack of operator overloading means math code will be illegible gibberish, which kills any possible interest I had in it.
> Why would you need operator overloading in a low-level language?
Have you ever heard of matrices and vectors? You need them in a lot of DSP (digital signal processing) applications, like audio and video filters, or in 3D graphics. Being able to write
    x = m * y

instead of

    x = m.vector_mult(y)

makes life so much more pleasant. I use operator overloading in C++ all the time. You seem to be looking at it from a data science perspective, but other fields use math also, such as gamedev.
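For what it's worth, Rust takes a middle road here: operators can be overloaded, but only by implementing a well-known trait, so the call site stays readable and the definition stays discoverable. A minimal sketch with a made-up vector type:

    use std::ops::Mul;

    #[derive(Debug, Clone, Copy)]
    struct Vec2 {
        x: f64,
        y: f64,
    }

    // Overload `*` as scalar multiplication via the Mul trait.
    impl Mul<f64> for Vec2 {
        type Output = Vec2;
        fn mul(self, s: f64) -> Vec2 {
            Vec2 { x: self.x * s, y: self.y * s }
        }
    }

    fn main() {
        let v = Vec2 { x: 1.0, y: 2.0 };
        let w = v * 3.0; // reads like math, desugars to v.mul(3.0)
        println!("{:?}", w);
    }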
How is adding `Duration` to `DateTime` related to Jupyter notebooks? I use that much more often than element-wise matrix multiplication.
I share your view partially. As you said, code generation is difficult and it can get messy pretty fast, but this is the beauty of Zig, you have these features tightly packed, there is a capped number of ways for doing things. In Zig you learn how to use the wand properly, instead of being distracted by trying different sizes and colors of wands. While many languages focus on the breadth of ways of doing things, Zig is all about the depth of a small set.
> So what do you do with something that is still useful, possibly needed, but missing? You reinvent it.
Yup, see for example java.lang.Comparable which is basically just the standard library going "yeah the language screwed up, here's your operator overloading"
I've been using Python since it was version 1.4.
Back then the main draw of Python was supposed to be the "one way of doing things" - Python originally started as a teaching language.
And look at Python today - there's what, five different "standard" ways of packaging libraries? (And why is "packaging libraries" even a thing?) Instead of "batteries included" we get at least four different ways of doing every common task: the stdlib Python 2 way, the stdlib Python 3 way, the "standard" community third-party synchronous library and the "standard" community async one.
This is just how it always is. Every language starts with the goal of being small, easy to understand and beautifully composeable.
The cruft builds over time because people eventually want it and because none of it ever really goes away due to backwards compatibility.
I think it's best to make peace with this fact, learn to live with the cruft and accept existing languages rather than switching your entire stack every three years trying to chase an unobtainable dragon.
I think Zig could end up being much more popular for domains where Rust's safety and correctness features are less important. Something like a Bluetooth stack is exactly where Rust is ideal though, imho. Same for crypto libs.
I believe the number of domains which benefit from Rust's safety features combined with its runtime performance is vast: IoT, hardware drivers, autonomous systems, embedded systems (cars, drones), infrastructure, etc. I would choose Rust in these scenarios if I have the choice.
I understand that the language has a steeper learning curve, but it's an upfront cost compared to C (or Zig?), where you have to put in even more effort later on chasing the same bugs which Rust could've protected you from.
I don’t know Zig well enough so I’m not arguing against it. It’s just what I think about being safe vs being easier to learn.
But I see your point. At the end of the day, the growth of the language happens almost organically and might not follow the logic I put forward.
I think of Zig as a quarter step between C and Rust. You don't get memory safety, but you DO get better handling of undefined behavior, and option types, and better protection from array bounds overruns.
So like a single player video game, it might be an easier overall choice, in a hypothetical world where ecosystems are similarly fleshed out.
Common Lisp had itself and its community to blame. Their dependency managers were decades behind other scripting languages. Poor defaults for build tooling. Even now building a statically-linked binary is an exercise in frustration.
Despite being an ANSI-specified language, the most popular libraries focus only on SBCL. CL is like Lua, where most implementations never achieve 100% compatibility with each other. The lack of Emacs/Vim alternatives demands that beginners adhere to their dogma. Adhering to their cargo cult might be reasonable if they were the dominant language and culture. But they are not. Software engineering classes in university teach how to make Java AbstractSingletonProxyFactoryBeans first and Caml/Lisp much later. Common Lisp had it coming when it had its lunch eaten by Clojure. They were an old irrelevant relic that sat in their ivory tower and refused to improve or confront the status quo beyond empty words.
And it is not like the Common Lisp community lacks resources. They claim their language is used at Google via ITA Software and in GOFAI through Grammarly, and even at the bleeding edge of computing through Rigetti. Then where are all the maintained, up-to-date tooling and libraries? Do a quick search and almost everything is unmaintained or half dead, with the usual generic excuse being "we are ANSI-specified, libraries from twenty years ago will work perfectly fine".
HN is an echo chamber for the greatness of Lisp where every commenter would worship at its church before going back to coding JS/Python/Java on Monday.
"We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."
- Guy Steele, Java spec co-author
You forgot Scheme.
Rust isn't that hard to learn. I teach it to college sophomores who only know Java, and within 2-3 months I have them writing parsers and interpreters in Rust. In fact these students are requesting our courses to be taught in Rust, and have never heard of Zig.
I think that while the language is a little complicated, this is tempered by how nice the tooling is. I consider the borrow checker to be my TA, as it actually helps the student write code that is structured better. When they go on to write C and C++ in later courses, their code is actually more memory-safe due to their habits having been shaped by Rust.
> this is tempered by how nice the tooling is.
It's fairly mixed. Compiler warnings and errors are great. IDE integration is improving. CPU profiling with perf/hotspot is fine, albeit memory-hungry. Debugging and memory profiling is still bad compared to java.
Heh, figures that the only person with actual experience systematically teaching the language, is downvoted here.
I'd imagine that given another language (e.g. Python), they'd learn a lot faster and be a lot more productive throughout the course.
That's nice; at UPB in Bucharest we use C for the OS course and Go/Java for the distributed systems one, though the DS one involves heavy theory/math and less coding, and implementations in Go/Java are a big bonus. I don't know where you teach, but our students would probably be confused by Rust, and their time would be spent elsewhere instead of being focused on the actual course content.
> I do wonder if Rust is easier or harder than other comparable languages like C / C++ when the person has no prior knowledge of programming.
I can answer this because I previously taught the course in C and C++. The students supposedly have some knowledge of Java but they seem to always have forgotten all of it when they reach me.
Students learning C use all of the footguns you can imagine. The biggest problem for them is writing code that segfaults, and their code always segfaults. It's not so much a problem that it does, but that there is 0 useful feedback as to why the segfault occurred and where they could potentially look to fix the problem. The bigger issue for them is memory management. They frequently walk into the use after free, double free, and memory leak walls. They also have significant trouble with N-dimensional pointer arrays. They have trouble allocating the appropriate memory for these arrays and can't seem to visualize the layout of memory.
C++ is a little better because they have experience with Java, but they are frequently mystified by constructors/destructors and how to manage memory manually (they only have experience with a GC), and template programming is always an issue. But they still run into the same issue with segfaults.
Rust makes all of this go away. They don't have to manage their memory manually, and they don't encounter a single segfault. Ever. We are able to just sweep all of those problems under the rug and move on to actual content. With C/C++ we focused a lot on the language with the hope that we could eventually use it as a tool; with Rust we use the language as a tool and focus on the wider world of applications for which that tool is useful.
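To make that concrete, here is the kind of minimal example I mean: a use-after-free pattern that would segfault (or silently corrupt memory) in C simply does not compile in Rust.

    fn main() {
        let r;
        {
            let s = String::from("hello");
            r = &s;
            println!("{}", r); // fine: `s` is still alive here
        } // `s` is dropped here

        // Using `r` past this point would be a use-after-free in C.
        // In Rust, uncommenting the next line fails to compile:
        // println!("{}", r); // error[E0597]: `s` does not live long enough
    }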
The Rust book as currently written assumes that the user has some programming experience. That's mainly b/c there hasn't been much of any experimentation in teaching Rust as a first time programming language. Although obviously it's been done for C++ and even C, so it ought to be quite doable, if perhaps with a bit of a "learn programming the hard way" style.
I think that wholly depends on the context. Python is a great teaching language IMHO if the goal is to teach how to get things done. It's not a good language if you're teaching how things do work under the hood.
To me it's the difference between programming and computer science.
I don't think so; too many heavy hitters are adopting Rust, and Rust has the advantage in adoption across the board. To beat Rust you have to be at least 2-10x as desirable as Rust.
Only if they fix the language to cover use after free errors.
I don't really consider the popularity of a thing on HN as a good measure of its real-world popularity.
Take a look at the GitHub Octoverse https://octoverse.github.com/
Dlang has better metaprogramming capabilities and compiles extremely fast. https://forum.dlang.org
Zig isn’t quite production ready.
And it doesn't have memory safety. Zig is really fun and it is excellent for small wasm modules, but until it gets memory safety it will never be a Rust alternative.
More on the missing memory safety of Zig: https://scattered-thoughts.net/writing/how-safe-is-zig/
Discussion from 9 days ago: https://news.ycombinator.com/item?id=26537693
Thanks for the links. I donate a pittance to Zig, I really want to see where CTFE goes. If nothing else, Zig pushes the state of the art and every time a new system does, it sets the lower bound (hopefully) for the future.
Safety is a matter of degree, not an absolute. Unsafe Rust isn't safe. Not to mention, safe Zig isn't unsafe. And, if you want a lot of memory safety -- more than Rust in some ways -- you can use something like JavaScript.
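The unsafe escape hatch marks exactly where the compiler's guarantees are suspended and the programmer's promises take over. A tiny (sound) example:

    fn main() {
        let x: u32 = 10;
        let p = &x as *const u32;
        // Dereferencing a raw pointer requires `unsafe`: the compiler
        // no longer proves validity here; the programmer vouches for it.
        let y = unsafe { *p };
        assert_eq!(y, 10);
    }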
C++ doesn't have "memory safety" and it is a Rust alternative. Stop being pretentious; not every language has to have Rust's borrow checker or whatever it's called.
C backwards compatibility doesn't help, but C++ definitely does have the opt-in tooling to make it way safer than using C, although it isn't Ada-safe. IDE improvements for live static analysis do help.
I agree, but nothing short of a borrow checker is considered memory safety to Rust advocates, and they probably think that we segfault at least 10 times an hour with use-after-frees, double-frees, and buffer overflows.
I don't think Zig is meant to replace C++. It's a cool language on its own.
By the way, you've been shadowbanned if you haven't noticed it yet.
I'm always curious why bluetooth is such a terrible piece of technology. Did anyone write blogposts analyzing what went wrong?
Enormous spec crafted by committee. It's anything and everything to everyone, which means implementing the entire thing is a _serious_ undertaking. Add to that it's the kind of feature that's a cost center--people expect wireless stuff like headphones, etc. to just work and be there, it's not a glitzy feature that sells products. So there's zero incentive to innovate or spend time and money making it better. Hardware makers want cheaper chips, skimping on the implementation and software side helps make that happen.
From what I know from having worked at Nokia, it's simply the result of design by committee where the committee was made up of mutually very hostile hardware (mostly) companies not particularly good at software (that's how Nokia failed in a nutshell), telcos, chipset manufacturers, hardware manufacturers, etc. And I mean hostile as in looking to squeeze each other for patent licensing, competing for the same customers, and suppliers. All this happened in an ecosystem that also produced such monstrosities as gprs, 3G, 4G, etc. Ericsson and Nokia were on top of the market when they created bluetooth and had a relatively big vote in these committees.
Each of those standards was burdened with lots of little features to cater to the needs (perceived or real) of each of the committee members. It's a very similar dynamic to what happened to Bluetooth. A lot of 3G stuff never really got any traction, especially once Apple and Google decided that IP connectivity was the only thing they needed from 3G/4G modems and unceremoniously declined to even bother to support such things as video calls over 3G. Apple did Facetime instead and in the process also strangled SMS and cut out the middlemen (operators) from the revenue. Google was a bit slower, but on Android a lot of 3G features were never really implemented, as they were mostly redundant if you had a working internet connection, fully featured SDKs, and a modern web browser.
It's the same for a lot of early bluetooth features. Lots of stuff you can do with it; lots of vendors with half broken implementations with lots of bugs; decades of workarounds for all sorts of widely used buggy implementations; etc. It kind of works but not great and making it work great in the presence of all those not so great products is kind of hard.
Just a long way of saying that bluetooth is so convoluted and complicated is because the people that built it needed it to be that way more badly than they needed for it to be easy to implement (including by others). At this point it's so entrenched that nothing else seems to be able to displace it. I'm not even sure of people actively putting time and resources in even trying to do that. I guess you could but your product just wouldn't work with any phone or laptop in the market. Which if you make e.g. headphones is kind of a non-starter. It's Bluetooth or nothing. I wouldn't be surprised if Apple considered this at some point and then ended up not doing it. They have a history of taking good ideas and then creating proprietary but functional implementations of those ideas.
> Apple did Facetime instead and in the process also strangled SMS and cut out the middlemen
I'm personally really glad for this decision. iMessage is many times better than SMS. SMS security is a nightmare by design.
I just wish there was something better between Google and Apple, like a universal iMessage.
I really wish Apple would release iMessage for Android. I know they never will, and I know I can just use SMS / MMS / RCS or WhatsApp or Signal or whatever. I'm just really tired of the default app for iOS not being able to interop with Android. Every time someone from my wife's family sends her a video (iMessage) and it's low quality on her phone (Pixel) and she asks them to email it and they say "What is wrong with your phone", a little piece of me dies inside.
I think this is a pretty interesting summary: https://www.reddit.com/r/NoStupidQuestions/comments/mc13t4/c...
Some of it is the constraints with the main use cases: low power, (relatively) high bandwidth, the need to pair devices without screens. Comparing this use case to something like wifi is unfair.
A lot of it is due to backwards compatibility. Bluetooth isn't simply bluetooth. There are different versions, different profiles, different codecs, and even different optional features.
Have a look at the matrix: https://www.rtings.com/headphones/learn/bluetooth-versions-c...
The two devices being paired have to figure out what version/profile/codec to use to talk to each other, and gracefully fall back to the lowest mutually supported featureset. This is a really hard problem, and the devices don't always handle it well.
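A toy sketch of that negotiation (made-up names, not from any real stack) shows the basic shape; now imagine it multiplied across versions, profiles, and optional features, on both ends, with buggy peers:

    // Codecs ordered from most to least preferred by the source device.
    // SBC is the mandatory baseline every A2DP sink must support.
    #[derive(Debug, PartialEq, Clone, Copy)]
    enum Codec {
        HighQuality, // hypothetical optional codec
        MidTier,     // hypothetical optional codec
        Sbc,
    }

    // Pick the first codec in our preference order the peer also supports.
    fn negotiate(ours: &[Codec], theirs: &[Codec]) -> Codec {
        ours.iter()
            .copied()
            .find(|c| theirs.contains(c))
            .unwrap_or(Codec::Sbc) // graceful fallback to the baseline
    }

    fn main() {
        let phone = [Codec::HighQuality, Codec::MidTier, Codec::Sbc];
        let headset = [Codec::MidTier, Codec::Sbc];
        assert_eq!(negotiate(&phone, &headset), Codec::MidTier);
    }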
It's pretty simple really. It tries to do a lot of complicated things, and it's underspecified so it's possible to have compliant devices that behave wildly differently between vendors. It also rarely breaks compatibility so there's a lot of cruft for ancient devices.
This is the exact reason it is challenging. There is a lot of variance in how each manufacturer implements it. You might compare it to the browser wars of old - you've got many different players adding various extensions in different ways etc. The actual radio technology and low power consumption are phenomenal, the problems arise mostly around protocol implementation oddities.
Any analysis would be less than insightful unless it comes from someone who sat in the meetings and helped write the standard.
FWIW Bluetooth isn't "terrible." It's pretty remarkable we can get all sorts of devices to communicate with each other wirelessly, at low power, and with pretty decent bandwidth. And now you can buy a Bluetooth stack on a chip from a variety of vendors.
The bigger issue with Bluetooth is that failure conditions are mostly an afterthought for device manufacturers, and Bluetooth is becoming a sought-after feature in environments less tolerant of failure, like automotive and medical devices.
If you go into the parent directory, what appears to be the main Gabeldorsche directory, most of the implementation appears to be written in C++. Is the project being slowly rewritten in Rust? Or are only parts of it written in Rust?
I'm not finding much Rust Bluetooth stack code in the link. There are only about 4K lines of Rust in here.
Am I missing something? Or is the headline exaggerated?
The headline is technically correct. "rewrite with Rust" means there is Rust inside, but it doesn't mean that it's all (or mostly) Rust. But given the way one commonly understands the headline, it's definitely misleading.
The parent directory is the Android supporting stuff, like the Bluetooth HAL and the JNI interfaces. The rest is the old Bluedroid/Fluoride stuff.
Gah. Just two days ago I finished what must have been our fourth rewrite of Kotlin code to interface with Android BLE. It's a complete trial-by-fire experience. Various Medium and other blogs on the net are a morass of "find the true information amongst the disinformation". If I had a nickel for every "just put a delay here" so-called solution.
We have two apps: one that communicates over many variously configured characteristics, and another that uses fewer characteristics but pushes/receives just as fast as I can get it to go, in big bursts.
The edge cases around connect/disconnect events are the most frustrating and the most difficult to reliably debug and gain confidence that your implementation is robust. Oh, and don't forget that just because it works on your Samsung doesn't mean it works on your Moto.
Assuming this new implementation is indeed much better (and not just swapping one pile of surprises for a new and shiny, but different, pile of surprises) my hat is off to the folks behind this. You get a big fat atta-whatever for making the world a better place, even if I wish it had happened 4 years ago.
> Oh, and don't forget that just because it works on your Samsung, doesn't mean it works on your Moto.
And just because it works on a new Samsung doesn't mean it works on a two-generations-old one. I had to do two projects recently developing cross-platform mobile apps; one had to interface with the WiFi stack -- holy shit, the deprecated APIs that only work on Android 10, the legacy ones that don't work but are the only way to do it on Android <10, the cross-device inconsistency, the incorrect documentation (one thing in the docs, another in the source code), etc. etc.
To be fair, iOS doesn't expose a lot of that functionality to user space apps (without special certs) but I prefer that to Android where it's technically possible but practically impossible because of the insane support matrix - it just wastes time.
I'm not doing any mobile development from now on - the entire process is just riddled with busywork and working around crap APIs, people used to complain about having to support IE, mobile fragmentation is probably 10x worse.
I went through the same thing recently. The best reference I've ever seen on Android Bluetooth is this series of posts:
https://medium.com/@martijn.van.welie/making-android-ble-wor...
Wait, so can Rust generally replace C++ code in most projects?
Has anyone here had success with a partial migration to Rust?
>Wait, so can Rust generally replace C++ code in most projects?
In principle Rust could replace every line of C++ code in the world. The questions of how often it would be a good idea to do so, and how practical it would be, are harder to say. It is promising that this Bluetooth stack only needed 4 lines of unsafe though!
Since the interop is zero-overhead, doing piecewise migrations is certainly possible, as has been going on with Firefox and curl, and as is being discussed for Linux as well. You do complicate your build system, and there is a non-trivial amount of work to stitch the two languages together.
I thought curl was expressly not migrating any code to Rust?
Well there's this: https://daniel.haxx.se/blog/2020/10/09/rust-in-curl-with-hyp...
Yes, but more recently there has been a change of heart. I don't think it implies for now that there is any aim to replace all of it with Rust though.
I think the answer is yes. Most C++ projects would be better served by Rust. However there are many caveats:
- I think that C++ devs are still more numerous than Rust devs.
- There are many excellent C++ libraries that don't yet have great Rust bindings. Furthermore it is unlikely that template-heavy libraries will ever be easy to use from Rust.
- C++ is supported on more platforms.
- C++ is more powerful (particularly templates). You rarely need more power than what is available in Rust, but if for whatever reason your project would really benefit from heavy metaprogramming, C++ will be better (I think this case is rare). Rust is also catching up, but the language development, especially around generics, is fairly slow (which is probably a good thing).
Meta-programming needs in Rust are addressed by macros. There's not really anything missing there compared to what C++ does via templates.
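For a flavor of what that looks like, here's a minimal declarative macro that stamps out boilerplate at compile time, roughly the niche templates often fill in C++ (a contrived example of mine, not from any library):

    // Generate a typed getter for each listed field.
    macro_rules! getters {
        ($t:ty, $( $field:ident : $fty:ty ),* ) => {
            impl $t {
                $(
                    fn $field(&self) -> $fty {
                        self.$field
                    }
                )*
            }
        };
    }

    struct Point {
        x: i32,
        y: i32,
    }

    getters!(Point, x: i32, y: i32);

    fn main() {
        let p = Point { x: 1, y: 2 };
        println!("{} {}", p.x(), p.y());
    }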
IMHO, you're a bit too far in the other direction; there are areas where we've been working on actively improving, and there are still some things that C++ folks really miss in Rust. Doesn't mean we'll add all of them, of course, but there is desire for more, for good reasons.
There are generics and dependent types. But these are still gaining features and not nearly as powerful as templates and SFINAE. (but way easier to understand)
Macros are an option but don't have access to the same type information so often they solve different problems.
> Has anyone here had success with a partial to Rust migration.
Firefox have.
It is very hard to answer such a general question with a definitive answer, but Rust does want to be viable for the same sorts of things in which C++ is viable. As always, your mileage may vary.
So to be clear, I can't just plop a Rust class into a C++ project?
You can't. However there are options.
- Rust has strong support for C ABI much like C++. So you can communicate between Rust and C++ via a C ABI.
- There are projects like https://cxx.rs/ to provide higher-level bindings between the two languages.
However I suspect that template-heavy/generic-heavy code will never be well supported. This is usually not an issue for the types of things that we are trying to bind.
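As a minimal sketch of the C-ABI route (the function name and signature are made up for illustration): the Rust side exports an unmangled extern "C" function from a staticlib/cdylib, and the C++ side declares it like any C function and links against it.

    // Rust side: expose a function with a C-compatible ABI.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // SAFETY: the caller must pass a valid pointer/length pair.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, b| acc.wrapping_add(*b as u32))
    }

    // C++ side (in a .cpp file, shown here as a comment):
    //   extern "C" uint32_t checksum(const uint8_t* data, size_t len);
    //   uint32_t c = checksum(buf.data(), buf.size());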
Rust does not have classes, strictly speaking, though you can define methods on structs.
https://crates.io/crates/cxx is the simplest way to do an integration. It is slightly more work than "just plop in" but it's not incredibly difficult. It's harder than mixing C and C++ together, but then again, almost no pairings of languages are that easy.
The issue is not "structs" v. "classes" per se, it's things like inheritance, vtables and RTTI (also other C++ features like templates and exceptions), that need special ABI support in C++ for which there is no Rust-side equivalent. (meanwhile Rust traits are quite different from anything in C++, although they're used similarly)
I mean, C++ has the "class" and "struct" keywords for a reason. (they are very similar, Rust structs are closer to "struct" than "class" though) There are a lot more things going on with C++ classes than syntax sugar for functions that have access to "this."
Also, while not in C++, in many languages, classes imply heap allocation, where structs do not.
I mean, Mozilla migrated parts of Firefox to Rust, that's the big one. I hear it might start being included in the Linux kernel soon.
It's not entirely true though, most of the code base is still C++ and there is no plan to migrate the rest.
Isn't that exactly what makes it successful partial migration?
Some thoughts:
1. Firefox is a huge codebase. 10% of that is still quite a bit.
2. Some highly complex core parts of firefox such as the rendering engine are at least partly written in Rust.
3. The bits written in Rust are not all isolated from the bits written in C++. In places they intertwine at a function level of granularity.
Rust originated from Mozilla...
It's wise to use Rust in security-critical projects. See Google's Fuchsia: they follow a hybrid approach where the critical stuff is written in Rust and the rest in C++.
Since Android is Linux, will this become available to all Linux users?
You have a bit of confusion here. Android is based on the Linux kernel, and changes to that are directly reflected to both Linux and Android users; it does not go in the other direction. We will see if this code ends up in other uses as well.
The kernel is a forked Linux, compilable with Clang; it has SELinux and seccomp enabled by default.
Everything else, including drivers post Treble, doesn't have anything to do with Linux.
I don't know much about Bluetooth in Linux, but I had the same thought. It looks like the license on this is Apache, which I would guess could be a problem.
Very unlikely, the Android driver infrastructure is quite different from Linux.
Probably not.
Wow, this is great! Not just the memory safety aspect of it, but also that it's written with testability in mind.
I have no opinion on this, but a data point:
I had been sticking with an old USB wireless headset for a long time because it Just Worked (tm), but even with a new battery the battery life wasn't really up to modern work-from-home use. So I chanced it with a Bluetooth noise-cancelling headset.
My first experience using Bluetooth under Linux in probably a decade. It has been super reliable. I used the CLI tool "bluetoothctl"; I didn't have a button to click in my i3 setup.
Not saying this rewrite isn't needed, to be clear. But I've been surprised how reliable it's been.
I've always been annoyed at the cross-platform story for Bluetooth. GATT is one of my favorite protocols because it is so simple, but writing simple code against this simple protocol is _not_ portable:
iOS and macOS have CoreBluetooth, Linux has BlueZ, Windows has Windows.Devices.Bluetooth and Android has android.bluetooth.
I've seen a few projects trying to fix this, like https://github.com/deviceplug/btleplug, and I hope one of them becomes production ready.
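The shape such a portability layer usually takes is a thin trait over the handful of GATT verbs, with one backend per platform API. A purely hypothetical sketch (none of these names come from btleplug or any real crate):

    // UUIDs shortened to strings for brevity in this sketch.
    type Uuid = String;

    // The small set of GATT client operations every platform exposes somehow.
    trait GattClient {
        fn connect(&mut self, device_id: &str) -> Result<(), String>;
        fn read(&mut self, characteristic: &Uuid) -> Result<Vec<u8>, String>;
        fn write(&mut self, characteristic: &Uuid, value: &[u8]) -> Result<(), String>;
        fn subscribe(&mut self, characteristic: &Uuid) -> Result<(), String>;
    }

    // One stub implementation per OS backend: CoreBluetooth, BlueZ,
    // Windows.Devices.Bluetooth, android.bluetooth...
    struct BlueZClient;

    impl GattClient for BlueZClient {
        fn connect(&mut self, _device_id: &str) -> Result<(), String> { Ok(()) }
        fn read(&mut self, _c: &Uuid) -> Result<Vec<u8>, String> { Ok(vec![]) }
        fn write(&mut self, _c: &Uuid, _v: &[u8]) -> Result<(), String> { Ok(()) }
        fn subscribe(&mut self, _c: &Uuid) -> Result<(), String> { Ok(()) }
    }

    fn main() {
        let mut client = BlueZClient;
        client.connect("AA:BB:CC:DD:EE:FF").unwrap();
    }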
From my experience the Android Bluetooth experience is much better than the Linux Bluetooth experience. Even my ancient Samsung S3 works better than a recently bought Bluetooth dongle for my desktop, and I only bought the new dongle because the experience with the onboard Bluetooth was so horrible.
Sometimes I wonder how a technology that is so common on modern devices is still so unreliable.
Will this allow higher bitrates for audio without the arbitrary quality restrictions of the current stack? A developer has even implemented a patch for the old one, but never got a response from the dev team: https://itnan.ru/post.php?c=1&p=456476
Why not BlueZ, which everyone else uses? Android feels like the land of Not Invented Here sometimes, to me. There might be good reasons, but often it feels like it's just Google trying to ensure Android is as incompatible / different as possible from everything else that runs Linux.
It doesn't help that almost none of the ChromeOS / Android subsystems or tools have made it to any mainstream / regular Linux. They remain Google-only products.
I wish this company doing so much Linux work would be part of some broader community. There's a reciprocal question, of how hard it would be, of why other people haven't gone in and, say, picked out some of the ChromeOS containerization tools: how much effort has the world made to use the offerings in these mono-repos? Community takes two. But it still feels incredibly weird, so against-the-grain, to see such an active but non-participatory Linux user in the world.
Background chit-chat aside though, technically what (if anything) makes BlueZ unsuitable for Android? Why is Google on their fourth Bluetooth stack (NewBlue, BlueDroid, the Fuchsia one, now this)?
GPL is the problem. Hardware vendors won't touch it with a 10k foot pole because of the requirement to redistribute patches.
There's a history of wanting BSD licenses at Android's inception. If the BSD distributions hadn't run into problems relating to legal battles at the time, Android would be built on Mach with a BSD userland rather than Linux. Additionally, there was more vendor support and more drivers for the Linux kernel than for Mach. Sadly, for the fledgling enterprise that was Android, it was better to start from Linux, use Apache/BSD-style licensing, and write their own userland.
> If the BSD distributions hadn't run into problems relating to legal battles at the time, Android would be built on Mach with a BSD userland rather than Linux.
What legal battles were there? Wikipedia puts Android being started in 2003 [0], and the only legal battle I recall with BSD was settled in 1994 [1].
[0] https://en.wikipedia.org/wiki/Android_version_history
[1] https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc.....
Android's development started long before Android became a company officially, and there was more fallout from those BSD lawsuits than what is officially reported in Wikipedia -- especially since the settlement agreement included that those that agreed must keep silent.
Also, many hardware vendors use their stack to differentiate themselves from competitors, since in the end their chips tend to 'do' the same things. That stack keeps at bay people who would just copy their design wholesale, undercut their margin, and use their drivers. And even when they are willing, they have often included some lib they bought from someone else. They may even have the full code to use, and have changed it as needed. But that third party is usually some consulting group, and guess what one of the very few things they sell is. It is a huge mess.
> Sadly, for the fledgling enterprise that was Android, it was better to start from Linux, use Apache/BSD-style licensing, and write their own userland.
Curious about your take on this. Why do you think it would be better if Android were a Mach kernel + BSD userland, akin to Mac/iOS, instead of Linux?
Android used BlueZ in the earlier versions, but they changed to Bluedroid for reasons I don't know. I believe it was (in part?) developed by Broadcom and thus was probably better supported (licensing probably played a huge part, too, since Bluedroid uses a more permissive license).
Part of the reason was that Broadcom could directly support the connectivity guys. We just didn't realize how awful it was, but given that the Android team didn't have that big of a connectivity team at the time, it seemed like a good idea.
The Glass connectivity team (of which Zach and I were a part) actually had a few more engineers than the main Android team did, and given that connectivity was absolutely critical for our device, we had the strength to stand up to this mess and plumb its depths; Zach was a key driver in most of the rework. Lots of our changes made it into mainline, and when Glass ended, I left Google and Zach kept up the fight, moving closer to Android proper.
BlueZ, btw, has its own problems all throughout the stack, and unfortunately suffers from political issues w.r.t. the hardware vendors.
I don't have the foggiest about the political issues. I was surprised, though, reading the copyright history in the BlueZ README[1]: from the beginning in 2000 to 2001, then a Qualcomm email address for one Max Krasnyansky until 2003. Practically ancient history, but still a very interesting detail to me that I would never have guessed.
I really, really, really appreciate you writing in. It feels like there is so little to go on, so little available to understand the weird twists & turns of how the world, the software world especially, developed. A little bit of background & insight is so refreshing to hear. Thanks again!
[1] https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/READ...
Does this work relate in any way to ChromeOS's recent, abortive, new Bluetooth stack?
I get what you're saying, but a project as big as Chrome OS isn't really suited for a lot of outside cooperation and communication. More often than not, you want to ship a feature as quickly as possible, ship it to canary, then dev, beta and stable at some point. Community run projects require a lot more back and forth and you give up some control over your schedule if you involve them, if you want your stuff to get merged.
Firecracker, however, is based on crosvm, Chrome OS's virtual machine monitor, and lives on as an OSS project run by AWS.
BlueZ these days is pretty tightly coupled to modern desktop Linux. IIRC the only official way to talk to BlueZ is through D-Bus (there are still a couple of legacy ways through shared libraries, though). I don't think Android seriously uses D-Bus, so that would be a pretty big issue.
And as a person running the latest mainline kernel on their daily driver laptop--I would not want bluez running the wireless peripherals on my phone. I can barely keep a wireless keyboard attached and working on this thing... in 2021.
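For concreteness, "talking to BlueZ through D-Bus" from Rust looks roughly like the sketch below, using the dbus crate. org.bluez and Adapter1 are BlueZ's real D-Bus names; the rest (an hci0 adapter existing, bluetoothd running) are assumptions, so treat this as a sketch rather than production code:

    use std::time::Duration;
    use dbus::blocking::Connection;

    fn main() -> Result<(), dbus::Error> {
        // bluetoothd exposes org.bluez.Adapter1 / Device1 objects
        // on the system bus under /org/bluez.
        let conn = Connection::new_system()?;
        let adapter = conn.with_proxy(
            "org.bluez",
            "/org/bluez/hci0", // assumes a first adapter exists
            Duration::from_secs(5),
        );
        // Adapter1.StartDiscovery takes no arguments, returns nothing.
        let _: () = adapter.method_call("org.bluez.Adapter1", "StartDiscovery", ())?;
        Ok(())
    }

The D-Bus dependency is exactly the coupling problem for Android: this assumes a system bus daemon that Android simply doesn't run.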
Is this because of memory bugs/vulnerabilities that target the stack, or just because it was too old and junky?
Also, a strong point of Rust is just the way the types and the defaults are. Most people think the borrow checker is the only major thing, but Rust has these features:
* Plain Structs
* Rich enums (aka algebraic types), which lead to
* Replacements for null everywhere (Option, Result, the Default trait, the empty-value idiom)
* Immutability and functional style as preferred when sensible
* Consistency in APIs via traits (all conversions go through the Into/From traits, all iterables can .collect() into all containers, etc.)
and many things like this that make it very productive to build good APIs once you get the hang of it.
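A toy sketch of how a few of those pieces combine (my own illustration, nothing to do with the Android tree):

    use std::collections::BTreeMap;

    // A rich enum: each variant carries exactly the data it needs.
    enum Command {
        Connect { addr: String },
        Disconnect,
        SendData(Vec<u8>),
    }

    // Consistent conversions via From (Into comes for free).
    impl From<Vec<u8>> for Command {
        fn from(bytes: Vec<u8>) -> Self {
            Command::SendData(bytes)
        }
    }

    fn describe(cmd: &Command) -> &'static str {
        // Exhaustive match: add a variant and this fails to compile
        // until every use is handled. No null, no default fall-through.
        match cmd {
            Command::Connect { .. } => "connect",
            Command::Disconnect => "disconnect",
            Command::SendData(_) => "send",
        }
    }

    fn main() {
        let cmd: Command = vec![0x01, 0x02].into();
        println!("{}", describe(&cmd));

        // Option instead of null; .collect() into any container.
        let names = ["mouse", "keyboard"];
        let first: Option<&&str> = names.iter().next();
        let indexed: BTreeMap<usize, &str> = names.iter().copied().enumerate().collect();
        println!("{:?} {:?}", first, indexed);
    }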
Dunno what motivated them internally, but the bluetooth vulns that made news in Feb 2020 would have been prevented by safe Rust. See my comment from back then: https://news.ycombinator.com/item?id=22265572
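To make the claim concrete: the recurring pattern in those parser bugs is an attacker-controlled length field. A toy illustration (not the actual CVE code) of how safe Rust turns the out-of-bounds read into a recoverable None:

    // Parse a length-prefixed payload from an untrusted packet.
    // In C, trusting `len` walks off the end of the buffer; in safe
    // Rust, .get() is bounds-checked and just returns None.
    fn parse_payload(pkt: &[u8]) -> Option<&[u8]> {
        let len = *pkt.first()? as usize; // declared length
        pkt.get(1..1 + len)               // checked slice, no OOB read
    }

    fn main() {
        let malicious = [200u8, 1, 2, 3]; // claims 200 bytes, carries 3
        assert_eq!(parse_payload(&malicious), None);

        let honest = [3u8, 1, 2, 3];
        assert_eq!(parse_payload(&honest), Some(&[1u8, 2, 3][..]));
    }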
According to dtolnay, there are only 4 lines of unsafe Rust in the Rust component. It's a bit small at the moment, though, at about 4,000 lines; most of the code is still C++. Note that it's "with Rust" in the headline, not "in Rust".
Junkiness and vulnerabilities go hand in hand.
True that. Maybe I'll rephrase: was Rust specifically chosen to address memory concerns? I don't think it is too common in Android.
> I don't think it is too common in Android.
??? Android crashes little things left and right, and has leaks just for existing.
I did a quick look on androidxref and saw a lot more .rs files than I expected. I haven't worked in Android frameworks stuff since around 5.0. I knew not too long after I left that world they started adding some minimal Rust stuff, but it looks like it's grown a bit. Exciting. It's a perfect fit.
That's probably a major reason. But, Rust is also just a really nice language that's very pleasant to use.
*shudder* I'm thinking back to all of the nightmares I had trying to use Bluetooth (especially BLE) in the early Android 6 and 7 days. It was basically unusable because of _serious_ platform bugs and issues. I hope this time, with a new stack, it goes a little better.
Just found on my Note 10+ in Developer Options was a setting labeled "Enable Gabeldorsh - Enables the Bluetooth Gabeldorsh feature stack".
Just turned it on and will be reconnecting some things to see if it helps with some of the small issues I've always had.
> impl FacadeServiceManager
Rather Java-like. All we need now is a FacadeServiceManagerFactoryBuilder.
Good for security. Now we just need a protocol that actually works...
Realistically, how long, if at all, would it be before this filters down in updates? Only with Android 12?
After seeing this headline I installed rust and am reading the book.
Oh good I can't wait for the Android update which makes Bluetooth much much worse. Not because it's written in Rust, but because Android just keeps breaking stuff on my phones every time I get new updates.
> Why is gabeldorsche plural?
> Please see this informative video we've prepared[0].
Is this shipping in Android S?
It's shipped in Android 11, but in a disabled state.
On Pixel, developer options, bluetooth, enable "Gabeldorsh" if you want to live on the bleeding edge.
Huh, I just poked around and saw that option on my Pixel 3a.
On the one hand, it might be interesting to try it. On the other hand, at least with the one BT device I regularly use, the current stack works flawlessly...
What is Android S? I see no reference to this version of Android anywhere.
It's what Android 12, the forthcoming version, would have been called if they'd kept the letter-based codenames.
Android 12, whose developer previews have started.
Source: https://developer.android.com/about/versions/12/overview
Crar'lie*
Neat
nice
BT in Android is proper shitty. We need it updated as quickly as we can. The mic input for Android is unusable on most phones. To add to that, the sound quality is just bad on all Android devices.
Nice to see this.
Sadly the Android team, while taking these safety steps, keeps using unsafe C userspace for NDK APIs.
Security is only as good as the weakest link.
This is false, and a meme that continues to stall progress. Hardening the BT stack means that accidentally crappy or adversarial devices cannot use a buggy BT stack to pop a device remotely, from kernel mode.
This is a hugely welcome change. The threat model from an app using the NDK is much different than a drive-by wireless attack.
Defense in depth: put the focus on protocols and parsing, and the rest of our stacks will come in time.
The meme that stalls progress is the myth of the perfect C developer.
> Sadly the Android team, while taking these safety steps, keeps using unsafe C userspace for NDK APIs.
You phrase this as a negative but it's overwhelmingly a positive. Imagine how difficult it'd be to write an app using the NDK in Rust if the NDK had been C++ instead. The C ABI remains by far the most portable & common target. Everything can call it.
It is a bit hard to talk about safety when everything one has to play with is an API that allows for all the usual stuff C is known to cause.
Everything can call it, and everyone has to redo the safety work.
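Both halves of this exchange fit in a few lines of Rust: declaring a C-ABI function is trivial (that's the portability), but every binding has to rebuild a safe wrapper on top (that's the redone safety work). A minimal sketch against a C standard library function:

    use std::ffi::CString;
    use std::os::raw::c_char;

    extern "C" {
        // From the C standard library; anything that speaks the
        // C ABI can declare and call this.
        fn strlen(s: *const c_char) -> usize;
    }

    // The safety work every caller redoes: uphold the C contract
    // (valid, NUL-terminated pointer) behind a safe Rust API.
    fn safe_strlen(s: &str) -> usize {
        let c = CString::new(s).expect("no interior NUL bytes");
        unsafe { strlen(c.as_ptr()) }
    }

    fn main() {
        assert_eq!(safe_strlen("bluetooth"), 9);
    }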
> fighting Broadcom's old, god-awful bluetooth code
Correction: god-awful host side bluetooth code.
There is still the Bluetooth firmware residing on the BCMxxx chip (or Qualcomm chip) - >1MB of god-awfulerer closed-source code, half of it in ROM (with a limited number of available patch slots), full of bugs. You can see it crash from time to time in the kernel debug logs (and auto-restart itself).
On Glass, we actually went to Broadcom and had them on the ropes for fixing parts of their firmware. Sadly, we couldn't bring those fixes out of the closed source world of hardware, so it's still up to the system integrator to fight those battles...
Serious question: why doesn't Google build its own Bluetooth & BLE chips? Please put some competition on Broadcom and the like and either push them entirely out of the market (good riddance), or force them to step up their game.
The rest of your comment makes sense, but the cognitive dissonance of that sentence was so extreme I had to respond.
Yeah from the Android platform side it would be weird to build chips. For products like the Pixel phone though, that would be a great place to innovate. And realistically, Google needs to get into the custom chip game sooner rather than later... GCE needs to start competing with Amazon's Graviton ARM processors. The sooner you get the expertise and talent to churn out chips (and maybe even a fab or two?), the better. The global shortage for fab usage and chips could kind of force your hand soon.
> Prior to Android, all we had were closed source low powered feature phones and Blackberries.
Symbian was open source, ran on millions of smartphones - most of which had app stores and web browsers. Some of which had touchscreens, GPS, augmented reality features etc.
Don't get me wrong - Android has been brilliant. But let's not completely rewrite history, eh?
Google throws money at problems which don't generate revenue all the time. I feel like all it takes is someone inside Google with enough leverage to push it through without the thing having to make business sense
> Android has never been about driving the hardware narrative
Apple has been building its own hardware from the beginning, but still also uses Broadcom chips.
What a hopelessly naive perspective. Android has always been about ensuring the Google surveillance operation doesn't get shut out of mobile. The whole OS is designed to enable snooping on the user not just while in the browser, but at all times.
The fact that they've had to build actual hardware that functions at times, and includes things like driver stacks is purely incidental to the main mission. Try seeing how willing they are to add support for chipsets on phones that don't include Google Services.
> In fact I think with Bluetooth 4 the file transfer profile just establishes an adhoc WiFi network.
This was a feature of Bluetooth 3.0, but almost nothing ever used it. I was once at a big BT testing company and asked about it, and they had like one device that could do it (a crazy feature-packed HTC WinMo device I think).
And then Bluetooth 4.0 added BLE, and it seems like there hasn't been much development of classic BT since then.
Aren't they considered "essential patents" that must be made available at a reasonable price?
Care to expand? Is that why Software Defined Radio is still so much a niche and so expensive?
IMHO those dynamics are changing. Custom chips are becoming table stakes for new products. Look at Apple's M1 and W1, or Amazon's Graviton2 chips. All of those are core parts of products and services which would not be possible without the custom silicon. The pool of talent and resources to build these things is extremely small, and there are geopolitical issues putting ever more pressure and scarcity on them (i.e. China pivoting to reduce dependency on Western-designed chips and hoovering up as much chip design talent as possible). The TL;DR is that amazing new hardware and services need custom silicon, but custom silicon is only getting more difficult and more challenging to build in the near future.
Getting good yield/performance on the radio is non trivial. Aside from having domain experts you have a pretty significant investment in equipment to do the testing. Broadcom and other semis split that cost over many customers.
The nRF52s are great BLE chips; however, the full Bluetooth spec (Classic + BLE) is orders of magnitude more complex...
I've heard RF-anything is quite hard to get right. One of the reasons Intel's XMM modems were dumped is that they were just slower than Qualcomm's.
Even if some courageous developer there fixes the bugs and updates the firmware, how many end users would actually receive and apply the update? That's the problem Linux's LVFS[1] solves, but it's unfortunate that not all manufacturers support it.
I got an update for my half-a-decade-old Logitech 2.4 GHz receiver (nRF24L) for a wireless keyboard as soon as I plugged it in on Linux. I've used the same keyboard on a Mac, and the official Logitech software doesn't even detect the device properly, let alone update the receiver's firmware (no issues using the device, though).
[1] https://fwupd.org/
Do you have any more info about ROM patch slots? I have never heard of this before. I assume this is a small amount of r/w memory that is somehow overlaid over specific locations in the ROM?
Correct. It's a small table of (ROM address, patch data) entries.
The data is overlaid over the specified addresses at runtime. On some chips it's 8 bytes per entry instead of 4. On a typical Broadcom/Cypress chip you have 128 or 256 entries. By the time the chip is 2-3 years in the market and still getting firmware updates, ~98% of them are used by existing firmware, so there are only 5-10 free entries by the time the chip is considered "obsolete".
Case in point: the Broadcom/Cypress BCM43455 chip on the raspberry pi is almost out of patch entries. Broadcom have switched to their usual tactic of stalling for years on known, reproducible bug reports.
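For illustration, the mechanism being described is roughly a small lookup table consulted on ROM fetches. A hypothetical sketch; the real entry format lives in closed firmware, so every name and width here is an assumption:

    const PATCH_SLOTS: usize = 128; // 128 or 256 on typical parts

    #[derive(Clone, Copy)]
    struct PatchEntry {
        rom_addr: u32, // ROM address to override
        data: u32,     // replacement word (8 bytes on some chips)
    }

    struct PatchTable {
        entries: [Option<PatchEntry>; PATCH_SLOTS],
    }

    impl PatchTable {
        // On a ROM fetch, hardware checks the table first; a hit
        // returns the patch data instead of the original word.
        fn fetch(&self, addr: u32, rom_word: u32) -> u32 {
            self.entries
                .iter()
                .flatten()
                .find(|e| e.rom_addr == addr)
                .map_or(rom_word, |e| e.data)
        }
    }

    fn main() {
        let mut table = PatchTable { entries: [None; PATCH_SLOTS] };
        table.entries[0] = Some(PatchEntry { rom_addr: 0x0004_1000, data: 0xDEAD_BEEF });
        assert_eq!(table.fetch(0x0004_1000, 0x1234_5678), 0xDEAD_BEEF);
        assert_eq!(table.fetch(0x0004_1004, 0x1234_5678), 0x1234_5678);
    }

Once every slot is consumed, the only way to fix a ROM bug is a new chip revision, which is why running out of entries matters so much.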
Sadly that's common in the hardware world.
Step 1. Have a reliable hardware watchdog that restarts it every time there's a software problem.
Step 2. There is no step 2.
Such is the sad world of Bluetooth. The dirty secret to this industry is that this, while seeming hacky, is the bare minimum de-facto standard in most cases.
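In that spirit, "step 1" usually amounts to a deadline supervisor: kick a timer whenever the controller shows signs of life, and reset the controller when the deadline lapses. A hypothetical sketch of the pattern:

    use std::time::{Duration, Instant};

    struct Watchdog {
        timeout: Duration,
        last_kick: Instant,
    }

    impl Watchdog {
        fn new(timeout: Duration) -> Self {
            Self { timeout, last_kick: Instant::now() }
        }
        // Called whenever the controller responds to anything.
        fn kick(&mut self) {
            self.last_kick = Instant::now();
        }
        fn expired(&self) -> bool {
            self.last_kick.elapsed() > self.timeout
        }
    }

    fn main() {
        let mut wd = Watchdog::new(Duration::from_millis(50));
        wd.kick();
        std::thread::sleep(Duration::from_millis(100));
        if wd.expired() {
            println!("controller unresponsive: reset it"); // step 1; there is no step 2
        }
    }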
I don’t follow the leap. The grandparent’s point was about the quality and terrible lack of long term support of Broadcom chips. How does that translate to issues of the standard itself?
Nobody would complain about Apple creating their own radio chips (which they seem to plan for 5G/6G). Apple creating their own standard protocols is an issue, though.
Easy to say, and I can't know for sure exactly what factors impacted Broadcom's decisions here, but I can tell you that chip manufacturers are under extreme pressure to keep costs down, which means that they may under-spec systems at times. Also, with the long design cycles involved in chip design, the patch capabilities may have been decided years in advance, before realizing how much would be needed.
In general I agree with your comment, though it’s a lot easier to say this in hindsight.
For more illustrations of bugs/vulns in the firmware: https://www.youtube.com/watch?v=7tIQjPjjJQc&t=1986s
Broadcom's firmware seems to be just absolutely terrible across the stack, NIC included. They seem to have solid product design / engineering chops, but firmware just defies them.
My question would be: what were the motivations to move to a stack written in Rust, if many of the bugs are in the closed-source FW running in the peripheral?
What would happen if consistency is lost outside the Rust domain but still in the BT stack?
I really hope it will fare better than NewBlue and BlueDroid... https://www.androidpolice.com/2020/09/14/the-rise-and-fall-o...
NewBlue died because I quit Google. I started that project and was its main motive force.
> NewBlue died because I quit Google. I started that project and was its main motive force.
Unclear why you're being downvoted here -- I seem to remember you, and I definitely remember hearing about NewBlue while we were working on Bluedroid. At the time it wasn't clear what happened. When did you leave?
I am also not sure about the downvotes, but c'est la vie. I was dmitrygr@; I left April 2019.
Bluedroid is the old stack. Zach and co. started the Fluoride project to clean it up, but it seems GD is their attempt to totally rewrite it.
Any insight into why Apple seems to have a much better bluetooth stack? I do a lot of BLE development that has to work cross platform, and we see constant GATT connection issues on android compared to almost none on ios.
Apple has a unique relationship to its hardware, and the money to bend vendor's firmware to their will. Half of the problem with Bluetooth is the host stack. The other half is the firmware running on the controller on the other side of HCI. If the controller ever gets screwed up, the host stack can only disconnect from HCI and totally restart the controller.
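That restart path is at least well specified: the host sends HCI_Reset to the controller. A small sketch of building that command packet, assuming the UART (H4) transport framing:

    // HCI_Reset: OGF 0x03 ("Controller & Baseband"), OCF 0x0003,
    // no parameters. H4 framing prefixes an indicator byte:
    // 0x01 = command packet. The opcode goes on the wire little-endian.
    fn hci_reset_packet() -> [u8; 4] {
        let opcode: u16 = (0x03 << 10) | 0x0003; // = 0x0C03
        [0x01, (opcode & 0xFF) as u8, (opcode >> 8) as u8, 0x00]
    }

    fn main() {
        assert_eq!(hci_reset_packet(), [0x01, 0x03, 0x0C, 0x00]);
    }

Everything the host knows about the controller flows through commands and events shaped like this, which is why a wedged controller leaves the host with so few options.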
The third half is the bluetooth firmware on the other device. The fourth half is the other firmwares on the other device. The fifth half is the specification(s). The sixth half is the RF environment.
Haha -- yes, indeed. Bluetooth the spec and implementation are an absolute dumpster fire.
"Bluetooth is a layer cake of sadness" is the turn of phrase we used for a while on the Glass connectivity team. One of our project founders, Thad Starner actually apologized to me for the mess it became; apparently it was supposed to be simpler, but when Nokia took ownership back in the 90s, it started to go downhill.
Our lead on the connectivity team at the time had a crazy idea to rewrite the spec using only ACL and L2CAP, but never really went anywhere with it because of the Glass org implosion.
A huge factor is that iPhone has a large, long-standing market share, with relatively few different OS and hardware versions. This means that everyone else has been able to test their Bluetooth implementations for interoperability with iPhones.
A lot of people using Apple Bluetooth hosts use them with Apple Bluetooth devices (other hosts, mice, keyboard, headphones, etc)
Anecdotal, but my new 16-inch MBP drops BT connections all the time. I had to give up the keyboard and mouse I've used for years across at least 5 computers, mostly Macs, because they disconnected so much. Even the Apple keyboard and mouse I switched to drop occasionally.
FWIW, I also have a 16" MBP and I've also had a ton of problems with Bluetooth on this thing.
I've noticed that Bluetooth connectivity is significantly worse when the laptop is closed. You might see if keeping it open helps.
Resetting the Bluetooth module also helped resolve some persistent connectivity problems I was having (shift+option+click on the Bluetooth menubar item; choose "reset" from the menu).
Eventually this thing started having kernel panics every time I plugged something into a USB-C port and I had to send it back for replacement. Not a great experience.
Wow, never knew about that combination of keys and the fact that it brings extended options in the drop down menu. Thank you!
The 16-inch MBP is full of bugs, so it's probably a problem with the device (typing on one and still crying because of https://forums.macrumors.com/threads/16-is-hot-noisy-with-an...)
This HN thread from yesterday may help!
https://news.ycombinator.com/item?id=26625356
tl;dr Using USB3 ports can cause Bluetooth dropouts on Macs (and lots of other machines).
I think there might be some issue on that hardware where 2.4 GHz WiFi and Bluetooth share an antenna and can interfere with each other. If you're using 2.4 GHz WiFi and can use 5 GHz, give that a try.
In that case he's probably holding it wrong.
Honestly sounds like faulty hardware or interference.
Doesn't kill it permanently; there's a fix involving connecting a CSR 2.0 dongle. There's also a workaround that sets an nvram var. Overall it's an absolute clusterfuck, though, because Apple still haven't fixed the problem.
My guess would be that they can apply much more pressure to the Bluetooth chip vendor. The buying power that Apple has for designing in a particular chip is much bigger than any individual Android manufacturer (even Samsung). That gets them the leverage to get the chip vendor to do what they want.
> The buying power that Apple has for designing in a particular chip is much bigger than any individual Android manufacturer (even Samsung)
Why does Samsung have smaller buying power than Apple? Doesn't Samsung sell more phones than them? Or is it because while Samsung sells more phones in total, Apple still has the most successful single models?
A typical manufacturer approach is to beat the hell out of your suppliers on price to make your margin which is likely most of what Samsung does with its buying power. Apple does that but is also willing to throw money at issues and go as far as financing facilities for vendors. Where most manufacturers don't care about the device they sold the second it's out of warranty[1], Apple takes a longer term view of the relationship. So Apple is far more likely to say 'we need this fixed and are willing to provide X million dollars and a team of our own engineers to solve the problem' while Samsung probably goes more like 'make cheaper and make it better... and make it cheaper!'
So the reason Samsung typically has less influence is that when all you do is crush your suppliers margins to make your own, said suppliers don't tend to make much of an investment in making things better since they are incentivized to just make them cheaper.
[1] in fairness, they can't afford to: while Apple has an ongoing revenue stream from its devices, most other manufacturers don't. It's Google/Facebook/etc who monetize the devices post-sale while for the original device manufacturer it's merely a liability at that point. This is a factor in why Android has a rather dismal track record re: updates on older devices.
This, but slightly revised: it's because Samsung is in the market of selling cheap hardware. Apple sells luxury products and part of their value-add is leverage with their vendors.
Apple can come to a vendor and say "these are our constraints on what we are willing to buy. Here's testing benchmarks. You are required to meet them. Failure to do so voids the purchase contract."
The vendor will then, of course, say "We'll have to charge you more if we're spinning up a test infrastructure we don't have."
And then Apple will negotiate a price point and pass the anti-savings onto the consumer.
They do this at multiple levels at multiple points in their hardware story. I met someone once who worked on the USB integration specs back in the first-couple-generation iBooks. Apple built a rig for physically testing the connectors (including multiple cycles of intentional mis-plug, i.e. flipping a USB-A connector over wrong-side-up and pushing hard against the socket). They told the vendors selling them USB sockets that the sockets had to survive the rig, or the sale was void. Vendors priced accordingly. But the resulting product is more robust than others on the market.
Are Samsung devices buggier than other android brands, or just so much more popular within your userbase, that more bugs surface?
Bruv. I have a samsung. I'm touched.
We've been together 4 years; its USB port is a little loose now, too much charging every day, but the screen is still pristine. The camera is fine, so we can go out and take photos at the beach. I'm a happy man. It does what it said it would do. And it's still doing it.
> Apple still has the most successful single models?
Not just that, every apple device has wireless access, and Apple has thrice the operating income and more than twice the net income of Samsung Electronics.
But yes the "value" of individual devices is also part of the equation, in the sense that Samsung has a lot of cheap-ish devices with fairly short lifetimes, they're not going to fight for device quality. Apple has a very limited number of devices they support for a long time. And they're probably bringing in a lot of baggage from having been screwed over by the plans and fuckups of their suppliers in the past.
And even then, the more time passes the more they just go "fuck'em all" and move chip design in-house, not just the "main" chips (AX, MX, SX) but the ancillary as well: the WX and HX series integrate bluetooth on the SoC. There's no doubt they'll eventually go their own way on larger devices as well, the U1 is probably the first steps towards that.
Samsung bought what was advertised as a mobile handset business after CSR failed to spot that combo BT/WiFi chips were the future and threw away their lead. Qualcomm got the rest a few years later.
("what was advertised as" because there wasn't really a differentiation in the R&D bits, so there was a somewhat arbitrary split and hasty redacting of repos given to Samsung to avoid names of other customers in comments).
Samsung's phone division and electronic parts division aren't the same thing, so there was no guarantee that the phones would buy the Bluetooth/Wifi from their new acquisition, although I hear they did eventually.
I get what you're saying, but Samsung is also huge and Bluetooth is used in lots of electronics that Samsung sells -- including laptops, tablets, smart watches, TVs... or conceivably could sell, like future IoT products.
If someone senior at Samsung said to their vendor "good Bluetooth or you lose the Samsung account", that would provoke some, um, intense conversations at the vendor between sales and engineering.
Incidentally Apple sometimes has more than one vendor too, so it's not just two parties. I know cases where they've had two suppliers. Displays and modems come to mind, although I've not Googled to verify.
I wouldn't be surprised if Apple write their own firmware for things like bluetooth chips.
Most of them run RTKit, I believe.
There's quite a bit of evidence that they do. Apple's strength has always been at the point at which hardware is built and interfaces to the software, including firmware.
I still find it buggy on my iPhone 12 Pro, as a user at least. It often says 'connected' to my headphones (Sony WH-1000XM3s) when it's not, and connecting to Alexa devices to stream audio often fails and requires a reboot to connect properly.
I can't believe Bluetooth is still such a pain in the bum in 2021.
Buggy Bluetooth on the other end... Alexa is running Linux, and the Sony is probably running some custom Bluetooth stack.
Once you move to all Apple bluetooth, things really smooth out. It seems that Apple does way more testing/validating of their Bluetooth stack.
I've actually worked in this area. You are basically correct.
Sometimes you have to make a choice on which brands/chipsets you support. Devices on different ends of the compatibility spectrum can basically be mutually exclusive. IIRC if you advertise A2DP some devices supporting only HSP won't work, so you can make some hacky workaround but then your nicer A2DP equipment is harder to use. If you only need to guarantee support for X subset of devices you control, it's easy to tweak the settings so they work well together.
Actually the spec is over-specified and repeats itself multiple times in different layers. L2CAP specifies a TTL, so does RFCOMM, so does SCO, so does...
I buy this. All Apple works great, but Apple headset with Windows PC doesn’t seem better than anything else with Windows PC.
They don't smooth out, for me, at least. Airpods Max, for example, can just decide to transmit nothing but deafening static to the other side in the conversation.
Eh, I’m still having weird connection issues between my iPhone and OG AirPods. Less than with my old headphones, but more than I’d like
Even then its behavior differs between iOS and Mac. My BT-using app would run fine under iOS but not the Mac; it turned out that the Mac rediscovers devices several times unnecessarily. So if your application progresses through states based on device discovery, it will be confused by this. Not a huge problem to mitigate once you discover it, but the different behavior is annoying.
As a long time Apple user, I really don't know where that's coming from. Endless issues with bluetooth over the years, including full system crashes. Curiously iOS bluetooth stack seems to be more stable than on macOS. Or maybe it's just hardware differences.
Apple's stack does work great with Apples hardware, though.
How many hardware devices are you talking about? That's very different than what I've seen personally or peripheral to enterprise support.
Well, at least Magic Trackpad, Keyboard etc. work really well with a Mac. Never had any issues.
Bose bluetooth headphones, third party "high end" bluetooth devices... not so great. Lately it's been better, though.
They have less hardware variability, and their Bluetooth stack is still quite buggy, but everyone works around their bugs on the device side due to its popularity.
Whilst they also use a lot of Broadcom silicon, the stack they use is written 100% in house.
Could it be because they only have a handful of hardware variants to support, vs hundreds on Android? Presumably, the stack itself is somewhat sane on both sides, but the hardware's bugs, poorly specified or undefined behavior is constantly throwing wrenches in the machinery - Apple managed to fix most of them for the hardware they support, but Android has no chance as they have to support hundreds of different chips.
This is not specific to bluetooth or drivers, but I don't fully buy the thesis/deflection that Apple just has a much smaller set of hardware to support. I run a hackintosh system myself and it's more stable than Windows. I'm starting to think Apple just has stronger general QC. Windows is just as glitchy and crashy on Surface devices as anywhere else.
The counter point does apply to drivers, but it's really not just that.
Possibly, but half the guys on the dev/QA team have Google Pixel devices and even they throw connection errors fairly regularly.
It's having to support every phone that introduces the instability, even on the reference devices.
> Any insight into why Apple seems to have a much better bluetooth stack?
This is a joke right? My M1 BT goes out to lunch several times an hour. It is literally unusable.
A wild guess: less hardware variability.
Can you shed some light on what we’re actually looking at here? I see some Rust but most of the actual BT stack looks to be C++ code. Is the HN headline accurate?
Did you miss this on the main page?
> We are missing Rust support in our GN toolchain so we currently build the Rust libraries as a staticlib and link in C++.
Right, but the headline suggests the Bluetooth stack is being rewritten in Rust, while the code seems to suggest that the stack is still in C++. There isn't much Rust code in the directory relative to all of the C++.
4K lines of Rust is not a Bluetooth stack.
It's almost impossible to be worse than the existing Bluetooth stack. It's the single biggest problem I have with Google phones, and it's shameful it took this long to fix.
I wouldn’t assume it’s fixed yet. Even if they’ve managed to write bug free code, you’re still at the mercy of driver supplied by the chip manufacturer.
Broadcom seems to pump out a lot of garbage in general. The SoCs on the raspberry pis are particularly terrible.