There's a different thread if you want to wax about Fluid Glass etc [1], but there's some really interesting new improvements here for Apple Developers in Xcode 26.
The new Foundation Models framework around the generative language model stuff looks very Swift-y and nice for Apple developers. And it's local and on-device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
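From the session, basic usage looks roughly like this -- treat it as a sketch, since I'm going from memory on the exact names (LanguageModelSession, respond(to:)):

    import FoundationModels

    // Hypothetical itinerary helper in the spirit of the travel-app demo.
    func itinerary(for city: String) async throws -> String {
        // Bail out if the on-device model isn't available on this hardware/OS.
        guard case .available = SystemLanguageModel.default.availability else {
            return "On-device model unavailable; fall back to a server-side model."
        }
        let session = LanguageModelSession(
            instructions: "You plan short, practical one-day itineraries."
        )
        let response = try await session.respond(
            to: "Plan a one-day itinerary for \(city)."
        )
        return response.content
    }

No network call involved; the prompt runs against the on-device model.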
The other big thing is vibe-coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives context to your codebase. Seems to be a big improvement over the previous, very limited GPT integration with Xcode, and the first time Apple developers have a native version of some of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
> The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine-grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images, but they do cover many of the same features.
You're kind of right. But at the same time they are nowhere close. The beauty of Linux containerization is that processes can be wholly ignorant that they are not in fact running as root. The containers get, what appear to them, to be the whole OS to themselves.
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
Hard same. I wonder if this does anything different to the existing projects that would mean one could use the WSL2 approach where containerd is running in the Linux micro-VM. A key component is the RPC framework - seems to be how orbstack's `macctl` command does it. I see mention of GRPC, sandboxes and containers in the binfmt_misc handling code, which is promising:
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
What would these be useful for?
Providing isolated environments for CI machines and other build environments!
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
Clean build environments for CICD workflows, especially if you're building/deploying many separate projects and repos. Managing Macs as standalone build machines is still a huge headache in 2025.
What's wrong with Cirrus CLI and Tart built on Apple's Virtualization.framework?
Tart is great! This is probably the best thing available for now, though it runs into some limitations that Apple imposes for VMs. (Those limitations perhaps hint at why Apple hasn't implemented this-- it seems they don't really want people to be able to rent out many slices of Macs.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
I might misunderstand the project, but I wish there was a secure way for me to execute github projects. Recently, the OS has provided some controls to limit access to files, etc. but I'd really like a "safe boot" version that doesn't allow the program to access the disk or network.
the firewall tools are too clunky (and imho unreliable).
Same thing containers/jails are useful for on Linux and *BSD, without needing to spin up an entirely separate kernel to run in a VM to handle it.
MacOS apps can already be sandboxed. In fact it's a requirement to publish them to the Mac App Store. I agree it'd be nice to see this extended to userland binaries though.
People use containers server side in Linux land mostly... Some desktop apps (flatpak is basically a container runtime) but the real draw is server code.
Do you think people would be developing and/or distributing end user apps via macOS containers?
ie: You want to build a binary for macOS from your Linux machine. Right now, it is possible but you still need a macOS license and to go through hoops. If you were able to containerize macOS, then you create a container and then compile your program inside it.
No, that's not at all how that would work. You're not building a macOS binary natively under a Linux kernel.
Orchestrating macOS only software, like Xcode, and software that benefits from Environment integrity, like browsers.
It's not that macoscontainers is empty, it's that the site is https://darwin-containers.github.io
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
Ah, that's great! I'd forgotten it moved and struggled to track it down.
In case others are confused about the term "Foundation Models":
"Foundation Models" is an Apple product name for a framework that taps into a bunch of Apple's on-device AI models.
No, the model itself is one of the two Apple foundation language models (the AFM-on-device, specifically)
https://machinelearning.apple.com/research/introducing-apple...
https://machinelearning.apple.com/research/apple-intelligenc...
The architecture is such that the model can be specialized by plugging in more task-specific fine-tuning models as adapters, for instance one made for handling email tasks.
At least in this version, it looks like they have only enabled use of one fine-tuning model (content tagging)
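If I'm reading it right, you opt into that adapter when you create the model, something like this (the names are from memory and may be off, so consider it a guess at the shape of the API):

    import FoundationModels

    // Hedged sketch: SystemLanguageModel(useCase: .contentTagging) is my best
    // recollection of how the content-tagging adapter is selected.
    func tags(for text: String) async throws -> String {
        let model = SystemLanguageModel(useCase: .contentTagging)
        let session = LanguageModelSession(model: model)
        let response = try await session.respond(to: "List topical tags for: \(text)")
        return response.content
    }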
I'm still a little disappointed. It seems those models are only available for the iPhone 16 series and iPhone 15 Pro. According to Mixpanel that's only 25% of all iOS devices, and even less if taking into account iPadOS. You will still have to use some other online model if you want to cover all iOS 26 users, because I doubt Apple will approve your app if it only works on those Apple Intelligence devices.
Why should I bother then as a 3rd-party developer? Sure, it's nice not having API costs for that 25% of users, but those models are very small (equivalent to Qwen2.5 4B or so) and their online models are supposedly equivalent to Llama Scout. Those models are already very cheap online, so why bother with a more complicated code base? Maybe in 2 years, once more iOS users replace their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
Uptake of iPhone 16+ devices will be much more than 25% by the time someone develops the next killer app using these tools, which will no doubt spur sales anyway.
If there was a killer app for AI (sorry LLMs) then it would have come out by now and AI (sorry LLMs) would have taken off properly.
App development could be as quickly as a few weeks. If the only "killer apps" we have seen in the past three years are the ChatGPT kind, I'm not holding my breath for a brand new "killer app" that runs only on iPhone 16+.
Why would anyone bother with Apple. Let their product deteriorate and die. It only takes one product to get people off iPhone and they're (Tim) cooked.
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container
[2] https://github.com/apple/containerization
I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.
Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
That's called memory ballooning and is supported by KVM on Linux. Proxmox, for example, can do that. It does need support on both the host and the guest.
It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (eg. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to make applications running inside the VM able to transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do, but this also has problems since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.
> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs), whether the hypervisor will allocate the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
A generic vmm can not, but these are specific vmms so they can likely load dedicated kernel mode drivers into the well known guest to get the information back out.
Just looked it up - and the answer is 'balloon drivers', which are special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.
Apparently Docker for Mac and Windows uses these, but docker containers tend to grow quite large in terms of memory, so I'm not sure how well it works in practice; it certainly overallocates compared to running Docker natively on a Linux host.
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
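For what it's worth, the balloon device is a first-class thing in Apple's Virtualization framework; a minimal sketch (assuming you already have a VM configuration and a running VZVirtualMachine):

    import Virtualization

    // Configuration side: attach a traditional virtio memory balloon device.
    let config = VZVirtualMachineConfiguration()
    config.memoryBalloonDevices = [VZVirtioTraditionalMemoryBalloonDeviceConfiguration()]

    // Runtime side: lower the guest's target memory so its balloon driver
    // inflates and the host can reclaim the pages.
    func reclaimMemory(from vm: VZVirtualMachine, downTo bytes: UInt64) {
        if let balloon = vm.memoryBalloonDevices.first as? VZVirtioTraditionalMemoryBalloonDevice {
            balloon.targetVirtualMachineMemorySize = bytes
        }
    }

Whether the guest actually hands the pages back promptly is a separate question, as the sibling comments note.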
It’s one reason I don’t like WSL2. When you compile something which needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.
Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.
add:

    [experimental]
    autoMemoryReclaim=gradual

to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I just noticed the addition of the container cask when I ran “brew update”.
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.
WSLv1 never supported a native docker (AFAIK, perhaps I'm wrong?)
That said, I'd think apple would actually be much better positioned to try the WSL1 approach. I'd assume apple OS is a lot closer to linux than windows is.
This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
In the end they'll probably run into the same issues that killed WSL1 for Microsoft— the Linux kernel has enormous surface area, and lots of pretty subtle behaviour, particularly around the stuff that is most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate Microsoft's implementation of all these interfaces, because... well, why would there be?
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
If they built it as a kernel extension, it would probably be okay with the GPL.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.
Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.
In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:
1. Creating and configuring a virtual machine:

       hv_vm_create(HV_VM_DEFAULT);

2. Allocating guest memory:

       void* memory = mmap(...);
       hv_vm_map(memory, guest_physical_address, size, HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

3. Creating virtual CPUs:

       hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

4. Setting registers:

       hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // instruction pointer
       hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // stack pointer

5. Running guest code:

       hv_vcpu_run(vcpu);

6. Handling VM exits (on x86 the exit reason is read from the VMCS):

       uint64_t exit_reason;
       hv_vmx_vcpu_read_vmcs(vcpu, VMCS_RO_EXIT_REASON, &exit_reason);
Thanks for this! Apple Silicon?
One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
Better filesystem support (https://orbstack.dev/blog/fast-filesystem) and memory utilization (https://orbstack.dev/blog/dynamic-memory)
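For comparison, the high-level path with Apple's Virtualization framework is roughly this (paths are placeholders; just a sketch of booting a Linux guest):

    import Foundation
    import Virtualization

    // Minimal Linux guest: kernel boot loader, CPU/memory, and a root disk image.
    func makeLinuxVM() throws -> VZVirtualMachine {
        let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinux"))
        bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
        bootLoader.commandLine = "console=hvc0 root=/dev/vda"

        let config = VZVirtualMachineConfiguration()
        config.bootLoader = bootLoader
        config.cpuCount = 2
        config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB

        let disk = try VZDiskImageStorageDeviceAttachment(
            url: URL(fileURLWithPath: "/path/to/root.img"), readOnly: false)
        config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]

        try config.validate()
        return VZVirtualMachine(configuration: config)  // start() it from the main queue
    }

OrbStack's value is everything layered on top of that: the file sharing, networking, dynamic memory, and Docker/Kubernetes plumbing.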
Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.
It's possible that Apple has implemented a similar hypervisor here.
XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.
WSL1 didn't use the existing support for personalities in NT
> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.
It is as native as any Linux cloud instance.
> Linux container images generally contain the kernel.
No, containers differ from VMs precisely in that they depend on the host kernel rather than shipping their own.
The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.
Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.
If you've not tried it recently, I suggest give the latest version of podman another shot. I'm currently using it over docker and a lot of the compatibility problems are gone. They've put in massive efforts into compatibility including docker compose support.
Yeah, from a quick glance the options are 1:1 mapped, so an

    alias docker='container'

should work, at least for basic and common operations.
What about macOS being derived from BSD? Isn't that where containers came from: BSD jails?
I know the container ecosystem largely targets Linux just curious what people’s thoughts are on that.
OS X pulls some components of FreeBSD into kernel space, but not all (and those are very old at this point). It also uses various BSD bits for userspace.
Good read from the horse's mouth:
https://developer.apple.com/library/archive/documentation/Da...
Thank you—I’ll give that a read. :)
Conceptually similar but different implementations. Containers use cgroups on Linux, and there is filesystem and network virtualization as well. It's not impossible, but it would require quite a bit of work.
Another really good read about containers, jails and zones.
BSD jails are architected wholly differently from what something like Docker provides.
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies on multiple Linux features/tools to assemble/create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
Thank you for the links I will take a closer look at XNU. It’s neat to see how these projects influence each other.
Jails were explicitly designed for security; cgroups are more generalized, being more about resource control, and Linux containers leverage namespaces, capabilities, and AppArmor/SELinux to accomplish what they do.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails, and you will have security problems. And on the flip side, k8s pods share UTS, IPC, and network namespaces, yet have independent PID and FS namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
[1] https://freebsdfoundation.org/freebsd-project/resources/intr...
„Container“ is sort of synonymous with „OCI-compatible container“ these days, and OCI itself is basically a retcon standard for docker (runtime, images etc.). So from that perspective every „container system“ is necessarily „docker-like“ and that means Linux namespaces and cgroups.
Interesting. My experience w/ HP-UX was in the 90s, but this (Integrity Virtual Machines) was released in 2005. I might call out FreeBSD Jails (2000) or Solaris Zones (2005) as an earlier and a more significant case respectively. I appreciate the insight, though, never knew about HP-UX.
Does it really matter, tho?
WSL throughput is not enough for file intensive operations. It is much easier and straightforward to just delete windows and use Linux.
Unless you need to have a working video or audio config as well.
Using the Linux filesystem has almost no performance penalty under WSL2 since it is a VM. Docker Desktop automatically mounts the correct filesystem. Crossing the OS boundary for Windows files has some overhead of course but that's not the usecase WSL2 is optimized for.
With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.
Are you comparing against the default vendor image that's filled with adware or a clean Windows install with only drivers? There is a significant power use difference and the latter case has always been more power efficient for me compared to the Linux setup. Powering down Nvidia GPU has never fully worked with Linux for me.
How? What's your laptop brand and model? I've never had better battery life with any machine using ubuntu.
WSL2 doesn't have a syscall translation layer; WSL1 did, but it wasn't a feasible approach, so WSL2 is basically running VMs with the Hyper-V hypervisor.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
If they implemented the Linux syscall interface in their kernel they absolutely could.
Aren't the syscalls a constant moving target? Didn't even Microsoft fail at keeping up with them in WSL?
Linux is exceptional in that it has stable syscall numbers and guarantees stability. This is largely why statically linked binaries (and containers) "just work" on Linux, meanwhile Windows and Mac OS inevitably break things with an OS update.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
Not Linux syscalls, they are a stable interface as far as the Linux kernel is concerned.
They're not really a moving target (since some distros ship ancient kernels, most components will handle lack of new syscalls gracefully), but the surface is still pretty big. A single ioctl() or write() syscall could do a billion different things and a lot of software depends on small bits of this functionality, meaning you gotta implement 99% of it to get everything working.
FreeBSD and NetBSD do this.
They didn't.
I installed Orbstack without Docker Desktop.
WSL 1.0, given that WSL 2.0 is a regular Linux VM running on Hyper-V.
I wonder if User-Mode Linux could be ported to macOS...
It would probably be slower than just running a VM.
> Meet Containerization, an open source project written in Swift to create and run Linux containers on your Mac. Learn how Containerization approaches Linux containers securely and privately. Discover how the open-sourced Container CLI tool utilizes the Containerization package to provide simple, yet powerful functionality to build, run, and deploy Linux Containers on Mac.
> Containerization executes each Linux container inside of its own lightweight virtual machine.
That’s an interesting difference from other Mac container systems. Also (more obviously) it uses Rosetta 2.
Podman Desktop, and probably other Linux-containers on macOS tools, can already create multiple VMs, each hosting a subset of the containers you run on your Mac.
What seems to be different here, is that a VM per each container is the default, if not only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
The ground keeps shrinking for Docker Inc.
They sold Docker Desktop for Mac, but that might start being less relevant and licenses start to drop.
On Linux there’s just the cli, which they can’t afford to close since people will just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
There is already a paid alternative, Orbstack, for macOS which puts Docker for Mac to shame in terms of usability, features and performance. And then there are open alternatives like Colima.
Used OrbStack for some time; it made my dev team’s M1s run our Kubernetes pods in a much lighter fashion. Love it.
How does it compare to Podman, though?
Podman works absolutely beautifully for me, other platforms, I tripped over weird corner cases.
That is why they are now into the reinventing application servers with WebAssembly kind of vibe.
It’s really awful. There’s a certain size at which you can pivot and keep most of your dignity, but for Docker Inc., it’s just ridiculous.
They got Sherlocked.
It's cool but also not as revolutionary as you make it sound. You can already install Podman, Orbstack or Colima right? Not sure which open-source framework they are using, but to me it seems like an OS-level integration of one of these tools. That's definitely a big win and will make things easier for developers, but I'm not sure if it's a gamechanger.
All those tools use a Linux VM (whether managed by Qemu or VZ) to run the actual containers, though, which comes with significant overhead. Native support for running containers -- with no need for a VM -- would be huge.
Still needs a VM. It'll be running more VMs than something like orbstack, which I believe runs just one for the docker implementation. Whether that means better or worse performance we'll find out.
there's still a VM involved to run a Linux container on a Mac. I wouldn't expect any big performance gains here.
Yes, it seems like it's actually a more refined implementation than what currently exists. Call me pleasantly surprised!
The framework that container uses is built in Swift and also open sourced today, along with the CLI tool itself: https://github.com/apple/containerization
It looks like nothing here is new: we have all the building blocks already. What Apple has done is package it all nicely, which is nothing to discount: there's a reason people buy managed services over just raw metal for hosting their services, and having a batteries-included development environment is worth a premium over the need to assemble it on your own.
The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.
Docker at least runs a linux vm that runs all those containers. Which is a lot of needless overhead.
The equivalent of Electron for containers :)
Use Colima.
yeah -- I saw it's built on "open source foundations", do you know what project this is?
My guess is Podman. They released native hypervisor support on macOS last year. https://devclass.com/2024/03/26/podman-5-0-released-with-nat...
My guess is nerdctl and containerd.
The CLI sure looks a lot like Docker.
If I had to guess, colima? But there are a number of open source projects using Apple's virtualisation technologies to run a linux VM to host docker-type containers.
Once you have an engine podman might be the best choice to manage containers, or docker.
Being able to drop Docker Desktop would be great. We're using Podman on MacOS now in a couple places, it's pretty good but it is another tool. Having the same tool across MacOS and Linux would be nice.
Migrate to Orbstack now, and get a lot of sanity back immediately. It’s a drop-in replacement, much faster, and most importantly, gets out of your way.
There's also Rancher Desktop (https://rancherdesktop.io/). Supports moby and containerd; also optionally runs kubernetes.
I have to drop docker desktop at work and move to podman.
I'm the primary author of amalgamation of GitHub's scripts to rule them all with docker compose so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use this because my scripts have to work on Linux and probably WSL.
Orbstack
Seems to be this: https://github.com/apple/containerization
vminitd is the most interesting part of this.
Colima is my guess, only thing that makes sense here if they are doing a qemu vm type of thing
That's my guess too... Colima, but probably doing a VM using the Virtualization framework. I'll be more curious if you can select x86 containers, or if you'll be limited to arm64/aarch64. Not that it really makes that much of a difference anymore, you can get pretty far with Linux Arm containers and VMs.
Should be easy enough, look for the one with upstream contributions from Apple.
Oh, wait.
I’ve been using Colima for a long while with zero issues, and that leverages the older virtualization framework.
They Sherlocked OrbStack.
Well, Orbstack isn't really anything special in terms of its features, it's the implementation that's so much better than all the other ways of spinning up VMs to run containers on macos. TBH, I'm not 100% sure 2025 Apple is capable anymore of delivering a more technically impressive product than orbstack ...
That's a good thing though right?
It would be better for the OrbStack guy if they bought it.
Interestingly it looks like Apple has rewritten much of the Docker stack in Swift rather than using existing Go code.
I thought it's more like Colima than OrbStack
Microsoft did it first to Virtual Box / VMWare Workstation thought.
That is what I have been using since 2010, until WSL came to be, it has been ages since I ever dual booted.
Orbstack has been pretty bulletproof
Orbstack is not free for commercial use
Orbstack owners are going to be fuming at this news!
It's a VM just like WSL... So yeah.
WSL 2 involves a VM. WSL 1, which is still maintained and usable, doesn't.
https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
Ok, I've squeezed containerization into the title above. It's unsatisfactory, since multiple announced-things are also being discussed in this thread, but "Apple's kitchen-sink announcement from WWDC this year" wouldn't be great either, and "Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design" is right out.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
Title makes sense to me.
It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).
Even if it's not 100% compatible, this is huge news.
> Apple Announces Foundation Models and Containerization frameworks, etc.
This sounds like Apple announced 2 things, AI models and container-related stuff. I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
The article says that what was announced is "foundation model frameworks", hence the awkward twist in the title, to get two frameworkses in there.
Small nitpick but "Announces" being capitalized looks a bit weird to me.
It’s title case[0].
Me too - I thought I'd fixed that! Fixed now, thanks.
Some 15 years ago, a friend of mine said to me "mark my words, Apple will eventually merge OSX with iOS on the iPad". And with every passing keynote since then, it seemed Apple's been inching towards that prophecy, and today, the iPad has become practically a MacBook Air with a touch screen. Unless you're a video editor, a programmer who needs resources to compile, or a 3D artist, I don't see how you'd need anything other than an iPad.
The fact that they haven't done it in 15 years should be an indication that they don't intend to do it at all. Remember that in the same time period Apple rebuilt every Macbook from scratch from the chipset up. Neither the hardware nor software is a barrier to them merging the two platforms. It's that the ecosystems are fundamentally incompatible. A true "professional" device needs to offer the user full control, and Apple isn't giving up this control on an i-Device. The 30% cut is simply too lucrative.
If anyone wants to read up on how much effort Apple actually went through to keep Apple Silicon Macs open, take a look here: https://asahilinux.org/docs/platform/security/#per-container...
Secure Boot on other platforms is all-or-nothing, but Apple recognizes that Mac users should have the freedom to choose exactly how much to peel back the security, and should never be forced to give up more than they need to. So for that reason, it's possible to have a trusted macOS installation next to a less-trusted installation of something else, such as Asahi Linux.
Contrast this with others like Microsoft who believe all platforms should be either fully trusted or fully unsupported. Google takes this approach with Android as well. You're either fully locked in, or fully on your own.
> You're either fully locked in, or fully on your own.
I'm not sure what you mean by that. You can trivially root a Pixel factory image. And if you're talking about how they will punish you for that by removing certain features: Apple does that too (but to a lesser extent).
https://github.com/cormiertyshawn895/RecordingIndicatorUtili...
That is a good point. I wish dual booting with different security settings was possible on Android as well. The incentives for Google to implement that aren't really there though.
Out of interest, are you currently using android (or fork) or iOS?
This is a pretty wild take.
It's reasonable to install a different OS on Android, even if some features don't work. I've done this, my friends and family have done this, I've seen it IRL.
I've never seen anyone do this on iPhone in my entire life.
But I flipped and I'm a Google hater. Expensive phones and no aux port. At least I can get cheap androids still.
the only macbook I’ve tried to put linux on was a t2 machine, and it still doesn’t sleep/suspend right, so I’m a bit skeptical that apple is really leading the way here, but maybe I’ve just not touched any recent windows devices either
Apple is also rather notorious for tinkering with Intel's ACPI files, for better or worse. Suspend is finicky enough on hardware that supports it, and probably outright impossible if your CPU power states disagree with what the software is expecting.
If anyone wants to read up on all the features Apple didn't implement from Intel Macs that made Linux support take so long, here is a list of UEFI features that represents only a small subset of the missing support relative to AMD and Intel chipsets: https://en.wikipedia.org/wiki/UEFI#Features
Alternatively, read about iBoot. Haha, just kidding! There is no documentation for iBoot, unlike there is for uBoot and Clover and OpenCore and SimpleBoot and Freeloader and systemd-boot. You're just expected to... know. Yunno?
It's not how homebrew worked on Intel Macs, or even PowerMacs[0] either. It's a change made with the Apple Silicon lineup - I cannot speak on Apple's behalf to tell you why they did that. But I can blame UEFI as the reason why the M3 continues to have pitiful Linux support when brand-new AMD and Intel chips have video drivers and power management on Day One.
They don’t want to cannibalize their desktop device market. If the UI fully converges, then all you have is an iPad with a keyboard across all devices (laptops, desktops).
I think practically everyone is better off with a laptop. iPad is great if you're an artist using the pencil, or just consuming media on it. Otherwise a macbook is far more powerful and ergonomic to use.
I think perhaps you are overestimating the computing needs of the majority of the population. Get one of the iPad cases with a keyboard and an iPad is in many ways a better laptop.
The problem is that almost everything, including basic web browsing, is straight-up worse on the iPad. Weird incompatibilities, sites that don’t honor desktop mode, tabs unloading from memory, random reloads, etc. all mar the experience.
But the majority won't pay extra for an ipad and a keyboard, when they can pay less for an air with everything included...
I'm not sure - I just looked casually at some options and it appears one can find an iPad between $700-$900 for a pretty solid model, which includes the $250 folio keyboard. The base model MBA starts at $999. So depends on whether you want a traditional laptop or a "computing device."
yes they will
Or maybe a stand and separate keyboard. Better ergonomics than a laptop that way with similar portability.
Something wireless would be nice for portability IMO, e.g. Apple or Logitech Bluetooth. Security considerations there though.
I wouldn't want a numpad. A track point would be ape.
I struggle with keyboard recommendations b/c I'm not fully satisfied lol.
I have an iPad and really like it, but no, it is not.
Several small things combined make it really different to the experience that I have with a desktop OS. But it is nice as side device
I'm guessing you are coming at it from the perspective of a laptop user and likely a power user. The majority of the population just needs to scroll social media, message some friends, send an email or two, do a little shopping, maybe write a document or two. For this crowd an iPad is plenty. When I was a software developer - yeah, I had a Mac Pro on my desk and a MBP I carried when I traveled. Now as a real estate agent, an iPad is plenty for when I'm on the go.
I used to think that, not having used an iPad. Now I carry a work-issued iPad with 5G and it's actually pretty convenient for remote access to servers. I wouldn't want to spend a day working on it, but it's way faster than pulling out a laptop to make one tiny change on a server. It's also great for taking notes at meetings/conferences.
It's irritatingly bad at consuming media and browsing the web. No ad blocking, so every webpage is an ad-infested wasteland. There are so many ads in YouTube and streaming music. I had no idea.
It's also kind of a pain to connect to my media library. Need to figure out a better solution for that.
So, as a relatively new iPad user it's pleasantly useful for select work tasks. Not so great at doomscrolling or streaming media. Who knew?
There's native ad blocking on iOS and has been for a while—I've found that to significantly enhance the usability of the device. I use Wipr[0], other options are available.
I use Wipr on my phone, the experience is a lot worse than ublock origin on desktop...
Use Orion Browser. It allows installing Firefox/Chrome extensions. Install Firefox's uBlock Origin.
Try the Brave browser for YouTube. I used Jellyfin for my media library and that seemed to work fine for tv and movies.
I just got a MacBook and haven't touched my iPad Pro since; I would think I could make a change faster on a MacBook than on an iPad if they were both in my bag. Although I do miss the cellular data that the iPad has.
> practically everyone is better off with a laptop
The majority of the world are using their phones as a computing device.
And as someone with a MacBook and iPad, the latter is significantly more ergonomic.
I prefer MacBook to iPad most of the time. The only use case for iPad for me where it shines is when I need to use a pencil.
I don't understand why my MacBook doesn't have a touchscreen. I'm switching to an iPad Pro tomorrow. I use Superwhisper to talk to it 90% of the time anyway.
My theory is because of the hinge, which is a common point of failure on laptops. Either you're putting extra strain on it by having someone constantly touch the screen (and some users just mash their fingers into touch screens), or users want a fully openable screen to mimic a tablet format, and those hinges always seem to fail quicker. Every touchscreen laptop I've had eventually has had the hinge fail.
There seems to be some kind of incompatibility between antiglare and oleophobic coatings that may also contribute.
Every single touch screen laptop I’ve seen has huge reflection issues, practically being mirrors. My assumption is that in order for the screen to not get nasty with fingerprints in no time, touchscreen laptops need oleophobic coating, but to add that they have to use no antiglare coating.
Personally I wouldn’t touch my screen often enough to justify having to contend with glare.
Apple is capable of solving it if they want to. They don't want to (yet at least).
Because MacBooks have subpar displays, at least the M4 Air does. The iPad Pro is a better value.
I don't use an iPad much, but it's been interesting to watch from afar how it's been changing over these years.
They could have gone the direction of just running MacOS on it, but clearly they don't want to. I have a feeling that the only reason MacOS is the way it is, is because of history. If they were building a laptop from scratch, they would want it more in their walled garden.
I'm curious to see what a "power user" desktop with windowing and files, and all that stuff that iPad is starting to get, ultimately looks like down this alternative evolutionary branch.
It's obvious, isn't it? It will look like a desktop, except Apple decides what apps you can run and takes their 30% tax on all commerce.
Yeah, it's like we're watching two parallel evolution paths: macOS dragging its legacy along, and iPadOS trying to reinvent "productivity" from first principles, within Apple's tight design sandbox.
Whether or not they eventually fuse, I don't know—I doubt it. But the approach they've taken over the past 15 years to gradually increase the similarities in user experience, while not trying to force a square peg in a round hole, have been the best path in terms of usability.
I think Microsoft was a little too eager to fuse their tablet and desktop interface. It has produced some interesting innovations in the process but it's been nowhere near as polished as ipadOS/macOS.
Yeah, I think the majority of users, even in an office environment, would be better off with an iPad in 99% of cases. All standard office stuff, like presentations, documents and similar, is going to run better on an iPad. There are fewer foot guns, and users are less likely to open 300 tabs just because they can.
If you are a developer or a creative however, then a Mac is still very useful.
I really wish there was some sort of hybrid device. I often travel by foot/bike/motorbike and space comes at a premium. I'd have a Microsoft Surface if Windows was not so unbearable.
On the other hand, I have come to love having a reading/writing/sketching device that is completely separate from my work device. I can't get roped into work and emails and notifications when I just want to read in bed. My iPad Mini is a truly distraction-free device.
I also think it would be hard to have a user experience that works great both for mobile work and sitting-at-a-desk work. I returned my Microsoft Surface because of a save dialog in a sketching app. I did not want to do file management because drawing does not feel like a computing task. On the other hand, I do want to deal with files when I'm using 3 different apps to work on a website's files.
iPad hardware has a full-blown M-series chip. There's no real hardware limitation that stops the iPad from running macOS, but merging would cannibalize each product line's sales.
The new windowing feature basically cannibalizes MacBook Air.
A Macbook Air is cheaper than an iPad Pro with a keyboard though. Not to mention you still can't run apps from outside the app store, and most of these new features we're hoping work as well as they do on MacOS, but given that background tasks had to be an API, I doubt they will.
iPad+keyboard is also awkwardly top heavy and not very well suited for lap use. That might cease to be an issue with sufficiently dense batteries bringing down the weight of the iPad though.
There's still software I can't run on an iPad which is basically the only reason I have a MacBook Air. Maybe for some a windowing system may be the push to switch but that seems doubtful to me.
I still find iPadOS frustrating for certain "pro" workflows. File management, windowing, background tasks - all still feel half-baked compared to macOS. It's like Apple's trying to protect the simplicity of iOS while awkwardly grafting on power-user features
> The iPad has become practically a MacBook Air with a touch screen. Unless you were a video editor, programmer who needs resources to compile or a 3D artist, I don't see how you'd need anything other than an iPad.
No! It's not - and it's dangerous to propagate this myth. There are so many arbitrary restrictions on iPad OS that don't exist on MacOS. Massive restrictions on background apps - things like raycast (MacOS version), Text Expander, cleanshot, popclip, etc just aren't possible in iPad OS. These are tools that anyone would find useful. No root/superuser access. I still can't install whatever apps I want from whatever sources I want. Hell, you can't even write and run iPadOS apps in a code editor on the iPad itself. Apple's own editor/development tool - Xcode - only runs on MacOS.
The changes to window management are great - but iPad and iPadOS are still extremely locked down.
But when you have so many customers buying and using both, seems like it'd be bad business for them to fully merge those lines.
With Microsoft opening Windows's kernel to the Xbox team, and a possible macOS-iPadOS unification, we are reaching multiple levels of climate changes in Hell. It's hailing!
> I don't see how you'd need anything other than an iPad.
For the same price, you still get a better mac.
Does an iPad allow for multiple users?
Yes, but only if it's enrolled in MDM, bizarrely enough.
I don’t think that’s bizarre at all, there’s a clear financial incentive for things to be this way. Apple can’t have normal people sharing a single device instead of buying one for each.
> Yes, but only if it's enrolled in MDM, bizarrely enough
In education or corporate settings, where account management is centralized, you want each person who uses an iPad to access their own files, email, etc.
Same applies to “families” and it’s somewhat bizarre that this is still ignored in 2025.
In home settings, where devices are shared with multiple family members, you want each person who uses an iPad to access their own files, email, etc.
Parents and spouses would appreciate if they could take the multiple user experience for tvOS and make it an option for iPadOS.
I wish Apple provided the MDM, rather than relying on a random consumer ecosystem of dodgy companies who all charge 3-18$ per machine per month, which is a lot.
Auth should be Apple Business Manager; image serving should be passive directories / cloud buckets.
Apple launched their own solution last year (maybe it was the year before).
Haven’t tried it though, still using JamF.
They can't do this. It would destroy their ability to rent their iOS users out because they'd have access to dev tools and could "scale the wall."
Then why break it off as iPadOS?
I told that to John Gruber and he said never will happen
I wish they’d focus on just enabling actual functionality on iPad - like can I have Xcode please? And a shell?
I dgaf what the UI looks like. It’s fine.
Nothing Apple can do to iPadOS is going to fix the fundamental problem that:
1. iPadOS has a lot of software either built for the "three share sheets to the wind" era of iPadOS, or lazily upscaled from an iPhone app, and
2. iPadOS does not allow users to tamper with the OS or third-party software, so you can't fix any of this broken mess.
Video editing and 3D would be possible on iPadOS, but for #1. Programming is genuinely impossible because of #2. All the APIs that let Swift Playgrounds do on-device development are private APIs and entitlements that third-parties are unlikely to ever get a provisioning profile for. Same for emulation and virtualization. Apple begrudgingly allows it, but we're never going to get JIT or hypervisor support[0] that would make those things not immediately chew through your battery.
[0] To be clear, M1 iPads supported hypervisor; if you were jailbroken on iPadOS 14.5 and copied some files over from macOS you could even get full-fat UTM to work. It's just a software lockout.
The video on Containerization.framework, and the Container tool, is live [0].
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
If you search the code, they support Rosetta, which means an ARM Linux kernel running x86 container userspace with Rosetta translation. The Linux kernel can be told (via binfmt_misc) to use a helper to translate code for a non-native architecture.
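On the host side, the building block Apple's Virtualization framework already provides for this looks roughly like the following (a sketch; inside the guest you still have to point binfmt_misc at the shared rosetta binary):

    import Virtualization

    // Expose the host's Rosetta runtime to the Linux guest over virtio-fs.
    func rosettaDevice() throws -> VZVirtioFileSystemDeviceConfiguration? {
        guard VZLinuxRosettaDirectoryShare.availability == .installed else { return nil }
        let device = VZVirtioFileSystemDeviceConfiguration(tag: "rosetta")
        device.share = try VZLinuxRosettaDirectoryShare()
        return device  // append to config.directorySharingDevices
    }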
Actually, they explain it in detail here: https://github.com/apple/containerization/issues/70#issuecom...
It's unclear whether this will keep being supported in macOS 28+, though: https://github.com/apple/container/issues/76, https://www.reddit.com/r/macgaming/comments/1l7maqp/comment/...
Looks like there isn't much to take away from this; here are a few bullet points:
Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
Apple's new containerization framework is based on virtual machines (VMs) and not a true 'native' kernel-level integration like WSL1.
Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionalities.
iPadOS and macOS are converging in terms of user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategies.
The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
Full summary (https://extraakt.com/extraakts/apple-intelligence-macos-ui-o...)
App Store control is something that EU is challenging, including on iPads. So while there’s no macOS APIs on ipadOS, I can totally see 3rd party solutions running macOS apps (and Linux or Windows, too) in a VM and outputting the result as now regular iPad windowed apps.
this aged so well https://github.com/apple/ml-fastvlm/issues/7
> including over 250,000 APIs that enable developers to integrate their apps with Apple’s hardware and software features.
This doesn’t sound impressive, it sounds insane.
Which ones would you like to get rid of?
Related ongoing threads:
Containerization is a Swift package for running Linux containers on macOS - https://news.ycombinator.com/item?id=44229348 - June 2025 (158 comments)
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
Oh, Apple is doing windows Aero now? Wonder how long that one'll last.
I'm cautious. Apple's history with developer tools is hit or miss. And while Xcode integrating ChatGPT sounds helpful in theory, I wonder how smooth that experience really is.
iPad update is going to encourage a new series of folks trying to use iPads for general programming. I'm curious how it goes this time around. I'm cautiously optimistic
Isn't it still impossible to run any dev tools on the iPad?
IIRC Swift Playgrounds goes pretty deep -- a full LLVM compiler for Swift and you can use any platform API -- but you can't build something for distribution. The limitations are all at the Apple policy level.
Not quite. As another user mentioned, there's Swift Playgrounds which is complete enough that you can even upload apps made in it to the App Store. Aside from that, there are also IDEs like Pythonista for creating Python-based apps and others for Lua, JavaScript, etc. many of which come with their own frameworks for making native iOS/iPadOS interfaces.
I can assume that they are going to bring the Container stuff to iPad at some point. That would unlock so many things...
No vscode, no deal. I don't see that happening any time soon.
I think the story might actually be changing this time
You can’t run Docker on an iPad.
I don't understand the foundation models here. Are they new LLMs trained by Apple such as Qwen?
But actually released in 2024?
Does this mean we will no longer need Docker Desktop or Colima?
That's what it sounds like: https://developer.apple.com/videos/play/wwdc2025/346
Not if musl is standard
That's just for the statically-linked vminitd binary in the VM I believe. Containers should still be running whatever's packaged into them.
I wonder what happened to Siri. Not a single mention anywhere?
I actually loved Siri when it first came out. It felt magical back then (in a way)
"Hope to show you more later this year" was like the first thing they said about Apple Intelligence.
Which is the same as what they said last year.
They also just announced that Shortcuts can use these endpoints (or Private Cloud Compute or ChatGPT).
Will they ever update Terminal.app?
They did!!! At least color options. Just announced at platform state of the union
Yes! In another comment of mine, that was the main thing I mentioned, haha.
edit: For those curious, https://youtu.be/51iONeETSng?t=3368.
- New theme inspired by Liquid Glass
- 24-bit colour
- Powerline fonts
Unlikely to happen soon. It’s maintained by one engineer who is very against anything resembling iTerm2.
Just use iTerm2 (Warp or Kitty are two other options out of many) and be done w/it; why would Apple even worry about this when so few people who care about terminal applications even think twice about it?
I've tried all of them, including ones that you and others haven't mentioned, like Rio. I stand by wanting Terminal.app simply updated with better colour support; then it's one less alternative program to get.
Also ghostty
WebKit is also being swiftified, as mentioned on the platforms state of the union.
As in they're integrating Swift into the WebKit project, or exposing Swift-y wrappers over WebKit itself?
There is probably going to be a session later this week, the reference seemed to imply they are integrating Swift into Webkit project for new development.
Interesting, I wonder if that pushes Swift on Linux further given other projects (webkitgtk etc).
Most likely not. For Apple, what matters for Swift on Linux is being a good server language for app developers who want to share code between app and server, now that Apple no longer cares about selling macOS for servers.
Everything else they would rather see devs stay on their platforms, see the official tier 1 scenarios on swift.org.
> Every Apple Developer Program membership includes 200GB of Apple hosting capacity for the App Store. Apple-Hosted Background Assets can be submitted separately from an app build.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
> Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
They've offered 25hrs/mo of Xcode Cloud build time for the last couple years.
Background Assets have existed for years. I’m not sure that 200GB figure is new.
Huh. Does this cover if you use public CloudKit databases...?
Excited to try these out and see benchmarks. Expectations for a small on-device local model should be pretty low, but let's see if Apple cooked up any magic here.
some benchmarks here: https://machinelearning.apple.com/research/apple-foundation-...
I watched the video and it seems they are statically linking atop musl to build their lightweight VM layer. I guess the container app itself might use glibc, but will the musl build for the VM itself cause downstream performance issues? I'm no expert in virtualization to be able to understand if this should be a concern or not.
See also:
- https://edu.chainguard.dev/chainguard/chainguard-images/abou...
Hopefully not bound to SwiftUI like seemingly everything else Apple Intelligence so far. But on-device llm (private) would be real nice to have.
The API looks like "give it a string prompt, async get a string back", so it's not tied to any particular UI framework.
"The framework has native support for Swift, so developers can easily access the Apple Intelligence model with as few as three lines of code."
Bad news.
Swift != SwiftUI
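Right. For what it's worth, the basic flow really is plain Swift with no SwiftUI involved. A minimal sketch, assuming the LanguageModelSession type and respond(to:) method shown in the WWDC session (double-check the names against the shipping docs):

    import FoundationModels

    // Minimal sketch -- names assumed from the WWDC session, not verified against final docs.
    // Plain string prompt in, plain string out; no SwiftUI, no token handling on our side.
    func oneLineSummary(of text: String) async throws -> String {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Summarize in one sentence: \(text)")
        return response.content
    }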
I like that there's support for locally-run models on Xcode.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
yeah, getting better LLM support for XCode is great!
I sure hope they provide an accessibility option to turn down translucency to improve contrast, or this UI is a non-starter for me. Without having used it, this new UI looks like it may favor design over usability. Why don't they do something more novel and let users tweak the interface to their liking?
They’ve had Reduce Transparency (under Accessibility) for a long time now. It still works.
>favor design over usability
That's... kinda what Apple is famous for.
The files are INSIDE the computer!
I hope they don't turn Liquid Glass into Aqua... which I hated. The only time I started to like the iOS interface was iOS 7, with flat design. I hope they don't turn this into an old, skeuomorphic, Aqua-like UI over time.
Not sure about that Liquid Glass idea.
Ultimately UI widgets are rooted in reality (switches, knobs, doohickeys), and liquid glass is Salvador Dalí-esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
Does this mean we can use an LLM for free when developing iOS apps?
TIL macOS doesn't have native containers, just containers in a VM.
I don't use macOS, but I had just kinda assumed it would, by virtue of its shared unixy background with Linux.
Don't containers imply a Linux kernel interface? Hence, you can only have truly native containers on Linux, or use containers in a VM or some kind of Wine-like translation layer.
It's the year of Linux on the desktop!
Another one? lol
Is there a beta we can install to try out these models yet?
This press release says it will be available “starting today” through developer program https://www.apple.com/newsroom/2025/06/apple-supercharges-it...
They mention Kata, so is this using Kata underneath instead of their Hypervisor.framework?
I'm confused.
Repo says it uses Hypervisor.framework on Apple Silicon devices.
ah ty i should have read more
Can someone who uses Xcode daily compare the developer experience to, say, Cursor or VS Code? Just curious how Apple is keeping up.
Xcode so far is very rudimentary: miles behind VS Code in autocomplete. Autocomplete is very limited, single-line, and suggests very, very rarely, and no other features except autocomplete exist.
very good to see XCode LLM improvements!
> I use VSCode Go daily + XCode Swift 6 iOS 18 daily
Several years ago XCode also had “jump to definition” and a few other features.
All this focus on low power gaming makes me think Apple wants to get in on the Steam Deck hype.
Apple is in a reasonably good place to make gaming work for them.
Their hardware across the board is fairly powerful (definitely not top end), they have a good API stack, especially with Metal, and they have systems at all levels, including TV. If they were to just make a standard controller, or just say "the PS5 DualShock is our choice", they could have a nice little slice for themselves.
As I understand it, Apple has a long history of entitlement and burning bridges with every major game developer while making collaboration extremely painful. They were in a much better place to make gaming work 10 years ago when major gaming studios were still interested in working with them.
Just let me have JIT! My jailbroken iPad Pro can emulate Wii at 4k without getting warm. Unfortunately you have to hack around enabling JIT on newer ios releases.
They've been hyping up their hardware capabilities and APIs for years now.
Until Apple-ported games are able to be installed from Steam instead of the App Store, you can count me out.
They better have a partnership with Sony in the works, then. Valve and Apple's approach to supporting video games diverged a decade ago. Hearing "Steam" and "Apple" uttered in the same breath is probably giving people panic attacks already.
> New Design with Liquid Glass
Yes, bringing back Aqua! I even see blue in their examples.
Does the privacy preserving aspect of this mean that Apple Intelligence can be invoked within an app, but the results provided by Apple Intelligence are not accessible to the app to be transmitted to their server or utilized in other ways? Or is the privacy preservation handled in a different way?
I think they just mean private from Apple. I don’t see how they can keep it private from the developer if it’s integrated into the app
What model are they bundling? Something apple-custom? How capable is it?
They described their home-grown models last year: https://machinelearning.apple.com/research/introducing-apple...
I'm assuming this is an updated version of those.
Apple has their own models under the hood, I believe. I remember from like a year or two ago they had an open model line called "ELM" (Efficient Language Model), but I'm not sure if that's what they're actually using.
I am excited to see what the benchmarks look like though, once it's live.
they also use their ANE and CoreML for smaller on-device stuff
They're also working with Anthropic on a coding platform: https://www.macrumors.com/2025/05/02/apple-anthropic-ai-codi...
There is almost no information under the link
iPadOS and OSX continue to converge into one platform.
Calling it: Apple allOS 27 incoming next year, with Final Cut Pro on your Apple Watch.
Multi-user iPadOS when?
When they figure out how to make it not dent sales of individual devices. If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
I think this may be overestimating how often people buy tablets. My wife has an iPad Air 1 or 2, so it's close to 10 years old and mostly sits in a drawer. I had a VERY old iPad 2 that I held off on replacing because I wanted to wait for a multi-user iPad.
I finally gave up and bought a Mini6 a year or two ago, which gets.... also minimal use. And I'm sure not buying ANOTHER tablet we're not going to use.
If they were multi-user I actually think we'd both get more value out of it, and upgrade our one device more often.
> If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
I get it, but an iPad starts at $349; often available for less.
At this point, an iPad is no different than a phone—most people wouldn't share a single tablet.
Laptops and desktops that run macOS, Linux, or Windows, which are multi-user operating systems, have largely become single-user devices.
> an iPad starts at $349; often available for less.
It's less about the cost and more about having to have another stupid device to charge, update, and keep track of, when a tablet is not a device that gets used enough by any one person to be worth all that. It would be much more convenient to have a single device on a coffee or end table which all family members could use when they need to do more than you can do on a phone.
> Laptops and desktops that run macOS, Linux, or Windows, which are multi-user operating systems, have largely become single-user devices.
Maybe. Probably 90% of work laptops are single-user, I'm sure. But for home computers, multi-user can be very useful. And it's better than ever to use laptops as dumb terminals, since most people's stuff is all in the cloud. It's not nearly as much trouble to get your secondary user account on a spare laptop in the living room to be useful as it was in the Windows XP days. Just having a browser that's signed into your stuff, plus Messages or WhatsApp, and maybe Slack/Discord/etc. is enough.
> most people wouldn't share a single tablet.
Since iPads have never supported doing so in a sane way, that unfounded assertion is just as likely a reflection of how terrible the experience is today: if you share one, someone else will be accidentally marking your messages as read, you'll be polluting their browser or YouTube history, etc.
It's also the kind of dismissive claim true Apple believers tend to trot out when someone points out a shortcoming: "Nobody wants to use a touchscreen laptop!" "Nobody wants USB-C on an iPhone when Lightning is slightly smaller!" "Nobody needs an HDMI port or SD slot on a MacBook Pro!" "Nobody needs a second port on the 12-inch MacBook!" Most of the above things have come true except the touch laptop, and somehow it hasn't hurt anyone, but the "nobody wants..." crew immediately goes quiet when Apple finally [re-]embraces something.
We use iPads interchangeably. All personal apps like banking are on phones. Some apps that only I would use such as for the roomba and car are on both.
Having profiles for the kids however would be nice though. But most apps have that built in themselves.
>Having profiles for the kids however would be nice though.
I find it madness that Apple doesn't have this already.
Never; it'll be single-user macOS.
Thank goodness… this will hopefully help keep app bundle sizes down, and allow developers to avoid calling AI APIs for trivial stuff like summaries.
back to "glass" UI element/design? Early 2000s is back, I guess.
Edit: surprised Apple is dumping resources into gaming; maybe they're playing the long game here?
After reading the book "Apple in China", it’s hilarious to observe the contrast between Apple as a ruthless, amoral capitalist corporation behind the scenes and these WWDC presentations...
This just in: company that spends billions on marketing is effective at marketing their products. News at 11.
News at 11.
…10 Central and Mountain.
> New Design with Liquid Glass
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
https://en.wikipedia.org/wiki/Aqua_%28user_interface%29
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
To me it looks more like Windows Vista's "Aero" than OS X's "Aqua".
And I couldn't be happier to see it back. I have not been a fan of the flattening of UI design over the last 15 years.
But the opposite of "flat" is not "transparent".
This was posted in another HN thread about Liquid Glass: https://imgur.com/a/6ZTCStC . I'm sure Apple will tweak the opacity before it goes live, but this looks horribly insane to me.
Yeah, it definitely needs work. But I hope they tone it down like Microsoft did with the Aero glass effects between Vista and Windows 7.
They are heading in a good direction; it just needs to be toned down. But like any new graphics technology, the first year is "WOW WE CAN DO X!!!!", then the more tame stuff comes along.
They explicitly mention this is (paraphrasing) "bringing elements from visionOS to all your devices" in the video in TFA.
I'll just want the option to turn it off because it will use extra CPU cycles just existing.
I remember the catastrophe of Windows Vista, and how you needed a capable GPU to handle the glass effect. Otherwise, one of your (Maybe two) CPU cores would have to process all that overhead.
Bleary-eyed, waking up while trying to find my reading glasses, that interface would be essentially useless to me.
Yes, I immediately thought of Windows Aero too!!! I wasn’t able to enable it until I got a 9800GX2 a few years later, very cool at the time combined with the ability to have movies as your desktop background. It was a nice vibe.
It's the second coming of Frutiger Aero[1]
It also looks like KDE 4.
Maybe this is a consequence of the Frutiger Aero trend, and users miss the time when user interfaces were designed to be cool instead of only useful.
Current interfaces are not aimed at being optimally useful. Padding everywhere these days means more time scrolling and wasted screen space. Animations everywhere mean a lot of wasted time watching pixels move instead of the computer/phone giving us control immediately after it did the thing we (maybe) asked for. Hiding scrollbars is a nightmare in general in desktop OSes but is the default (I once lost half an hour setting up a proxy because the "save" button was hidden behind a scrollbar).
Usability feels like it has only gone downhill since Windows 7. (On the other hand, Windows has plenty of accessibility features that help a lot in restoring usability.)
I love that we're getting some texture back. UI has been so boring since iOS 7.
Sebastiaan de With of Halide fame did a writeup about this recently, and I think he makes some great points.
Interesting, I never made the connection between dashboard widgets UI and early iPhone UI. It does make sense, early iPhone had a UI that was glossier and more colorful than "metallic" aqua.
Open the link and search for "physicality is the new skeuomorphism".
Read on and:
They are completely dynamic: inhabiting characteristics that are akin to actual materials and objects. We’ve come back, in a sense, to skeuomorphic interfaces — but this time not with a lacquer resembling a material. Instead, the interface is clear, graphic and behaves like things we know from the real world, or might exist in the world. This is what the new skeuomorphism is. It, too, is physicality.
Well worth reading for the retrospective of Apple's website taking a twenty year journey from flatland and back.
They’re describing material design, which Google popularized. Skeuomorphism with things that could exist in the real world, avoid breaking the laws of physics, etc. Which then morphed into flat design as things like drop shadows were seen as dated. You are here.
I kind of hate it. Every use of it in the videos shown so far has moments where it's so transparent as to have borderline unreadable contrast.
Same. And white on light blue is just as bad. Looks like I’ll be using more accessibility features.
This is the first time I have ever thought “maybe I don’t want to update my phone“. Entirely because of the look.
In Settings -> Accessibility -> Display, you can enable Increase Contrast or Reduce Transparency to get rid of some of the worse glass effects, and Settings -> Accessibility -> Motion, you can enable Reduce Motion to get rid of the some of the light effects for content passing under glass buttons.
The last example in the first carousel is the worst; the bottom glass elements have completely unreadable text.
I agree with you, I hope they quickly tweak this into something more readable. There could be a really nice mid ground here.
I used to find these changes compelling but now I think they are mostly a pain in the ass or questionable.
Proof of a well-designed UI is stability, not change.
Reads to me strongly as an effort to give traditional media something shiny to put above the headline and keep the marketing engine running.
If you read the press release, you can see it's 100% about marketing and nothing else.
Apple will spend 10x the effort to tell you why a useless feature is necessary before they look at user feedback.
I’m usually on board with Apple UI changes but something about all the examples they showed today just looked really cheap.
My only guess is this style looks better while using the product but not while looking at screenshots or demos built off Illustrator or whatever they’re using.
I love it. Reminds me of Windows 7. The nostalgia is too strong with this one.
In fact, Apple once did a version of Aqua that did an overengineered materials-based rasterization at runtime, including a physically correct glass effect.
It was too slow and was later optimized away to run off of pre-rendered assets with some light typical style engine procedural code.
Feels like someone just dusted off the old vision now that the compute is there.
Just one or two years ago I remember a handful of articles popping up that Gen Z was really into Frutiger Aero, that's the first thing I thought of, with the nature themes and skeuomorphic UI elements.
https://www.yahoo.com/lifestyle/why-gen-z-infatuated-frutige...
Thanks, I wasn't even aware the style had gotten a name in the meantime.
Back when Jobs was introducing one of the Mac OS X versions, there was a line that stuck with me.
Showing off the pulsating buttons, he said something like "we have these processors that can do billions of calculations a second, we might as well use them to make it look great".
And yet a decade later, they were undoing all of that to just be flat and boring. I'm glad they are using the now trillions of calculations a second to bring some character back into these things.
He was selling. The audience were sales. OS's were fully matured at that point. Computers were something you buy at a store. It was a selling point.
A decade later they were handling the windfall that came with smartphone ascendancy. An emergence of an entirely new design language for touch screen UI. Skeuomorphism was slowing that all down.
Making it all flat meant making it consistent, which meant making it stable, which meant scalability. iOS7 made it so that even random developers' apps could play along and they needed a lot of developers playing along.
Liquid Glass is not adding a dimension. It is still flat UI, sadly. They just gave the edges of the window a glass like effect. There's also animation ("liquid" part). Overall, very disappointing.
The world flip flops from flat to 3D UI design every few years.
We were in a flat era for the last several years, this kicks off the next 3D era.
HN should have a conference-findings thread for something like WWDC, with priority impact rankings
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency is now visible in Instruments and debuggable, and high-performance tracing avoids sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI Look-and-feel changes across all platforms conceal the fact that there are very few new API's.
Low content overall: scan the platforms and you see only L&F, app intents, and widgets. Is that really all? (Thus far?) It's quite concerning.
Also low quality: online links point nowhere, and half-baked technologies are filling presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
> This year, App Intents gains support for visual intelligence. This enables apps to provide visual search results within the visual intelligence experience, allowing users to go directly into the app from those results.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
Fwiw, spotlight in MacOS seems to be getting a major revamp too (basing this on the WWDC livestream, but there seems to be a note about it on their blog[0] too), pushing it a bit more in the direction of tools like Alfred or Raycast, and allegedly also being faster (but that's marketing speak of course, so we'll see when Fall comes).
[0]: https://www.apple.com/newsroom/2025/06/macos-tahoe-26-makes-...
“How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like <…> searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name”
Even I can build search functionality like this, and have. Deterministically. No LLMs or "AI" needed. In fact, for satisfying the above criteria this kind of implementation is still far more reliable.
I've also written search code like this. It's trivial, at least at the scale of installed apps and such on a single computer.
AI makes it strictly worse. I do not want intelligence. I want to type, for example, "saf" and have Safari appear immediately, in the same place, every time, without popping into a different place as I'm trying to click it because a slower search process decided to displace the result. No "temperature", no randomness, no fancy crap.
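For concreteness, the kind of boring, deterministic matcher being described is only a few lines. This is illustrative Swift, obviously not Spotlight's actual code: prefix hits first, then substring hits, always in the same order.

    // Illustrative only -- a launcher-style matcher with zero randomness.
    let installedApps = ["Safari", "Shortcuts", "System Settings", "Xcode"]

    func matches(_ query: String, in names: [String]) -> [String] {
        let q = query.lowercased()
        guard !q.isEmpty else { return [] }
        // Prefix hits first, then substring hits, both in stable alphabetical order.
        let prefixHits = names.filter { $0.lowercased().hasPrefix(q) }.sorted()
        let substringHits = names.filter { !$0.lowercased().hasPrefix(q) && $0.lowercased().contains(q) }.sorted()
        return prefixHits + substringHits
    }

    print(matches("saf", in: installedApps))  // ["Safari"], every single time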
Quicksilver worked great back in the day before Spotlight was ever even a thought.
I have no idea what happened to my Mac in the last month, but for some reason Spotlight isn't able to find any app by name anymore. If I search for Safari, it will show me results for everything except the Safari app. I even tried searching for Safari.app and still no results. It can't find any apps.
User: yells Feedback into void.
Apple's integration of AI into its MacOS is the one reason why I am considering a switch back to Linux after my current laptop dies.
If that’s the one reason, have you considered just… not using the AI features?
Sure, you can for now. But what about when you're forced to use them?
Well if that hypothetical situation ever happens, you can just switch to Linux then.
There is no real need and the issue is hypothetical?
I find it offensive to have any generative AI code on my computer.
I promise you there is Linux code that has been tab-completed with Copilot or similar, perhaps even before ChatGPT ever launched.
That is true. I actually was ambiguous in my post, because I meant code that generates stuff, not that was generated by AI, even though I don't like the latter, either.
I think I know what you meant. You mean you don't want code that runs generative AI in your computer? But, what you wrote could also mean you don't want any code running that was generated by AI. Even with open source, your computer will be running code generated by AI as most open source projects are using it. I suspect it will be nearly impossible to avoid. Most open source projects will accept AI generated code as long as it's been reviewed.
Good point, and you were right. I was ambiguous. I meant a system that generates stuff, not stuff that was generated by AI. But I'd rather not use stuff that was generated by AI, either. But you are also right. That will become impossible, and probably already is. Not a very nice world, I think. Best thing to do then is to minimize it, and avoid computers as much as possible....
So, then don’t do that? It’s not like it’s automatically generating code without you asking.
I didn't say "generating code", I meant I find it offensive to have any code sitting on my computer that generates code, whether I use it or not. I prefer minimalism: just have on my computer what I will use, and I have a limited data connection which means even more updates with useless code I won't use.
> I find it offensive to have any generative AI code on my computer.
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
Not forced to use, forced to download and waste 2GB of disk space.
So don't. You have to tell the computer to download Apple Intelligence. It doesn't just happen on its own.
Just don't push the Yes button when it offers.
With a single toggle, you can turn off Apple Intelligence
See (System) Settings
But I can't toggle off downloading it, which is 2GB on my limited connection and 2GB of MY disk space.
This reads like the crotchety and persnickety 60-somethings in the 1990's who said the internet was a passing and annoying fad.
I was musing before sleep a few days ago about how maybe the internet still is just a fad. We’ve had a few decades of it, yeah, but maybe in the future people will look at it as boring tech, just like I viewed VCRs or phones when I was growing up. Maybe we’re still addicted to the novelty of it, but in the future it fades into the background of life.
I’ve read stories about how people were amazed at calling each other and would get together or meet at the local home with a phone installed, a gathering spot, make an event about it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette, it was a wild Wild West. Now it’s normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
A few years ago I would've said you were incredibly cynical, but nowadays with so much AI slop around social media and just tonnes of bad content I tend to agree with you.
Now the Cyberpunk pen and paper RPG seems prophetic if turn your head sideways a bit https://chatgpt.com/share/684762cc-9024-800e-9460-d5da3236cd...
I think younger me would think the same. Its not even the AI slop or bad content but also the intrusive tracking, data collection, and the commercialization of interests. I just feel gross participating.
I do think there is a lot of valid criticism of the internet. I certainly don't think it's an annoying fad but I do think it has caused a lot of bad things for humanity. In some ways, life was much better without it, even though there are some benefits.
It is impossible to have a negative opinion of AI without getting silly comments like this, just one step removed from calling you a boomer or a Luddite. Yes, all technological progress is good, and if you don’t agree you’re a dumb hick.
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
I am a Luddite, but I think that's a good thing. I don't mind the negative comments at all. I get them all the time.
On the other side of that are the people screaming that AI is murder.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
It's also exhausting to see endless new applications of AI, even worse IMO.
Actually, most "AI" cults blindly worship at their own ignorance:
https://www.youtube.com/watch?v=sV7C6Ezl35A
The ML hype-cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, and why your karma still gets incinerated if you point out its obvious limitations in a thread.
Have a wonderful day, =3
Good move. Not sure if they are exposing other modalities as well?
I guess LLM and AI are forbidden words in Apple language. They do their utmost to avoid these words.
They took the clever (in my opinion) decision to rebrand "AI" as "Apple Intelligence", presumably partly in order to avoid the infinite tired "it's not really AI" takes that have surrounded that acronym for decades.
It's about as cringe as that Chinese guy with the funny-shaped head, who said a few years ago that AI for him means "alibaba intelligence".
And that's why we haven't heard of him since then.
LLMs get six mentions in this article.
Nah, I think they made it model agnostic, which is kinda smart.
Search for "large language model" instead of "LLM".
Because they don't own it, or the models they (don't) own aren't good enough for a standalone brand? Sure seems like it.
> And it's local and on device.
Does that explain why you don't have to worry about token usage? The models run locally?
> You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]
I have the same question. Their "Deep dive into the Foundation Models framework" video is nice for seeing code using the new `FoundationModels` library, but for a "deep dive" I would like to learn more about tokenization. Hopefully these details are eventually disclosed, unless someone here already knows?
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
I guess I'd say "mu": from a dev perspective, you shouldn't ever care about tokens; if your inference framework isn't abstracting that for you, your first task would be to patch it to do so.
To the parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes.
I maintain a llama.cpp wrapper on everything from web to Android, and I can't quite see what extra information you'd get from individual token IDs from the API, beyond what you'd get from wall-clock time and checking their vocab.
The direction the software engineering is going in with this whole "vibe coding" thing is so depressing to me.
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
To be meta about it, I would argue that thinking "generatively" is a craft in and of itself. You are setting the conditions for work to grow rather than having top-down control over the entire problem space.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLM's have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship to the ways in which we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
Reading this post is like playing buzz word bingo!
This sounds like a boomer trying to resist using Google in favor of encyclopedias.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
Karpathy's definition of vibe coding as I understood it was just verbally directing an agent based on vibes you got from the running app without actually seeing the code.
Is any AI assisted coding === Vibe Coding now?
Coding with an AI can be whatever one can achieve; however, I don't see how vibe coding would be related to autocomplete: with autocomplete you type a bit of code that a program (AI or not) completes. In vibe coding you almost don't interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; however, if there are contradictory views I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...
(You may also consider rethinking your first paragraph to bring it up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan.)
No, this sounds like an IC resisting becoming a manager.
No, that's what separates vibe coding from the glorified autocomplete. As originally defined, vibe coding doesn't include a final review of the generated code, just a quick spot check, and then moving on to the next prompt.
You can take an augmented approach, a sort of capability fusion, or you can spam regenerate until it works.
I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro, which drastically limits your user base, and you would still have to use an online API for most apps. I was hoping they'd provide a free AI API on their private cloud for other devices, even if it also runs small models.
If you start writing an app now, by the time it's polished enough to release it, the iPhone 16 will already be a year old phone, and there will be plenty potential customers.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Basically this; network effects are huge. People will definitely buy hardware if it solves a problem for them. So many people bought BlackBerrys just for BBM.
Developers could be adding a feature utilizing LLMs to their existing app that already has a large user base. This could be a matter of a few weeks from an idea to shipping the feature. While competitors use API calls to just "get things done", you are trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone 16-only feature helps anyone's product development, especially when the quality remains to be seen.
Exactly. It can take at least a couple of years to get big/important apps to use new iOS and macOS features. By then the iPhone 16 would be quite common.
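To the point about serving both device classes: the obvious pattern is to check whether the on-device model is available and fall back to whatever remote API you already use. A rough sketch; the SystemLanguageModel availability check is assumed from the WWDC material, and callCloudAPI is a hypothetical stand-in for your own backend:

    import FoundationModels

    // Hypothetical stand-in for whatever remote endpoint the app already uses.
    func callCloudAPI(prompt: String) async throws -> String {
        fatalError("wire up your own backend here")
    }

    // Sketch only -- availability API assumed from the WWDC material, not verified.
    func summarize(_ text: String) async throws -> String {
        switch SystemLanguageModel.default.availability {
        case .available:
            // Newer devices with Apple Intelligence enabled: use the free on-device model.
            let session = LanguageModelSession()
            return try await session.respond(to: "Summarize: \(text)").content
        default:
            // Older devices, or Apple Intelligence turned off: fall back to a server call.
            return try await callCloudAPI(prompt: "Summarize: \(text)")
        }
    }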
If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?
Or do have the ability to reach out to the internet for up to the moment information?
In addition to the context you provide, the API lets you programmatically declare tools.
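For the curious, declaring a tool looks roughly like this in the WWDC material. Every type name here (Tool, ToolOutput, @Generable, @Guide) is recalled from the session rather than checked against final docs, so treat the shape as an assumption, and the headline fetching is a placeholder for your own networking:

    import FoundationModels

    // Rough sketch of a tool the model can call for up-to-the-moment information.
    // Protocol shape assumed from the WWDC session; verify against the shipping docs.
    struct HeadlinesTool: Tool {
        let name = "latestHeadlines"
        let description = "Fetches today's headlines for a topic, so answers aren't limited to training data."

        @Generable
        struct Arguments {
            @Guide(description: "Topic to fetch headlines for")
            var topic: String
        }

        func call(arguments: Arguments) async throws -> ToolOutput {
            // Hypothetical: replace with real networking.
            let headlines = ["Placeholder headline about \(arguments.topic)"]
            return ToolOutput(headlines.joined(separator: "\n"))
        }
    }

    // A session would then be created with the tool attached, e.g.:
    // let session = LanguageModelSession(tools: [HeadlinesTool()])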