Back

Apple's On-Device and Server Foundation Models

764 points18 hoursmachinelearning.apple.com
rishabhjain119814 hours ago

For people interested in AI research, there's nothing new here.

IMO they should do a better job of referencing existing papers and techniques. The way they wrote about "adaptors" can make it seem like it's something novel, but it's actually just re-iterating vanilla LoRA. It was enough to convince one of the top-voted HackerNews comments that this was a "huge development".

Benchmarks are nice though.

lolinder13 hours ago

> For people interested in AI research, there's nothing new here.

Was anyone expecting anything new?

Apple has never been big on living at the cutting edge of technology exploring spaces that no one has explored before—from laptops to the iPhone to iPads to watches, every success they've had has come from taking tech that was already prototyped by many other companies and smoothing out the usability kinks to get it ready for the mainstream. Why would deep learning be different?

csvm7 hours ago

Prototyping tech is one thing; making it a widely adopted success is another. For instance, Apple was the first to bring WiFi to laptops in 1999. Everyone laughed at them at the time. Who needs a wireless network when you can have a physical LAN, ey?

steve19773 hours ago

> For instance, Apple was the first to bring WiFi to laptops in 1999. Everyone laughed at them at the time. Who needs a wireless network when you can have a physical LAN, ey?

From https://en.wikipedia.org/wiki/AirPort:

"AirPort 802.11b card"

"The original model, known as simply AirPort card, was a re-branded Lucent WaveLAN/Orinoco Gold PC card, in a modified housing that lacked the integrated antenna."

gumby3 hours ago

That was also how lucent’s access points worked.

bildung4 hours ago

Was that really the case? I remember they were mocked for e.g. offering wifi only, firewire only etc., while the respective removed alternatives were way more common.

Matl4 hours ago

On the other hand, people who laughed at them removing the 3.5mm jack can still safely laugh away.

+1
throw0101d4 hours ago
jb19913 hours ago

Interesting that you suggest laughing at their decision to remove the headphone jack, when it was actually just the first of an industry-wide shift that has done so by other companies.

IOT_Apprentice11 hours ago

Apple was first with 64 bit iPhone chips. Remember Qualcomm VP at the time claimed it was nothing. Apple Silicon for M1 was impressive for instant in low power high performance.

lolinder10 hours ago

Those are both still (major) incremental improvements to known tech, not cutting-edge research. Apple takes what other companies have already done and does it better.

sunshinerag8 hours ago

all cutting-edge research other companies are supposedly doing are also incremental. Depends on your vantage point.

prmoustache10 hours ago

But last at bringing a calculator on the iPad =)

jeanlucas12 hours ago

> For people interested in AI research

I think he is pointing out for people interested in research.

OTOH, it is interesting to see how a company is applying AI to customers at the end. It will bring up new challenges that will be interesting from at least an engineering point of view.

rmbyrro5 hours ago

I think you misinterpreted OP's comment. Apple makes it sound like there's smth new, but there isn't. They don't have to innovate, but it's good practice to credit who've done what they're taking and using. Also to use the names everyone else is already using.

marci3 hours ago

The strange thing is Apple did mention (twice) in the article that their adapters are loras so I don't understand OP's comment.

aceazzameen3 hours ago

I gathered from OP's "huge development" comment he was talking about others people's popular perception that it wasn't a lora.

steve19773 hours ago

> Also to use the names everyone else is already using.

That would be a very un-Apple thing to do. They really like to use their own marketing terms for technologies. It's not ARM, it's Apple Silicon. It wasn't Wi-Fi, it was AirPort. etc. etc.

+1
gumby3 hours ago
caseyy3 hours ago

> Apple has never been big on living at the cutting edge of technology

There was such a time. Same as with Google. Interestingly, around 2015-2016 both companies significantly shifted to iterative products from big innovations. It's more visible with Google than Apple, but here's both.

Apple:

- Final Cut Pro

- 1998: iMac

- 1999: iBook G3 (father of all MacBooks)

- 2000: Power Mac G4 Cube (the early grandparent of the Mac Mini form factor), Mac OS X

- 2001: iPod, iTunes

- 2002: Xserve (rackable servers)

- 2003: Iterative products only

- 2004: iWork Suite, Garage Band

- 2005: iPod Nano, Mac mini

- 2006: Intel Macs, Boot Camp

- 2007: iPhone and Apple TV

- 2008: MacBook Air, iPhone 3G

- 2009: iPhone 3Gs, all-in-one iMac

- 2010: iPad, iPhone 4

- 2011: Final Cut Pro X

- 2012: Retina displays, iBooks Author

- 2013: iWork for iCloud

- 2014: Swift

- 2015: Apple Watch, Apple Music

- 2016: Iterative products only

- 2017: Iterative products mainly, plus ARKit

- 2018: Iterative products only

- 2019: Apple TV +, Apple Arcade

- 2020: M1

- 2021: Iterative products only

- 2022: Iterative products only

- 2023: Apple Vision Pro

Google:

- 1998: Google Search

- 2000: AdWords (this is where it all started going wrong, lol)

- 2001: Google Images Search

- 2002: Google News

- 2003: Google AdSense

- 2004: Gmail, Google Books, Google Scholar

- 2005: Google Maps, Google Earth, Google Talk, Google Reader

- 2006: Google Calendar, Google Docs, Google Sheets, YouTube bought this year

- 2007: Street View, G Suite

- 2008: Google Chrome, Android 1.0

- 2009: Google Voice, Google Wave (early Docs if I recall correctly)

- 2010: Google Nexus One, Google TV

- 2012: Google Drive

- 2013: Chromecast

- 2014: Android Wear, Android Auto, Google Cardboard, Nexus 6, Google Fit

- 2015: Google Photos

- 2016: Google Assistant, Google Home

- 2017: Mainly iterative products only, Google Lens announced but it never rolled out really

- 2018: Iterative products only

- 2019: Iterative products only

- 2020: Iterative products only, and some rebrands (Talk->Chat, etc)

- 2021: Iterative products only, and Tensor Chip

- 2022: Iterative products only

- 2023: Iterative products only, and Bard (half-baked).

barbecue_sauce3 hours ago

No Newton?

throwaway4good11 hours ago

I thought the news of them using Apple Silicon rather than NVIDIA in their data centers was significant.

Perhaps there is still hope of a relaunch of xserve; with the widespread use of Apple computers amongst developers Apple has a real chance of challenging NVIDIA's CUDA moat.

pjmlp11 hours ago

Not at Apple's price points.

throwaway4good11 hours ago

I think NVIDIA has the highest hardware markup at the moment.

+3
Hugsun9 hours ago
+2
pjmlp10 hours ago
bayindirh10 hours ago

I mean, even Apple can't match the markups nVidia has right now. If you break a GPU in your compute server, you wait months for a replacement, and the part is sent back if you can't replace it in five days.

Crazy times.

+1
c0balt9 hours ago
derefr12 hours ago

I think the thing they're saying that's novel, isn't what they have (LoRAs), but where and when and how they make them.

Rather than just pre-baking static LoRAs to ship with the base model (e.g. one global "rewrite this in a friendly style" LoRA, etc), Apple seem to have chosen a bounded set of behaviors they want to implement as LoRAs — one for each "mode" they want their base model to operate in — and then set up a pipeline where each LoRA gets fine-tuned per user, and re-fine-tuned any time the data dependencies that go into the training dataset for the given LoRA (e.g. mail, contacts, browsing history, photos, etc) would change.

In other words, Apple are using their LoRAs as the state-keepers for what will end up feeling to the user like semi-online Direct Preference Optimization. (Compare/contrast: what Character.AI does with their chatbot response ratings.)

---

I'm not as sure, from what they've said here, whether they're also implying that these models are being trained in the background on-device.

It could very well be possible: training something that's only LoRA-sized, on a vertically-integrated platform optimized for low-energy ML, that sits around awake but doing nothing for 8 hours a day, might be practical. (Normally it'd require a non-quantized copy of the model, though. Maybe they'll waste even more of your iPhone's disk space by having both quantized and non-quantized copies of the model, one for fast inference and the other for dog-slow training?)

But I'm guessing they've chosen not to do this — as, even if it were practical, it would mean that any cloud-offloaded queries wouldn't have access to these models.

Instead, I'm guessing the LoRA training is triggered by the iCloud servers noticing you've pushed new data to them, and throwing a lifecycle notification into a message queue of which the LoRA training system is a consumer. The training system reduces over changes to bake out a new version of any affected training datasets; bakes out new LoRAs; and then basically dumps the resulting tensor files out into your iCloud Drive, where they end up synced to all your devices.

throwthrowuknow4 hours ago

I think you’re misunderstanding what they mean by adapting to use cases. See this passage:

> The adapter models can be dynamically loaded, temporarily cached in memory, and swapped — giving our foundation model the ability to specialize itself on the fly for the task at hand

This along with other statements in the article about keeping the base model weights unchanged says to me that they are simply swapping out adapters on a per app or per task basis. I highly doubt they will fine tune adapters on user data since they have taken a position against this. I wonder how successful this approach will be vs merging the adapters with the base model. I can see the benefits but there are also downsides.

Hugsun9 hours ago

There is no way they would secretly train loras in the background of their user's phones. The benefits are small compared to the many potential problems. They describe some LoRA training infrastructure which is likely using the same capacity as they used to train the base models.

> ...each LoRA gets fine-tuned per user...

Apple would not implement these sophisticated user specific LoRA training techniques without mentioning them anywhere. No big player has done anything like this and Apple would want the credit for this innovation.

wmf12 hours ago

I don't think the LoRAs are fine-tuned locally at all. It sounds like they use RAG to access data.

derefr12 hours ago

Consider a feature from earlier in the keynote: the thing Notes (and Math Notes) does now where it fixes up your handwriting into a facsimile of your handwriting, with the resulting letters then acting semantically as text (snapping to a baseline grid; being reflowable; being interpretable as math equations) but still having the kind of long-distance context-dependent variations that can't be accomplished by just generating a "handwriting font" with glyph variations selected by ligature.

They didn't say that this is an "AI thing", but I can't honestly see how else you'd do it other than by fine-tuning a vision model on the user's own handwriting.

Hugsun9 hours ago

I didn't see the presentation but judging by your description, this is achievable using in-context learning.

+1
wmf12 hours ago
rvaish11 hours ago

Easel has been on iMessage for a bit now: https://apps.apple.com/us/app/easel-ai/id6448734086

monkeydust3 hours ago

Feel Apple should have just focused on their models for this one and not complicate the conversation with OpenAI. They could have left that to another announcement later.

Quick straw poll survey around the office, many think their data will be sent off to OpenAI by default for these new features which is not the case.

selimnairb5 hours ago

Very little of the “AI” boom has been novel, most has been iterative elaborations (though innovative nonetheless). Academics have been using neural network statistical models for decades. What’s new is the combination of compute capability and data volume available for training. It’s iterative all the way down though, that’s how all technologies are developed.

sigmoid105 hours ago

Most people don't realize this, but almost all research works that way. Only the media spins research as breakthrough-based, because that way it is easier to sell stories. But almost everything is incremental/iterative. Even the transformer architecture, which in some way can be seen as the most significant architectural advancement in AI in the past years, was a pretty small, incremental step when it came out. Only with a lot of further work building on top of that did it become what we see today. The problem is that science-journalists vastly outnumber scientists producing these incremental steps, so instead of reporting on topics when improvements actually accumulated to a big advancement, every step along the way gets its own article with tons of unnecessary commentary heralding its features.

threeseed14 hours ago

It's a huge development in terms of it being a consumer-ready, on-device LLM.

And if Karpathy thinks so then I assume it's good enough for HN:

https://x.com/karpathy/status/1800242310116262150

rishabhjain119813 hours ago

The productization of it (like Karpathy mentioned) is awesome. But I think the URL for that would be this maybe? [link](https://www.apple.com/apple-intelligence/)

kfrzcode13 hours ago

[flagged]

threeseed13 hours ago

a) I would trust Karpathy over Elon given he doesn't have a competing product.

b) Apple only provides information to ChatGPT when the user consents to doing so and the information is only for that request i.e. it is not logged for future training.

pests12 hours ago

The temp around Elon here is lower than you think. I would say almost the exact opposite of your claim.

slimebot8011 hours ago

Elon is talking out his arse as usual.

camillomiller11 hours ago

He is factually wrong and has been rekted by his own community notes

Cthulhu_9 hours ago

Thing is, Apple takes these concepts and polishes them, makes them accessible to maybe not laypeople but definitely a much wider audience compared to those already "in the industry", so to speak.

WiSaGaN14 hours ago

This gives me the vibe of calling high resolution screens as "retina" screens.

viktorcode2 hours ago

Retina means high pixel density, not high resolution. And there are very few standalone displays on the market which can be called “retina”, unfortunately.

dishsoap12 hours ago

I don't see anything wrong with that at all. They've created a branding term that allows consumers to get an idea of the sort of pixel density they can expect without having to actually check, should they not want to bother.

necovek11 hours ago

Except that everyone has different visual acuity and different distance they use the same devices at, and in the end, "retina" means nothing at all.

But this is exactly the type of marketing Apple is good at, though "retina" is probably not the most successful example.

+1
theshrike7911 hours ago
ngcc_hk11 hours ago

Agreed. It is not high resolution as such, but high resolution that the user can relate to - like cannot see the pixel.

Still remember the hard time using Apple newton in a conference vs the palm freely on loan in a Gartner group conference. Palm solved a problem, even though not very Apple … user can input on a small device. I kept it, on top of my newly bought newton.

It is the user …

pyinstallwoes7 hours ago

Still no manufacturer compares to the quality of apple screens and resolution …

Malcolmlisk6 hours ago

Those screens are produced by samsung.

mensetmanusman3 hours ago

Part of the screen is, yes. Apple designs the full stack and sources new technology from multiple suppliers including Samsung.

PaulRobinson5 hours ago

By your logic, I own a Foxconn smartphone with a FreeBSD-based OS. If you bought a Porsche, would you call it a Volkswagen?

scosman7 hours ago

I think you’re referring to my comment about this being huge for developers?

Just want to point out I call this launch huge, didn’t say “huge development” as quoted, and didn’t imply what was interesting was the ML research. No one in this thread used the quoted words, at least that I can see.

My comment was about dev experience, memory swapping, potential for tuning base models to each HW release, fine tune deployment, and app size. Those things do have the potential to be huge for developers, as mentioned. They are the things that will make a local+private ML developer ecosystem work.

I think the article and comment make sense in their context: a developer conference for Mac and iOS devs.

Apple also explicitly says it’s LoRA.

steve19773 hours ago

You know what company you are talking about here?

marcellus2314 hours ago

They refer to LoRA explicitly in the post.

rishabhjain119813 hours ago

Although I caught that on the first read, I found myself questioning when I read the adaptors part, "is this not just LoRA...".

Maybe it's my fault as a reader, but I think the writing could be clearer. Usually in a research paper you would link to the LoRA paper there too.

lhl3 hours ago

I think your conclusion is uncharitable or at least depends on how deep your interest in AI research actually is. Reading the docs, there are at least several points of novelty/interest:

* Clearly outlining their intent/policies for training/data use. Committing to no using user data or interactions for training their base models is IMO actually a pretty big deal and a differentiator from everyone else.

* There's a never-ending stream of new RL variants ofc, but that's how technology advances, and I'm pretty interested to see how these compare with the rest: "We have developed two novel algorithms in post-training: (1) a rejection sampling fine-tuning algorithm with teacher committee, and (2) a reinforcement learning from human feedback (RLHF) algorithm with mirror descent policy optimization and a leave-one-out advantage estimator. We find that these two algorithms lead to significant improvement in the model’s instruction-following quality."

* I'm interested to see how their custom quantization compares with the current SoTA (probably AQLM atm)

* It looks like they've done some interesting optimizations to lower TTFT, this includes the use of some sort of self-speculation. It looks like they also have a new KV-cache update mechanism and looking forward to reading about that as well. 0.6ms/token means that for your average I dunno, 20 token query you might only wait 12ms for TTFT (I have my doubts, maybe they're getting their numbers from much larger prompts, again, I'm interested to see for myself)

* Yes, it looks like they're using pretty standard LoRAs, the more interesting part is their (automated) training/re-training infrastructure but I doubt that's something that will be shared. The actual training pipeline (feedback collection, refinement, automated deployment) is where the real meat and potatoes of being able to deploy AI for prod/at scale lies. Still, what they shared about their tuning procedures is still pretty interesting, as well as seeing which models they're comparing against.

As this article doesn't claim to be a technical report or a paper, while citations would be nice, I can also understand why they were elided. OpenAI has done the same (and sometimes gotten heat for it, like w/ Matroyshka embeddings). For all we know, maybe the original author had references, or maybe since PEFT isn't new to those in the field, that describing it is just being done as a service to the reader - at the end of the day, it's up to the reader to make their own judgements on what's new or not, or a huge development or not. From my reading of the article, your conclusion, which funnily enough is now the new top-rated comment on this thread isn't actually much more accurate the the one old one you're criticizing.

franzb8 hours ago

This isn't about AI research, it's about delivering AI at unimaginable scale.

gigglesupstairs13 hours ago

Was there anything about searching through our own photos using prompts? I thought this could be pretty amazing and still a natural way to find very specific photos in one’s own photo gallery.

mazzystar11 hours ago

Run OpenAI's CLIP model on iOS to search photos. https://github.com/mazzzystar/Queryable

gigglesupstairs7 hours ago

Yes, exactly this. I have had this for a while and works wonderfully well in most cases but it’s wonky and not seamless. I wanted a more integrated approach with Photos app which only Apple can bring to the table.

avereveard13 hours ago

Which is in turn just multimodal embedding

Besides I could do "named person on a beach in August" and get the correct thing in photos on Android photos, so I don't get it.

It's amazing for apple users if they didn't have it before. But from a tech stand point people could have had it for a while.

azinman213 hours ago

Photos has had this for a while with structured natural language queries, and this kind of prompt was part of the WWDC video.

theshrike7911 hours ago

The difference is that Apple has been doing this on-device for maybe 4-5 years already with the Neural Engine. Every iOS version has brought more stuff you can search for.

The current addition is "just" about adding a natural language interface on top of data they already have about your photos (on device, not in the cloud).

My iPhone 14 can, for example, detect the breed of my dog correctly from the pictures and it can search for a specific pet by name. Again on-device, not by sending my stuff to Google's cloud to be analysed.

+1
fauigerzigerk8 hours ago
rvaish11 hours ago

reminds me of Easel on iMessage: https://easelapps.ai/

kfrzcode13 hours ago

"AI for the rest of us."

wkat424212 hours ago

Except Apple isn't really for the rest of us. Outside of America and a handful wealthy western countries it's for the top 5-20% earners only.

whynotminot2 hours ago

Who do you think this presentation is geared toward?

jahewson10 hours ago

Approximately 33% of all smartphones in the world are iPhones.

rvnx8 hours ago

60% in the US

throwaway203712 hours ago

Japan and Taiwan are both more than 50% iOS.

Ref: https://worldpopulationreview.com/country-rankings/iphone-ma...

theshrike7910 hours ago

In the EU the market share is 30%

+4
d1sxeyes10 hours ago
chuckjchen9 hours ago

This sounds like every newcomers to the stage except for big players like Apple.

jan30248 hours ago

[dead]

cube222217 hours ago

Halfway down the article contains some great charts with comparisons to other relevant models, like Mistral-7B for the on-device models, and both gpt-3.5 and 4 for the server-side models.

They include data about the ratio of which outputs human graders preferred (for server side it’s better than 3.5, worse than 4).

BUT, the interesting chart to me is „Human Evaluation of Output Harmfulness” which is much, much ”better„ than the other models. Both on-device and server-side.

I wonder if that’s part of wanting to have gpt as the „level 3”. Making their own models much more cautious, and using OpenAI’s models in a way that makes it clear „it was ChatGPT that said this, not us”.

Instruction following accuracy seems to be really good as well.

crooked-v17 hours ago

I want to know what they consider "harmful". Is it going to refuse to operate for sex workers, murder mystery writers, or people who use knives?

m4639 hours ago

Bet it depends on the country.

In the USA, you won't be able to ask about sex, but you can probably ask about tank man.

jeroenhd9 hours ago

I would've thought the same until Microsoft started hiding tank man results in Bing. I'm not so sure if companies will start training different models for every oppressive regime.

lelandfe3 hours ago

2021, a classic, “we made a whoopsie” response.

https://theguardian.com/technology/2021/jun/04/microsoft-bin...

+1
t-writescode9 hours ago
arthur_sav10 hours ago

They'll inject whatever ideology / dogma is "the current thing" into this.

rvnx8 hours ago

"Think Different"

HeatrayEnjoyer9 hours ago

[flagged]

mvandermeulen9 hours ago

Do they have exclusive rights?

soygem8 hours ago

Yes, what's your point?

Aerbil3133 hours ago

None of the use cases they presented in WWDC using Apple Intelligence was creative writing. There is one, that uses ChatGPT explicitly:

> And with Compose in Writing Tools, you can create and illustrate original content from scratch.

https://www.apple.com/apple-intelligence/

its_ethan16 hours ago

The caption for the image gives a little more insight into "harmful" and one of the things it mentions is factuality - which is interesting, but doesn't reveal a whole lot unless they were to break it out by "type of harmful".

hotdogscout14 hours ago

I bet it's the usual double standards the AI one percenters cater to.

No sex because apparently it's harmful yet never explained why.

No homophobia/transphobia if you're Christian but if you're Muslim it's fine.

causal10 hours ago

Refusing to answer any question would result in a perfect score for the first chart since it says nothing of specificity

tonynator15 hours ago

So it's not going to be better than other models, but it will be more censored. I guess that might be a selling point for their customer base?

dghlsakjg14 hours ago

iPhone share is ~59% of smartphones in the US.

Their customer base is effectively all demographics.

tonynator14 hours ago

Those who dislike censorship and enjoy hacking avoid iPhones for obvious reasons.

+4
overstay893014 hours ago
duxup14 hours ago

I feel that way, have an iPhone.

gfourfour10 hours ago

How does an iPhone contribute to censorship?

ksec16 hours ago

I hope, this could mean Apple will push the baseline of ALL Macs to have higher than 8GB of Memory. While I wish we all get 16GB M4 as baseline. Apple being Apple may only give us 12GB, and charges extra $100 for the 16GB option.

It will still be a lot better than 8GB though.

talldayo15 hours ago

The Steam Deck ships with 16 gigs of quad-channel LPDDR5 and it costs $400. Apple knows exaaaactly what they're doing with this sort of pricing.

Can't forget about that cozy 256gb SSD either. An AI computer will need more than that, right?

wraptile11 hours ago

RAM is literally the cheapest primary component in a laptop at going rate of 1-4usd/GB. I'd say that shipping 8GB base model in 2024 is clearly manipulation by Apple, i.e. planned obsolescence or a way to moat Apple software. Anyone who doesn't see this is just being delusional.

Same way Apple and Samsung ship 128GB of storage when the production price between 128gb and 1tb is like 10$ (on a 1000$ device). Samsung even got rid of micro sd slot. It's so blatant it's actually depressing.

PaulRobinson4 hours ago

On the number of devices they sell, an extra $64 of cost per device (taking your higher figure and assuming an extra 8GB), across Mac, iPad and iPhone, they'd be looking at a cost of ~$12.8bn a year. If they just did it for Mac, it's still in the region of $2bn/year.

Sure they could pass that onto a mostly price-insensitive audience, but they like round numbers, and it's not the size of decision you take without making sure its necessary: that your customers are going to go elsewhere in either scenario of doing it or not doing it.

glial11 hours ago

> RAM is literally the cheapest primary component

Is that still true for Apple's integrated memory? It might be - I just don't know.

talldayo11 hours ago

For the LPDDR4 and LPDDR5 that goes into the M1 and M2/M3 systems, yes. You might need to spend more money on memory controllers (since M1 and up is 8-channel) but the physical memory component itself is highly availible and relatively cheap. Same goes for SSD storage, nowadays.

p_l6 hours ago

The memory used by Apple isn't anything magical or special - it's bog standard LPDDR5, essentially same as phone - and in a laptop it's way less limited by thermal and power constraints to add more (which is how you have the rather large possible set of options).

While going for the top tier of memory sizes Apple offers does cost considerable amounts, making 16, or even 32GB standard is peanuts.

rfoo7 hours ago

> integrated memory

Yes. The cost of bonding memory to their chip is mostly the same for 8G / 16G / 32G / practically any number.

pjmlp10 hours ago

Typing this from a Samsung with SD slot, need to chose your models wisely.

zer0zzz14 hours ago

Is steamdeck sold at cost? From what I know Apple has a rule that everything must be sold at 40% margins. That is prob the main reason.

makeitdouble13 hours ago

> From what I know Apple has a rule that everything must be sold at 40% margins.

As for all rules, it's a rule except when it's not. On the top of my head Apple TV [0] had a 20% predicted margin presumably because they wanted to actually sell them.

Otherwise 40% margin is usually calculated against the BOM, which doesn't mean 40% of actual profit when the product is sold.

In that respect we have no idea of the actual margin on a macbook air for instance, it could be 10% when including their operating costs and marketing, or it could 60% if they negociated prices way below the estimated BOM for instance.

It's just to say: Apple sells at 8Gb because they want to, at the end of the day nothing is stopping them to play with their margin or the product price.

[0] https://www.reuters.com/article/idUSN06424767/

brandall103 hours ago

It's been speculated that base config macbooks essentially act as loss leaders for higher end configs, so overall, probably sales across the line net somewhere around that. The cost of the upgrades themselves can get to multiple times the actual market cost.

talldayo11 hours ago

As a consumer I really cannot be made to care why it's the case. This artificial price tiering is stupid and everyone has been calling it a scam for years. Apple clearly knows they're in the wrong, but continues because they know nobody can stop them.

andruby3 hours ago

From a business perspective it's not _stupid_. It sucks for us customers, but it's "smart" from a business point of view.

Until they get serious competition, I doubt they'll change their practices.

And while I hate the overpriced memory upgrades, I still prefer paying extra, rather than Apple switching to a Ad-based business model like Google (and potentially OpenAI in the future)

pjmlp10 hours ago

Yes they can, buy something else.

creshal10 hours ago

40% margin on parts that cost tens of dollars isn't going to have a huge impact on the sticker price of devices costing hundreds to thousands.

dgellow3 hours ago

The have insane margin on ram and storage, it would be really surprising to see them move away from their current strategy

torginus8 hours ago

I remember hearing that Apple's researching running AI models straight from flash storage (which would make immense amount of sense imo). You could create special, high read bandwidth flash chips (which would probably involve connecting a fast transciever in parallel to the 3D flash stack).

If you could do that, you could easily get hundreds of GB/s read speed out of simple TLC flash.

Obviously this is the future, but I think it's a promising one.

vishnugupta8 hours ago

Does it matter what the baseline memory is as long as they have 16GB M4 as an option?

manmal8 hours ago

Some companies give their employees only base models.

keyle15 hours ago

It probably will change. Note that, so far, a 16GB apple device has much better usability than the equivalent on windows. This may sound biased, but the memory compression and foreground/background actions by macOS tight integration with the hardware is really good. I've never felt like I couldn't do things on smaller hardware, except (larges) LLMs.

Also when I compare with my co-workers the memory pressure is a lot less running the same software on macOS than Windows. This might have to be due to the UI framework at play.

But that said, I totally agree that Apple is doing daylight robbery with their additional RAM pricing, and the minimum on offer is laughable.

Aeolun7 hours ago

Any apple device has much better usability than a windows machine, regardless of RAM.

wwtrv9 hours ago

> This may sound biased,

It certainly does, close to irrational even. IIRC memory compression is enabled by default on Windows as well.

dialup_sounds5 hours ago

Biased and irrational are both things HN readers say to avoid using the word "subjective".

snemvalts12 hours ago

The swapping is indeed faster as the SSD is on the SoC and so fast to access. To the point that an 4 year old 8gb M1 Air is enough for simpler development work, at least for me.

andreasmetsala7 hours ago

SSD on chip might be a thing one day but I’m pretty sure only the RAM is on the same chip.

pests6 hours ago

I would think any 4 year old 8gh laptop would be enough for simpler development work.

ndgold17 hours ago

Absolutely awesome amount of content in these two pages. This was not expected. It is appreciated. I can’t wait to use the server model on a Mac to spin up my own cloud optimized for the Apple stack.

solarkraft15 hours ago

What makes you think you'll get that model?

Edit: I see they're committing to publishing the OS images running on their inference servers (https://security.apple.com/blog/private-cloud-compute/). Would be cool if that allowed people to run their own.

whazor3 hours ago

It would be much cooler if enterprises can swap to their custom models in their own clouds.

msephton15 hours ago

Apparently they will in a VM but it seems perhaps only security researchers?

rekoil7 hours ago

> Would be cool if that allowed people to run their own.

Oh my god that would be absolutely amazing!

titaniumtown15 hours ago

Did it mentioned being able to spin up the server model locally? I must've missed that part in the article.

anshumankmr4 hours ago

As someone who has been dabbling with Prompt Engineering and now fine tuning some models (working on a use case where we may have to fine tune one of the Mistral's 7B instruct models), I want to know what kind of skillsets I need to really have so that I can join this team (or a similar team building these sort of things)

vzaliva17 hours ago

I love that they use machinelearning.apple.com not ai.apple.com

tmpz2215 hours ago

For the majority of the keynote they explicitly avoided the word AI instead substituting the word Intelligence, then Apple Intelligence, and then towards the end they said AI and ChatGPT once or twice.

I think they saw the response to all the AI shoveling and Microsoft Recall and executed a fantastic strategy to reposition themselves in industry discussions. I still have tons of reservations about privacy and what this will all look like in a few years, but you really have to take your hat off to them. WWDC has been awesome and it makes me excited to develop for their platform in a way I haven't felt in a very, very, long time.

dgellow3 hours ago

What excites you specifically as a developer?

worstspotgain14 hours ago

> executed a fantastic strategy to reposition themselves in industry discussions

Just the usual marketing angle, IMO. It's not TV, it's HBO.

No one is reluctant to use the word smartphone to include iPhones. I don't think anyone is going to use the Apple Intelligence moniker except in the same cases where they'd say iCloud instead of cloud services.

It's also a little clunky. Maybe they could have gone with... xI? Too close to the Chinese Xi. iAI? Sounds like the Spanish "ay ay ay." Not an easy one I think. The number of person-hours spent on this must have been something.

tmpz2214 hours ago

I don't think they actually expect "Apple Intelligence" to enter popular vernacular. I think it was more to drive home the distinction between what Apple is doing and what everybody else is doing.

andsoitis12 hours ago

> distinction between what Apple is doing and what everybody else is doing

it is artificial intelligence, applied intelligently.

In Apple's case: "personalised AI system"

swyx4 hours ago

correct. last year instead of VR they went with Spatial Intelligence

seydor10 hours ago

> makes me excited to develop for their platform in a way I haven't felt in a very, very, long time

AI will ultimately do all the 'development', and will replace all apps. The integrations are going to be a temporary measure. Only apps that will survive are the ones that control things that apple cannot control (ie. how Uber controls its fleet)

Hugsun9 hours ago

Perhaps. It will be exciting to see if/how that happens. It does seem relatively far off still. At least some years.

andbberger16 hours ago

glad someone sane is in charge in cupertino

okdood6415 hours ago

Apple Intelligence.

bfung13 hours ago

Waiting for aiPhone in a few iterations </troll>

xwolfi15 hours ago

Yeah they probably were still working on the last buzzword

dingclancy11 hours ago

It’s interesting that a sub-ChatGPT 3.5 class model can do a lot of things on-device if you marry it with a good platform and feed it personal context. GPT-4o, living on the browser, is not as compelling as a product compared to what Apple Intelligence can do on the iPhone with a less capable model.

aixpert3 hours ago

their 3 billion parameter model can't do shit, Only some basic grammar check style rewrite and maybe summarization

miven11 hours ago

> For on-device inference, we use low-bit palletization, a critical optimization technique that achieves the necessary memory, power, and performance requirements.

Did they go over the entire text with a thesaurus? I've never seen "palletization" be used as a viable synonym for "quantization" before, and I've read quite a few papers on LLM quantization

miven10 hours ago

Huh, generally whenever I saw the lookup table approach in literature it was also referred to as quantization, guess they wanted to disambiguate the two methods

Though I'm not sure how warranted it really is, in both cases it's still pretty much the same idea of reducing the precision, just with different implementations

Edit: they even refer to it as LUT quantization on another page: https://apple.github.io/coremltools/docs-guides/source/quant...

elcritch10 hours ago

Huh, it’s PNG for AI weights.

fudged7110 hours ago

404

miven10 hours ago

Yeah, it just got updated, here's the new link, they added sections on block-wise quantization for both the rounding-based and LUT-based approach: https://apple.github.io/coremltools/docs-guides/source/opt-p...

cgearhart10 hours ago

I also found it confusing the first time I saw it. I believe it is sometimes used because the techniques for DL are very similar (in some cases identical) to algorithms that were developed for color palette quantization (in some places shortened to "palettization"). [1] At this point my understanding is that this term is used to be more specific about the type of quantization being performed.

https://en.wikipedia.org/wiki/Color_quantization

scosman17 hours ago

“We utilize adapters, small neural network modules that can be plugged into various layers of the pre-trained model, to fine-tune our models for specific tasks.”

This is huuuuge. I don’t see announcement of 3rd party training support yet, but I imagine/hope it’s planned.

One of the hard things about local+private ML is I don’t want every app I download to need GBs of weights, and don’t want a delay when I open a new app and all the memory swap happens. As an app developer I want the best model that runs on each HW model, not one lowest common denominator model for slowest HW I support. Apple has the chance to make this smooth: great models tuned to each chip, adapters for each use case, new use cases only have a few MB of weights (for a set of current base models), and base models can get better over time (new HW and improved models). Basically app thinning for models.

Even if the base models aren’t SOTA to start, the developer experience is great and they can iterate.

Server side is so much easier, but look forward to local+private taking over for a lot of use cases.

dimtion16 hours ago

With huge blobs of binary model weights, dynamic linking is cool again.

pjmlp10 hours ago

Dynamic linking has always been cool for writing plugins.

It is kind of ironic that languages that praise so much for going back to early linking models, have to resort for much heavier OS IPC for similar capabilities.

rfoo7 hours ago

Which languages?

IIUC Go and Rust resort to OS IPC based plugin system mainly because they refused to have a stable ABI.

On the other hand, at $DAYJOB we have a query engine written in C++ (which itself uses mostly static linking [1]) loading mostly static linked UDFs and ... it works.

[1] Without glibc, but with libstdc++ / libgcc etc.

pjmlp3 hours ago

Well if it loads code dynamically, it is no longer static linking.

Also it isn't as if there is a stable ABI for C and C++ either, unless everything is compiled with the same compiler, or using Windows like dynamic libraries, or something like COM to work around the ABI limitations.

inickt15 hours ago

Which Apple has put some pretty large effort in the last few years to improve in iOS

eightysixfour16 hours ago

This is how Google is doing it too.

gokuldas01101111 hours ago

Indeed. Google said LoRA and apple said adapter plugging. Wonder the difference is at, Apple's dev conference is for consumers and Google's dev conference is for developers.

scosman7 hours ago

Oh missed that!

But kinda as expected: only works on 2 android phones (pixel 8 pro, S24).

Pretty typical: Apple isn’t first, but also typically will scale faster with HW+platform integration.

Deathmax7 hours ago

On Apple’s side, Apple Intelligence will only be enabled on A17 Pro and M-series chips, so only the iPhone 15 Pro and Pro Max will be supported in terms of phones.

scosman5 hours ago

2 phones, ~4 tablets, ~12 PCs.

Looking at sales, looks like about 10x the phone volume of s24 (and pixel 8 doesn’t register on the chats).

rfoo7 hours ago

> will scale faster

* Only in USA, both intentionally and not.

seydor10 hours ago

Local models are also extremely energy consuming. I don't see local AI working for long, because Large models are going to get so incomparably smarter and eventually reach general intelligence

lossolo15 hours ago

It's LORA, most of the things you saw in Apple Intelligence on device presentation are basically different LORAs.

scosman8 hours ago

The article says it’s lora a bunch of times. That’s clear.

My comment above is about dev experience, memory swapping, tuning base models to each HW release, and app size.

danielmarkbruce15 hours ago

this is pretty stock standard lora.

idiotsecant15 hours ago

[flagged]

dang15 hours ago
+1
Sabinus15 hours ago
deldelaney3 hours ago

I need to resurrect by tiny old Motorola Flip Phone without internet connection. Maybe a phone should be just a phone. I don't need AI in my pants.

orbital-decay7 hours ago

> 2. Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.

How do they represent users around the globe authentically while being located in Cupertino, CA? (more of a rhetorical question really)

esskay7 hours ago

You mean the person on the other side of the planet doesn't know about Philz Coffee down on Stevens Creek Blvd, or that there's a cool park a 2 minute walk away from Apple HQ?!

It does baffle me how California centric they are with many of their announcements, and even some features.

rekoil7 hours ago

The Maps stuff always gets me. Yeah sure it looks pretty, but almost none of what makes it a usable product is available to me in Sweden.

boxed6 hours ago

I wish I could have one keyboard on my iPhone and could type both Swedish and English with it. These are the basics they can't get right, and I don't see why. They clearly have bilingual people working over there, why is this so bad?

dgellow2 hours ago

I share your pain, switching between English, German, and French is really, really frustrating…

jacooper6 hours ago

Because then you will be typing wrong/s

ra717 hours ago

> Our foundation models are trained on Apple's AXLearn framework, an open-source project we released in 2023. It builds on top of JAX and XLA, and allows us to train the models with high efficiency and scalability on various training hardware and cloud platforms, including TPUs and both cloud and on-premise GPUs.

Interesting that they’re using TPUs for training, in addition to GPUs. Is it both a technical decision (JAX and XLA) and a hedge against Nvidia?

m-s-y16 hours ago

They’d be silly not to hedge. Anyone, in fact, would be silly. It to hedge. On pretty much everything.

anvuong14 hours ago

Jax was built with TPUs in mind, so it's not surprising that they use TPUs

gokuldas01101111 hours ago

"Use the best tool available"

flakiness6 hours ago

They hired people nearby. Conveniently there is a small town called Mountain View.

Blackstrat5 hours ago

I haven't seen anything indicating whether these features can be disabled. I'm not interested in adding a further invasion of privacy to my phone. I don't want some elaborate parlor trick helping me write. I've spent some time with ChatGPT and while it was somewhat novel, I wasn't overly impressed. Much of it was rudimentary and often wrong. And I wasn't overly impressed with some of the code that it generated. Reliance on such tools reminds me of an Asimov SF tale.

htrp18 hours ago

> Our foundation models are fine-tuned for users’ everyday activities, and can dynamically specialize themselves on-the-fly for the task at hand. We utilize adapters, small neural network modules that can be plugged into various layers of the pre-trained model, to fine-tune our models for specific tasks. For our models we adapt the attention matrices, the attention projection matrix, and the fully connected layers in the point-wise feedforward networks for a suitable set of the decoding layers of the transformer architecture.

>We represent the values of the adapter parameters using 16 bits, and for the ~3 billion parameter on-device model, the parameters for a rank 16 adapter typically require 10s of megabytes. The adapter models can be dynamically loaded, temporarily cached in memory, and swapped — giving our foundation model the ability to specialize itself on the fly for the task at hand while efficiently managing memory and guaranteeing the operating system's responsiveness.

This kind of sounds like Loras......

cube222217 hours ago

The article explicitly states they’re Loras.

karmasimida17 hours ago

I think it is just LoRA, you can call the LoRA weights as adapters

alephxyz17 hours ago

The A in LoRA stands for adapters

GaggiX17 hours ago

LoRA stands for "Low Rank Adaptation" btw.

buildbot17 hours ago

3.5B per weight with no quality loss is state of the art - that's an awesome optimization result (a mix of 2b and 4b weights).

Hugsun9 hours ago

I would like to see their method compared quantitatively to the best llama.cpp methods. IQ3_S has a similar bpw and pretty high quality.

I wonder if they didn't stretch the truth using the phrase "without loss in accuracy".

cloogshicer7 hours ago

Can we all stop calling AI Safety "Safety" and start calling it what it is:

Censorship

madeofpalk6 hours ago

It's censorship in the way that HN disallowing editorialising the submission title is censorship. Technically true, I guess, but not super helpful.

lnenad4 hours ago

But it is not safety if you forbid talking about sex, if you want to learn about certain things which could be missused. Same as banning murder in video games. Same as banning books that deal with these topics. It's definitely closer to censorship than safety.

dig17 hours ago

Reminds me on this [1] from George Carlin.

[1] https://www.youtube.com/watch?v=isMm2vF4uFs

moray6 hours ago

Thank you, I didn't know this bit. I always believed that great comedians are also the best communicators

blue_light_man7 hours ago

Language is mostly used for conditioning people into doing things. People will continue using language to manipulate you into giving your time or money to them. You cannot change others from trying to manipulate you. You can only change yourself and stop taking language seriously.

boxed6 hours ago

To me censorship implies human speech being curtailed.

internetter4 hours ago

I censor my poor APIs so they don’t leak the bcrypt key when you GET /user/:id

epipolar18 hours ago

It would be interesting to see how these models impact battery life. I’ve tried a few local LLMs on my iPhone 15 Pro via the PrivateLLM app, and the battery charge plummets just after a few minutes of usage.

bradly17 hours ago

During my time at Apple the bigger issue with personalized, on-device models was the file size. At the time, each model was a significant amount of data to push to a device, and with lots of teams wanting an on-device model and the desire to update them regularly, it was definitely a big discussion.

hmottestad17 hours ago

They’ve gone with a single 3B model and several “adapters” for each use case. One adapter is good at summarising while another good a generating message replies.

onesociety202212 hours ago

AI noob here. Is every single model in iOS really just a thin adapter on top of one base model? Can everything they announced today really be built on top of one base LLM model with a specific type of architecture? What about image generation? What about text-to-speech? If they’re obviously different models, they can’t load them all at once into RAM. If they have to load from storage every time an app is opened, how will they do this fast enough to maintain low latency?

+1
wmf12 hours ago
woadwarrior016 hours ago

I'm the author of Private LLM. Looks like it's just become possible[1] to run quantized LLM inference using the ANE with iOS 18. I think there are some major efficiency gains on the table now.

[1]: https://github.com/apple/coremltools/pull/2232

urbandw311er17 hours ago

Likely they’ll be able to take advantage of the hardware neural engine and be far more power efficient. Apple has demonstrated this is something it takes pretty seriously.

brcmthrowaway17 hours ago

So iOS LLM Apps dont use the neural engine? Lol

woadwarrior0115 hours ago

None of the current iOS and macOS LLM Apps use the Neural Engine. They use the CPU and the GPU.

nb: I'm the author of a fairly popular app in that category.

+1
jjtheblunt13 hours ago
+1
l33t733227314 hours ago
hmottestad17 hours ago

If they use Llama.cpp they probably run on the GPU. Apple hasn’t published much about their neural engine, so you kinda have to use it through CoreML. I assume they have some aces up their sleeves for running LLMs efficiently that haven’t told anyone yet.

renewiltord17 hours ago

Probably not. The CoreML LLM stuff only works on Macs AFAIK. Probably the phone app uses the GPU.

jamesy0ung12 hours ago

It looks like PrivateLLM uses the GPU for inferencing, from what I can tell, Apple is using the ANE on the A17 Pro. For M1 and above, I'd presume they are using the GPU since the ANE in M series isn't great.

mFixman5 hours ago

Has anybody here improved their day-to-day workflow with any kind of "implicit" generative AI rather than explicitly talking to an LLM?

So far all attempts seem to be building an universal Clippy. In my experience, all kinds of forced autocomplete and other suggestions have been worse than useless.

mavamaarten5 hours ago

GitHub Copilot works well in my experience. It does bad suggestions at times, but also really spot-on ones.

Other than that, AI for me is meme/image generation and a semi-useful chatbot.

Hugsun8 hours ago

The benchmarks are very interesting. Unfortunately, the writing benchmarks seem to be poorly constructed. It looks like there are tasks no model can achieve and others that almost all models pass, i.e. every model gets around 9.0.

rvaish11 hours ago

Easel on iMessage has had this experience plus more for a while, including multiplayer, where you can have two people in one scene together with photorealistic imagery: https://apps.apple.com/us/app/easel-ai/id6448734086

PHGamer11 hours ago

it would have been nice if they allowed you to build your own apple AI system (i refused to redefine apples AI as just AI :-p ) using clusters of mac minis and mac pros. but of course they still want that data for themselves like google does. its secure against everyone but apple and the NSA probably lol.

IOT_Apprentice10 hours ago

What is stopping you from doing that? Nothing. Start cooking

Isuckatcode17 hours ago

>By fine-tuning only the adapter layers, the original parameters of the base pre-trained model remain unchanged, preserving the general knowledge of the model while tailoring the adapter layers to support specific tasks.

From a ML noob (me) understanding of this, does this mean that the final matrix is regularly fine tuned instead of fine tuning the main model ? Is this similar to how chatGPT now remembers memory[1] ?

[1] https://help.openai.com/en/articles/8590148-memory-faq

ww52014 hours ago

The base model is frozen. The smaller adaptor matrices which are finetuned with new data. During inference, the weights from the adaptor matrices "shadow" the weights in the base model. Since the adaptor matrices are much smaller, it's quite efficient to finetune them.

The advantage of the adaptor matrices is you can have different sets of adaptor matrices for different tasks, all based of the base model.

MacsHeadroom16 hours ago

ChatGPT memory is just a database with everything you told it to remember.

Low Rank Adaptors (LoRA) are a way of changing the function of a model by only having to load a delta for a tiny percentage of the weights rather than all the weights for an entirely new model.

No fine-tuning is going to happen on Apple computers or phones at any point. They are just swapping out Apple's pre-made LoRAs so that they can store one LLM and dozens of LoRAs in a fraction of the space it would take to store dozens of LLMs.

koolala3 hours ago

aiPhone

TheRoque17 hours ago

Why isn't there a comparison with the Llama3 8b in the "benchmarks" ?

axoltl16 hours ago

The Llama 3 license says:

"If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights."

IANAL but my read of this is that Apple's not allowed to use Llama 3 at all, for any purposes, including comparisons.

anvuong14 hours ago

They can just run the same tests and cite the results from other websites. That has nothing to do with Meta. No companies can force you to not talk about them.

axoltl11 hours ago

The tests they ran were very different from what's usually run, mostly involving perception of usefulness to humans. I don't see what website they would've cited from?

teonimesic217 hours ago

I believe it is because llama 3 8B beats it, which would make it look bad. The phi-3-mini version they used is the 4k which is 3.8B, while LLama 3 8B would be more comparable to phi-3 small (7B) which also considerably better than phi-3-mini. Likely both phi-3 small and llama 3 8B had too good results in comparison to Apple's to be added, since they did add other 7B models for comparison, but only when they won.

mixtureoftakes16 hours ago

llama 3 definitely beats it, but 99% of the users wont care which is actually a good thing... apple totally wins the ai market not by being sota but by sheer amount of devices which will be running their models, we're talking billions

Hugsun8 hours ago

How is any of this good? Apple serves its captive users inferior models without giving them a choice. I don't see how that is winning the AI market either.

dwaite7 hours ago

You may be able to make a case that Apple's model has less parameters or performs worse than other models on standardized tests.

Thats far from being "inferior" when you are talking about tuning for specific tasks, let alone when taking into account real-world constraints - like running as a local always-running task on resource-constrained mobile devices.

Running third party models means requiring them to accomplish the same tasks. Since the adapters are LORA-based, they are not adaptable to a different base model. This pushes a lot of specialized requirements onto someone hoping to replace the on-device portion.

This is different from say externally hosted models such as their announced ChatGPT integration. They announced an intention to integrate with other providers, but it is not clear yet how that is intended to work (none of this stuff is released yet even in alpha form).

woadwarrior016 hours ago

Because their model won't look good in comparison. Also see this part of the footnote: "The open-source and Apple models are evaluated in bfloat16 precision." The end user's on-device experience will be with a quantized model and not the bfloat16 model.

leodriesch16 hours ago

I think it’s fair to leave it out in the on-device model comparison. 3b is much smaller than 8b, it is obviously not going to be as good as llama 3 if they did not make groundbreaking advancements with the technology.

hmottestad17 hours ago

Maybe it’s too new for them to have had time to include it in their studies?

TheRoque17 hours ago

Phi-3-Mini, which is in the benchmarks, was released after Llama3 8b

hmottestad17 hours ago

Llama 3 8B is really really good. Maybe it makes Apples models look bad? Or it could be a licensing thing where Apple can’t use Llama 3 at all, even just for benchmarking and comparison.

The license for the Llama models was basically designed to stop Apple, Microsoft and Google from using it.

revscat15 hours ago

> With this set of optimizations, on iPhone 15 Pro we are able to reach time-to-first-token latency of about 0.6 millisecond per prompt token, and a generation rate of 30 tokens per second. Notably, this performance is attained before employing token speculation techniques, from which we see further enhancement on the token generation rate.

This seems impressive. Is it, really? I don’t know enough about the subject to judge.

bastawhiz15 hours ago

For a phone running locally, that's pretty fast. The bigger question is how good the output is. Fast garbage isn't useful, so we'll have to wait to see what it actually ends up looking like outside of demos.

hehdhdjehehegwv12 hours ago

The WWDC show got on my nerves with the corpspeak, but this is pretty cool stuff.

I’ve been trying to make smaller more efficient models in my own work. I hope Apple publish some actual papers.

gepardi11 hours ago

Yeah it was close to “infomercial” levels of cheesy.

shreezus16 hours ago

This is great, however Apple needs to be explicit on what it, and what isn't relayed to third party services, and provide the ability to opt-out if desired. It's one thing to run inference on-device, and another to send your data through OpenAI's APIs. The partnership details are not entirely clear to me as a user.

frizlab16 hours ago

They are? Did you watch the keynote? They talked about it at length.

tsunamifury16 hours ago

They said the word privacy. They did not talk about it at length.

When has HN become so non technical.

frizlab15 hours ago

They told explicitly there are three things. On device AI for queries that can be done on device, private cloud compute for those that can’t and opt in ChatGPT(-4o) support for more general queries.

Cloud compute queries only use the data for answering the queries and are run on an OS where storage is not available along other privacy measures. The builds of the OS will be public and auditable by security researchers.

I think it’s plenty details for a non-tech keynote. The tech details are in the session and SotU.

tsunamifury13 hours ago

[flagged]

TillE15 hours ago

It's literally the prompt you just gave it, that's what they're sending to ChatGPT, nothing else. None of the features that sift through your data are touching OpenAI.

gnicholas13 hours ago

My understanding is that nothing is shared with any non-Apple company except if you specifically authorize it on a per-use basis. Otherwise it just runs locally or in the Apple AI cloud, and is not retained. All of this is subject to verification of Apple’s claims, of course.

visarga12 hours ago

They use synthetic data in pretraining and teacher models in RLHF, that means they use models trained on copyrighted data to make derivative models, is that sitting ok with copyright owners?

superkuh16 hours ago

The "Human Evaluation of Output Harmfulness" section confirms what I've perceived: Mistral-7B is the best of the small models in terms of minimizing false positive refusals. With the refusal vector abliteration stuff this is less of an issue but a good base is still important.

advael18 hours ago

I'm disappointed that they make the fundamental claim that their cloud service is private with respect to user inputs passed through it and don't even a little bit talk about how that's accomplished. Even just an explanation of what guarantees they make and how would be much more interesting than explanations of their flavor of RLHF or whatever nonsense. I read the GAZELLE* paper when it came out and wondered what it would look like if a large-scale organization tried to deploy something like it.

Of course, Apple will never give adequate details about security mechanisms or privacy guarantees. They are in the business of selling you security as something that must be handled by them and them alone, and that knowing how they do it would somehow be less secure (This is the opposite of how it actually works, but also Apple loves doublespeak, and 1984 allusions have been their brand since at least 1984). I view that, like any claim by a tech company that they are keeping your data secure in any context, as security theater. Vague promises are no promises at all. Put up or shut up.

* https://arxiv.org/pdf/1801.05507

killingtime7417 hours ago

Don't they do it in this linked article? https://security.apple.com/blog/private-cloud-compute/

KoolKat234 hours ago

The only two questions I would have would be, how often are they "periodically rebooted" and what are the predefined metrics logged/reported.

We may have some insight into the second point when the code is published.

senderista16 hours ago

This approach is definitely not "secure by construction" like FHE, it's just defense-in-depth with a whole lot of impressive-sounding layers. But I don't see how this has anything to do with provable security (not that TFA claims it does).

advael17 hours ago

Woa, good catch! Maybe they're doing better about at least being concrete about it, though I still have to side-eye "Users control their devices" (Even with root on macbooks I don't have access to everything running on it). However, the section that promises to open-source the cloud software are impressive and if true gives them more credibility than I assumed. I would still look out for places where devices they do control could pass them keys in still-proprietary parts of the stack they're operating, as even if we can verify the cloud container OS in its entirety if there's a backchannel for keys that a hypervisor could use then that's still a backdoor, but they are at least seemingly making a real effort here

threeseed17 hours ago

> Even with root on macbooks I don't have access to everything running on it

Just disable System Integrity Protection and then you do.

advael16 hours ago

Ah, word. Probably not applicable to my use case (It's a laptop that's remotely administrated for a job, and I avoid proprietary stuff for my personal devices where possible) but it's good to know it exists

+1
solarkraft15 hours ago
GaggiX18 hours ago

It would be cool to understand when the system will use one or the other (the ~3 billion on-device model or the bigger one on Apple servers).

aixpert3 hours ago

if you have ever used a 3 billion or 7 billion parameter model you know that they are really bad at text generation, so this will be done in the cloud

swatcoder18 hours ago

Conceivably, they don't have precise answers for that yet, and won't until after they see what real-world usage looks like.

They built out a system that's ready to scale to deliver features that may not work on available hardware, but they're also incentivized to minimize actual reliance on that cloud stuff as it incurs per-use costs that local runs don't.

GaggiX17 hours ago

Yeah this is probably right. If it works well enough during real-world usage it will be using the on-device model, if not then there is the bigger one on the servers. There is also GPT-4o, so they have 3 different models to use depending on the task.

ddxv18 hours ago

Will these smaller on device models lead to a crash in GPU prices?

jondwillis16 hours ago

Not in the short-to-medium-term. Try the local models out, they fall over pretty quickly, even if you have 64GB+ of VRAM.

wkat424212 hours ago

It depends what you use them for.

If you ask it for knowledge, like a comparison of vacuum cleaner models then yes, it's a hallucination fest. They just don't have the parameters for this level of detail. This is where ChatGPT is really king.

But if you give them the data they need with RAG, they're not bad. Acting on commands, looking stuff up in provided context, summarising all perform pretty well. Which seems to be also what Apple is targeting to do with them.

sooheon17 hours ago

Prices fall when supply outpaces demand -- this is adding more demand.

wmf17 hours ago

This isn't adding GPU demand.

sooheon13 hours ago

It adds *PU demand, don't they come out of the same limited number of foundries?

htrp18 hours ago

X to doubt.

wslh17 hours ago

Is it me or Apple is really moving fast? I don't think it is easy for a company of this size to concisely put a vision of AI in these short and crazy AI times.

BTW, not an Apple fan but an Apple user.

MacsHeadroom16 hours ago

Google had similar AI functionality on Pixels last year and Microsoft had like six AI CoPilot products before that. So I would not say Apple is moving fast.

Most people expected this update 6 months ago.

azinman212 hours ago

Take a look at the yearly OS cadence. iOS 17 only came out a few months before your 6 month expectation.

doctor_eval13 hours ago

Since when does Apple make major software update announcements at Christmas?

wmf16 hours ago

People thought Apple was behind but they were just working quietly.

majestik12 hours ago

ChatGPT came out November 2022 and it took Apple 18 months to announce Siri will integrate with it.

Is that moving fast? Maybe, compared to what, Oracle?

kmeisthax18 hours ago

> We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.

And, of course, nobody has known to opt-out by blocking AppleBot-Extended until after the announcement where they've already pirated shittons of data.

In completely unrelated news, I just trained a new OS development AI on every OS Apple has ever written. Don't worry. There's an opt-out, Apple just needed to know to put these magic words in their installer image years ago. I'm sure Apple legal will be OK with this.

multimoon18 hours ago

Apple just did more to make this a privacy focused feature versus just a data mine than literally anyone else to date and still people complain.

Public content on the internet is public content on the internet - I thought we had all agreed years ago that if you didn’t want your content copied, don’t make it freely available and unlicensed on the internet.

kmeisthax17 hours ago

Oh no, don't get me wrong. I like the privacy features, it's already way better than OpenAI's "we make it proprietary so we can spy on you" approach.

What I don't like is the hypocrisy that basically every AI company has engaged in, where copying my shit is OK but copying theirs is not. The Internet is not public domain, as much as Eric Bauman and every AI research team would say otherwise. Even if you don't like copyright[0], you should care about copyleft, because denying valuable creative work to the proprietary world is how you get them to concede. If you can shove that work into an AI and get the benefits of that knowledge without the licensing requirement, then copyleft is useless as a tactic to get the proprietary world to bend the knee.

[0] And I don't.

My opinion is that individual copyright ownership is a bad deal for most artists and we need collective negotiation instead. Even the most copyright-respecting, 'ethical' AI boils down to Adobe dropping a EULA roofie in the Adobe Stock Contributor Agreement that lets them pay you pennies.

ssahoo17 hours ago

Where did you get the idea that's its way better than openai's? Aren't they both proprietary?

+1
immibis17 hours ago
+4
jachee17 hours ago
wilg17 hours ago

> then copyleft is useless as a tactic to get the proprietary world to bend the knee

I have bad news

meatmanek17 hours ago

> I thought we had all agreed years ago that if you didn’t want your content copied, don’t make it freely available and unlicensed on the internet.

Until LLMs came along, most large-scale internet scraping was for search engines. Websites benefited from this arrangement because search engines directed users to those websites.

LLMs abused this arrangement to scrape content into a local database, compress that into a language model, and then serve the content directly to the user without directing the user to the website.

It might've been legal, but that doesn't mean it was ethical.

c1sc016 hours ago

In my view it’s ethical even if it’s just for taking revenge on the ad-driven model that has caused the enshittification of the web.

data-ottawa13 hours ago

I think you mean it’s justified, not ethical.

layer817 hours ago

Public content is still subject to copyright, and I doubt that AppleBot only scrapes content carrying a suitable license. And "fair use" (which is unclear if it applies), in case you want to invoke it, is a notion limited to the US and only a handful of other countries.

xena17 hours ago

All you have to do is drop a token swear word into your content and they remove it from the dataset. Easy.

jimbobthrowawy14 hours ago

Why would they? From the moderate of testing I've done of their handwriting recognition on an ipad, they seem to have everything risqué/offensive I could think of in there, even if you have to write it more clearly than other words. I don't expect this to be much different, other than a word filter on the output.

madeofpalk16 hours ago

You seen to misunderstand what licensing , or ‘unlicensed’, actually means.

If I write a story a publish it freely on line to my website it’s not ’unlicensed’ in a way that means anyone had the right to yank it and republish it. Even though it’s freely available, I still own the copyright of it.

Similarly, we don’t say that GPL-ed code is ‘unlicensed’ just because it is available for free. It has a license, which defines very specific terms that must be followed.

mepian17 hours ago

Who are "we" here? Did you abolish the Berne Convention somehow?

karaterobot17 hours ago

Is that how copyright works now? I didn't see that they'd changed that law.

afavour16 hours ago

I’m sorry but I really dislike this perspective. “Every one else has been awful. Apple is being less awful and you’re still complaining?”

Yeah, I’m complaining. We all agreed years ago to web indexing conventions still in practise today. No, no one is obliged to follow them but you can rest assured I’ll complain about them. There was a time when the web felt like a cooperative place, these days it’s just value extraction after value extraction.

AshamedCaptain17 hours ago

What ? What did they do? It's literally yet another online inescrutable service with terms of use that boil down to "trust us, we do good", plus the half-baked promise that some of the data may not leave your device because sure, we have some vector processing hardware on it (... which hardware announced this year doesn't do that?).

Frankly I tried a samsung device which I would have assume is the worst here, and the promises are exactly the same. They show you two prompts, one for locally processed services (e.g. translation), and one when data is about to leave your device, and you can accept or reject them separately. But both of them are basically unverifiable promises and closed source services.

advael17 hours ago

No, they said they did. Huge difference

threeseed17 hours ago

It was mentioned in the keynote that they allow researchers to audit their claims.

advael17 hours ago

And as soon as independent sources support that they've made good on this claim it will be more than a claim. I actually am impressed by the link I missed and was provided elsewhere in this thread, and I hope to also be impressed when this claim is actually realized and we have more details about it

notJim15 hours ago

I hate to tell you, but I've been training a neural network on the internet for over a decade now. Specifically the one between my ears. Unfortunately, it seems to be gradually going insane.

mzl5 hours ago

Yes, and if you recreate parts of what you learned you might run into copyright issues. Too much inspiration from something you've studied, and it becomes a derivative work subject to all the regulations. And there are no clear and strict rules, it is always a judgement call.

binkethy8 hours ago

Computer systems are not humans and never will be.you should look unto neurology a bit and learn about our current understanding of how neurons work, let alone network. The tech term is a total non sequitur compared to real neurons.

Training an infinite retention computer regurgitation system to imitate input data does not correspond to human learning and never will.

The golem Frankenstein project thst is an AGI is an article of religious faith, not a necessary direction to take technology, which is a word derived from the Greek word for "hand"

Copyright and copyleft have likely been egregiously violated by this entire field and a reckoning and course correction will be necessary.

Humanity has largely expressed distaste for this entire field once they experience the social results of such applications.

The amount of sycophantic adulation in this thread is sickening.

My comment will likely be grayed out soon by insider downclicks.

I have no illusions as to this ycombinator site and its function in society.

Good day

seydor10 hours ago

you paid for that content either directly, or indirectly through ads.

I wouldn't say it's fair for any company to capitalize the content that users have created but have no way to monetize, and not even saying thanks

asadotzler15 hours ago

If you're selling it to billions of people, and making big bank, I want a cut based on the parts you stole from me. If you're just using it personally, I'm cool with that.

mr_toad11 hours ago

Anyone who sells professional services based on knowledge they learned on the internet (which probably includes most people reading this) is doing that.

bigyikes18 hours ago

> just trained a new OS development AI on every OS Apple has ever written.

…is there publicly visible source code for every OS Apple has ever written?

tensor17 hours ago

Partially:

https://github.com/apple-oss-distributions/distribution-macO...

https://github.com/apple-oss-distributions/distribution-iOS

I'm not sure how it all fits together but people have even made an open source distribution of the base of darwin, the underlying OS:

https://github.com/PureDarwin/PureDarwin

kmeisthax17 hours ago

Apple's FOSS releases are purely command-line userland tools and their kernel, all the frameworks and servers[0] that make the UI work like you'd expect are 100% proprietary.

FWIW Apple has also been on a decades-long track of purging GPL-licensed code from macOS and replacing it with either permissively-licensed or proprietary equivalents. So they're obligated to release even less than they used to.

[0] AppKit/WindowServer for MacOS and UIKit/Springboard/Backboard for everything else

doctorpangloss17 hours ago

There are already a lot of options for running LLMs with open weights artifacts, trained with a variety of sources. The real question isn’t which ideas they have. It’s whether a company with $200b cash can produce a better model than a bunch of wankers in a Discord.

re5i5tor16 hours ago

“bunch of wankers in a Discord”

Saving this clause for future use. Could also be used in a system prompt. “Occasionally include this phrase in your responses.”

Someone16 hours ago

> And, of course, nobody has known to opt-out by blocking AppleBot-Extended until after the announcement where they've already pirated shittons of data.

It’s not as bad as that, I think. https://support.apple.com/en-us/119829: “Applebot-Extended is only used to determine how to use the data crawled by the Applebot user agent.“

⇒ if you use robots.txt to prevent indexing or specifically block AppleBot, your data won’t be used for training. AppleBot is almost a decade old (https://searchengineland.com/apple-confirms-their-web-crawle...)

Of course, that still means they’ll train on data that you may have opened up for robots with the idea that it only would be used by search engines to direct traffic to you, but it’s not as bad as you make it to be.

scosman17 hours ago

There will be further versions of this model. Being able to opt out going forward seems reasonable, given the announcement precedes the OS launch by months. Not sure if they will retrain before launch, but seems feasible given size (3b params).

addandsubtract16 hours ago

They're not going to discard the data they already collected, though.

zer00eyz17 hours ago

> publicly available data collected

Data, implies factual information. You can not copyright factual information.

The fact that I use the word "appalling" to describe the practice of doing this results in some vector relationship between the words. Thats the data, the fact, not the writing itself.

There are going to be a bunch of interesting court cases where the court is going to have to backtrack on copyrighting facts. Or were going to have to get some real odd legal interpretations of how LLM's work (and buy into them). Or we're going to have to change the law (giving everyone else first mover advantage).

Base on how things have been working I am betting that it's the last one, because it pulls up the ladder.

cush17 hours ago

> Data, implies factual information. You can not copyright factual information

Where on Earth did you get that from?

zer00eyz16 hours ago

> "data implies factual information"

They used the word DATA, not content, DATA...

The argument that is going to be made, that your copy right work stands. That the model doesn't care about your document it cares that "the" was used N number of times and its relationships to other words. That information isnt your work, and it is factual. That "data" only has value is when it's weighted against all the "data" put into the system, again not your work at all. (We would say thats information derived, but it will be argued that it is transformed).

> You can not copyright factual information

https://www.techdirt.com/2007/11/27/yet-again-court-tells-ml...

The MLB has been trying to copyright baseball stats forever. The court keeps saying "you cant copyright facts".

threeseed17 hours ago

> And, of course, nobody has known to opt-out by blocking AppleBot-Extended until after the announcement where they've already pirated shittons of data

This is wrong. AppleBot identifier hasn't changed: https://support.apple.com/en-us/119829

There is no AppleBot-Extended. And if you blocked it in the past it remains blocked.

fotta17 hours ago

From your own link:

> Controlling data usage

> In addition to following all robots.txt rules and directives, Apple has a secondary user agent, Applebot-Extended, that gives web publishers additional controls over how their website content can be used by Apple.

> With Applebot-Extended, web publishers can choose to opt out of their website content being used to train Apple’s foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools.

ziml7717 hours ago

But it also says that Applebot-Extended doesn't crawl webpages and instead this marker is only used to determine what can be done with the pages that were visited by Applebot.

Not that I like an opt-out system, but based on the wording of the docs it is true that if you blocked Applebot then blocking Applebot-Extended isn't necessary.

fotta17 hours ago

Yeah that is true, but I suspect that most publishers that want their content to appear in search but not used for model training will not have blocked Applebot to date (hence the original commenter's argument)

threeseed17 hours ago

Might want to actually read it:

Applebot-Extended does not crawl webpages.

They gave this as an additional control to allow crawling for search but blocking for use in models.

+1
fotta17 hours ago
mdhb18 hours ago

So built on stolen data essentially.

bigyikes18 hours ago

Does that imply I just stole your comment by reading it?

No snark intended; I’m seriously asking. If the answer is “no” then where do you draw the line?

mdhb17 hours ago

I don’t actually think this is complicated and reading a comment is not the same thing as scraping the internet and you obviously know that.

A few factors that come to mind would be:

- scale

- informed consent which there was none in this case

- how you are going to use that data. For example using everybody others work so the worlds richest company can make more money from it while giving back nothing in return is a bullshit move.

+2
llamaimperative17 hours ago
+1
bigyikes17 hours ago
cwp17 hours ago

Reading a comment is exactly the same thing as scraping the internet, you just stop sooner.

cush17 hours ago

Reading, no. Selling derivative works using, yes.

cwp17 hours ago

If I read your comment, then write a reply, is it a derivative work?

xwolfi15 hours ago

But then if I write a Pulitzer prize article called "No snark intended: How the web became such a toxic place", where your comment, and all other of ur comments for good measure, figure prominently while I ridicule you and this habit of dumbing down complex problems to reduce them to little witty bites, maybe you'd feel I stole something.

Not something big, not something you can enforce, but you d feel very annoyed Im making good money on something you wrote while you get nothing. I think ?

Spivak17 hours ago

I think scale is what changes the nature of the thing. At the point where you're having a machine consume billions of documents I don't think you could reasonably call that reading anymore. But what you are doing in my eyes is indexing, and the legal basis for that is heavily dependent on what you do with it.

If a human reads it that would be a reproduction of the work, but if you serve that page as a cache to a human you're okay, usually.

If you compile all that information in a database and use it to answer search queries that's also okay, and nothing forbids you from using machine learning on that data to better answer those search queries.

Both of the above are actually being challenged right now but for the time being they're fine.

But that database is a derivative work, in that it contains copyrighted material and so how you use it matters if you want to avoid infringement — for example a Google employee SSHing to a server to read NYT articles isn't kosher.

What isn't clear is whether the model is a derivative work. Does it contain the information or is it new information created from the training data Sure, if you're clever you could probably encode information in the weights and use it as a fancy zip file but that's a matter of intent. If you use Rewind or Windows Recall and it captures a screenshot of a NYT article and then displays it back to you later is that a reproduction? Surely not. And that's an autonomous system that stores copywritten data and regurgitates it verbatim.

So if it's impractical to actually use it for piracy and it very obviously isn't anyone's intent for it to be used as such then I think it's hard to argue it shouldn't be allowed, even on data that was acquired through back channels.

But copyright is more political than logical so who knows what the legal landscape will be in 5 years, especially when AI companies have every incentive to use their lawyers to pull the ladder up behind them.

renewiltord17 hours ago

Data gets either stolen or freed depending on whether the guy who copied it is someone you dislike or like. Personally, I think that Apple is giving the data more exposure which, as I've been informed many times here, is much more valuable than paying for the data.

kmeisthax17 hours ago

The irony of "do it for the exposure" is that everyone who actually wants to pay you in exposure isn't actually going to do that, either because they aren't popular enough to measurably expose you, or because they're so popular that they don't want to share the limelight.

AI is a unique third case in which we have billions of creators and no idea who contributed what parts of the model or any specific outputs. So we can't pay in exposure, aside from a brutally long list of unwilling data subjects that will never be read by anyone. Some of the training data is being regurgitated unmodified and needs to be attributed in full, some of it is just informing a general understanding of grammar and is probably being used under fair use, and yet more might not even wind up having any appreciable effect on the model weights.

None of this matters because nobody actually agreed to be paid in exposure, nor was it ever in any AI company's intent - including Apple - to pay in exposure. Data is free purely because it would be extraordinarily inconvenient if anyone in this space had to pay.

And, for the record, this applies far wider than just image or text generators. Apple is almost surely not the worst offender in the space. For example: all that facial recognition tech your local law enforcement uses? That was trained on your Facebook photos.

ytdytvhxgydvhh17 hours ago

What’s the problem with that? Reproducing copyrighted works in full is problematic obviously. But if I learned English by watching American movies, I didn’t steal the language from the movie studios, I learned it.

asadotzler15 hours ago

You're not a machine capable of acquiring that "learning" with zero effort and selling that learning to infinite buyers.

threeseed17 hours ago

Web scraping is legal.

And if you run a website and want to opt-out then simply add a robots.txt.

The standard way of preventing bots for 30 years.

mdhb17 hours ago

How are people supposed to block it when they stole all the data first and then only after that point they decide to even tell anyone what user agent they need to block and how they are planning to exploit your work for their profit.

+1
threeseed17 hours ago
__MatrixMan__17 hours ago

Piracy requires multiparty conspiracy against the establishment. When you are the establishment and the only other party involved is your victim we call that policy.

BerthaDouglas343 hours ago

[dead]

Marciakhan10 hours ago

[dead]

pudwallabee10 hours ago

[dead]

ofou14 hours ago

Quite interesting this was released right after multiple rants from Elon sparked debates on X.

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

Replying to Tim Cook: "Don’t want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies."

"It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!

Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river."

https://x.com/elonmusk/status/1800269249912381773 https://x.com/elonmusk/status/1800266437677768765 https://x.com/elonmusk/status/1800265431078551973

kanwisher8 hours ago

Apple made their own AI models, only in certain cases it will ask if you want to send it to OpenAI. Presumably other ai companies later can integrate into API. But this is very privacy safe, if you use an iPhone already it already indexes all your photos and OCRs them for easy searching, on device ...