
Mistral ships Le Chat – enterprise AI assistant that can run on prem

508 points | 2 months ago | mistral.ai
codingbot3000 2 months ago

I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.

Btw, you can also run Mistral locally within the Docker model runner on a Mac.
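
For reference, the Model Runner flow is roughly this (a sketch from memory; assumes a recent Docker Desktop with Model Runner enabled, and that the ai/mistral image name is still current):

  docker model pull ai/mistral
  docker model run ai/mistral "Summarize this meeting transcript in three bullet points"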

simonw2 months ago

There are plenty of other ways to run Mistral models on a Mac. I'm a big fan of Mistral Small 3.1.

I've run that using both Ollama (easiest) and MLX. Here are the Ollama models: https://ollama.com/library/mistral-small3.1/tags - the 15GB one works fine.

For MLX https://huggingface.co/mlx-community/Mistral-Small-3.1-24B-I... and https://huggingface.co/mlx-community/Mistral-Small-3.1-24B-I... should work, I use the 8bit one like this:

  llm install llm-mlx
  llm mlx download-model mlx-community/Mistral-Small-3.1-Text-24B-Instruct-2503-8bit -a mistral-small-3.1
  llm chat -m mistral-small-3.1
The Ollama one supports image inputs too:

  llm install llm-ollama
  ollama pull mistral-small3.1
  llm -m mistral-small3.1 'describe this image' \
    -a https://static.simonwillison.net/static/2025/Mpaboundrycdfw-1.png
Output here: https://gist.github.com/simonw/89005e8aa2daef82c53c2c2c62207...
indigodaddy2 months ago

Simon, can you recommend some small models that would be usable for coding on a standard M4 Mac Mini (only 16G ram) ?

simonw2 months ago

That's pretty tough - the problem is that you need to have RAM left over to run actual applications!

Qwen 3 8B on MLX runs in just 5GB of RAM and can write basic code but I don't know if it would be good enough for anything interesting: https://simonwillison.net/2025/May/2/qwen3-8b/

Honestly though with that little memory I'd stick to running against hosted LLMs - Claude 3.7 Sonnet, Gemini 2.5 Pro, o4-mini are all cheap enough that it's hard to spend much money with them for most coding workflows.

jychang2 months ago

16GB on a mac with unified memory is too small for good coding models. Anything on that machine is severely compromised. Maybe in ~1 year we will see better models that fit in ~8gb vram, but not yet.

Right now, for a coding LLM on a Mac, the standard is Qwen 3 32b, which runs great on any M1 mac with 32gb memory or better. Qwen 3 235b is better, but fewer people have 128gb memory.

Anything smaller than 32b, you start seeing a big drop off in quality. Qwen 3 14b Q4_K_M is probably your best option at 16gb memory, but it's significantly worse in quality than 32b.
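
If you do go that route, pulling the 14B through Ollama is a one-liner (a sketch; assumes the qwen3:14b tag on the Ollama library, which defaults to a ~4-bit quant):

  ollama pull qwen3:14b
  ollama run qwen3:14b "Write a Python function that merges two sorted lists"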

reichardt2 months ago

With around 4.6 GiB model size the new Qwen3-8B quantized to 4-bit should fit comfortably in 16 GiB of memory: https://huggingface.co/mlx-community/Qwen3-8B-4bit
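
Following the same llm-mlx pattern simonw describes above, something like this should work (untested sketch; the alias is arbitrary):

  llm install llm-mlx
  llm mlx download-model mlx-community/Qwen3-8B-4bit -a qwen3-8b
  llm chat -m qwen3-8b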

martin_a2 months ago

Strange idea, but if I'd like to set up a solid LLM for use in my home network, how much processing power would I need for a multi-purpose model?

A Raspberry Pi? An old ThinkPad? A fully specced-out latest-gen MacBook?

edit: One of those old Mac Pros?

the_other_mac2 months ago

Run Mistral 7b in under 4gb ram:

https://github.com/garagesteve1155/Overload

(As announced this morning in the FB group "Dull Men's Club"!)

kergonath2 months ago

> I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.

Indeed. At work, we are experimenting with this. Using a cloud platform is a non-starter for data confidentiality reasons. On-premise is the way to go. Also, they’re not American, which helps.

> Btw, you can also run Mistral locally within the Docker model runner on a Mac.

True, but you can do that only with their open-weight models, right? They are very useful and work well, but their commercial models are bigger and hopefully better (I use some of their free models every day, but none of their commercial ones).

distances2 months ago

I also kind of don't understand how it seems everyone is using AI for coding. I haven't had a client yet that would have approved any external AI usage. So I basically use them as search engines on steroids, but code can't go directly in or out.

fhd22 months ago

You might be able to get your clients to sign something to allow usage, but if you don't, as you say, it doesn't seem wise to vibe code for them. For two reasons:

1. A typical contract transfers the rights to the work. The ownership of AI generated code is legally a wee bit disputed. If you modify and refactor generated code heavily it's probably fine, but if you just accept AI generated code en masse, making your client think that you wrote it and it is therefore their copyright, that seems dangerous.

2. A typical contract or NDA also contains non-disclosure, i.e. you can't share confidential information, e.g. code (including code you _just_ wrote, due to #1), with external parties or the general public willy-nilly. As for whether any terms-of-service assurances from OpenAI or Anthropic that your model inputs and outputs will probably not be used for training are legally sufficient, I have doubts.

IANAL, and _perhaps_ I'm wrong about one or both of these, in one or more countries, but by and large I'd say the risk is not worth the benefit.

I mostly use third party LLMs like I would StackOverflow: Don't post company code there verbatim, make an isolated example. And also don't paste from SO verbatim. I tried other ways of using LLMs for programming a few times in personal projects and can't say I worry about lower productivity with these limitations. YMMV.

(All this also generally goes for employees with typical employment contracts: It's probably a contract violation.)

shmel2 months ago

How is it different from the cloud? Plenty startups store their code on github, run prod on aws, and keep all communications on gmail anyway. What's so different about LLMs?

simion314 2 months ago

>How is it different from the cloud? Plenty startups store their code on github, run prod on aws, and keep all communications on gmail anyway. What's so different about LLMs?

Those same startups will also use Google, OpenAI, or the built-in Microsoft AI.

This is clearly for companies that need to keep sensitive data under their control. I think they also get support with further training the model so it's personalized for your needs.

layer8 2 months ago

It’s not different. If you have confidentiality requirements like that, you also don’t store your code off-premises. At least not without enforceable contracts about confidentiality with the service provider, approved by the client.

mark_l_watson2 months ago

I have good results running Ollama locally with open models like Gemma 3, Qwen 3, etc. The major drawback is slower inference speed. Commercial APIs like Google Gemini are so much faster.

Still, I find local models very much worth using after taking the time to set them up with Emacs, open-codex, etc.

trollbridge2 months ago

Most of my clients have the same requirement. Given the code bases I see my competition generating, I suspect other vendors are simply violating this rule.

abujazar2 months ago

You can set up your IDE to use local LLMs through e.g. Ollama if your computer is powerful enough to run a decent model.
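
As a rough sketch of what that looks like (the model tag is just an example; most IDE plugins can then point at Ollama's OpenAI-compatible endpoint on localhost):

  ollama pull qwen2.5-coder:7b
  # quick check of the endpoint the IDE plugin will talk to
  curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen2.5-coder:7b", "messages": [{"role": "user", "content": "Write a binary search in Go"}]}'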

crimsoneer2 months ago

Are your clients not on AWS/Azure/GCP? They all offer private LLMs out of the box now.

ATechGuy2 months ago

That was my question too.

blitzar2 months ago

I also kind of don't understand how it seems everyone is using AI for doing their homework. I haven't had a teacher yet that would have approved any AI usage.

Same process, fewer people being called out for "cheating" in a professional setting.

Pamar2 months ago

Personally I am trying to see if we can leverage AI to help write design documents instead of code, based on a fairly large library of human (poorly) written design documents and bug reports.

betterThanTexas2 months ago

I would take any such claim with a heavy rock of salt because the usefulness of AI is going to vary drastically with the sort of work you're tasked with producing.

demarq2 months ago

Also it’s like saying you can host a database on your Mac.

Unless you have experience hosting and maintaining models at scale and with an enterprise feature set, I believe what they are offering is beyond (for now) what you’d be able to put up on your own.

ATechGuy2 months ago

Have you tried using private inference that uses GPU confidential computing from Nvidia?

lolinder2 months ago

Game changer feels a bit strong. This is a new entry in a field that's already pretty crowded with open source tooling that's already available to anyone with the time and desire to wire it all up. It's likely that they execute this better than the community-run projects have so far and make it more approachable and Enterprise friendly, but just for reference I have most of the features that they've listed here already set up on my desktop at home with Ollama, Open WebUI, and a collection of small hand-rolled apps that plug into them. I can't run very big models on mine, obviously, but if I were an Enterprise I would.
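
For anyone curious what that wiring looks like, the Open WebUI quickstart against a host-local Ollama is roughly this (a sketch; check the project docs for current flags):

  docker run -d -p 3000:8080 \
    --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data \
    --name open-webui ghcr.io/open-webui/open-webui:main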

The key thing they'd need to nail to make this better than what's already out there is the integrations. If they can make it seamless to integrate with all the key third-party enterprise systems then they'll have something strong here, otherwise it's not obvious how much they're adding over Open WebUI, LibreChat, and the other self-hosted AI agent tooling that's already available.

troyvit2 months ago

> crowded with open source tooling that's already available to anyone with the time and desire to wire it all up.

Those who don't have the time and desire to wire it all up probably make up a larger part of the market than those who do. It's a long-tail proposition, and that might be a problem.

> I have most of the features that they've listed here already set up on my desktop at home

I think your boss and your boss' boss are the audience they are going for. In my org there's concern over the democratization of locally run LLMs and the loss of data control that comes with it.

Mistral's product would allow IT or Ops or whatever department to set guardrails for the organization. The selling point that it's turn-key means that a small organization doesn't have to invest a ton of time into all the tooling needed to run it and maintain it.

Edit: I just re-read your comment and I do have to agree though. "game-changer" is a bit strong of a word.

abujazar2 months ago

Actually you shouldn't be running LLMs in Docker on Mac because it doesn't have GPU support. So the larger models will be extremely slow if they'll even produce a single token.

burnte2 months ago

I have an M4 Mac Mini with 24GB of RAM. I loaded LM Studio on it 2 days ago and had Mistral NeMo running in ten minutes. It's a great model. I need to figure out how to add my own writing to it; I want it to generate some starter letters for me. Impressive model.

raxxorraxor2 months ago

I think the standard VS Code Continue + Ollama setup already gives me 99% of the AI coding support I need. I think it is even better than commercial offerings like Cursor, at least in the projects and languages I use and have tested it with.

We had a Mac Studio here that nobody was using, and we now use it as a tiny AI station. If we like, we could even embed our codebases, but that hasn't been necessary yet. Otherwise it should be easy to just buy a decent consumer PC with a stronger GPU, but performance isn't too bad even for autocomplete.

thepill2 months ago

Which models are you using?

Palmik2 months ago

I really don't see the big deal. Gemini also allows on-prem in similar fashion: https://cloud.google.com/blog/products/ai-machine-learning/r...

nicce2 months ago

> Btw, you can also run Mistral locally within the Docker model runner on a Mac.

Efficiently? I thought macOS doesn't have an API that would let Docker use the GPU.

jt_b2 months ago

I haven't/wouldn't use it because I have a decent K8S ollama/open-webui setup, but docker announced this a month ago: https://www.docker.com/blog/introducing-docker-model-runner

nicce2 months ago

Hmm, I guess that is not actually running inside a container / there is no isolation. It seems to be some new approach that mixes llama.cpp, the OCI format, and the Docker CLI.

v3ss0n 2 months ago

What's the point when we can run much more powerful models now? Qwen3, DeepSeek

_bin_ 2 months ago

It would be short-termist for Americans or Europeans to use Chinese-made models. Increasing their popularity has an indirect but significant cost in the long term. China "winning AI" should be an unacceptable outcome for America or Europe, by any means necessary.

atwrk2 months ago

Why would that be? I can see why Americans wouldn't want to do that, but Europeans? In the current political climate, where the US openly claims their desire to annex European territory and so on? I'd rather see them prefer a locally hostable open source solution like DeepSeek.

ulnarkressty2 months ago

I think many in this thread are underestimating the desire of VPs and CTOs to just offload the risk somewhere else. Quite a lot of companies handling sensitive data are already using various services in the cloud and it hasn't been a problem before - even in Europe with its GDPR laws. Just sign an NDA or whatever with OpenAI/Google/etc. and if any data gets leaked they are on the hook.

boringg2 months ago

Good luck ever winning that one. How are you going to prove out a data leak with an AI model without deploying excessive amounts of legal spend?

You might be talking about small tech companies that have no other options.

dzhiurgis2 months ago

How many is many? Literally all of them use cloud services.

ATechGuy2 months ago

Why not use confidential computing based offerings like Azure's private inference for privacy concerns?

beernet2 months ago

Mistral really became what all the other over-hyped EU AI start-ups / collectives (Stability, Eleuther, Aleph Alpha, Nyonic, possibly Black Forest Labs, government-funded collaborations, ...) failed to achieve, although many of them existed way before Mistral. Congrats to them, great work.

Palmik2 months ago

It feels to me they turned into a generic AI consulting & solutions company. That does not mean it's a bad business, especially since they might benefit from the "built in EU" spin (whether through government contracts, regulation, or otherwise).

One can already deploy a similar solution (on-prem) using better and more cost-efficient open-source models and infrastructure.

What Mistral offers here is managing that deployment for you, but there's nothing stopping other companies from doing the same with a fully open stack. And those will have the benefit of not wasting money on R&D.

jamesblonde2 months ago

That's what we do with Hopsworks - EU built platform for developing and operating AI systems. We have customers running DeepSeek-v3 and Llama models. I never thought about slapping a Chat UI on it and selling the Chat app as a ready made product for the sovereign AI market. But why not.

stogot2 months ago

I’m wondering why. More funding, better talent, strategy, or something else?

agumonkey2 months ago

I'm an outsider, but none of the startups mentioned above ever crossed my radar. Mistral suddenly popped up after OpenAI/Anthropic exploded, and they were rapidly described as the third contender, with emphasis on technical merit. Maybe I was fooled though.

danielbln2 months ago

Black Forest Labs are the makers of FLUX, which for a while was the best open image model available (and generally a pretty strong image model). That said, now with a wave of Chinese models and the advent of autoregressive image models, I'm not sure how much that will stay true.

bobxmax2 months ago

Is Mistral really doing anything here? Llama models are open source, Cohere runs on-prem, etc.

retinaros2 months ago

What did they achieve exactly?

beernet2 months ago

Signs of market traction and executing on product development. All other mentioned companies never made it there.

85392_school 2 months ago

This announcement accompanies the new and proprietary Mistral Medium 3, being discussed at https://news.ycombinator.com/item?id=43915995

Havoc2 months ago

Not quite following. It seems to talk about features commonly associated with local servers, but then ends with availability on GCP.

Is this an API endpoint? A model enterprises deploy locally? A piece of software plus a local model?

There is so much corporate synergy speak there that I can’t tell what they’re selling.

frabcus2 months ago

They mention Google Cloud Marketplace (not Google Cloud Platform), this seems to be their listing there:

https://console.cloud.google.com/marketplace/product/mistral...

Which says:

"Managed Services are fully hosted, managed and supported by the service providers. Although you register with the service provider to use the service, Google handles all billing."

My assumption is that they're using Google Marketplace for discovery and billing, and they offer a hosted option or an on-prem option.

But agreed, it isn't clear!

tecleandor2 months ago

Lots of tools offer billing via Google Marketplace or the AWS equivalent because:

- it joins billing with other stuff

- I guess it's easier to get approval

- and, more importantly (at least in our case), it counts toward your Google Cloud (or AWS) contract spend commitments, so you keep your discounts :)

_pdp_ 2 months ago

While I am rooting for Mistral, having access to a diverse set of models is the killer app IMHO. Sometimes you want to code. Sometimes you want to write. Not all models are made equal.

the_clarence2 months ago

Tbh I think the one-general-model approach is winning. People don't want to figure out which model is better at what unless it's for a very specific task.

_pdp_ 2 months ago

IMHO people want to interact with agents that do things not with models that chat. And agents by definition are specialised which means a specific model and Mistral might not be good for all types of tasks just like the top of line models are not always for everything.

the_clarence2 months ago

Agents are specialized via prompts and MCP now, and more and more rarely by model

sschueller2 months ago

Couldn't you place a very lightweight model in front to figure out which model to use?

sReinwald2 months ago

That’s a perfectly valid idea in theory, but in practice you’ll run into a few painful trade-offs, especially in multi-user environments. Trust me, I'm currently doing exactly that in our fairly limited exploration of how we can leverage local LLMs at work (SME).

Unless you have sufficient VRAM to keep all potential specialized models loaded simultaneously (which negates some of the "lightweight" benefit for the overall system), you'll be forced into model swapping. Constantly loading and unloading models to and from VRAM is a notoriously slow process.

If you have concurrent users with diverse needs (e.g., a developer requiring code generation and a marketing team member needing creative text), the system would have to swap models in and out if they can't co-exist in VRAM. This drastically increases latency before the selected model even begins processing the actual request.

The latency from model swapping directly translates to a poor user experience. Users, especially in an enterprise context, are unlikely to tolerate waiting for a minute or more just for the system to decide which model to use and then load it. This can quickly lead to dissatisfaction and abandonment.

This external routing mechanism is, in essence, an attempt to implement a sort of Mixture-of-Experts (MoE) architecture manually and at a much coarser grain. True MoE models (like the recently released Qwen3-30B-A3B, for instance) are designed from the ground up to handle this routing internally, often with shared parameter components and highly optimized switching mechanisms that minimize latency and resource contention.

To mitigate the latency from swapping, you'd be pressured to provision significantly more GPU resources (more cards, more VRAM) to keep a larger pool of specialized models active. This increases costs and complexity, potentially outweighing the benefits of specialization if a sufficiently capable generalist model (or a true MoE) could handle the workload with fewer resources. And a lot of those additional resources would likely sit idle for most of the time, too.

F-Lexx2 months ago

Good idea. Then you could place another lighter-weight model in front of THAT, to figure out which model to use in order to find out which model to use.

It's LLMs, all the way down.

the_clarence2 months ago

My guess is that this is basically what AI providers are slowly moving to. And this is what models seem to be doing underneath the surface as well now with Mixture of Experts (MoE).

promiseofbeans2 months ago

I mean, the general purpose models already do this in a way, routing to a selected expert. It's a pretty fundamental concept for ensemble learning, which is effectively what MoE is.

I don't see any reason you couldn't stack more layers of routing in front, to select the model. However, this starts to seem inefficient.

I think the optimal solution will eventually be companies training and publishing hyper-focused expert models that are designed to be used with other models and a router. Then interface vendors can purchase different experts and assemble the models themselves, like how a phone manufacturer purchases parts from many suppliers, even their competitors, in order to create the best final product. The bigger players (e.g. Apple for this analogy) might make more parts in house, but even the latest iPhone still has Samsung chips in it in teardowns.

downsplat2 months ago

Same here. Since I started using LLMs a bit more, the killer step for me was to set up API access to a variety of providers (Mistral, Anthropic, Gemini, OpenAI), and use a unified client to access them. I'm usually coding at the CLI, so I installed 'aichat' from github and it does an amazing job. Switch models on the fly, switch between one-shot and session mode, log everything locally for later access, and ask casual questions with a single quick command.
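
Day to day that looks something like this (a sketch; flags and model names from memory, check aichat --help and your own provider config):

  # one-shot question against a specific provider:model
  aichat -m claude:claude-3-7-sonnet 'explain this awk one-liner: ...'
  # interactive session; switch models with .model inside the REPL
  aichat -s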

I think all providers guarantee that they will not use your API inputs for training, it's meant as the pro version after all.

Plus it's dirt cheap, I query them several times per day, with access to high end thinking models, and pay just a few € per month.

Deathmax2 months ago

Gemini's free tier will absolutely use your inputs for training [1], same with Mistral's free tier [2]. Anthropic and OpenAI let you opt into data collection for discounted prices or free tokens.

[1]: https://ai.google.dev/gemini-api/terms#data-use-unpaid

[2]: https://mistral.ai/terms#privacy-policy

downsplat2 months ago

Yeah, I mean paid API access. You put a credit card in, and it's peanuts at the end of the month. Sorry I didn't specify. Good reminder that with free services you are the product!

binsquare2 months ago

Well that sounds right up the alley of what I built here: www.labophase.com

I_am_tiberius2 months ago

I really love using Le Chat. I feel much safer giving information to them than to OpenAI.

victorbjorklund2 months ago

Why use this instead of an open source model?

dlachausse2 months ago

> our world-class AI engineering team offers support all the way through to value delivery.

victorbjorklund2 months ago

Guess that makes sense. But I'm sure they charge good money for it and then you could just use that money for someone helping you with an open source model.

disgruntledphd2 2 months ago

Presumably one throat to choke logic applies here, particularly in Europe.

iamnotagenius2 months ago

[dead]

starik36 2 months ago

I don't see any mention of hardware requirements for on-prem. What GPUs? How many? Disk space?

tootie2 months ago

I'm guessing it's flexible. Mistral makes small models capable of running on consumer hardware so they can probably scale up and down based on needs. And what is available from hosts.

rowanajmarshall2 months ago

I run a Mistral model on my phone!

dr_kretyn2 months ago

Explain more please? Is that a big phone/tiny laptop with long GPU connector? Is that a tiny model?

adamsiem2 months ago

Parsing email...

The intro video highlights searching email alongside other tools.

What email clients will this support? Are there related tools that will do this?

guerrilla2 months ago

Interesting. Europe is really putting up a fight for once. I'm into it.

fortifAI2 months ago

Mistral isn't really Europe, it's France. Europe has some plans but as far as I can tell their goal isn't to make something that can really compete. The goal is to make EU data stay in the EU for businesses, meanwhile every user that is not forced by their company sends their data to the US or China.

blitzar2 months ago

Last I checked France is in Europe. It would be like saying Google or Apple are not American because they are in California.

klabb3 2 months ago

The big-picture incongruence in the thread is using terms like "patriotic" and allegories to US states, which imo is a US-centric projection. Even proponents don’t think of the EU as a supreme government with federated states, and they certainly don’t think of ”Europeans” as a unified demographic. At best, the EU protects against stupid shit from other EU countries (tariffs, freedom of movement) and stupid shit from the outside, such as bullying by superpowers like Russia, China and recently also the US, or by extremely large corporations that can take on smaller nation states.

The EU is more similar to NAFTA or five eyes, and culturally the loyalty is more similar to the US vs the anglosphere, like how Americans think of Australia, UK and Canada. Well, again, until recently. Things are changing fast.

resource_waste2 months ago

Expected this comment.

Mistral has been consistently last place, or at least last place among ChatGPT, Claude, Llama, and Gemini/Gemma.

I know this because I had to use a permissive license for a side project and I was tortured by how miserably bad Mistral was, and how much better every other LLM was.

Need the best? ChatGPT

Need local stuff? Llama(maybe Gemma)

Need to do barely legal things that break most companies' TOS? Mistral... although DeepSeek probably beats it in 2025.

For people outside Europe, we don't have patriotism for our LLMs, we just use the best. Mistral has barely any use case.

omneity2 months ago

> Need local stuff? Llama(maybe Gemma)

You probably want to replace Llama with Qwen in there. And Gemma is not even close.

> Mistral has been consistently last place, or at least last place among ChatGPT, Claude, Llama, and Gemini/Gemma.

Mistral held for a long time the position of "workhorse open-weights base model" and nothing precludes them from taking it again with some smart positioning.

They might not currently be leading a category, but as an outside observer I could see them (like Cohere) actively trying to find innovative business models to survive, reach PMF and keep the dream going, and I find that very laudable. I expect them to experiment a lot during this phase, and that probably means not doubling down on any particular niche until they find a strong signal.

drilbo2 months ago

>You probably want to replace Llama with Qwen in there. And Gemma is not even close.

Have you tried the latest, Gemma 3? I've been pretty impressed with it. Although I do agree that Qwen3 quickly overshadowed it, it seems too soon to dismiss it altogether. E.g., the 3-4B and smaller versions of Gemma seem to freak out way less frequently than similar-param-size Qwen versions, though I haven't been able to rule out quant and other factors in this just yet.

It's very difficult to fault anyone for not keeping up with the latest SOTA in this space. The fact we have several options that anyone can serviceably run, even on mobile, is just incredible.

Anyway, i agree that Mistral is worth keeping an eye on. They played a huge part in pushing the other players toward open weights and proving smaller models can have a place at the table. While I personally can't get that excited about a closed model, it's definitely nice to see they haven't tapped out.

omneity2 months ago

It's probably subjective to your own use, but for me Gemma3 is not particularly usable (i.e. not competitive or delivering a particular value for me to make use of it).

Qwen 2.5 14B blows Gemma 27B out of the water for my use. Qwen 2.5 3B is also very competitive. The 3 series is even more interesting with the 0.6B model actually useful for basic tasks and not just a curiosity.

Where I find Qwen relatively lackluster is its complete lack of personality.

amelius2 months ago

I certainly had some opposite experiences lately, where Mistral was outperforming ChatGPT on some hard questions.

tacker2000 2 months ago

What's your point here? There is a place for a European LLM, be it “patriotism” or data safety. And don't tell me the Chinese are not “patriotic” about their stuff. Everyone has a different approach. If Mistral fits the market, they will be successful.

lysace2 months ago

[flagged]

_bin_ 2 months ago

[flagged]

byefruit2 months ago

You are probably getting downvoted because you don't give any model generations or versions ('ChatGPT') which makes this not very credible.

resource_waste2 months ago

[flagged]

dismalaf2 months ago

In your first comment you mentioned you used Mistral because of its permissive license (so guessing you used 7B, right?). Then you compare it to a bunch of cutting edge proprietary models.

Have you tried Mistral's newest and proprietary models? Or even their newest open model?

qwertox2 months ago

This is so fast it took me by surprise. I'm used to wait for ages until the response is finished on Gemini and ChatGPT, but this is instantaneous.

amelius2 months ago

I'm curious about the ways in which they could protect their IP in this setup.

badmonster2 months ago

Interesting take. I wonder if other LLM competitors would do the same.

mxmilkiib2 months ago

The site doesn't work with dark mode; the text is dark as well.

phupt262 months ago

Another new model ( Medium 3) of Mistral is great too. Link: https://newscvg.com/r/yGbLTWqQ

m-hodges2 months ago

I love that "le chat" translates from French to English as "the cat".

Jordan-117 2 months ago

Also, "ChatGPT" sounds like chat, j’ai pété ("cat, I farted")

layer8 2 months ago

Mistral should highlight more in their marketing that it doesn’t make you fart.

foobahhhhh2 months ago

Instead it disobeys commands, uses up your resources then you find it never belonged to you in the first place.

cryptonector2 months ago

I came in to say this, and I was sure I'd be the first. This is so appropriate considering how ChatGPT -like all LLMs- hallucinates.

debugnik2 months ago

Their M logo is a pixelated cat face as well.

AceJohnny2 2 months ago

I wonder if they mean to reference the Belgian comic Le Chat by Philippe Geluck.

https://en.wikipedia.org/wiki/Le_Chat

caseyy2 months ago

This will make for some very good memes. And other good things, but memes included.

iamnotagenius2 months ago

Mistral models, though, are not interesting as models. Context handling is weak, the language is dry, coding is mediocre; not sure why anyone would choose them over Chinese (Qwen, GLM, DeepSeek) or American models (Gemma, Command A, Llama).

tensor2 months ago

Command A is Canadian. Also mistral models are indeed interesting. They have a pretty unique vision model for OCR. They have interesting edge models. They have interesting rare language models.

And also another reason people might use a non-American model is that dependency on the US is a serious business risk these days. Not relevant if you are in the US but hugely relevant for the rest of us.

tootie2 months ago

I flip back and forth with Claude and Le Chat and find them comparable. Le Chat always feels very quick and concise. That's just vibes not benchmarks.

crowcroft2 months ago

I haven't used it much, but I did find Le Chat to be FAST in a way that I don't always get with ChatGPT.

amai2 months ago

Data privacy is a thing - in Europe.

FuriouslyAdrift2 months ago

GPT4All has been running locally for quite a while...

curiousgal2 months ago

Too little too late, I work in a large European investment bank and we're already using Anthropic's Claude via Gitlab Duo.

croes2 months ago

Is there a replacement for the Safe Harbor replacement?

Otherwise it could be illegal to transfer EU data to US companies

_bin_ 2 months ago

The law means don’t do what a slow-moving regulator can and will prove in court. In this case, the law has no moral valence, so I doubt anyone there would feel guilty breaking it. He may mean individuals are using ChatGPT unofficially even if nominally prohibited by management. Such is the case almost everywhere.

sofixa2 months ago

> In this case, the law has no moral valence

That's not how laws work.

croes2 months ago

There is a difference if you upload your data or your customers' data.

There are countries in the EU where you get sued for less

jagermo2 months ago

AI data residency is an issue for several of our customers, so I think there is still a big enough market for this.

alwayseasy2 months ago

Your bank sticks with any tech that comes out first? How is this a cogent argument?