
Translation of Rust's core and alloc crates to Coq for formal verification

355 points | 3 days ago | formal.land
thaliaarchi3 days ago

That's really impressive.

Automatic translation like this shifts the trust to the tool. coq-of-rust itself is written in Rust, not in Coq. The recursive nature is somewhat boggling, but I think a proof of its correctness is possible via a process similar to David A. Wheeler's “Countering Trusting Trust through Diverse Double-Compiling” (2009) [0] (which circumvents Ken Thompson's Trusting Trust attack by using a second compiler), mixed with a CompCert-style approach.

To verify it, you'd use coq-of-rust to convert the coq-of-rust translator from Rust to Coq. That translation is not trusted, because it was performed in Rust, but it doesn't matter. Once in Coq, you prove the desired correctness properties—crucially, that it preserves the semantics of the Rust program when it translates a program to Coq.

As in the article, it is likely easier to work with more functional definitions in proofs instead of generated ones, so you'd undertake the same process as they do with the stdlib of proving equivalence between definitions. Since the current line count for the coq-of-rust translator (specifically, lib/ [1]) is 6350 lines of Rust, it even seems feasible to write a full translator in Coq and prove its equivalence to the generated one.

Then, you execute the proven-correct Coq coq-of-rust translator on the Rust source of the coq-of-rust translator. The Coq definitions it outputs should match the output of the Rust coq-of-rust translator that you started with.

As an aside, it's nice to see industry funding for work like this. I'm often cynical of cryptocurrency, but its correctness constraints really push for improvements in areas I like (Rust, Coq, funding for master's students I know, etc.).

[0]: https://dwheeler.com/trusting-trust/wheelerd-trust.pdf

[1]: https://github.com/formal-land/coq-of-rust/tree/main/lib

clarus3 days ago

Thanks for the comment! One of the authors here.

Indeed, this would be a nice process to verify coq-of-rust. Also, although the code is rather short, we depend on the Rust compiler to parse and type-check the input Rust code. So that would also need to be verified, or at least formally specified without doing the proofs, and the API of rustc is rather large and unstable. It could still be a way to get more assurance.

thaliaarchi3 days ago

I didn't touch on that, but I did assume trust of the Rust toolchain, verifying starting at THIR. Verifying rustc would be a monumental undertaking, though I think some people are working on it.

Since we don't have a verified rustc (à la CompCert [0]), I wonder if an approach like the translation validation of seL4 [1] would work. They prove that the artifact (ARM machine code) produced by an existing compiler (gcc) for a chosen program (seL4) matches the source semantics (C). Thus you could circumvent trusting rustc, but it only works to verify a specific output of a chosen program. If the chosen program were coq-of-rust, I don't think this would be easier than the approach I detailed above. The seL4 kernel is 9,500 lines of C, while their Isabelle/HOL specification is over 200,000 lines, so the technique doesn't seem to scale to a large chosen source like rustc.

Isn't bootstrapping fun?

[0]: Xavier Leroy. 2008. “Formal verification of a realistic compiler”. https://xavierleroy.org/publi/compcert-CACM.pdf

[1]: Thomas Sewell, Magnus Myreen, and Gerwin Klein. PLDI 2013. “Translation Validation for a Verified OS Kernel”. https://sci-hub.st/10.1145/2491956.2462183

deredede3 days ago

Since you contrast the two approaches, you might be interested to learn that CompCert also uses translation validation in some parts of the compiler (notably for register allocation); see Jean-Baptiste Tristan's papers with Xavier Leroy.

deredede3 days ago

Is there a Coq formalisation for enough of Rust that would be usable here? I thought people were still figuring that out.

clarus3 days ago

The formalization work for Rust was done mostly at the MIR level, which is one step lower than the THIR level we use here. See, for example, the https://plv.mpi-sws.org/rustbelt/ project. MIR should be more amenable to formal specification, as this language is more compact than THIR and aims to be more stable.

However, we also lose some information going to MIR, as there are no expressions/loops from the original code anymore. There are still ways to reconstruct these, but we preferred to use the THIR representation directly.

deredede2 days ago

That makes sense and fits what I had in mind: there is no formalization at the source level. Thanks for the details!

Gajurgensen2 days ago

Very interesting work! I'm curious how you handle loops/recursion? I imagine the `M` monad seen in the examples has a special primitive for loops?

clarus2 days ago

Yes, there is a special primitive for loops in the monad. The primitives are uninterpreted; we define valid finite traces of execution of a program and reason about these traces.

If a program has a loop we show that it terminates by constructing an execution trace. Note that we do not yet consider concurrency, so programs are deterministic.
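A loose Rust sketch of that idea (names like `M`, `Step`, and `eval` are made up here, not the actual coq-of-rust encoding): treat the loop as an uninterpreted primitive, and witness termination by exhibiting a finite execution trace, found here with a fuel-bounded evaluator.

```rust
// Hypothetical sketch of trace-based reasoning about a loop primitive.
enum M {
    Pure(i64),
    // An uninterpreted loop: keep applying `step` until it yields Done.
    Loop { state: i64, step: fn(i64) -> Step },
}

enum Step {
    Continue(i64),
    Done(i64),
}

/// Evaluate with a fuel bound, recording the trace of intermediate states.
/// Returning `Some` exhibits a finite trace, i.e. a termination witness.
fn eval(m: M, mut fuel: u32) -> Option<(i64, Vec<i64>)> {
    match m {
        M::Pure(v) => Some((v, vec![])),
        M::Loop { mut state, step } => {
            let mut trace = vec![state];
            while fuel > 0 {
                match step(state) {
                    Step::Done(v) => return Some((v, trace)),
                    Step::Continue(s) => {
                        state = s;
                        trace.push(s);
                        fuel -= 1;
                    }
                }
            }
            None // no finite trace found within the fuel bound
        }
    }
}

fn main() {
    // A countdown loop terminates, so a finite trace exists.
    let countdown = M::Loop {
        state: 3,
        step: |s| if s == 0 { Step::Done(0) } else { Step::Continue(s - 1) },
    };
    let (result, trace) = eval(countdown, 100).unwrap();
    println!("{result} {:?}", trace); // trace = [3, 2, 1, 0]
}
```

In the Coq setting the trace is constructed in a proof rather than computed, but the shape of the argument is the same: a terminating loop is one for which some finite valid trace exists.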

noneeeed3 days ago

It reminds me of when I used to work in SPARK Ada. On a number of projects where there was no supported Ada target (especially very small devices), the code would be converted to C and then compiled for the target. That allowed us to perform the various forms of static analysis in SPARK land.

It obviously introduced issues around verifying the output and the transpiler itself, but the resulting C code was quite readable and limited in style for verification purposes, and the tools had a high degree of trust.

The SPARK static analysis was only ever one part of the whole verification and validation process, and there was plenty of testing and other activity that provided additional levels of trust.

The whole approach seemed to work pretty well.

im3w1l3 days ago

> As an aside, it's nice to see industry funding for work like this. I'm often cynical of cryptocurrency, but its correctness constraints really push for improvements in areas I like (Rust, Coq, funding for masters students I know, etc.).

That's part of it, but another part, I think, is that cryptocurrencies simply transferred a lot of wealth to people interested in CS (and in a wouldn't-it-be-cool-if mindset in general). And they are spending that money on CS research not just because it benefits them but because it's philanthropy aligned with their hobby.

epolanski3 days ago

You reminded me of this blog post by Stephen Diehl:

https://www.stephendiehl.com/posts/crypto.html

hedora2 days ago

I’m not sure how that approach prevents the rust compiler that’s used to build binaries from injecting its malicious payload. (You could build A with B and B with A, and diff the binaries, I guess).

Another approach is proof carrying code: The rust compiler would emit a coq proof that its executable output matches the semantics of the source code input.

Of course, you could also write a compiler of a subset of rust (perhaps without the borrow checker or optimizations) in machine code for one architecture, then bootstrap everything with that.

thaliaarchi2 days ago

A compiler that injects backdoors in targeted programs and self-propagates the meta-backdoor (to avoid detection in the source) is exactly the trusting trust attack and it can be mitigated by diverse double-compiling (paper linked above). It requires a second compiler and we have mrustc, a Rust compiler in C++ built specifically for circumventing the unverified bootstrap chain of rustc.

The process is: compile mrustc with a C++ compiler. Compile the rustc sources with the untrusted rustc binary, and compile the rustc sources with mrustc (these two have identical behavior, but different codegen). Then compile the rustc sources with rustc-built-by-rustc and with rustc-built-by-mrustc (these have identical behavior and identical codegen, so the binaries match). If you compile once more, they will still match. Since mrustc is never compiled by rustc, such a backdoor would have to also exist in gcc/clang and propagate with exactly identical behavior through mrustc. The process could be repeated for gcc/clang.

pron3 days ago

The size of programs (as opposed to libraries, which can be easier because their code is not nearly as self-interacting) that have been end-to-end verified with semi-automated deductive proof systems like Coq is quite small. In fact, the size of programs that can be verified in this way has grown at a slower pace than the average size of programs overall. While sound software verification (i.e. 100% conformance to a spec) certainly has its place (usually proving the correctness of some core algorithms), it doesn't scale well. That is why research has shifted toward unsound methods: the cost of 100% assurance can be 10x that of 99.9999% assurance, and the difference in failure probability may fall below the probability of non-software failures (after all, a physical system cannot be proven to conform to a spec), or below the probability that the spec itself doesn't match reality well enough.

pcwalton2 days ago

The point is that, by proving the unsafe parts of the standard library correct and Rust's safety guarantees correct, we transitively prove the memory safety (as well as data race freedom, etc.) of all safe Rust code (that uses only the standard library). Only proving the unsafe code correct is far less effort than proving all Rust code correct would be. This addresses the common criticism of "what about unsafe code?" when people talk about Rust's safety guarantees. We can use a more powerful system, like Coq, to fill in the gaps that the Rust type system itself isn't powerful enough to handle.

pron2 days ago

Sure (assuming Rust's type system were sound, which, I believe, it isn't yet). But my general point -- which isn't at all specific to Rust -- is that with deductive methods, sometimes the effort required may not be worth the added assurance probability they give over unsound methods, while in other cases, deductive methods may be cost-effective. The question of cost-effectiveness is one that always needs to be asked in software assurance.

The level of confidence that software requires (as opposed to an abstract algorithm) is not that much higher than one minus the probability of an "unaccounted" failure (such as a hardware failure). Sometimes the extra confidence beyond that point is free, and sometimes it can be very expensive.

pcwalton2 days ago

Rust's type system is sound. There are some known bugs in the compiler, however.

saagarjha2 days ago

Has this been verified somewhere?

weinzierl3 days ago

What I don't quite understand with these efforts is this: if we have to translate the code to Coq manually or semi-manually, isn't the likelihood of making mistakes in the translation much higher than what we ultimately gain from the formal verification? In other words, how do we know that what we prove is still valid for the original code?

muldvarp3 days ago

> In other words, how do we know that what we prove is still valid for the original code?

We don't. We will always have to trust _something_ (the hardware, the compiler, the specification, Coq's trusted kernel, ...). Formal verification is sadly often discussed as removing the possibility of bugs in their entirety. In practice, formal verification only makes bugs much less likely.

That said, yes, automatic translation between Rust and Coq massively reduces the complexity of the things you have to trust and should therefore be preferred over manual translation.

epolanski3 days ago

Every tool can only guarantee absence of some categories of bugs.

The most common ones, business logic, often escape verification simply because of wrong assumptions.

Simple example: some websites use the browser's default language to set your locale, some use your IP. Thus I travel to Poland and suddenly YouTube shows me Polish content and ads (this can often be changed, but not always; see the Google Docs idiocy). Both approaches will unavoidably lead to business logic bugs because the assumptions are wrong in the first place.

jlokier2 days ago

That's why some industries have separate "verification" and "validation" (V&V) activities.

Verification is checking against formal or at least known requirements like those being discussed in this thread.

Validation is a set of processes for ensuring the specifications actually correspond to how the product ought to work. "How it ought to work" is open-ended and meant to include real-world aspects you didn't know beforehand. For example business logic assumptions. But despite being open-ended it is possible to formalise and track the process of looking for issues systematically.

In your example of YouTube showing Polish content when you travel, verification is the process of checking the website uses the IP correctly because using the IP is specified somewhere, but validation is the process of asking "what's the right thing to do when a user travels, is it really to use the IP?" and "let's systematically research what issues are affecting users due to incorrect specifications that we should change".

bennofs3 days ago

Yes, that is a limitation. But this limitation is not too bad.

In many cases, a bug in the translation simply makes the proof impossible. Somebody then investigates why the proof does not go through and finds the bug in the translation.

We only have a problem if a bug in the translation specifically cancels a bug in the original code. If there is no systematic risk, it is quite unlikely that two bugs cancel each other in this way.

im3w1l3 days ago

Let's say you want to check if use after free can ever occur. Your translation is bugged and translates the whole program to a single "nop". Then the verifier happily asserts that the translated program does not cause use after free.

I doubt the translation would be that bad but it could have more subtle issues of the same kind.

bennofs2 days ago

My argument is that you typically also check other properties, like properties describing what your program should do. These other properties likely wouldn't be satisfied by a nop program, and hopefully the same is true for many subtle issues.

user23423 days ago

In the case of Coq-to-OCaml: is it feasible to do an extraction to OCaml of the translated code and compare it with the original?

cccbbbaaa3 days ago

You can write programs in Coq and extract them in OCaml with the `Extraction' command: https://coq.inria.fr/doc/v8.19/refman/addendum/extraction.ht...

This is used by compcert: https://compcert.org/

clarus3 days ago

The code is translated automatically with coq-of-rust! When issues are found in the translation they can be fixed once in the coq-of-rust tool, and all the translations are updated.

weinzierl3 days ago

Ok, what I think I do not understand is what they mean by "tedious and error prone"? Is it tedious to write the automated translation (aka the coq-of-rust tool in this case) or does the translation of a concrete piece of code (e.g. the core crate) still involve manual work?

clarus3 days ago

The "tedious and error prone" code was what we were doing before, when the translation of the standard library was not yet working automatically with coq-of-rust. Now, this is automatic. Maybe the explanations on the blog post were not clear.

p4bl03 days ago

Note to interested readers: I submitted this blog post because it is less directly focused on cryptocurrency-related topics than other posts from the same blog, but there are many more technically interesting posts there, especially the last two, which discuss the same approach but for Python instead of Rust.

konstantinua003 days ago

I remember a lecture [1] that mentioned a fuzzer finding bugs in a formally verified C compiler, because the formal verification didn't cover the frontend and backend.

I know people raise questions about trust in Coq itself and in the translation (and I question how it will stay synchronized with Rust updates), but even perfect formal verification doesn't mean 100% correctness from start to finish.

[1] https://youtu.be/Ux0YnVEaI6A

fsloth3 days ago

Totally not an expert on formal verification. If Rust's base libraries get formally verified, and one does not use unsafe code, does that mean all Rust programs that use the formally verified libs get de-facto formally verified quality regarding their memory handling?

Galanwe3 days ago

Rust safety has very little to do with bugs.

Rust has its own definition of what "safe" means, which is arguably a subset of memory safety.

You can have perfectly safe Rust code with data races, deadlocks, memory exhaustion, etc. Not even talking about logical errors.

dist1ll3 days ago

> You can have perfectly safe Rust code with data races

No, you can't. Data races are explicitly one type of race condition that Rust protects you from. Anything else would indicate an unsound library or compiler bug. For reference, you can take a look at the non-exhaustive list of undefined behaviors[0]. None of those behaviours should be possible to trigger from safe Rust.

[0] https://doc.rust-lang.org/reference/behavior-considered-unde...
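As a concrete illustration of that guarantee (a sketch, not code from the thread): capturing a `&mut` to the same integer from several threads is rejected at compile time, so safe code is pushed toward synchronization, which also makes the result deterministic.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Increment a shared counter from `threads` threads, `per_thread`
/// times each. The unsynchronized version (every thread mutating a
/// plain `&mut u32`) does not compile; `Arc<Mutex<_>>` is the safe
/// route, and it rules out lost updates.
fn parallel_count(threads: usize, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1; // exclusive access
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Deterministic despite 8 concurrent writers.
    assert_eq!(parallel_count(8, 1000), 8000);
}
```

Note this only rules out data races; nothing stops safe code from, say, acquiring two such mutexes in opposite orders and deadlocking, which is the distinction drawn elsewhere in the thread.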

hu33 days ago

As per https://doc.rust-lang.org/nomicon/races.html

> However Rust does not prevent general race conditions.

> This is mathematically impossible in situations where you do not control the scheduler, which is true for the normal OS environment.

> For this reason, it is considered "safe" for Rust to get deadlocked or do something nonsensical with incorrect synchronization: this is known as a general race condition or resource race. Obviously such a program isn't very good, but Rust of course cannot prevent all logic errors.

Galanwe3 days ago

You can have both data races and race conditions in a perfectly safe Rust program.

Rust can only offer its safety for the lifetimes it tracks, and this tracking is limited to a single instance of your process.

In other words, if you perform concurrent accesses through multiple threads spawned from the same process, you're safe from data races but at risk of race conditions. If you perform concurrent accesses through multiple processes, you're at risk of both.

That implies that even in a world where everything is safe Rust, you cannot guarantee that two Rust processes will not have data races between them.

raphlinus3 days ago

I think that to some extent represents the dream, with caveats. A few pieces need to fall into place.

1. You need to formalize the semantics of unsafe Rust. Ralf Jung's pioneering RustBelt[1] work was a huge step in that direction, but it is not yet complete. One element that's emerging as tricky is "pointer provenance."

2. Part of that is making a formal model of the borrow checker. Stacked borrows[2] was a good attempt, but has some flaws. Tree borrows[3] may correct those flaws, but it's possible something even more refined may emerge.

3. Another part of that is a formal memory model, which is mostly about the behavior of atomics and synchronization (thus, is critically important for standard library components such as Arc). It is widely understood that Rust's memory model should be similar to the C++ one (and that they should interoperate), but there are some remaining flaws in that, such as the "out of thin air" problem and also some missing functionality like seqlocks (part of why the Linux kernel still has its own).

4. You need a proof that the safety guarantees compose well; in particular if you have sound code written in unsafe Rust, composed with other code written in safe Rust, the safety guarantees still hold. There are good results so far but it needs to be proved over the entire system.

5. You do need to close all the remaining soundness bugs in Rust[4], for such a proof to hold for all code. Progress on this is slow, as many of these problems are theoretical, or are only really relevant for adversarial code[5].

When all this is done, you still only have a partial guarantee. Huge amounts of the surface area to interface with the system are based on unsafe code. If you're doing only pure computation, that's one thing, but if you're building, say, a graphical UI, there's a massive amount of stuff that can still go wrong.

Even so, there is a practical path forward, and that leads to a much better situation than the usual state of things today, which is of course systems riddled with vulnerabilities.

[1]: https://people.mpi-sws.org/~dreyer/papers/rustbelt/paper.pdf

[2]: https://plv.mpi-sws.org/rustbelt/stacked-borrows/

[3]: https://www.ralfj.de/blog/2023/06/02/tree-borrows.html

[4]: https://github.com/rust-lang/rust/issues?q=is%3Aissue+is%3Ao...

[5]: https://github.com/Speykious/cve-rs

tialaramex2 days ago

Provenance is a serious problem for a language which has pointers and optimisation.

It's interesting to me that serious people propose "bag of bits" semantics (basically "what is provenance? let's just say pointers are raw addresses") for C++, which seems like a terrible idea, and I can't tell whether they really mean it. P2414R3: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p24...

clarus3 days ago

Thanks for the explanations and all the links!

hu33 days ago

I'm no Rust expert but it seems to have a lot of unsafe in core code. And understandably so:

https://github.com/search?q=repo%3Arust-lang%2Frust+unsafe+l...

I don't think Coq, as presented here, can verify all those unsafe calls.

steveklabnik2 days ago

Most of the interest in formalizing Rust is specifically about formalizing the model that unsafe code is supposed to uphold.

xavxav3 days ago

Can you describe how your approach contrasts with Aeneas or RustHornBelt? How do you handle pointers and mutable borrows?

clarus3 days ago

I do not know how RustHornBelt works. We are focusing on safe code, although we still generate a translation for unsafe blocks as a "best effort".

Compared to Aeneas, the goal is very similar, as we want to verify Rust programs using interactive theorem provers. However, with coq-of-rust we write the purely functional version of the code (the one on which we do the proofs) by hand, or with some help from GitHub Copilot as this is rather repetitive, and prove it equivalent to the automatic translation. Aeneas aims to directly generate a functional version.

We handle all pointers as if they were mutable pointers (the `*` type). We do not use any information from Rust's borrow checker, which simplifies the translation, but we pay for that at proof time.

To reason about pointers in the proofs, we let the user provide a custom allocator that can be designed depending on how the memory will be used. For example, if the program uses three global mutable variables, the memory can be a record with three entries. These entries are initially `None` to represent when they are not yet allocated. We do not yet know how this technique can scale, but at least we can avoid separation logic reasoning for now. We hope that most of the programs we will verify have a rather "simple" memory discipline, especially on the application side.
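A rough Rust rendering of that allocator idea (the real construction lives on the Coq side; all names here are invented): the memory of a program with a few globals is just a record, with `None` marking a cell that is not yet allocated.

```rust
// Hypothetical model of a program-specific memory: one field per
// global mutable variable, `None` = not yet allocated.
#[derive(Default)]
struct Memory {
    counter: Option<u64>,
    flag: Option<bool>,
    name: Option<String>,
}

impl Memory {
    // "Allocation" just fills in the corresponding field.
    fn alloc_counter(&mut self, v: u64) {
        assert!(self.counter.is_none(), "double allocation");
        self.counter = Some(v);
    }
    // Reads and writes go through the record; no general heap needed.
    fn read_counter(&self) -> u64 {
        self.counter.expect("read before allocation")
    }
}

fn main() {
    let mut mem = Memory::default();
    mem.alloc_counter(0);
    mem.counter = Some(mem.read_counter() + 1);
    assert_eq!(mem.read_counter(), 1);
}
```

Because the memory has a fixed shape, reasoning about one cell never disturbs the others, which is what allows separation logic to be avoided for programs with a simple memory discipline.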

clarus2 days ago

I should add that for pointers to immutable data, we can translate them to immutable values on the Coq side. Thus for Rust code written in a purely functional way (using only immutable data structures) the translation is almost one-to-one with Coq, that is to say the same code up to syntactical differences and the verbosity of our monad that behaves like the identity monad in that case.

Thus we can provide a path for people who are ready to sacrifice performance for proofs. I guess that immutable Rust is simpler to verify with the other systems too.
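For example, a Rust function written in this immutable style (a made-up example, not from core or alloc) already reads like its functional translation: no mutation, no pointers, just structural recursion.

```rust
// Purely functional Rust: insertion into a sorted list, returning a
// fresh Vec instead of mutating in place. A translation of code like
// this can be essentially one-to-one with a functional language.
fn insert_sorted(xs: &[i32], x: i32) -> Vec<i32> {
    match xs.split_first() {
        None => vec![x],
        Some((&h, _)) if x <= h => {
            // x goes in front; keep the rest unchanged.
            std::iter::once(x).chain(xs.iter().copied()).collect()
        }
        Some((&h, t)) => {
            // Keep the head; recurse on the tail.
            std::iter::once(h).chain(insert_sorted(t, x)).collect()
        }
    }
}

fn main() {
    assert_eq!(insert_sorted(&[1, 3, 5], 4), vec![1, 3, 4, 5]);
}
```

The price is the extra allocation on every call, which is the performance-for-proofs trade-off described above.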

xavxav17 hours ago

> I guess that immutable Rust is simpler to verify with the other systems too.

I don't think that's the case (it's not harder either). The type system of Rust makes handling mutability fairly trivial in verification; in fact, I think it could be translated to immutable code in a continuation monad, like the Tardis monad from Haskell.

vlovich1232 days ago

Is formal verification specification writing similar to writing a more complicated property test? I’ve found writing property tests to be kinda difficult and time consuming for non-trivial programs.

Jtsummers2 days ago

Similar, not always the same. Property tests generally work at the interface of a system (either down to the function/procedure level or higher level) where you specify a description of the inputs and test the outputs satisfy some property or set of properties.

This will look a lot like what you do with formal verification at the same level. Formal verification tools can go deeper, though, like answering questions about the state of the system internal to a computation. Like "I have this loop invariant, can I prove (or have a system prove) that it actually holds across all iterations?" or "Can I prove that while doing this computation there is never underflow/overflow in any intermediate computation?" The former could be done with property testing if you pull the loop kernel out into its own procedure and then test the invariant property by executing the kernel with a large number of intermediate states. The latter is harder to do without really, really tearing your program apart down to the level of making each line separately executable.
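As a weak runtime approximation of those two internal questions, one can assert a loop invariant on every iteration and use checked arithmetic for intermediate overflow; a prover would instead discharge both for all inputs at once. A small made-up Rust example:

```rust
/// Sum 1..=n in u8 arithmetic, asserting the loop invariant
/// `sum == i*(i+1)/2` on each iteration and using `checked_add` so an
/// intermediate overflow yields `None` instead of wrapping.
fn sum_to(n: u8) -> Option<u8> {
    let mut sum: u8 = 0;
    for i in 1..=n {
        sum = sum.checked_add(i)?; // None on intermediate overflow
        // Invariant checked at runtime for the inputs we happen to run;
        // a prover would show it holds for every i and every n.
        debug_assert_eq!(sum as u32, (i as u32) * (i as u32 + 1) / 2);
    }
    Some(sum)
}

fn main() {
    assert_eq!(sum_to(10), Some(55));
    assert_eq!(sum_to(30), None); // sum would exceed u8::MAX at i = 23
}
```

The runtime checks only cover executed inputs, which is exactly the gap between (property) testing and proof being discussed here.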

vlovich1232 days ago

I think my main point is that writing the specification is difficult enough for a non-trivial program with property tests, and many places tend to avoid it if they can. If formal verification is even more difficult, it doesn't matter that it has more power and can make the code even more correct - the cost is a huge blocker.

Jtsummers2 days ago

It depends on how you do the formal verification. There are systems like SPARK/Ada, which brings the specification into the language you're developing in (it supports only a subset of Ada, but every year they're expanding what it covers), JML, which does something similar for Java but expresses the specification in its own notation inside Java comments, and others.

If you use these kinds of systems you can be more judicious in what you try to prove. Whole program proofs are infeasible (or at least impractical) generally, but you pick the subsets you care about most and you can add annotations and prove them (automatically ideally, but you can bring in an external proof assistant as well). You leave the rest of the system to testing which has proven remarkably effective over the decades (when taken seriously, at least).

This isn't an all-or-nothing proposition, you choose how to spend your time and effort with formal verification just like you do with testing or other work.

muldvarp2 days ago

> the cost is a huge blocker.

It is. Formal verification is and will be (for a very long time) a thing you only do for airplanes and nuclear reactors.

clarus2 days ago

There are some specifications that are always there, like the absence of reachable panics or the backward compatibility between releases on stable entry points.

Otherwise you can outsource this work to specialized companies such as Formal Land or others!

CrimsonCape1 day ago

Could someone ELI5 the concept of "verification", why does an entire language (Coq) exist for this purpose, and what practical implications this has on wider society?

Havoc3 days ago

Didn’t know that is even feasible.

I wonder if efforts like this could speed up the adoption of Rust in key parts of the kernel.

zem3 days ago

wonder how much work it would be to add a rust backend to F* [https://github.com/FStarLang/FStar]

clarus3 days ago

It should be possible. A specific feature of Coq that we use is impredicative Set. I do not know if this is the case in F*.

the84723 days ago

Did you find any bugs in the crates?

clarus2 days ago

No, we have not found bugs! We have only scratched the surface and bugs are probably well hidden as the standard library is very well tested. We do not expect to find bugs there, especially as we also assume the unsafe code is safe.

Our main hope with the translation to Coq of the standard library is to get a formalization of it, so that we can precisely verify Rust programs calling core and alloc.

hgyjdetjj3 days ago

[flagged]

baq3 days ago

The 'problem' was very obviously directed at politically correct English speakers who have no idea what the word means in other languages.

Also, people don't have a problem with the 'git' brand, somehow?

rob743 days ago

Well, the primary meaning of "cock" is a male chicken (also applied to some other male birds) in English too, irrespective of the slang terms. "Le coq" is also a symbol of France (https://en.wikipedia.org/wiki/Gallic_rooster; see also https://en.wikipedia.org/wiki/Le_Coq_Sportif), and Coq being a French project, it's understandable why they would choose this name (and have second thoughts about changing it because of other people's sensitivities).

thih93 days ago

> and have second thoughts about changing it

Do you have a source? As of now this is still in the roadmap[1] and I saw no information about removing it from there.

[1]: https://github.com/coq/ceps/blob/coq-roadmap/text/069-coq-ro...

baq3 days ago

of course it's the 069 roadmap doc...

yau8edq12i3 days ago

"Coq" is also the first three letters of the creator's name (Coquand).

deredede3 days ago

It is also very likely (if not definitely known) that the name was chosen at least in part because of the English meaning of the word, making the "people have no idea what the word means" argument moot.

Outside of France, most users of Coq are in the US, where it is taught in many universities. I am sure there were no female professors or TAs whose job was made harder purely due to being "the coq teacher" (this is sarcastic; multiple such professors and TAs have mentioned having this type of issue on the Coq mailing list).

hurutparittya3 days ago

Oh no, giggling students! Someone do something, quick! I can only imagine the horror.

baq3 days ago

So teachers complain on the mailing list instead of telling the students to grow the f* up.

I don't doubt this is true, but I question the worth of complaints.

deredede3 days ago

You are assuming a lot here: as far as I remember, someone proposed a name change, then these accounts came up supporting the change. It's not like people were complaining left and right on the mailing lists, but when the opportunity arose to fix once and for all a problem they had, they spoke in favor.

And in my opinion multiple such accounts coming up are a good reason to proceed with a name change: the problem shows up as a pattern rather than isolated incidents, and a reasonable fix is identified.

saagarjha2 days ago

Ok, but you claimed otherwise in your parent comment.

littlestymaar3 days ago

Do you know what a «string» means in French? Or a «bite» (which is pronounced exactly like “bit”). If not, look that up.

And besides lame jokes when 14-year-olds hear it for the first time (both male and female, btw), it's fine.

It's essentially a problem of American puritanism (and likely also a problem of misogyny among students, but the solution to that really isn't to change the project name!)

rugina3 days ago

The English word "bit" has the same meaning in French as the word "coq" has in English.

hiddencost3 days ago

Reminds me of renaming NeurIPS. Another change we're better for.

Certhas3 days ago

I mean... I _do_ think that language sensitivities are overblown these days. E.g. the majority of women participants did not "agree or strongly agree" that NIPS should become NeurIPS, yet it did.

GIMP and Git are words that most non-native English speakers don't have a strong association with. So they get a pass in most of the world.

But cock is as universal a swear word as you will find on this planet.

If I do a retreat in the village of Fucking, and write a compiler there, and then call it TFC -- the Fucking compiler, I know what I am doing.

Gare3 days ago

> If I do a retreat in the village of Fucking

Funnily enough, it was renamed to Fugging in 2021 because of annoying tourists.

https://en.wikipedia.org/wiki/Fugging,_Upper_Austria

peoplefromibiza3 days ago

> But cock is as universal a swear word

is it?

Are you confusing it with Dick? The name, of course...

Ironically, the name Dick (a diminutive of Richard) has Germanic roots meaning roughly "powerful ruler".

croemer3 days ago

+1

didntcheck3 days ago

Well we do have the Brain Fuck Scheduler

Frankly I'd welcome more rejection of corporate safetyism. We're adults, not kids in a "Christian Minecraft server"

yau8edq12i2 days ago

> But cock is as universal a swear word as you will find on this planet.

It's not, though. Come to France, say "coq" - people will wonder why you're talking about roosters all of a sudden. Holy Anglo-centrism, Batman...

Certhas2 days ago

I am German. German culture does not have the Anglo-Saxon obsession with swear words to begin with. The US/British obsession with keeping dirty words away from children's ears is a never-ending source of amusement for me.

I think you missed my point. "As universal as you will find" does not mean universal. It means there is nothing more universal: no word is understood as a swear word by more people. There is no absolutely universal swear word, obviously.

But English is spoken by a lot of people, most of them non-native, and most of them don't have a native word that sounds like cock as a first association.

boxed3 days ago

Python has a pretty big naming problem too. Ophidiophobia is the number one phobia in the world, and even though "Python" comes from Monty Python and not the snake, people keep posting enormous pictures of snakes on book covers and blogs, making it basically impossible for ophidiophobia sufferers to go near Python. There are estimates that this affects 10% of the population.

Now.. do I think Python should change its name? No.

yau8edq12i3 days ago

I am extremely skeptical about the idea that 10% of the population has a phobia of serpents severe enough to avoid learning a programming language named after a kind of snake.

boxed20 hours ago

You mean severe enough to want to avoid being exposed to pictures of snakes. You have to see it from the sufferer's perspective.

humzashahid983 days ago

When I saw they wanted to rename the project, I thought to myself that a rename to FACT Prover (Formerly known As Coq Theorem prover) would be nice, and it's a bit of a disappointment that it didn't pan out.

Rocq is also confusingly similar to the Roc programming language (and they sound identical, as far as I know).

cjfd3 days ago

Since I am both gay and a user of coq, I have had the opportunity to jokingly say that I like coq in all of its spellings. It goes without saying that I am against the name change.

zorked3 days ago

> niche theorem prover

As opposed to popular theorem provers.

rob743 days ago

Quite fittingly, there is a South American bird called Coq of the Rocq... errr, Cock of the Rock: https://en.wikipedia.org/wiki/Cock-of-the-rock

gwervc3 days ago

A new name always takes time to disseminate after a rebranding, especially when the change is controversial. The most glaring example being X.

deredede3 days ago

The decision to change the name has been taken by the Coq team, but I believe they do not want the new name to be used until it has been enacted administratively, legally, and technically (e.g. the documentation should mention the new name and be hosted on rocq.inria.fr).

speedbird3 days ago

[flagged]

psychoslave3 days ago

True: W(indow), X, XFree86, X.org. And you might have thought they would come up with Y and Z next, but no, they throw Wayland (named after some locality) in your face instead. How is anyone supposed to follow this stuff? :D

Narishma3 days ago

There was a Y windowing system.

http://www.y-windows.org/