Using a hierarchy to show template errors is brilliant and I'm sort of surprised compilers haven't always done that.
I was investigating C++-style templates for a hobby language of mine, and SFINAE is an important property for making them work in realistic codebases, but it leads to exactly this problem. When a compile error occurs, there isn't a single cause, or even a linear chain of causes, but a potentially arbitrarily large tree of them.
For example, the compiler sees a call to foo() and there is a template foo(). It tries to instantiate that, but the body of foo() calls bar(). It tries to resolve that, finds a template bar(), tries to instantiate it, and so on.
The compiler is basically searching the entire tree of possible instantiations/overloads and backtracking when it hits dead ends.
Showing that tree as a tree makes a lot of sense.
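Here's a tiny hypothetical C++ illustration of that tree (all names made up): foo() defers to bar(), bar() defers to baz(), and the actual failure is buried at the bottom.

    // Each trailing return type forces substitution into the next template,
    // so resolving foo(42) walks the whole chain.
    template <typename T>
    auto baz(T t) -> decltype(t.frobnicate()) { return t.frobnicate(); }

    template <typename T>
    auto bar(T t) -> decltype(baz(t)) { return baz(t); }

    template <typename T>
    auto foo(T t) -> decltype(bar(t)) { return bar(t); }

    int main() {
        // foo(42); // uncommenting this gives "no matching function for call
        // to 'foo(int)'": the root cause (int has no .frobnicate()) sits
        // three substitutions deep, one dead-end branch of the search tree.
    }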
Usability improvement request for this article: don't hijack the browser back button :P
Browser makers should straight up remove the JS API for interacting with history. There are legitimate uses for it, but the malicious actors far outweigh the good ones at this point. Just remove it.
That's a biased thing to say, since you're never going to notice the times when the history api is being used appropriately. Just as often I find myself raging when a webpage doesn't rewrite history at times when it should. Good taste is hard to come by.
This type of thinking is what doomed uBlock Origin. I strongly disagree.
The difference is that uBlock Origin is an extension you intentionally trust and install, while the JS APIs we're talking about are something any (untrusted) website can use.
To be fair, uBlock Origin has always been a special case. It's so good and so important and so trusted that it should have access to browser internals that normal extensions can't access.
Honestly, uBlock Origin shouldn't be an extension to begin with; it should literally be a built-in feature of all browsers. The only reason it's not is that we can't trust ad companies to maintain an ad blocker.
Perhaps users should be given a per-site option to disable such APIs (which would stay enabled by default). That way, users can intervene when the APIs are abused, while fair use remains transparent.
This seems like a good compromise. Similar to requesting location information, and/or denying popups after a few have been spawned
How is uBlock Origin "doomed" ?
An advertising company controls the user agent everyone uses to access the internet, and wants to shove more ads into your eyeballs. uBlock exists as long as they allow it. Anyone who disagrees with this works for them or owns shares in the company.
So UBO isn't doomed, just UBO on Chrome. While that's significant given Chrome's market share, I and everyone else on the planet have the option to use something else, and will continue to do so.
Ditto for unprompted redirects.
Redirects are used for stuff like POST->GET or canonicalizing URLs (adding slashes on directories), would you get rid of that too?
Ah, I see what you mean. The canonicalization case is fair, whereas redirects after processing forms could be done in JavaScript from an onclick or onsubmit handler.
But wouldn't you be able to replicate the same issue by using a redirect?
Ditto for rewriting anchor destinations onclick, allowing sites to show one destination for hover but send you somewhere else.
I mean, without `history.pushState()` and `window.onpopstate` things wouldn't be as nice. Ok, I guess one could do about everything with `location.hash = ...` and `window.onhashchange`, like in the before times. But the server will not get the hash part of the URL, so a link to such a page can't be server-side rendered and has to fetch the actual page content in JavaScript, evaluating the hash. When I browse things like image search, it is really handy that the browser's back button closes an opened image without losing any dynamic state of the page, and that the x button on the page closes the image and removes the history entry the same way, so a later back won't re-open that image.
For me the back button wasn't hijacked.
But I am for disallowing the use of `history.go()` or any kind of navigation inside of `onpopstate`, `onhashchange`, `onbeforeunload` or similar or from a timer started from one of those.
Like I said: I recognize there are legitimate uses. But unfortunately, they are majorly outnumbered by people doing things like overwriting my history so that when I hit "back", it stays on the site instead of going back to my search engine. I would love to live in the world where malicious dark patterns didn't exist, and we could have nice things. But we don't, and so I would rather not have the functionality at all.
How about we just navigate to new pages by .. navigating to new pages? Browsers have perfectly functional history without javascript shenanigans
same for clipboard events.
In Firefox, you can prevent this by setting `browser.navigation.requireUserInteraction` via about:config. I've been told that it breaks some stuff, but to date I haven't noticed any downsides
> you can prevent this by setting `browser.navigation.requireUserInteraction`
Setting it to what?
true
Doesn't answer your question, but in Tor Browser it would decrease your anonymity set.
It's already egregious when a site adds history pushState entries for just clicking through a gallery or something, but wow adding them just for scrolling down on a page is simply bizarre, especially on a page about usability.
It's in the spirit of adding emojis to compiler output...
I actually quite like the emojis they put in the output; they help balance providing enough context with giving a clear visual indicator for the actual error message.
They aren't going overboard on it, they just put a warning emoji in front of the error message.
Windows key + dot, then type "warning" (or, if it was among the last ones you used, pick it with the arrow keys) and hit Enter to insert it.
If you use an OS other than Windows, I'm sure there are similar flows available if you search for it. And since it's just Unicode, I'm sure there are numpad-based keybinds available too.
Not an issue with JS disabled ;)
Usability improvement request for this site: don't promote a comment that isn't related to the content to the top :P
works fine for me (MS Edge)
With GCC error messages usually the only part I wanna see is "required from here" and yet it spams zillions of "note: ...". With the nesting enabled, it's way easier to read --- just look at the top indentation level. Hooray!!!
[flagged]
I hope gcc remains the default in Linux due to the GPL. But I expect someday clang will become the default.
Plus I heard COBOL was merged in with the compiler collection, nice!
> I expect someday clang will become the default [compiler for the Linux kernel].
Why? I don't personally use GCC except to compile other people's projects in a way that's mostly invisible to me, but it seems like it's still widely used and constantly improving, thanks in part to competition with LLVM/Clang. Is the situation really so dire?
> Is the situation really so dire?
I for one don't think so. From my perspective, there's at least as much momentum in GCC as clang/LLVM, especially on the static analysis and diagnostics front over the past 5 or so years. That was originally one of the selling points for clang, and GCC really took it to heart. It's been 10 years since GCC adopted ASAN, and after playing catchup GCC never stopped upping their game.
Perhaps the image problem is that LLVM seems to be preferred more often for interesting research projects, drawing more eyeballs. But by and large these are ephemeral; the activity is somewhat illusory, at least when comparing the liveliness of the LLVM and GCC communities.
For example, both clang/LLVM and GCC have seen significant work on addressing array semantics in the language, as part of the effort to address buffer overflows and improve static analysis. But GCC is arguably farther along in terms of comprehensive integration, with a clearer path forward, including for ISO standardization.
More importantly, the "competition" between GCC and clang/LLVM is mutually beneficial. GCC losing prominence would not be good for LLVM long-term, just as GCC arguably languished in the period after the egcs merger.
> More importantly, the "competition" between GCC and clang/LLVM is mutually beneficial. GCC losing prominence would not be good for LLVM long-term, just as GCC arguably languished in the period after the egcs merger.
You're right to note that "competition" here is more like inspiration than a deathmatch. But I vaguely remember two things that seem similar to motivation via competitive pressure to me: (1) when GCC 5 came out, it had way nicer error messages, and I immediately thought "Oh, they wanted to make GCC nice like Clang", and (2) IIRC the availability of a more modular compiler stack like LLVM/Clang essentially neutralized Stallman's old strategic argument against a more pluggable design, right?
Clang:
- has about the same quality of error messages as GCC now
- compiles at about the same speed as GCC now
- sometimes produces faster code than GCC, sometimes slower, about the same overall
I see no reason why the default would change.
Well, for one, clang uses way less memory (RAM). Also, ld.lld is wayyyyyyyy faster than ld (and also uses way less memory).
ld.lld works with any compiler, and anyway, mold is even faster and also works with any compiler.
Yup, COBOL is that overnight sensation 4 years in the making. GCC COBOL is foremost an ISO COBOL compiler, with some extensions for IBM and MicroFocus syntax. We also extended gdb to recognize COBOL, so the GCC programmer has native COBOL compilation and source-level debugging.
Unless you're talking about a compiler built into the kernel, I don't see that anyone is in a position to dictate to each distro what compilers they package.
GCC can honestly only blame itself for its inevitable increasing obsolescence. LLVM only has the attention it has because it can be used as a building block in other compilers. GCC could've made a tool and library which accepts IR, performs optimizations and emits machine code, but the project avoided that for ideological reasons, and as a result created a void in the ecosystem for a project like LLVM.
I'd add code quality as a reason. I find it much easier to understand and modify code in LLVM compared to GCC. Both have a fairly steep learning curve and not too much documentation, but often I (personally) find LLVM's architecture to be more thought out and easier to understand. GCC's age shows in the code base and it feels like many concepts and optimizations are just bolted on without "required" architectural changes for a proper integration.
The embedded compiler vendors, UNIX and consoles are quite happy with it.
How much do you think they contribute back upstream regarding ISO compliance, outside of the LLVM backends for their own hardware and OS?
Embedded compiler vendors and UNIXes want a possibly slightly patched C or C++ compiler, maybe with an extra back-end bolted on. I'm talking about use-cases like Rust and Zig and Swift, projects which want a solid optimizing back-end but their own front-end and tooling.
And they do! You can choose not to contribute your changes back to permissively-licensed software, but in actual practice most people do contribute them. It's not like the Rust compiler is proprietary software with their own closed-source fork of LLVM...
I haven't advocated for re-licensing GCC to be permissively licensed. And patching GCC is necessarily going to be much easier for vendors than to build a new C front-end which generates GCC IR. So I'm not sure what difference you think what I'm proposing would make with regard to your concerns.
LLVM/Clang is evolving more quickly than GCC and is a much richer base for innovation. LLVM spawned Rust, Swift, and Zig. The most recent GCC languages are SPARK and COBOL.
One of the reasons that LLVM has been able to evolve so quickly is because of all the corporate contribution it gets.
GCC users want Clang/LLVM users to know how dumb they are for taking advantage of all the voluntary corporate investment in Clang/LLVM because, if you just used GCC instead, corporate contributions would be involuntary.
The GPL teaches us that we are not really free unless we have taken choice away from the developers and contributors who provide the code we use. This is the “fifth freedom”.
The “four freedoms” that the Free Software Foundation talks about are all provided by MIT and BSD. Those only represent “partial freedom”.
Only the GPL makes you “fully free” by providing the “fifth freedom”—-freedom to claim ownership over code other people will write in the future.
Sure, the “other people” are less free. But that is the price that needs to be paid for our freedom. Proper freedom (“full freedom”) is always rooted in the subjugation of others.
It's an early implementation of a part of Rust. It's not even close to being usable as a daily compiler.
And gccrs is not really very widely used and is not the "official" Rust compiler. The Rust project chose to base their compiler on LLVM, for good technical reasons. That's bad news for the GCC project.
Rust developers don't "not want their users to be fully free", they disagree with you on which license is best. Don't deliberately phrase people's motivations in an uncharitable way, it's obnoxious as hell.
What exactly do you mean by "UNIX"? Commercial UNIX vendors other than Apple have basically a rounding error from 0% of market share.
Apple is not a UNIX vendor. They checked off enough of a compliance list to pass in the legal and marketing sense, but in any real practical sense it's not usable as a UNIX system. It's not designed to serve anything. Most of it is locked down and proprietary. No headless operation, no native package manager, root is neutered... and so on. It's not UNIX.
Yet IBM, Oracle, HP and POSIX RTOS vendors keep enough customers happy with that rounding error.
Are you calling Linux a UNIX? I mean GNU/Linux is certainly a UNIX-like system, but aren't "UNIXes" typically used to refer to operating systems which stem from AT&T's UNIX, such as the BSDs and Solaris and HP-UX and the like? GNU's Not UNIX after all
Gnu/Linux and UNIX are not the same thing. Yes I understand that RHEL and Ubuntu are popular.
Apple, Arm, OpenBSD, FreeBSD have all switched to Clang.
Could be. They stuck with the last GPLv2 release, GCC 4.2.x, for a while. IIRC, they never switched to the GPLv3 releases (GCC 4.3 and later).
Switching wasn't the point.
It's not like GCC or the embedded toolchains are a shining beacon of ISO compliance... and if you mean video game consoles, are any of them using GCC today? Sony and Nintendo are both LLVM and Microsoft is Microsoft
Knowing the history of GCC (more focused on C) and Clang (more focused on C++), it makes sense to me that GCC has better ISO C compliance.
Clang has a very nice specific page for ISO C version compliance: https://clang.llvm.org/c_status.html#c2x
I could not find the same for GCC, but I found an old one for C99: https://gcc.gnu.org/c99status.html
CppRef has a joint page, but honestly, I am more likely to believe a page directly owned/controlled/authored by the project itself: https://en.cppreference.com/w/c/compiler_support/23
Finally, is there a specific feature of C23 that you need that Clang does not support?
"substantially complete" C99 even
With LLVM they don't need to fork it in the first place. But still, it doesn't matter because ISO compliance is a frontend problem.
The one vendor who forks LLVM and doesn't contribute their biggest patches back is Apple, and if you want bleeding edge or compliance you're not using Apple Clang at all.
If you say "isn't it great vendor toolchains have to contribute back to upstream?" I'm going to say "no, it sucks that vendor toolchains have to exist"
Vendors who use LLVM quickly discover the cost of maintaining their own fork is too high and end up contributing back.
Is it still the case that the Linux kernel cannot be compiled using clang, or can you do that now?
I believe clang has worked for years now.
GCC is still marginally superior at both code size and performance in my experience.
The main thing I like about clang is it compiles byzantine c++ code much faster.
With some projects, like llama.cpp, it's like eating nails if you're not using clang.
So with projects like llamafile I usually end up using both compilers.
That's why my cosmocc toolchain comes with -mgcc and -mclang.
I believe GCC still wins by far on support for weird CPUs and embedded systems.
GCC is still the default for Linux distributions using glibc or any other library with many "GCCisms". Also, I'm not sure whether or not Clang is ABI-compatible enough for enterprise customers with some rather extreme backwards-compatibility requirements. Still, I can imagine a future where glibc can be built with Clang, possibly even one where llvm-libc is "good enough" for many users.
Mainly because it was the default for so long, and Clang isn't so much better that distros really need to switch.
https://static.lwn.net/kerneldoc/kbuild/llvm.html
I do remember reading about LTO not working properly: you're either unable to link the kernel with LTO, or you get a buggy binary which crashes at runtime. It doesn't look like much effort has been put into solving it; maybe it's just too large a task.
There are a few distros that use Clang as the system compiler (SerpentOS and Chimera Linux for two).
There is now libgccjit, which aims to allow embedding GCC: https://gcc.gnu.org/onlinedocs/jit/
There is an alternative backend to rustc that relies on it.
libgccjit is, despite its name, just another front-end for GIMPLE. The JIT part is realized by compiling the object file to a shared library and dlopen()ing it.
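For the curious, a minimal sketch of what using it looks like, loosely following the square() example from the upstream libgccjit tutorial (double-check the docs for exact signatures; build with something like `gcc example.c -lgccjit`):

    #include <libgccjit.h>
    #include <stdio.h>

    int main(void) {
        /* Build up the IR for: int square(int i) { return i * i; } */
        gcc_jit_context *ctxt = gcc_jit_context_acquire();
        gcc_jit_type *int_type = gcc_jit_context_get_type(ctxt, GCC_JIT_TYPE_INT);
        gcc_jit_param *i = gcc_jit_context_new_param(ctxt, NULL, int_type, "i");
        gcc_jit_function *fn = gcc_jit_context_new_function(
            ctxt, NULL, GCC_JIT_FUNCTION_EXPORTED, int_type, "square", 1, &i, 0);
        gcc_jit_block *block = gcc_jit_function_new_block(fn, "entry");
        gcc_jit_block_end_with_return(
            block, NULL,
            gcc_jit_context_new_binary_op(ctxt, NULL, GCC_JIT_BINARY_OP_MULT,
                                          int_type,
                                          gcc_jit_param_as_rvalue(i),
                                          gcc_jit_param_as_rvalue(i)));
        /* The "JIT" step: behind the scenes this builds a shared object
           and dlopens it, as described above. */
        gcc_jit_result *result = gcc_jit_context_compile(ctxt);
        int (*square)(int) = (int (*)(int))gcc_jit_result_get_code(result, "square");
        printf("%d\n", square(5)); /* 25 */
        gcc_jit_context_release(ctxt);
        gcc_jit_result_release(result);
    }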
One big problem with libgccjit, aside from its fairly poor compile-time performance, is that it's GPL-licensed and thereby makes the entire application GPL, which makes it impossible to use not just in proprietary use-cases but also in cases where incompatible licenses are involved.
Yet another reason why I'm not a fan of Richard Stallman.
Most of the decisions he made over the past 25 years have been self-defeating and led directly to the decline of his own movement's influence. It's not that "the GCC project" avoided that for ideological reasons; Stallman personally vetoed it for years, and his personal objection led to several people quitting the project for LLVM, with a couple saying as much directly to him.
https://gcc.gnu.org/legacy-ml/gcc/2014-01/msg00247.html
https://lists.gnu.org/archive/html/emacs-devel/2015-01/msg00...
(both threads are interesting reading in their entirety, not just those specific emails)
I think that's an unreasonable lens for viewing his work. Of course he values purity over practicality. That's his entire platform. His decision making process always prioritizes supporting Free Software over proprietary efforts, pragmatism be damned.
Expecting Stallman to make life easier for commercial vendors is like expecting PETA to recommend a good foie gras farm. That's not what they do.
Alternatively, if you join a Stallman-led Free Software project and hope he'll accept your ideas for making life easier for proprietary vendors, you're gonna have a bad time. I mean, the GNU Emacs FAQ for MS Windows (https://www.gnu.org/software/emacs/manual/html_mono/efaq-w32...) says:
> It is not our goal to “help Windows users” by making text editing on Windows more convenient. We aim to replace proprietary software, not to enhance it. So why support GNU Emacs on Windows?
> We hope that the experience of using GNU Emacs on Windows will give programmers a taste of freedom, and that this will later inspire them to move to a free operating system such as GNU/Linux. That is the main valid reason to support free applications on nonfree operating systems.
RMS has been exceedingly clear about his views for decades. At this point it's hard to be surprised that he’ll make a pro-Free Software decision every time, without fail. That doesn't mean you have to agree with his decisions, of course! But to be shocked or disappointed by them is a sign of not understanding his platform.
> His decision making process always prioritizes supporting Free Software over proprietary efforts, pragmatism be damned.
No, this is giving him too much credit. His stance on gcc wasn't just purity over pragmatism, it was antithetical to Free Software. The entire point of Free Software is to let users modify the software to make it more useful to them, there is no point to Free Software if that freedom doesn't exist - I might as well use proprietary software then, it makes no difference.
Stallman fought tooth and nail to make gcc harder for the end user to modify; he directly opposed letting users make their tools better for them. He claims it was for the greater good, but in practice he was undermining the whole reason for free software to exist. And for what? It was all for nothing anyway; proprietary software hasn't relied on the compiler as the lynchpin of its strategy for decades.
Don't forget that the LLVM folks actually went to GNU and offered it to them, and they failed to pay attention and respond. (It's not even that they responded negatively; they just dropped it on the floor completely.) There's an alternate history where LLVM was a GNU project.
With the benefit of hindsight, I'm glad that that didn't happen, even though I have mixed feelings about LLVM being permissively licensed.
Whether you care about number of users or not, there's value in considering whether what you're doing is actually advancing the cause of Free Software or not.
GCC today has a very interesting license term, the GCC Runtime Library Exception, that makes the use of runtime libraries like libgcc free if-and-only-if you use an entirely Free Software toolchain to compile your code with; otherwise, you're subject to the terms of the GPL on libgcc and similar. That is a sensible pragmatic term, and if they'd come up with that term many years ago, they could have shipped libgccjit and other ways to plug into GCC years ago, and the programming language renaissance that arose due to LLVM might have been built atop GCC instead.
That would have been a net win for user freedoms. Instead, because they were so afraid of someone using an intermediate representation to work around GCC's license, and didn't do anything to solve that problem, LLVM is now the primary infrastructure people build new languages around, and GCC lost a huge amount of its relevance, and people now have less software freedom as a result.
> allowing an end-run around the spirit of the GPL in gcc was never going to happen.
You're right:
https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html
> If people are seriously in favor of LLVM being a long-term part of GCC, I personally believe that the LLVM community would agree to assign the copyright of LLVM itself to the FSF and we can work through these details.
Also note that GCC adoption only took off after Sun introduced the concept of the UNIX developer SDK, where the developer tooling required an additional license.
[flagged]
Without GNU, GPL and Richard Stallman, FOSS would not exist in the first place. The GPL forced companies to make FOSS a thing, whether they liked it or not.
Yes, and UNIX vendors and MS were happily taking pieces of it without upstreaming.
I think without GNU/Linux, most likely all UNIX vendors would have a better market share today.
> other licenses that gave even more freedom and flexibility
That gave vendors more freedom and flexibility (to lock their software away from their customers.)
As usual, customers got less freedom and flexibility.
When VSCode et al. begin shipping DRM and who knows what in their extensions, then we'll see what happens with these half-shareware, semilibre projects.
Especially when proprietary dependencies kill thousands of projects at once.
As with any other achievement of civilization, younger generations will at some point discover why previous ones fought for something and how much it sucks to lose it.
But when this realization comes, it will be too late.
The question is if that person would have dedicated their life in the same way.
Kind of like how GPLv3 makes it infeasible for most companies to use/support free software. At least Stallman gets to feel morally superior, though.
Which open-source licenses can't be used together?
Honestly at least Linus has his head in the right place
Big fan of all of this except for the emojis in my console
Few versions ago GCC added `-fdiagnostics-text-art-charset=[none|ascii|unicode|emoji]` so feel free to disable them.
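For example, a hypothetical invocation that keeps the Unicode text art but skips the emoji (assuming GCC 14+ and the analyzer example file mentioned elsewhere in this thread):

    gcc -fanalyzer -fdiagnostics-text-art-charset=unicode infinite-loop-linked-list.c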
On the one hand I feel like the #WARNING_SIGN is a welcome addition. On the other, I'm sensitive to the idea that it will be totally obnoxious if you need to grep/search for it in whatever tool you're using.
Was a little surprised to learn that the warning sign is generally considered an Emoji; I guess I don't think of it that way. Was even more surprised to learn that there is no great definition for what constitutes an Emoji. The term doesn't seem to have much meaning in Unicode. The warning sign - U+26A0 - goes all the way back to Unicode version 4.0 and is in the BMP, whereas most Emoji are in the SMP.
> Was even more surprised to learn that there is no great definition for what constitutes an Emoji. The term doesn't seem to have much meaning in Unicode
The definition is messy, but the list of Unicode emojis is defined. Starting points: https://www.unicode.org/reports/tr51/, https://unicode.org/emoji/charts/full-emoji-list.html
How will it output on my VT100 serial terminal? :P
Somewhat related fun fact, anyone can submit an emoji to the Unicode Consortium annually, submissions are actually open right now: https://unicode.org/emoji/proposals.html.
It's only an emoji if followed by the "emoji presentation selector" U+FE0F. Without it, it has no colors (but hackernews doesn't allow including it).
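A quick illustration of that selector (a sketch; how each form actually renders depends on your font stack and terminal):

    // The same code point with and without the emoji presentation selector
    // (U+FE0F). Without it, most terminals draw the monochrome text-style
    // glyph; with it, a color emoji where supported.
    const char *text_style  = "\u26A0";       // WARNING SIGN, text style
    const char *emoji_style = "\u26A0\uFE0F"; // WARNING SIGN + VS-16, emoji style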
If you're using an OS that honors your font choices (i.e., not macOS), you can use a font like Symbola to provide emoji so that they're stylized, monochrome, and vectorized, rather than incongruous color bitmap images. That helps them play nice with terminal colorschemes and things like that.
I'm not aware of any emoji fonts like Symbola which provide a monospace typeface, though. That would be a great option.
By honoring font choices, do you mean the ability to overwrite emojis as well? I’ve never had issues using custom/nerd fonts, but it’s true that the emojis have stayed true to the Apple style so far.
Indeed. Emoji are just characters, rendered via some font just like any text character. On non-Apple operating systems, you can select emoji sets via font configuration.
You can do it on macOS as well, but you have to disable SIP and modify/replace the files for the Apple Color Emoji font, because some widely used GUI libs are hardcoded to use it.
I don't recall the situation on Windows, except that emoji glyphs are inherited from your other font choices, if your chosen font includes emoji. But on Linux it's generally easy to configure certain font substitutions only for some groups of characters, like emoji.
I actually appreciate a #\WARNING_SIGN, but then, why is there no #\STOP_SIGN ?
It should be easy to patch out or possibly add a command line switch to disable it if desired.
Already exists: -fdiagnostics-text-art-charset=[none|ascii|unicode|emoji]
Why?
Makes me feel like I'm writing JavaScript, if that makes any sense. Also I hate fun.
Well, other than the fact that it's hard to search for, things should be kept as simple as possible to improve reliability/interoperability. Are you confident that multi-column-wide non-ASCII characters work reliably without causing rendering issues on every possible combination of terminal/shell/OS, including over SSH? I'm certainly not.
You can already put multi-column-wide non-ASCII characters in your source code, so the horse has left the barn already.
But the compiler isn't forcing it on anyone?
They are quite easy to search for, actually, because they are sparse and you tend not to get so many spurious results due to hitting things that contain your query as a substring. Even then, the glyphs appear inline in the warning message, they are not the anchor; you could still just search for "warning:" like you or your editor have been doing for years. Every operating system comes with an emoji picker; it takes like 2 seconds to use. I'm not sympathetic, tbqh.
If anything, an emoji is easier to search for, as fewer things would be using it than a general "warning" (or whatever would make sense, as the emoji'd thing isn't the warning itself). You do have to get a copy of it from somewhere to search for, though. It's also much easier to search for visually.
At least in the blog there are two spaces after the emoji, so it can freely draw past its boundary rightwards for a good bit without colliding with anything; and nothing to its right relies on monospace alignment. So at worst you just get a half-emoji.
I started putting emojis in my bash scripts.
gitk broke! Can't parse emojis
I want emojis in my code. They are a superior form of communication (non-linear)
> I want emojis in my code.
Then you will enjoy swift [1]!
[1] Emoji driven development: https://www.swiftbysundell.com/special/emoji-driven-developm...
(Note, recommendation to use emojis is an April fools joke, but all the listed code compiles. You can use emojis as shown.)
Please gcc, let me have a `~/.config/gcc` config file or an ENV variable so I can ask for single lined error messages.
I literally do not need ascii art to point to my error, just tell me line:col and a unique looking error message so I can spend no more than 1 second understanding what went wrong
Also allow me to extend requires with my own error messages. I know it'll be non standard but it would be very nice tyvm
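On that last wish, the usual workaround today is to pair a concept with a static_assert carrying your own wording; a rough C++20 sketch with a made-up Drawable concept:

    // Hypothetical concept, purely for illustration.
    template <typename T>
    concept Drawable = requires(T t) { t.draw(); };

    template <typename T>
    void render(T const& t) {
        // requires-clauses don't take custom messages, but asserting the
        // same concept lets you supply your own one-line explanation.
        static_assert(Drawable<T>, "render(): T must have a .draw() member");
        t.draw();
    }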
The template error messages look great. I wonder if it’s worth writing a translator from clang/gcc messages to these ones, for users of clang and older gcc (to pipe one’s error messages to)
I mean why not show a proper image instead of doing fancier ASCII art. Or skip it entirely and have an LLM describe the issues and fix it for you.
I can't imagine a piece of software easier to use than gcc 3.4.6, sorry.
We are now entering a Rococo period of unnecessarily ornate compiler diagnostics.
Getting these elaborate things to work is a nice puzzle, like Leetcode or Advent of Code --- but does it have to be merged?
I'm all up for better error messaging. :)
Can't use Clang where I'm at, but I do get to use fairly cutting-edge GCC, at least for Windows development. So I may get to see these improvements once they drop into MSYS.
The writer David Malcolm tells a story in his article about compiling C17 code and pretending it's C23. Duh, no wonder it breaks!
You C standard authors have it bass-ackwards. The version of the code must accompany the code, not the compiler invocation.
Why all this waste on Unicode art that nobody will ever see and that confuses your IDE?
Why not spend time on helping IDEs understand error messages? That would be a billion times more useful.
What is the status of GCC plugins?
Is there any reason to ask this? GCC plugins have been good since 4.8. 4.5 lacked some essential features; 4.6 would be fine if not for the annoyance of trying to support a plugin that works across the C++ transition. Of course, you can just use the Python plugin to save yourself a lot of sanity ...
They work. You can use them? They haven't gone anywhere and the Linux kernel relies on them extensively, among other things.
Here's the pending GCC 15 release notes: https://gcc.gnu.org/gcc-15/changes.html (since the link in the article points to GCC 14)
- I'd love to see godbolt examples of the sort of optimizations [[unsequenced]] and [[reproducible]] can do.
- GCC has always been in my experience smart enough to know how to optimize normal C code into ROL and ROR instructions. I've never had any issues with it. So what's the point of __builtin_stdc_rotate_left()? Why create a builtin when it is not needed? What I wish GCC and Clang would do instead, is formally document the magic ANSI C89 incantations that trigger the optimization, e.g. `#define ROL(x, n) (((x) << (n)) | ((x) >> (64 - (n))))`. That way we can have clean portable code with less #ifdef hell along with assurances it'll go fast when -O is passed. (See the sketch after this list.)
- What is "Abs Without Undefined Behavior (addition of builtins for use in future C library <stdlib.h> headers)."?
- What is "Allow zero length operations on null pointers"?
- Re: "Introduce complex literals." How about some __int128 literals?
- "The "redzone" clobber is now allowed in inline assembler statements" wooo I've wanted something like this for a while.
Great work from the greatest compiler team!
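On the rotate point, a sketch of the idiom GCC and Clang are known to pattern-match into a single rotate instruction at -O1 and above (worth confirming on godbolt for your target; note the `-n & 63` form avoids the undefined shift-by-64 that the `64 - n` variant hits when n == 0):

    #include <stdint.h>

    // Recognized by GCC and Clang as a single ROL instruction on x86-64.
    uint64_t rol64(uint64_t x, unsigned n) {
        return (x << (n & 63)) | (x >> (-n & 63));
    }
    // C++20 code can instead reach for std::rotl/std::rotr in <bit>.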
> What I wish GCC and Clang would do instead, is formally document the magic ANSI C89 incantations that trigger the optimization, e.g. `#define ROL(x, n) (((x) << (n)) | ((x) >> (64 - (n))))`. That way we can have clean portable code with less #ifdef hell along with assurances it'll go fast when -O is passed.
Well they don’t commit to that happening, so they want to provide you with an alternative.
> How about some __int128 literals
There are _BitInt literals (wb and uwb), look adequate https://godbolt.org/z/xjEEM5Pa4 despite clang's "is a C23 extension" noise.
Wait I missed that. They finally implemented _BitInt? YESSS
Already looking forward to `grep`ing warning emojis in output. What's next? Poop emoji for errors?
I do not want any emojis from my compiler's output, heck, in my terminal in general. Please, do not make it the default behavior.
(BTW my terminal does not even support emojis.)
Why would you grep for the emoji? The actual pattern you want to match is the standard format of the diagnostic itself, which hasn't changed in eons:
    my $line = "infinite-loop-linked-list.c:30:10: warning: infinite loop [CWE-835] [-Wanalyzer-infinite-loop]";

    grammar gcc {
        token filename   { <-[:]>+ };
        token linenumber { \d+ };
        token colnumber  { \d+ };
        token severity   { info|warning|error };
        token message    { .* $$ };
        regex diagnostic {
            <filename>
            [\:] <linenumber>
            [\:] <colnumber>
            [\:]\s <severity>
            [\:]\s <message>
        };
        token TOP { <diagnostic> };
    }

    say gcc.parse($line);

which when run produces the obvious output:

    「infinite-loop-linked-list.c:30:10: warning: infinite loop [CWE-835] [-Wanalyzer-infinite-loop]」
     diagnostic => 「infinite-loop-linked-list.c:30:10: warning: infinite loop [CWE-835] [-Wanalyzer-infinite-loop]」
      filename => 「infinite-loop-linked-list.c」
      linenumber => 「30」
      colnumber => 「10」
      severity => 「warning」
      message => 「infinite loop [CWE-835] [-Wanalyzer-infinite-loop]」
I'll leave handling filenames containing colons as an exercise for the reader. The emoji just focuses the reader's eye on the most critical line of the explanation.
Why wouldn't you just keep searching for the "warning:" or "error:" anchor, like people (and editors) have been doing forever? Even if that weren't possible, it's not like searching for emojis is hard anyway. If it takes you longer than 2 seconds to open a picker then you should ask for a refund for your computer.
Same with their ‘’ quotation marks. I've had cases where I was searching for "'something'", and it didn't find anything, because it was printed as "‘something’".
I like all these modernization efforts in GCC; a lot of old niggles are gone now, and they've made a lot of new improvements I didn't think were possible/easy!
Only if stuck in C++17 and earlier.
Past that point SFINAE should be left for existing code, while new code should make use of concepts and compile time execution.
I really wish templates didn't work in a dumb "replace at call site until something compiles" manner.
All template requirements should be verified at the function definition, not at every call site.
There are concepts. But they are so unwieldy.
It's honestly a seriously hard problem.
Yes, it's definitely nice to be able to typecheck generic code before instantiation. But supporting that ends up adding a lot of complexity to the typesystem.
C++-style templates are sort of like "compile-time dynamic types" where the type system is much simpler because you can just write templates that try to do stuff and if the instantiation works, it works.
C++ templates are more powerful than generics in most other languages, while not having to deal with covariance/contravariance, bounded quantification, F-bounded quantification, traits, and all sorts of other complex machinery that Java, C#, etc. have.
I still generally prefer languages that do the type-checking before instantiation, but I think C++ picks a really interesting point in the design space.
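A minimal sketch of the contrast (all names made up): the unconstrained template is "compile-time duck typing", while the constrained one states its requirement up front and gets checked at overload resolution.

    #include <concepts>

    // Accepts any T; whether T actually supports + is only discovered
    // when the body is instantiated at some call site.
    template <typename T>
    T twice(T x) { return x + x; }

    // States the requirement in the interface, so a bad call is rejected
    // with one clear reason instead of a pile of instantiation notes.
    template <typename T>
        requires requires(T a) { { a + a } -> std::convertible_to<T>; }
    T twice_checked(T x) { return x + x; }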
Please no. Templates being lazy is so much better than Rust's eager trait evaluation, the latter causing incredible amounts of pain beyond a certain complexity threshold.
How so? I'd really like to see an example. If you can't explain what requirements your function has before calling it then how do you know it even does what you expect?
C++0x concepts tried to achieve that but that didn't work.
(But Rust traits works like that)
There were lots of politics involved, moreso than technical issues.
While not perfect, concepts lite, alongside compile-time evaluation, does the job.