
Win32 Is the Only Stable ABI on Linux?

456 points · 2 years ago · blog.hiler.eu
oppositelock2 years ago

Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the Mac first, which was onerous because of all the work involved in abstracting away system-specific APIs, but once that was done, I thought Linux would be a lesser task, and we hired a great Linux guy to help with that.

Turns out, while getting it running on linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, and suffers from breakages from time to time, and you alienate users not on your list of supported distros. The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

End result: management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.

jcelerier2 years ago

> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?

I build on a distro with an old enough glibc following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8, which is equivalent to centos:8 and good enough for debian stable and anything more recent than that; last year I was still on centos:7), use dlopen as much as possible instead of "normal" linking, and then it works on the more recent ones without issues.
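For anyone wondering what the dlopen part looks like in practice, here's a minimal sketch (illustrative only, not jcelerier's actual code; the library and symbol names are just examples):

```c
/* Sketch: load libGL at runtime instead of linking it at build time, so a
 * missing or older library degrades gracefully instead of killing the binary
 * at startup with an unresolved-symbol error. */
#include <dlfcn.h>
#include <stdio.h>

typedef void (*glClear_fn)(unsigned int mask);

int main(void) {
    void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_LOCAL);
    if (!gl) {
        fprintf(stderr, "could not load libGL.so.1: %s\n", dlerror());
        return 1;  /* fall back to software rendering, show a friendly error, ... */
    }

    glClear_fn my_glClear = (glClear_fn)dlsym(gl, "glClear");
    if (!my_glClear) {
        fprintf(stderr, "missing symbol glClear: %s\n", dlerror());
        dlclose(gl);
        return 1;
    }

    /* ... create a GL context here, then call my_glClear(GL_COLOR_BUFFER_BIT) ... */
    dlclose(gl);
    return 0;
}
```

Link with -ldl on older glibc (on 2.34+ the dl* functions live in libc itself).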

LawnGnome2 years ago

I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.

edflsafoiewq2 years ago

That's the trick. AppImage has a pretty good list of other best practices too: https://docs.appimage.org/reference/best-practices.html (applies even if you don't use AppImages).

kelnos2 years ago

If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:

https://github.com/wheybags/glibc_version_header

It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
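For reference, the trick those headers automate is the assembler/linker symbol-versioning directive. A hand-rolled sketch (the symbol and version here are x86_64 examples; check `objdump -T` on your target's libc.so.6 for the real list):

```c
/* Bind memcpy to the old GLIBC_2.2.5 version instead of the
 * memcpy@GLIBC_2.14 a newer build machine would pick by default. */
#include <string.h>

__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void) {
    char dst[16];
    memcpy(dst, "hello", 6);
    return 0;
}
```

The generated headers essentially emit one such directive for every affected glibc symbol so you don't maintain the list by hand.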

edflsafoiewq2 years ago

glibc 2.34 has a hard break where you cannot compile with 2.34 and have it work with older glibc versions even if you use those version headers. It will always link __libc_start_main@GLIBC_2.34 (it's some kind of new security hardening measure, see https://sourceware.org/bugzilla/show_bug.cgi?id=23323).

Since you additionally need to build all your dependencies with this same trick, including say libstdc++, it's really easiest to take GP's advice and build in a container with the old library versions. And nothing beats being able to actually test it on the old system.

jefftk2 years ago

> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?

When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.

(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that, and I don't think I saw this issue.)

scoopr2 years ago

FWIW, these days Valve tries to solve the same problems with their Steam Runtime[0][1]. Still doesn't seem easy, but it looks like an almost workable solution.

[0] https://github.com/ValveSoftware/steam-runtime

[1] https://archive.fosdem.org/2020/schedule/event/containers_st...

piyh2 years ago

A multi billion dollar company with massive investments in Linux making an almost workable solution means everyone else is screwed

account422 years ago

Nope. Valve has to deal with whatever binaries clueless developers uploaded over the years, which they can't update, whereas you only need to learn how to make your one binary portable. Entirely different issues.

endofreach2 years ago

Well, or we could remember the idea of Linux… "IP reasons" shouldn't be an obstacle in the first place… lol

+2
matheusmoreira2 years ago
weq2 years ago

.NET 5+ is my choice as an SME with this challenge. I run the same code across every device and only support what MS supports. These days you could likely redo this with a webview and wasm... Let the webview handle the graphics abstraction for you!

pjmlp2 years ago

Hybrid Blazor then.

jrockway2 years ago

Was static linking not enough?

I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

It is possible, though, that whatever you statically link doesn't work with the running kernel, of course. And there are a lot of variants out there; every distribution has their own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)

rascul2 years ago

> I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

Could it work with gcompat? Alpine has it in the community repo.

https://git.adelielinux.org/adelie/gcompat

naniwaduni2 years ago

gcompat is roughly the "yeah the plugs look the same so just stick the 120 V device into the 240 V socket" approach to libc compatibility.

Hello712 years ago

that's running it directly on musl. gcompat is more like a passive adapter which works except you need to know if the device you're plugging in actually supports 240V, which most stuff does nowadays but when it doesn't it explodes horribly.

flohofwoe2 years ago

Static linking against MUSL only makes sense for relatively simple command line tools. As soon as 'system DLLs' like X11 or GL are involved it's back to 'DLLs all the way down'.

matheusmoreira2 years ago

> Was static linking not enough?

It is a GPL violation when non-GPL software does it.

seba_dos12 years ago

glibc is LGPL, not GPL, so it wouldn't be a violation as long as you provided its source code and ability to replace it by the user, for example by providing compiled object files for relinking. And musl is MIT, so no problems there either.

panzi2 years ago

How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).

preisschild2 years ago

Firefox maintains a Flatpak package on Flathub. Flatpak uses runtimes to provide a base layer of libraries that are the same regardless of which distro you use.

https://beta.flathub.org/apps/details/org.mozilla.firefox

Hello712 years ago

with difficulty, and not that well. for example, firefox binaries require gtk built with support for X, despite only actually using wayland at runtime if configured. the reason why people generally don't complain about it is because if you have this sort of weird configuration, you can usually compile firefox yourself, or have it compiled by your distro. with binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.

AlphaSite2 years ago

I'm happy to blame the OS vendors for not creating a usable base env; I think that's one of the core tenets of an OS, and not providing it is a problem. It may be easier and may push an ideological agenda but I don't think it's the right thing to do.

TingPing2 years ago

Firefox has a binary they ship in a zip which is broken but they also officially ship a Flatpak which is excellent.

kelnos2 years ago

Not sure what you mean; I've been using the Firefox zip for over a decade now with zero problems.

Godel_unicode2 years ago

The irony of this comment/response pair being in this thread is delightful.

bachmeier2 years ago

> The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.

bandrami2 years ago

So this sort of bleeds into the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.

quickthrower22 years ago

I guess this is another instance of the fact that Windows and Mac OS are operating systems, while "Linux" is a kernel powering multiple different operating systems.

marcodiego2 years ago

It is important to note that this comment is from a time before snaps, flatpaks and AppImages.

TaylorAlexander2 years ago

Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plug-in went away. So I download the zip, open Inkscape, open the plugin manager, go to add the new plugin by file manager… and the opened file manager is unable to see my home directory (weird because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow though I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!

Recently I downloaded an AppImage for digiKam. It immediately crashed when trying to open it, because of what I believe was a glibc mismatch with my system version (a recent stable Ubuntu).

Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.

These new “solutions” have their own problems.

kevin_thibedeau2 years ago

Gnome's insistence on using web pages for local configuration settings is the dumbest shit ever. It's built on top of a cross platform GUI library but instead of leveraging that they came up with a janky system using a browser extension where you're never 100% sure you're safe from an exploit.

+1
cbrogrammer2 years ago
oleg_antonyan2 years ago

If snaps or flatpaks are the only future for Linux desktop software distribution then I'm switching to windows+wsl

preisschild2 years ago

Snaps are universally hated by the Linux community as they have many problems, but what's wrong with Flatpak?

oleg_antonyan2 years ago

Instead of fixing the fundamental problem, flatpaks/snaps bring a whole new layer of crap and hacks. They try to solve a real problem, but the way they do it is like reanimating dead body cells via cancer. But that's not even the worst part.

They can eventually lead to a "Microsoft Linux Store". Canonical pushes snaps for a reason and they are partnered with MS. Flatpaks essentially follow the same route and can be embraced, extended and extinguished, and we'll be left with bare kernel+glibc distros and the rest available for $20 in the Steam/MS/you-name-it store

matheusmoreira2 years ago

Yeah. Now people just statically link the dynamic libraries.

curt152 years ago

> The first approach is a lot of work, and suffers from breakages from time to time

Are there any distros that treat their public APIs as an unbreakable contract with developers like what MS does?

salmo2 years ago

RedHat claims or at least claimed that for EL. I think it’s limited to within minor releases though, with majors being API.

That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.

The most annoying thing to me was that RHEL6 was still under support and had an ancient kernel that excluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.

Oftentimes you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python and before you know it you're bringing in your own OpenSSL.

And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.

This is a place where I wish some BSD had won out. With all the chunks of the base userspace + kernel each moving in their own direction it's impossible to get out of this place. Then add in every permutation of those pieces from the distros.

Multiple kernel versions * multiple libc implementations * multiple inits * …

I’d never try to make binary-only software for Linux. Dealing with packaging OSS is bad enough.

yellowapple2 years ago

> No one in their right mind would run RHEL on a desktop.

glances nervously at my corporate-issued ThinkPad running RHEL 8

twic2 years ago

> No one in their right mind would run RHEL on a desktop.

I worked somewhere where we ran CentOS on the desktop. That seemed to work pretty well. I don't see why RHEL would be any worse, apart from being more expensive.

+1
_muff1nman_2 years ago
umanwizard2 years ago

I ran RHEL (IIRC it was RHEL 6) on my desktop at Amazon from 2013 to 2015, as did all SDEs.

bandrami2 years ago

> No one in their right mind would run RHEL on a desktop.

Err.... yes we do? It's a development base I know isn't going to change for a long time, and I can always flatpak whatever applications I need. Hell, now that RHEL 9 includes pipewire I put it on my DAW/DJ laptop.

dfabulich2 years ago

No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/options to let clients opt in to bug fixes, or auto-detect known-popular apps and apply special bug-fix behaviors just for them.

Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.

eschaton2 years ago

Most Apple-deprecated APIs stick around rather than "just stop working in newer versions of macOS." Binary compatibility is very well-maintained over the long term.

ungamedplayer2 years ago

Red Hat definitely has kABI and ABI guarantees.

perryh22 years ago

Are these Linux app distribution problems solved by using Flatpak?

entropicdrifter2 years ago

Most of them are, yes. AppImage also solves this, but doesn't have as robust of an update/package management system

edflsafoiewq2 years ago

AppImage is basically a fancy zip file. It's still completely up to you to make sure the thing you put in the zip file will actually run on other people's systems.

akvadrako2 years ago

Yeah, their Linux guy obviously didn't know what he was doing.

Shared4042 years ago

Google Earth was released in 2001, and Flatpak in 2015. That's a 14 year window of time in which this wasn't an option.

zsims2 years ago

Flatpak only came out in 2015

littlestymaar2 years ago

In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?

AFAIK the LGPL license even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs themselves if that's what they want.

[1]: (or dlopening libs you bundle with your executable)
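A rough sketch of the footnote's bundle-and-dlopen variant (purely illustrative; libbundled.so is a made-up name, and the more common alternative is just linking with -Wl,-rpath,'$ORIGIN/lib' and shipping the .so files next to the binary):

```c
/* Locate the running executable, then dlopen a library shipped beside it. */
#include <dlfcn.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char dir[PATH_MAX];
    ssize_t n = readlink("/proc/self/exe", dir, sizeof(dir) - 1);
    if (n < 0) return 1;
    dir[n] = '\0';
    char *slash = strrchr(dir, '/');
    if (slash) *slash = '\0';               /* strip the executable's name */

    char libpath[PATH_MAX + 32];
    snprintf(libpath, sizeof(libpath), "%s/lib/libbundled.so", dir); /* hypothetical bundled lib */

    void *h = dlopen(libpath, RTLD_NOW);
    if (!h) {
        fprintf(stderr, "failed to load %s: %s\n", libpath, dlerror());
        return 1;
    }
    /* ... dlsym() whatever entry points you need ... */
    dlclose(h);
    return 0;
}
```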

LargoLasskhyfv2 years ago

Ricers wanna rice! Can we spin the globe so fast that it breaks apart?

Would the hate mails also have included 'internal' users, say from Chromium-OS?

mncharity2 years ago

Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.

A large surface area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on linux, with a small api, and potential for community, where more open availability quagmired on this seemingly false choice of "which distributions would we support?"

codeguro2 years ago

> Due to IP reasons, this can't ship as code, so we need to ship binaries.

Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.

ok_dad2 years ago

Could you have used something like this:

https://justine.lol/cosmopolitan/index.html

naniwaduni2 years ago

I'd assume not without violating causality?

pxc2 years ago

I took the question to be whether having something like that available, at the time, would have solved any of their problems with distributing Google Earth for Linux

actuallyalys2 years ago

I don't think Cosmopolitan as it currently exists would do it because of the difficulty in making graphics cross-platform [0]. Maybe Cosmopolitan circa 2032?

[0]: https://github.com/jart/cosmopolitan/issues/35#issuecomment-....

account422 years ago

Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base system stuff like libc and OpenGL, which does provide pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.

metaltyphoon2 years ago

And then you have many devs complaining as to why MS doesn’t want to invest time on MAUI for Linux. This is why.

shmerl2 years ago

One possible idea: https://appimage.org

frozenport2 years ago

Wait. Google Earth has always been available for Linux? https://www.google.com/earth/versions/

cyral2 years ago

They probably mean the old desktop one that has been re-branded to "Google Earth Pro". The UI looks a decade old but it's still useful for doing more advanced things like taking measurements.

oppositelock2 years ago

Yup. That's the one. If it works for you, great, but it crashes on symbol issues for many people.

freedomben2 years ago

FWIW, I use Google Earth Pro on Fedora quite often, and I'm deeply appreciative of the work it took to make that such a simple and enjoyable experience.

I hate that the vocal and tiny minority of linux users who are never satisfied are the ones that most people hear from.

preisschild2 years ago

Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.

iforgotpassword2 years ago

Agree. Had a few games on Steam crap out with the native version, forced it to use proton with the Windows version, everything worked flawlessly. Developers natively porting to linux seem to be wasting their time.

Funnily enough, with Wine we kinda recreated the model of modern Windows, where Win32 is a personality on top of the NT API which then interfaces with the kernel. Wine sits between the application and the zoo of libraries, including libc, that change all the time.

Aachen2 years ago

> Developers natively porting to linux seem to be wasting their time.

Factorio runs so much better than any of this emulationware; it's one of the reasons I love the game so much and gifted licenses to friends using Windows.

Some software claims to support Linux but uses some tricks to avoid recompiling, and it's always noticeable: either as lag, as UI quirks, or some features plainly don't work because all the testers were Windows users.

Emulating as a quick workaround is all fair game, but don't ship that as a Linux release. I appreciate native software (so long as it's not Java), and I'm also interested in buying your game if you advertise it as compatible with WINE (then I'm confident that it'll work okay and you're interested in fixing bugs under emulation); just don't mislead and pretend and then use a compatibility layer.

WarChortle2 years ago

In case you weren't aware Wine is not an emulator, it is a compatibility layer.

The whole point of Wine is to take a native Windows app, compiled only for Windows, and translate its Windows calls to Linux calls.

Wowfunhappy2 years ago

> In case you weren't aware Wine is not an emulator, it is a compatibility layer.

Ehhh. I know it’s in the name, but I feel like the significance is debatable. It’s not a CPU emulator, true. It is emulating calls, which is emulation of a sort.

+5
kmeisthax2 years ago
+1
udp2 years ago
+1
xeromal2 years ago
lenkite2 years ago

By that definition, wouldn't the whole of POSIX simply be an "emulation layer" ;-)

randyrand2 years ago

It is not a "Windows Emulator", but it is certainly a "Windows Userspace Emulator"

Dolphin can run Wii binaries on top of Linux, by emulating a Wii.

Wine can run Windows binaries on top of Linux, by emulating Windows Userspace.

Why would one be an emulator, and the other is not?

Is there some distinction in what an emulator is that goes against common sense?

This reminds me of the Transpiler/Compiler "debate." They're both still compilers. They're both emulators (VMs & compatibility layers).

What the creators meant to say, IMO, is WINVM, "Wine is not a Virtual Machine".

extropy2 years ago

Wine is more like a re-implementation of the Windows userland using a different kernel (and graphics libraries).

Since the hardware is all the same, there is nothing to emulate or compile.

Or from the other direction: Windows NT contains a re-implementation of the Win16 API. Is that an emulator?

+1
hot_gril2 years ago
yjftsjthsd-h2 years ago

The Wii emulator is emulating the whole system including the CPU; WINE describes itself as not an emulator specifically because it's not doing anything about the CPU (hence not working on ARM Linux without extra work).

(I'm not 100% sold on this; I think "CPU emulator" and "ABI emulator" are both reasonable descriptions, albeit of different things, but that's the distinction that the WINE folks are making.)

hot_gril2 years ago

By any well-known definition of an emulator, like https://en.wikipedia.org/wiki/Emulator, Wine is an emulator. It's emulating a Windows system to Windows programs. It's not emulating hardware is all. That WINE acronym, other than being a slightly annoying way to correct people, is wrong. Reminds me of Jimmy Neutron smartly calling table salt "sodium chloride" when really he's less correct than the layman he's talking to, since there are additives.

WINE should simply stand for WINdows Emulator.

dwaite2 years ago

According to the Wikipedia link given, emulation is for host and target computer systems. In a computing context, 'emulate' is not a synonym for 'impersonate' - it describes an approach for a goal.

Wine does not aim to emulate a "Windows computer system" (aka a machine which booted into Windows). For instance, it doesn't allow one to run windows device drivers. WINE is taking an approach that ultimately means it is doing a subset of what is possible with full emulation (hopefully in return going much faster).

To put it another way, a mobile phone app with a HP49 interface that understands RPN is not necessarily a calculator emulator. It could just be an app that draws a calculator interface and understands RPN and some drawing commands. It doesn't necessarily have the capability to run firmwares or user binaries written for the Saturn processor.

tzs2 years ago

The people who wrote Wine thought it was an emulator. Their first idea for a name was "winemu" but they didn't like that, then thought to shorten it to "wine".

The "Wine is not an emulator" suggestion was first made in 1993, not because Wine is not an emulator but because there was concern that "Windows Emulator" might run into trademark problems.

Eventually that was accepted as an alternative meaning to the name. The Wine FAQ up until late 1997 said "The word Wine stands for one of two things: WINdows Emulator, or Wine Is Not an Emulator. Both are right. Use whichever one you like best".

The release notes called it an emulator up through release 981108: "This is release 981108 of Wine, the MS Windows emulator".

The next release changed that to "This is release 981211 of Wine, free implementation of Windows on Unix".

As far as I've been able to tell, there were two reasons they stopped saying it was an emulator.

First, there were two ways you could use it. You could use it the original way, as a Windows emulator on Unix to run Windows binaries. But if you had the source to a Windows program you could compile that source on Unix and link with libraries from Wine. That would give you a real native Unix binary. In essence this was using Wine as a Unix app framework.

Second, most people who would be likely to use Wine had probably only ever encountered emulators before that were emulating hardware. For example there was Virtual PC from Connectix for Mac, which came out in 1997, and there were emulators on Linux for various old 8-bit systems such as NES and Apple II.

Those emulators were doing CPU emulation and that was not fast in those days. It really only worked acceptably well if you were emulating hardware that had been a couple of orders of magnitude slower than your current computer.

Continuing to say that Wine is an emulator would likely cause many people to think it must be slow too and so skip it.

messe2 years ago

One might say it emulates windows library and system calls.

+1
rascul2 years ago
+1
tmccrary552 years ago
deelowe2 years ago

A lot of high level emulation works this way as well. And, similarly, FPGA cores often aren't "simulating game hardware" either.

DiggyJohnson2 years ago

Surely you see that this is a slight semantic distinction, unless you consider Wine apps to be Linux native?

mort962 years ago

Have you actually tried to run the Windows version of Factorio through Proton and experienced slowdowns? In my experience, WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it), unless there are issues related to graphics API translation which is a separate topic.

sirn2 years ago

I believe OP is referring to the fact that Factorio has some optimizations on Linux, such as using fork() to autosave (which is copy-on-write, unlike its Windows counterpart), which results in zero stuttering during auto-save for large-SPM bases.
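For the curious, the fork() trick being described works roughly like this (a toy sketch, not Factorio's actual code): the child inherits a copy-on-write snapshot of the game state and writes it to disk while the parent keeps simulating.

```c
/* Non-blocking autosave via fork(): the child sees a frozen CoW snapshot. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

struct game_state { int tick; /* ... the whole world ... */ };

static void write_save(const struct game_state *s, const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) _exit(1);
    fwrite(s, sizeof(*s), 1, f);
    fclose(f);
    _exit(0);
}

static void autosave(const struct game_state *s) {
    pid_t pid = fork();
    if (pid == 0)
        write_save(s, "autosave.bin");        /* child: snapshot, write, exit */
    /* parent: returns immediately and keeps simulating; reap children later */
    (void)pid;
}

int main(void) {
    struct game_state s = { 0 };
    for (s.tick = 0; s.tick < 100000; s.tick++) {
        if (s.tick % 10000 == 0)
            autosave(&s);
        /* ... simulate one tick ... */
    }
    while (waitpid(-1, NULL, WNOHANG) > 0) {}  /* collect finished savers */
    return 0;
}
```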

Kaze4042 years ago

I’m a huge fan of Wine, but there’s no reason to run Factorio under it. The native Linux version works flawlessly from my experience, and I’m even using a distro where problems with native binaries are common.

yellowapple2 years ago

> WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it)

That says very little about whether there'd be a noticeable slowdown compared to running a Linux-native version on Linux natively, though.

hot_gril2 years ago

> unless there are issues related to graphics API translation

But there almost always are problems in that area. I've never had a game run with the same FPS and stability in Wine vs natively in Windows.

+1
mort962 years ago
littlestymaar2 years ago

Exactly the same fps, unlikely.

But Phoronix has tons of benchmarks showing that WINE/Proton is in the same ballpark as the native Windows version, sometimes a bit slower but just as often a bit faster.

jbverschoor2 years ago

Wine Is Not an Emulator

delusional2 years ago

Sure, but it's also WINdows Emulator

+1
ink_132 years ago
jbverschoor2 years ago

Well you know.. I guess I'm OG when it comes to naming.

bombcar2 years ago

I believe it now actually is again, or was, or something. It's a question when dealing with Win16 and now Win32 (Windows-on-Windows).

caffed2 years ago

Ha Perfect!

Yes, Wine is a Windows binary executor with library translation.

https://wiki.winehq.org/FAQ#Is_Wine_an_emulator.3F_There_see...

gorkish2 years ago

Except sometimes on aarch64

LtWorf2 years ago

I've been using wine and glibc for almost 20 years now and wine is waaaay more unstable than glibc.

Wine is nice until you try to play with sims3 after updating wine. Every new release of wine breaks it.

Please use wine for more than a few months before commenting on how good it is.

It's normal that with every new release some game stops working. Which is why Steam offers the option to choose which Proton version to use. If they all worked great one could just stick to the latest.

noirbot2 years ago

As someone who's been gaming on Proton or Lutris + Raw Wine, I'm not sure I agree. I regularly update Proton or Wine without seeing major issues or regressions. It certainly happens sometimes, but I'm not sure it's any worse of a "version binding" problem than a lot of stuff in Linux is. Sure, sometimes you have to specifically use an older version, but getting "native" linux games to work on different GPU architectures or distros is a mess as well, and often involves pinning drivers or dependencies. I've had games not run on my Fedora laptop that run fine on my Ubuntu desktop, but for the most part, Wine or Proton installed things work the same across Linux installs, and often with better performance somehow.

LtWorf2 years ago

I specifically mentioned the sims3. That one is constantly broken by updates.

Also Age of Empires 2 HD, after working fine with Wine/Proton for a decade, doesn't work with the latest Proton for me.

+1
noirbot2 years ago
hot_gril2 years ago

And Aoe2 HD broke with every game update in Wine. I had to keep patching in different DLLs. Gave up one day. It's worse than the original game anyway.

andai2 years ago

I used to say that Wine makes Linux tolerable, but after using it for several years I've concluded that Wine makes Windows tolerable.

pongo12312 years ago

Absolute opposite experience for me. The native versions of Half-Life, Cities: Skylines and a bunch of other games have refused to start up at all for me for a few years now. Meanwhile I've been on the bleeding edge of Proton and I can count the number of breakages with my sizeable collection of working Windows games within the last couple of years on one hand. It's been a fantastic experience for me with Proton.

edflsafoiewq2 years ago

Mind saying what the error is? Linker problem?

pongo12312 years ago

Not sure, if I'm honest. Starting up Steam through the terminal and launching the game doesn't give me anything indicating the reason in the logs, which is weird. I'm using Arch and tried both steam and steam-runtime already.

LtWorf2 years ago

It works fine for me. If it was something with glibc it wouldn't work for me.

LtWorf2 years ago

Half life works fine for me on latest kernel, latest glibc.

Probably you have different unrelated issues.

Kaze4042 years ago

> Please use wine for more than a few months before commenting on how good it is.

I’ve used it for several years, and even to play Sims 2 from time to time, and while I’ve had issues the experience only gets better over time. It’s gotten to the point where I can confidently install any game on my Steam library and expect it to run. And be right most of the time.

LtWorf2 years ago

But not sims 3

Age of Empires DE will not work with Proton. And it's not really a top-notch graphics game.

Kaze4042 years ago

I'm not entirely sure what the point of updating Wine is in the first place. If you have a version that works with the game you're trying to play, why not pin it? Things definitely tend to break with Wine upgrades by nature of what Wine does; that's why it's common for people to have multiple versions of Wine installed.

+1
yellowapple2 years ago
a9h74j2 years ago

> Every new release of wine breaks it.

Is there any way to easily choose which Wine version you use for compatibility? Multiple Wine versions without VMs etc?

LtWorf2 years ago

Steam lets you do that, but I think it's a global setting and not per game.

Debian normally keeps 2 versions of wine in the repositories, but if none of those 2 work, you're out of luck.

Kaze4042 years ago

> if none of those 2 work, you're out of luck

That's not true. There are multiple tools for managing multiple versions of Wine and Wine-related tools for gaming, the oldest one being PlayOnLinux. Lutris is the most widely used one, and works great in my experience.

kupiakos2 years ago

This is wrong. Steam lets you choose a Proton (Wine) version per-game.

bayindirh2 years ago

> Developers natively porting to linux seem to be wasting their time.

The initial port of Valve's Source engine ran 20% faster without any special optimizations back in the day. So I don't see why the effort is wasted.

DiggyJohnson2 years ago

Isn't part of the original point not just that Wine is a perfect (dubious, imo) compatibility layer, but that distributing a native port is cumbersome in the Linux ecosystem?

bayindirh2 years ago

I don't buy the cumbersomeness argument for Linux games. A lot of games in the past and today have been distributed as Linux binaries. Most famously, the Quake and Unreal Tournament series had native Linux binaries on disks and they worked well until I migrated to 64-bit distributions. I'm sure they'll work equally fine if I multi-arch my installations.

Many of the games bundled by HumbleBundle as downloadable setups have Linux binaries. Some are re-built as 64 bit binaries and updated long after the bundle has closed.

I still play Darwinia on my 64 bit Linux system occasionally.

Most of these games are distributed as .tar.gz archives, too.

I can accept and respect not creating Linux builds as engineering (using Windows only APIs, libraries, etc.) and business (not enough market) decisions, but cumbersomeness is not an excuse, it's a result of the other related choices.

In my book, if a company doesn't want to support Linux, they can say it bluntly, but saying "we want to, but it's hard, and this is Linux's problem" doesn't sound even remotely sincere.

+2
nomel2 years ago
ece2 years ago

> I can accept and respect not creating Linux builds as engineering (using Windows only APIs, libraries, etc.) and business (not enough market) decisions..

Yep, and if somebody made your game work under Wine/Proton or Bottles/Heroic, just pay them to support it or create an AppImage/Flatpak and that's the easiest way to get compatibility.

+1
LtWorf2 years ago
dleslie2 years ago

FWIW, targeting proton is likely the best platform target for future Windows compatibility too.

yellowapple2 years ago

There are plenty of examples of things being the other way around. For example, heavily modding Kerbal Space Program basically necessitated running Linux because that's the only platform that had a native 64-bit build that was even remotely stable (this has since been fixed, but for the longest time the 64-bit Windows version was horrendously broken) and therefore the only platform wherein a long mod list wouldn't rapidly blow through the 32-bit application RAM ceiling.

matheusmoreira2 years ago

This wasn't a problem with the game itself. It's their anti-cheat malware that stopped working. On Windows these things are implemented as kernel modules designed to own our computers and take away our control so we can't cheat at video games.

It's always great when stuff like that breaks on Linux. I'm of the opinion it should be impossible for them to even implement this stuff on Linux but broken and ineffective is good too.

codebolt2 years ago

Coincidentally, Win32 is also the only stable API on Windows.

WinForms and WPF are still half-broken on .NET 5+, WinRT is out, and basically any other desktop development framework since the Win32 era that is older than a couple of years is deprecated. Microsoft is famous for introducing new frameworks and deprecating them a few years later. Win32 is the only exception I can think of.

taneq2 years ago

I was gonna say, I think Win32 is the only stable API full stop. Everything else is churn city.

IYasha2 years ago

Yeah. And MFC on top makes it a bit more chewable :3

int_19h2 years ago

.NET Framework is still there by default out of the box, and still runs WinForms and WPF like it always did.

actionfromafar2 years ago

Which version of it? 1.0?

int_19h2 years ago

Ironically, 1.0 is the one that's the most problematic, because it never shipped in any version of Windows by default.

.NET 4.6 is the version that shipped in Win10 back in 2015. Win11 ships with .NET 4.8. Thus, both OSes can run any apps written for .NET 4.x out of the box. I would expect this to remain true for many years to come, given that Windows still supports VB6.

.NET 3.5 runtime (which supports apps written for .NET 2.0 and 3.0, as well) is also available in Win10 and Win11, but it's an optional feature that must be explicitly enabled - although the OS will automatically prompt you to do so if you try to run a .NET app that needs it.

kevingadd2 years ago

4.8x ships and still gets security updates

+1
jiggawatts2 years ago
FpUser2 years ago

>"Coincidentally, Win32 is also the only stable API on Windows"

And this is what I use for my Windows apps. In the end I have self-contained binaries that can run on anything from Vista up to the most up-to-date OS.

ninkendo2 years ago

Honest question, do you get HiDPI support if you write a raw win32 app nowadays? I haven’t developed for windows in over a decade so I’ve been out of the loop, but I also used to think of win32 being the only “true” API for windows apps, but it’s been so long that I’m not sure if that opinion has gotten stale.

As a sometimes windows user, I occasionally see apps that render absolutely tiny when the resolution is scaled to 200% on my 4k monitor, and I often wonder to myself whether those are raw win32 apps that are getting left behind and showing their age, or if something else is going on.

ripley122 years ago

IIRC yes but you have to opt into it with a manifest (or an API call).
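For context, the API-call route looks something like the sketch below (illustrative; the manifest route is generally preferred, and which function you call depends on the oldest Windows version you target):

```c
/* Opt a raw Win32 process into per-monitor DPI awareness (Windows 10 1703+).
 * Older fallbacks: SetProcessDpiAwareness() on 8.1, SetProcessDPIAware() on Vista.
 * Must run before the first window is created. Link against user32. */
#define _WIN32_WINNT 0x0A00
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow) {
    (void)hInst; (void)hPrev; (void)cmdLine; (void)nShow;

    SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

    MessageBoxW(NULL, L"This should render crisply at 200% scaling.",
                L"DPI demo", MB_OK);
    return 0;
}
```

The manifest route instead declares roughly the same thing via the dpiAware/dpiAwareness elements in the application manifest, so the loader applies it before any of your code runs.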

FpUser2 years ago

I use Delphi for my "official" Windows desktop applications. They have a GUI library that wraps Win32. It seems to handle HiDPI just fine.

I do use raw Win32 but not for GUI stuff.

kevin_thibedeau2 years ago

GDI32 has always had support for configurable screen DPI. You used to be able to customize it in Win95 but they hid the setting because too many poorly developed applications were written with inflexible pixel based dimensioning. If you lay everything out in twips it will scale without any special effort.

alkonaut2 years ago

WinForms is just mostly a managed Win32 wrapper so unsurprisingly it’s very stable on the OS frameworks (4.X).

Building for .NET Framework using any APIs is extremely stable as development has mostly ceased. You pick a target framework depending on how old a Windows version you must support.

pjmlp2 years ago

WinRT lives on as WinAppSDK.

Longhanks2 years ago

Metro lives on as UWP lives on as WinRT lives on as Project Reunion lives on as WinAppSDK.

Exactly the point the OP was making. Win32 is stable.

pjmlp2 years ago

It is all COM; all those marketing names only reflect a set of interfaces, using .NET metadata instead of TLB files, with processes being bound to an App Container.

Win32 is indeed stable, but raw Win32 is stuck at the Windows XP API surface; most stuff that came afterwards is based on COM.

solarkraft2 years ago

The names aren't getting better either ...

ahtihn2 years ago

Since when is .NET 5+ part of Windows?

codebolt2 years ago

Since MS decided to deprecate .NET Framework, making .NET 5+ the recommended basis for C# desktop development going forward. Yes, you will still be able to run your old apps for many decades, but you can never move to a newer version of the C# language, and maintaining them is going to be an increasing pain as the years go by. I've already been down this road with VB6.

markmark2 years ago

And .NET 4.8 is still installed by default on Windows 11 and will presumably happily run your WPF app if you target it.

solarkraft2 years ago

.NET 4.8 might be one of those things they won't be able to get rid of.

int_19h2 years ago

Windows 11 still ships msvbvm60.dll - that's the runtime for Visual Basic 6, released back in 1998. And it is officially supported:

https://docs.microsoft.com/en-us/previous-versions/visualstu...

franga20002 years ago

I recently experienced this in a critical situation. Long story short, something went very wrong during a big live event and I needed some tool to fix it.

I downloaded the 2-year-old Linux binary, but it didn't run. I tried running it from an old Ubuntu Docker container, but dependencies were missing and the repos were long gone. Luckily it was open source, but compiling was taking ages. So in a case of "no way this works, but it doesn't hurt to try" I downloaded the Windows executable and ran it under Wine. Worked like a charm and everything was fixed before GCC was done compiling (I have a slow laptop).

nicce2 years ago

I have personally used containers for this reason to set up my gaming environment. If something breaks, all I need to do is run an older image and everything works.

Animats2 years ago

Notes:

"EAC" is Easy Anti Cheat, sold by Epic.[1] Not EarthCoin.

"EOS", in this context, is probably Epic Online Services, not one of the 103 other known uses of that acronym.[2]

Here's a list of the games using those features.[3]

So, many of these issues are for people building games with Epic's Unreal Engine on Linux. The last time I tried UE5, after the three hour build, it complained I had an NVidia driver it didn't like. I don't use UE5, but I've tried it out of curiosity. They do support Linux, but, as is typical, it's not the first platform they get working. Epic does have support forums, and if this is some Epic problem encountered by a developer, it can probably be fixed or worked round.

Wine is impressive. It's amazing that they can run full 3D games effectively. Mostly. Getting Wine bugs fixed is somewhat difficult. The Wine people want bugs reported against the current dev version. Wine isn't set up to support multiple installed versions of itself. There's a thing called PlayOnLinux which does Wine version switching, but the Wine team does not accept bug reports if that's in use.[4] So you may need a spare machine with the dev version of Wine for bug reproduction.

[1] https://www.easy.ac/en-us/

[2] https://acronyms.thefreedictionary.com/EOS

[3] https://steamcommunity.com/groups/EpicGamesSucks/discussions...

[4] https://wiki.winehq.org/Bugs

Hello712 years ago

> Wine isn't set up to support multiple installed versions of itself.

huh? the official wine packages for ubuntu, debian, and i believe fedora provide separate wine-devel and wine-staging packages, which can be installed in parallel with each other and with distro packages. in fact, debian (and ubuntu) as well as arch provide separate wine and wine-staging packages as part of the distro itself, no separate repo required.

wine has no special support for relocated installations, but no more or less so than any large Unix program; you can install as many copies as you want, but they must be compiled with different --prefixes, and you cannot use different versions of wine simultaneously with the same WINEPREFIX.

Animats2 years ago

Oh, that's good to know. Thanks.

notRobot2 years ago

Related:

Win32 is the stable Linux userland ABI (and the consequences): https://news.ycombinator.com/item?id=30490570

336 points, 242 comments, 5 months ago

ChikkaChiChi2 years ago

Without getting into spoilers, I'll say that playing "Inscryption" really got me thinking about how Docker's continued development could help consumers in the gaming industry.

I would love to see games being virtualized and isolated from the default userspace with passthrough for graphics and input to mitigate latency concerns. Abandonware could become a relic of the past! Being able to play what you buy on any device you have access to would be amazing.

I won't hold my breath, though. The industry pretty loudly rejected Nvidia's attempt to let us play games on their cloud without having to buy them all again. Todd needs the ability to sell us 15 versions of Skyrim to buy another house.

rstat12 years ago

For games on Steam there's the Steam Linux Runtime, which can run games on Linux in a specialized container to isolate them from these sorts of bugs.

There's also a variant of this container that contains a forked version of Wine for running Windows games as well.

mort962 years ago

Doesn't the Steam Linux Runtime have a problem in the other direction though? Games are using libraries which are so old that they have bugs which are long since fixed, or which don't work properly in modern contexts. Apparently a lot of issues with Steam + Wayland come from the ancient libraries in the Steam Linux Runtime, from what I have been able to find out from googling issues I've experienced under Wayland.

disintegore2 years ago

> Abandonware could become a relic of the past!

That would eat into some business models though, like Nintendo's quadruple-dipping with its virtual consoles

matheusmoreira2 years ago

Good. All those games should be in the public domain anyway. It's been 30-40 years, Nintendo has been more than adequately compensated.

chungy2 years ago

Write to your representative and plead to reduce copyright (I'd pitch in for the original 14 year term).

matheusmoreira2 years ago

I would certainly do that if I was american.

disintegore2 years ago

I agree, but some notoriously litigious companies probably will not.

ece2 years ago

Flatpak is basically Docker for Linux; there are layers and everything. What you're saying should be possible: if you make an AppImage/Flatpak out of the Steam Runtime + Proton (if needed) + game, it should run anywhere with the right drivers.

LtWorf2 years ago

Good luck running any game from before Wayland once Wayland actually starts to be used.

jacooper2 years ago

Xwayland exists, and all my games use Xwayland; there is no stable Proton/Wine implementation that uses Wayland natively.

LtWorf2 years ago

It exists now, but it will be left to rot once (probably in another 15 to 20 years) wayland is finally ready and most software is migrated to it.

solarkraft2 years ago

Isn't that what gamescope has been doing for quite some time?

bjourne2 years ago

Glibc is not Linux, and they have different backwards compatibility policies, but everyone should still read Linus Torvalds' classic 2012 email about ABI compatibility: https://lkml.org/lkml/2012/12/23/75 Teaser: It begins with "Mauro, SHUT THE FUCK UP!"

efficax2 years ago

man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team

kochb2 years ago

The context, from Mauro’s previous message:

> Only an application that handles video should be using those controls, and as far as I know, pulseaudio is not a such application. Or are it trying to do world domination? So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.

Style and culture are certainly open for debate (I wouldn’t be as harsh as Linus), but correcting a maintainer who was behaving this way towards a large number of affected users was warranted. The kernel broke the API contract, a user reported it, and Mauro blamed the user for it.

matheusmoreira2 years ago

I'd like to mention that Mauro is a very nice person. I worked briefly with him when I submitted some patches to ZBar and it was the best experience I've had contributing to open source to this day. He gave me feedback and I got to learn new stuff such as D-Bus integration.

harry82 years ago

When this comes up in conversation it is worthwhile remembering that Linux was built by a team of volunteers centered around Torvalds, who was famous for not acting like a jerk. Really. The perception of him among hackers as a good guy you could work with, who acknowledged when Linux had bugs, accepted patches and was pretty self-effacing, is probably the thing that most made that project at that time take off to the stratosphere. Linus was a massive contrast to traditional bearded unix-assholery.

The nature of the work changes. The pressures change. The requirements change. We age. Also the times change too.

But yeah, it is possible to act like a jerk sometime without actually being a jerk in all things. It is also possible to be a lovely person who makes the odd mistake. Assholes can have good points. Life is nuanced.

Of the bajillion emails Linus has sent to LKML, how many can you find that you believe show evidence of him being a jerk?

Compare to Theo de Raadt at OpenBSD, who has also built a pretty useful thing with their community. Compare also to Larry Wall and Guido van Rossum.

None of us is above reasoned, productive criticism. Linus has done ok.

rayiner2 years ago

It's not my personal style, but there's plenty of high functioning teams in different domains headed by leaders who communicate like Torvalds. From Amy Klobuchar throwing binders (https://www.businessinsider.com/amy-klobuchar-throwing-binde...) to tons of high-level folks in banking, law firms, etc.

Put differently, you can construct a high functioning team composed of certain personalities who can dish out and take this sort of communication style without burning out on it.

JamesBarney2 years ago

I've definitely seen teams that were low functioning because they were so worried about consensus and upsetting someone else that no one ever criticized any decisions any team member made even if they were both impactful and objectively terrible.

kurupt2132 years ago

This is worse

+1
brigandish2 years ago
worik2 years ago

> but there’s plenty of high functioning teams in different domains headed by leaders who communicate like Torvalds

Maybe in the past. It is not acceptable now.

A lot of men from the "old days" are finding that their table thumping "plain talking" (obscenity shouting) ways are getting them sidelined and ignored.

Good.

+4
mikewave2 years ago
+2
StevePerkins2 years ago
LocalH2 years ago

If a person works best in this fashion, who are you to say it is unacceptable?

martincmartin2 years ago

Are the teams high functioning because of that, or despite that?

nomel2 years ago

I would assume a little of both. I've seen weeks wasted just because someone wouldn't say "that's a bad idea". I've also seen whole projects turn to crap, and then get canceled, when people that knew better decided to remain silent, to avoid conflict.

Through my years, it seems to be increasingly rare to find disagreeable people, and that agreeableness is being favored/demanded. I'm not one to judge if it's working or not, but when I see people getting upset at managers because the manager criticized their work/explanation during the presentation of that work, which is literally meant for criticism, I know quality coming from that group will be impacted. Maybe not surprising, but many of these people are new graduates. The few "senior" people I know, like this, are from companies who are in the process of failing, in very public ways.

I think the ideal scenario is a somewhat supportive direct manager, and a disagreeable, quality demanding, manager somewhere not far above, keeping the ship from sinking.

+1
dm3192 years ago
blablabla1232 years ago

That probably works if people get bribed with interesting enough projects (Linux) or money (banks, law firms...). Most other projects probably fall apart before you can blink an eye

dshpala2 years ago

Speaking about consensus: there is another thread on HN where people complain about the Android 13 UI. I guess that was built with a healthy dose of consensus.

The point is, sometimes you need a jerk with a vision so that the thing you're building doesn't turn into an amorphous blob.

celtain2 years ago

You need someone with vision who enforces strict adherence to that vision. I'm not convinced you have to be a jerk to do that though.

tasuki2 years ago

Yes, you don't need to be a jerk to do that. Linus Torvalds used to be a jerk (perhaps still is, but I think much less so these days). Do you have a non-jerk with Torvalds' vision?

a9h74j2 years ago

The old quote: I would trust Einstein, but I wouldn't trust a committee of Einsteins.

kurupt2132 years ago

How is Steve Jobs getting left out of this conversation

pestatije2 years ago

Maybe because: if you cannot say anything good about a dead person, don't say anything.

moonchrome2 years ago

>by god, this is not how you build consensus or a high functioning team

Says you, while criticizing Linus Torvalds from 2012. Who has a better track record of building consensus and high functioning teams?

5424582 years ago

Says Linus Torvalds from 2018…

> My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

> The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

https://lkml.org/lkml/2018/9/16/167

He still writes very frankly, but he generally doesn’t resort to personal insults like he did in the past.

fezfight2 years ago

That doesn't change his point about building high functioning teams in the past, though. Just that he is capable of adapting to the times. He had 27 years of career leading Linux before 2018. Successful, by any measure.

ekianjo2 years ago

> Says Linus Torvalds from 2018…

He was successful between 2012 and 2018 with that style. The track record is still there.

pydry2 years ago

I think if you take it out of context (which most people do), it looks a lot worse than it is.

A very senior guy who should've known better was trying, fairly persistently, to break a very simple rule everybody agreed to, for a very bad reason. Linus told him to shut the fuck up.

I wouldn't say that Linus's reaction was anything to look up to, but I wouldn't say that calling the tone police is at all justified either.

mrtranscendence2 years ago

I mean, the guy sent one short email before Torvalds flew off the handle; that’s hardly “trying persistently” to break a rule. I can think of a thousand assertive ways to tell the guy to shut up that wouldn’t have required behaving like an angry toddler.

codeguro2 years ago

> by god, this is not how you build a consensus or a high functioning team

I beg to differ. Linus Torvalds is an example for us all, and I'd argue he has one of the most, if not the most, highly functioning open source teams in the world. The beauty of open source is you're not stuck with the people you do not want to work with. You can "pick" your "boss". Plus, different people communicate differently. Linus is abrasive. That is okay because it works for him. What is not okay is having other people police the tone in a conversation. Linus had this same conversation with Sarah Sharp; I'll post the relevant quote below:

Because if you want me to "act professional", I can tell you that I'm not interested. I'm sitting in my home office wearing a bathrobe. The same way I'm not going to start wearing ties, I'm also not going to buy into the fake politeness, the lying, the office politics and backstabbing, the passive aggressiveness, and the buzzwords. Because THAT is what "acting professionally" results in: people resort to all kinds of really nasty things because they are forced to act out their normal urges in unnatural ways.

robertlagrant2 years ago

> man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team

True. I think Linux could've been pretty successful if someone with good management practices had been in charge from the start.

ekianjo2 years ago

> by god, this is not how you build consensus or a high functioning team

Linus has been pretty successful so far. There's not just "one style" that works.

blibble2 years ago

maybe it is how you build the world's most popular operating system?

because he did

moomin2 years ago

People often think that because jerks work at successful companies, you need to be a jerk to be successful. It’s more the other way around: a successful firm can carry many people who don’t add value, like parasites.

Guarantee you Linus wasn’t this bad in the 90s.

tasuki2 years ago

I think he's not this bad these days? He issued some public apologies for his behaviour. He gave us Linux and Git. Yes, he used to be an asshole, but he still did way more for the betterment of humanity than most people.

freedomben2 years ago

Yep Linus is reformed these days. He took some time off and went to sensitivity training a few years ago. I'm sure like all humans he has bad days and makes mistakes, but all in all he's really trying.

ekianjo2 years ago

> Guarantee you Linus wasn’t this bad in the 90s.

Guarantee you that Linux was not that big and influential in the industry in the 90s.

val3141592 years ago

Everyone I knew ran Linux in the late 90's, I chose to run FreeBSD. I guarantee you Linux was very influential. Solaris was its only real competitor after a while, and Sun went all Java and screwed up its OS.

nomoreusernames2 years ago

naw man, let the old git be. he is a lovely old man. one day we won't have people like this. he gave more than he took.

pdntspa2 years ago

He's a product of a different time. Personally, I love his attitude -- wouldn't want to work under him though.

Spivak2 years ago

Glibc is GNU/Linux though and cannot be avoided when distributing packages to end users. If you want to interact with the userspace to do things like get users, groups, netgroups, or DNS queries, you have to use glibc functions, or your users will hit weird edge cases like being able to resolve hosts in cURL but not in your app.

Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely! But we're not really there. This is something the BSD's got absolutely right.

emidln2 years ago

There are other libc implementations that work on Linux with various tradeoffs. Alpine famously uses musl libc for a lightweight libc for containers. These alternate libc implementations implement users/groups/network manipulation via well-known files like /etc/shadow, /etc/passwd, etc. You could fully statically link one of these into your app and just rely on the extremely stable kernel ABI if you're so interested.

Spivak2 years ago

We're not disagreeing. You can, of course, use other libc's on Linux the kernel, but you cannot use other libc's on GNU/Linux the distro that uses glibc without some things not working. This can be fine on your own systems so long as you're aware of the tradeoffs but if you're distributing your software for use on other people's systems your users will be annoyed with you.

Even Go parses /etc/nsswitch.conf and farms out to cgo when it finds a module it can't handle. This technically doesn't work because there's no guarantee that the hosts or dns entries in nsswitch have consistent behavior; each is just the name of a library you're supposed to dlopen. On an evil, but valid, distro, resolv.conf points to 0.0.0.0 and the hosts module reads an sqlite file.

nickelpro2 years ago

> you cannot use other libc's on GNU/Linux the distro that uses glibc without some things not working

As the comment you're replying to points out, you can statically link your libc requirements and work on any Linux distro under the sun.

You can also LD_PRELOAD any library you need, and also work on any Linux distro under the sun. This is effectively how games work on Windows too, they ship all their own libraries. Steam installs a specific copy of the VCREDIST any given game needs when you install the game.

If you are not releasing source code, it's unreasonable to think the ABIs you require will just be present on any random computer. Ship the code you need, it's not hard.

matheusmoreira2 years ago

> Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely!

I worked on this a few years ago: liblinux.

https://github.com/matheusmoreira/liblinux

It's still a nice proof of concept but I abandoned it when I found out the Linux kernel itself has a superior nolibc library that they use for their own tools:

https://github.com/torvalds/linux/tree/master/tools/include/...

It used to be a single header but it looks like they've recently organized it into a proper project!

> This is something the BSD's got absolutely right.

BSDs and all the other operating systems force us to use their C libraries and the C ABI. I think Linux's approach is better. It has a language-agnostic system call binary interface: it's just a simple calling convention and the system call instruction.

The right place for system call support is the compiler. We should have a system_call keyword that makes it emit code in the aforementioned calling convention. With this single keyword, it's possible to do literally anything on Linux. Wrappers for every specific system call should be part of every language's standard library with language-specific types and semantics.

An example of one such language is Virgil by HN user titzer:

https://news.ycombinator.com/item?id=28283632
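
To make that concrete, here is a minimal sketch of what "just a simple calling convention and the system call instruction" looks like on x86-64, with no libc involved at all. This assumes GCC or Clang and the x86-64 syscall numbers (write == 1, exit == 60); build with `gcc -static -nostdlib hello.c`:

    /* x86-64 Linux syscall convention: number in rax, args in rdi/rsi/rdx,
       result back in rax; the kernel clobbers rcx and r11. */
    static long sys_write(long fd, const void *buf, unsigned long len) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1), "D"(fd), "S"(buf), "d"(len)   /* __NR_write == 1 */
                          : "rcx", "r11", "memory");
        return ret;
    }

    static void sys_exit(long code) {
        __asm__ volatile ("syscall"
                          : /* no outputs */
                          : "a"(60), "D"(code)                    /* __NR_exit == 60 */
                          : "rcx", "r11", "memory");
        __builtin_unreachable();                                  /* exit does not return */
    }

    void _start(void) {
        sys_write(1, "hello, kernel ABI\n", 18);
        sys_exit(0);
    }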

kllrnohj2 years ago

But there are other "Linux"es that are not GNU/Linux, which was, I think, the point. Like Android, which doesn't use glibc and doesn't have this mess. I think that was one of the things people used to complain about, that Android didn't use glibc, but since glibc seems to break ABI compatibility kinda on the regular, that was probably the right call.

__d2 years ago

Solaris had separate libc, libnss, libsocket, and libpthread, I think?

Unlike many languages, Go doesn't use any libc on Linux. It uses the raw kernel API/ABI: system calls. Which is why a Go 1.18 binary is specified to be compatible with kernel version 2.6.32 (from December 2009) or later.

There are trade-offs here. But the application developer does have choices, they're just not no-cost.

david_draco2 years ago

If in distribution discussions Linux is the name for the operating system and shell, downplaying the role of GNU, then it is also fair game to say here: Linux does not have a stable ABI because glibc changed.

DiggyJohnson2 years ago

Really appreciate your stuff Bjorn, this link always brings a smile to my (too young to be cynical) face.

bjourne2 years ago

Thanks!

knewter2 years ago

I am no longer able to see this comment. It says the message body was removed.

Anyone else? I'll have to assume this is the history of how we built great things being deleted in realtime.

easton2 years ago

I think lkml.org has issues with lots of traffic: https://lore.kernel.org/lkml/CA+55aFy98A+LJK4+GWMcbzaa1zsPBR...

captainmuon2 years ago

The ABI of the Linux kernel seems reasonably stable. Somebody should write a new dynamic linker that lets you easily have multiple versions of libraries - even libc - around. Then it's just like Windows, where you have to install some weird MSVC runtimes to play old games.

mort962 years ago

Or, GNU could just recognise their extremely central position in the GNU/Linux ecosystem and just not. break. everything. all. the. time.

It honestly really shouldn't be this hard, but GNU seems to have an intense aversion towards stability. Maybe moving to LLVM's replacements will be the long-term solution. GNU is certainly positioning itself to become more and more irrelevant with time, seemingly intentionally.

mike_hearn2 years ago

The issue is more subtle than that. The GNU and glibc people believe that they provide a very high level of backwards compatibility. They don't have an aversion towards stability and in fact, go far beyond most libraries by e.g. providing old versions of symbols.

The issue here is actually that app compatibility is something that's hard to do purely via theory. The GNU guys do compatibility on a per function level by looking at a change, and saying "this is a technical ABI break so we will version a symbol". This is not what it takes to keep apps working. What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something. And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.

Linux is really hurt here by the total lack of any unit testing or UI scripting standards. It'd be very hard to mass test software on the scale needed to find regressions. And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic. As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem. Actual users don't count for much. It's probably inevitable in any system that isn't driven by a profit motive.

FeepingCreature2 years ago

I think part of the problem is that by default you build against the newest version of symbols available on your system. So it's real easy when you're working with code to commit yourself to some symbols you may not even need; there's nothing like Microsoft's "target a specific version of the runtime".
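
The closest equivalent on glibc-based systems is pinning individual symbols to older versions with the assembler's .symver directive, which is manual and easy to get wrong. A hypothetical sketch, assuming x86-64 glibc where GLIBC_2.2.5 is the baseline symbol-version set (the symbol and versions are illustrative):

    /* Without the .symver line, a reference compiled on a current system binds
       to the newest memcpy version and the binary refuses to start on older
       glibc. The pin only covers symbols you list explicitly. */
    #include <stdio.h>
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char src[32] = "built against old memcpy";
        char dst[32];
        volatile size_t n = sizeof src;   /* defeat the builtin so a real call is emitted */
        memcpy(dst, src, n);
        puts(dst);
        return 0;
    }

You can check what the result actually requires with `objdump -T ./a.out | grep memcpy`, but you'd have to repeat the pin for every versioned symbol you pull in, across every dependency.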

skybrian2 years ago

I wonder what Zig does?

SubjectToChange2 years ago

> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.

There are already sophisticated binary analysis tools for detecting ABI breakages, not to mention extensive guidelines.

> And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic.

Vendors like Red Hat are extremely attentive towards their customers. But if you're not paying, then you only deserve whatever attention they choose to give you.

LtWorf2 years ago

> As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem.

This is false. Actual problems get fixed, and very quickly at that.

Normally the issues are from proprietary applications that were buggy to begin with, and never bothered to read the documentation. I'd say to a paying customer that if a behaviour is documented, it's their problem.

mook2 years ago

> Normally the issues are from proprietary applications that were buggy to begin with, and never bothered to read the documentation. I'd say to a paying customer that if a behaviour is documented, it's their problem.

… But that's exactly why Win32 was great; Microsoft actually spent effort making sure their OS was compatible with broken applications. Or at least, the Microsoft of long past did; supposedly they worked around a use-after-free bug in SimCity for Windows 3.x when they shipped Windows 95. Windows still has infrastructure to apply application-specific hacks (the Application Compatibility Database).

I have no reason to believe their newer stacks have anything like this.

alerighi2 years ago

Well, you talk about Windows, but that was true in the pre-Windows 8 era. Have you used Windows recently?

I bought a new laptop and decided to give Windows a second chance. With Windows 11 installed, there were a ton of things that didn't work. To me that was not acceptable for a $3000 laptop. Problems with drivers, blue screens of death, applications that just didn't run properly (and commonly used applications, not something obscure). I never had these problems with Linux.

I mean, we say Windows is stable mostly because we use Windows versions five years after they come out, once most of the problems have been fixed. Good, now companies are finishing the transition to Windows 10, not Windows 11, after staying with Windows 7 for years. After 10 years they will probably move to Windows 11, when most of its bugs are fixed.

If you use a rolling-release Linux distro, such as Arch Linux, some problems with new software are expected. It's the equivalent of using an Insider build of Windows, with the difference that Arch Linux is mostly usable as a daily OS (it requires some knowledge to solve the problems that inevitably arise, but I used it for years). If you use, let's say, Ubuntu LTS, you don't have these kinds of problems, and it mostly runs without any issue (fewer issues than Windows, for sure).

By the way, maintaining compatibility has a cost: have you ever wondered why a full installation of Ubuntu, a complete system with all the programs that you use, an office suite, drivers for all the hardware, multimedia players, etc., is less than 5 GB, while a fresh install of Windows is minimum 30 GB, and I think nowadays even more?

> And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.

Never saw Microsoft do that: they will simply say that it's not compatible and the software vendor has to update. That is not a problem, by the way... OS developers should move along and can't maintain backward compatibility forever.

> The GNU and glibc people believe that they provide a very high level of backwards compatibility.

That is true. It's mostly backward compatible; having 100% backward compatibility is not possible. Problems are fixed as they are detected.

> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.

There is one issue: GNU can't test non-free software for obvious licensing and policy issues (i.e. an association that endorses free software can't buy licenses of proprietary software to test it). So a third party should test it and report problems in case of broken backward compatibility.

Keep in mind that binary compatibility is not fundamental on Linux, since it's assumed that you have the source code of everything and, if needed, you recompile the software. GNU/Linux was born as a FOSS operating system, and was never designed to run proprietary software. There are edge cases where you need to run a binary for other reasons (you lost the source code, or compiling it is complicated or takes a lot of time), but these are surely edge cases, and not a lot of time should be spent addressing them.

Besides that, glibc is only one of the possible libcs you can use on Linux: if you are developing proprietary software, in my opinion you should use musl libc; it has an MIT license (so you can statically link it into your proprietary binary) and it's 100% POSIX compliant. Surely glibc has more features, but your software probably doesn't use them.

Another viable option is to distribute your software with one of the new packaging formats that are in reality containers: Snap, Flatpak, AppImage. That allows you to distribute the software along with all its dependencies and not worry about ABI incompatibility.

CoolCold2 years ago

I literally run Windows Insider builds on two of my laptops: the primary one is on the beta channel and the auxiliary laptop is on the alpha channel. Both are running Windows 11 and ran 10 before. The auxiliary one has lived on Insider for I think 5 years, if not 6, and has definitely had issues, like the Intel wifi stopping working and some other minor ones; but the main one had, I guess, 3-4 BSODs over 2 years and around 10 failures to wake up from sleep. That's pretty much all of the issues.

For me that's impressive and I cannot complain about stability.

suby2 years ago

I believe that AppImage still has the glibc compatibility issues. I've read through AppImage creation guides which suggest compiling on the oldest distro possible, since binaries built against an older glibc keep working on newer ones, but not the other way around.

AshamedCaptain2 years ago

GNU / glibc is _hardly_ the problem regarding ABI stability. TFA is about a library trying to parse executable files, so it's kind of a corner case; hardly representative.

The problem when you try to run a binary from the 90s on Linux is not glibc. Think of e.g. one of the Loki games, like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.

jle172 years ago

> Think of e.g. one of the Loki games, like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.

I have it running on an up-to-date system. There is definitely an issue in that it's a pain to get working, especially for people not familiar with the CLI or ldd and such, as it wants a few things that are not there by default. But once you give it the few libs it needs, plus ossp to emulate the OSS interface missing from the kernel, there is no issue with gameplay, graphics, or audio, aside from the intro video that doesn't play.

So I guess the issue is that the compatibility is not user friendly? Not sure how that should be fixed, though. Even if Loki had shipped all the needed libs with the program, it would still be an issue not to have sound, due to distros making the choice of not building OSS anymore.

int_19h2 years ago

It would seem from your example that the issue is a lack of overall commitment to compatibility. There are Windows games from the 1990s that still run fine with sound - which is not surprising, given that every old Win32 API related to sound is still there, emulated as needed on top of the newer APIs. It sounds like Linux distros could do this here as well, since emulation is already implemented - they just choose not to have it set up out of the box.

AshamedCaptain2 years ago

> So I guess the issue is that the compatibility is not user friendly?

I don't understand this point -- this is like claiming Linux has perfect ABI compatibility because at the end of the day you can run your software under a VM or a container. Of course everything has perfect compatibility if you go out of your way using old installations or emulation layers -- people on Windows actually install the Wine DX9 libraries since they have better compatibility than the native MS ones. But this means zilch for Windows' ABI compatibility record (or lack thereof).

frozenport2 years ago

Windows has installed those MSVC runtimes via Windows Update for the last decade.

With Linux, every revision of GCC has its own GLIBCXX version, but distros don't keep those up to date. So you'll find that code built with even an old compiler (like GCC 10) isn't supported out of the box.

ctoth2 years ago

I read "old compiler" and thought you meant something like GCC 4.8.5, not something released in 2020!

matheusmoreira2 years ago

The Linux kernel ABIs are explicitly documented as stable. If they change and user space programs break, it's a bug in the kernel.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

LtWorf2 years ago

Someone should invent a command to change root… we should call it chroot!

simonh2 years ago

The article seems to document ways in which it isn't. I have no idea personally, are these just not really practical problems?

mort962 years ago

The article is talking about userland, not the kernel's ABI.

MayeulC2 years ago

Sounds like you want Flatpak, Docker or Snap :)

anon2912 years ago

Just use nix.

guardiangod2 years ago

I can run a Windows 95 app on Windows 10 and it has a reasonable chance of success.

Should Linux (userland) strive for that? Or does the Year of the Linux Desktop only cover things compiled in the last 10 years?

ivraatiems2 years ago

It's what the kernel strives for. They're remarkably consistent in their refrain of "we never break userspace."

I think it would be reasonable for glibc and similar to have similar goals, but I also don't run those projects and don't know what the competing interests are.

pantalaimon2 years ago

> I think it would be reasonable for glibc and similar to have similar goals

I don’t think userspace ever had this goal. The current consensus appears to be containers, as storage is cheap and maintaining backwards compatibility is expensive

lpghatguy2 years ago

Containers are not a great solution for programs that need graphics drivers (games) or quick startup times (command line tools).

I've been wrestling with glibc versioning issues lately and it has been incredibly painful. All of my projects are either games or CLI tools for games, which means that "just use a snap/flatpak/appimage" is not a solution.

SubjectToChange2 years ago

> Should Linux (userland) strive for that?

The linux "userland" includes thousands of independent projects. You'll need to be more specific.

> Or does the Year of the Linux Desktop only cover things compiled in the last 10 years?

If you want ABI compatibility then you'll have to pay, it's that simple. Expecting anything more is flat out unreasonable.

cbarrick2 years ago

> The linux "userland" includes thousands of independent projects. You'll need to be more specific.

I think it's pretty clear from the context.

The core GNU userland: glibc, coreutils, gcc, etc.

CommanderData2 years ago

Just try changing your hosts or nameservers across different versions of Ubuntu Server.

The fragmentation is such a mess even between 1.x major versions. Their own documentation is broken or non-existent.

Woodi2 years ago

Here is some game from '93. Compile it yourself (with some trivial changes).

https://github.com/DikuMUDOmnibus/ROM

Trivial !

But if you still have some objections, then let's wait ~27 years and then talk about games developed on Linux / *nix.

erk__2 years ago

Doesn't that miss the point of the above poster? This does not show that Linux has good binary compatibility, but that C is a very stable language. The question that should be asked, if I am not mistaken, is whether a binary compiled 27 years ago would still run fine on Linux today.

janef04212 years ago

Is it really a reasonable goal to want an operating system to run a 27 year old binary without any modification or compatibility tool? There does need to be some way to run such binaries, but doing that by making the kernel and all core ABIs stable over several decades would make evolving the operating system very difficult. I think it would be better to provide such compatibility via compatibility layers like wine and sandboxing in the style of flatpak.

Woodi2 years ago

I think it shows that compiling is the preferred way to go. So it's more like twisting around the point :)

But what about old, binary-only games? Same as with old movies you want to watch but Hollywood prefers not to show anymore... They are super stupid IMO, but maybe they have their reasons.

And the missing DT_HASH can be easily patched back in if someone wants. And if GNU keeps sabotaging like this, then it's time to move off GNU. Ah, right - no one wants to sponsor a libc fork for a few years... Maybe the article is right about binaries after all ;)

BugsJustFindMe2 years ago

YSK, this code will likely fail in weird ways on platforms where plain char defaults to unsigned, like ARM, because it makes the classic mistake of assuming that the getc return value fits in a char, even though getc returns int, not char. EOF is -1, and assigning it to a char on ARM turns it into 255, so you'll read past the end of some buffers and then crash.
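
A short sketch of the bug and the fix (the function names are made up for illustration):

    #include <stdio.h>

    /* Buggy: getc() returns an int so it can represent every byte value plus the
       out-of-band EOF (-1). Storing the result in a char loses that distinction:
       where plain char is unsigned (ARM, PowerPC), (char)EOF becomes 255 and the
       loop never sees EOF; where char is signed, a legitimate 0xFF byte is
       mistaken for EOF. */
    void copy_buggy(FILE *in, FILE *out) {
        char c;
        while ((c = getc(in)) != EOF)
            putc(c, out);
    }

    /* Fixed: keep the result in an int until EOF has been checked. */
    void copy_fixed(FILE *in, FILE *out) {
        int c;
        while ((c = getc(in)) != EOF)
            putc(c, out);
    }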

Woodi2 years ago

Maybe there will be some problems on weird platforms. But if the game is good, such details can be resolved. With bad games too ;) With source code, that is.

cosmiccatnap2 years ago

This is a long standing question and has nothing to do with Linux or windows. It's a design philosophy.

Yes, the Win32 ABI is very stable. It's also a very inflexible piece of code and it drags its 20-year-old context around with it. If you want to add something to it, you are going to work, and work hard, to ensure that your feature plays nicely with 20-year-old code, and if what you want to do is ambitious... say, refactor it to improve its performance... you are eternally fighting a large chunk of the codebase implementation that can't be changed.

Linux isn't about that and it never has been; it's about making the best monolithic kernel possible, with high-level Unix concepts that don't always have to have faithful implementations. The upside here is that you can build large and ambitious features that refactor large parts of how core components work if you like, but you might only be able to compile those features against a somewhat recent glibc.

This is a choice. You, the developer, can link whatever version you want. If you want broad glibc compatibility, then just use features that already existed 10 years ago and you'll get compatibility similar to Win32's. If not, you are free to explore new features and performance you don't have to implement or track yourself, provided you consider it a sensible requirement that users run a somewhat recent version of glibc.

The pros and cons are up to you to decide, but it's not as simple as saying that Windows is better because its focus is backwards compatibility. There is an ocean of context hidden behind that seemingly magical backwards support...

itvision2 years ago

A design philosophy of not being able to run old software?

A design philosophy of always having to update your system?

A design philosophy of being unable to distribute compiled software for all Linux distros?

Most Win32 applications from Windows 95 work just fine in Windows 11 in 2022. That's proper design.

karamanolev2 years ago

According to Wikipedia, "Win32 is the 32-bit application programming interface (API) for versions of Windows from 95 onwards.".

Also from there "The initial design and planning of Windows 95 can be traced back to around March 1992" and it was released in '95. So arguably, the design decisions are closer to 30 years old than 20 :)

jdsully2 years ago

The main structure is from Win16, although adding support for paging and process isolation was a pretty big improvement in Win32. IMO it's held up extremely well considering it's pushing 40 years old.

shp0ngle2 years ago

Yeah but as a consequence, games (closed source games, which means basically all of them) don’t even bother targeting Linux.

wmf2 years ago

I assume Flatpak fixes this by locking your app to a compatible version of glibc.

mananaysiempre2 years ago

Surprisingly, that seems correct—a Flatpak bundle includes a glibc; though that only leaves me with more questions:

- On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space. In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

- On the other hand, a number of system services on linux-gnu depend on loading host libraries. Even if we ignore NSS (or exile it into a separate server process as it should have been in the first place), that leaves accelerated graphics: whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism). These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

- Modern toolkits basically live on accelerated graphics. Flatpak was created to distribute graphical applications built on modern toolkits.

- ... Wait, what?

anon2912 years ago

There is no 'system' glibc. Linux doesn't care. The Linux kernel loads up the ELF interpreter specified in the ELF file based on the existing file namespace. If that ELF interpreter is the system one, then linux will likely remap it from existing page cache. If it's something else, linux will load it and then it will parse the remaining ELF sections. Linux kernel is incredibly stable ABI-wise. You can have any number of dynamic linkers happily co-existing on the machine. With Linux-based operating systems like NixOS, this is a normal day-to-day thing. The kernel doesn't care.

> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the system either.

No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space. User-space may use an older version of the API, but it will still work.

> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism)

OpenGL is even more straightforward because it is typically consumed as a dynamically loaded API, so as long as the symbols match, it's fairly straightforward to replace the system libGL.

mananaysiempre2 years ago

I know, I both run NixOS and have made syscalls from assembly :) Sorry, slipped a bit in my phrasing. In the argument above, instead of “the system glibc” read “the glibc targeted by the compiler used for the libGL that corresponds to the graphics driver loaded into the running kernel”. (Unironically, the whole point of the list above was to avoid this sort of monster, but it seems I haven’t managed it.)

kmeisthax2 years ago

> No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space.

[laughs in Nvidia]

anon2912 years ago

NVIDIA is not an open-source driver [1], and if you look in your dmesg logs, your kernel will complain about how it's tainted. That doesn't change the truth value about what I said about 'open-source' drivers.

[1] I think this may have changed very very recently.

eklitzke2 years ago

This is all correct and I'd also add that ld.so doesn't need to have any special knowledge of glibc (or the kernel) in the first place. From the POV of ld.so, glibc is just another regular ELF shared object that uses the same features as everything else. There's nothing hard-coded in ld.so that loads libc.so.6 differently from anything else. And the only thing ld.so needs to know about the kernel is how to make a handful of system calls to open files and mmap things, and those system calls have existed in Linux/Unix for an eternity.

mananaysiempre2 years ago

Needs to have? In an ideal world, probably not. Has and uses? Definitely. For one thing, they need to agree about userland ABI particulars like the arrangement of thread-local storage and so on, which have not stayed still since the System V days; but most importantly, as a practical matter, ld.so lives in the same source tree as glibc, exports unversioned symbols marked GLIBC_PRIVATE to it[1], and the contract between the two has always been considered private and unstable.

[1] https://sourceware.org/git/?p=glibc.git;a=blob;f=elf/Version...

vetinari2 years ago

> On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space.

Yes

> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.

But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.

> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism).

In Wayland, your app hands the compositor a finished bitmap to display. How you got that bitmap rendered is up to you.

The optimization is that you send a dma-buf handle instead of a bitmap. This is a kernel construct, not a userspace driver one. This also allows a cross-API app/compositor (i.e. a Vulkan compositor and an OpenGL app, or vice versa). It also means you can use a different version of the userspace driver in the compositor than inside the container, while they share the kernel driver.

> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

Yes and no; Intel and AMD user-space drivers have to work with a variety of kernel versions, so they cannot be too tightly coupled. The Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing of the kernel part, this will also change.

> but the previous point means that you can’t load them from the host system either.

You actually can -- bind mount that single binary into the container. You will use the binary from the host, but load it using the ld.so from inside the container.

mananaysiempre2 years ago

>> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

> Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.

> But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.

I meant that the glibcs are potentially ABI-incompatible both ways, not just that they’ll fight if you try to load both of them at once. Specifically, if the bundled (thus loaded) glibc is old, 2.U, and you try to load a host library wants a new frobnicate@GLIBC_2_V, V > U, you lose, right? I just don’t see any way around it.

>> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

> Yes and no; Intel and AMD user-space drivers have to work with a variety of kernel versions, so they cannot be too tightly coupled. The Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing of the kernel part, this will also change.

My impression of out-of-tree accelerated graphics drivers is mainly from fighting fglrx for the Radeon 9600 circa 2008, so it's extremely out of date. Intel is in-tree, so I'm willing to believe it has some degree of ABI stability, at least if an i915 blog post[1] is to be believed. Apparently AMD is also in-tree these days. Nvidia is binary-only, so the smart thing for them would probably be to build against an ancient glibc so that it runs on everything.

But suppose the year is 2025, and a shiny new GPU architecture has come out, so groundbreaking no driver today can even lay down command buffers for it. The vendor is kind enough to provide an open-source driver that gets into every distro, and the userspace portion compiled against a distro-current Glibc ends up referencing an AVX-512 memcpy@GLIBC_3000 (or something).

I load a flatpak using Gtk3 GtkGLArea from 2015.

What happens?

[1] https://blog.ffwll.ch/2013/11/botching-up-ioctls.html

vetinari2 years ago

> I meant that the glibcs are potentially ABI-incompatible both ways, not just that they’ll fight if you try to load both of them at once. Specifically, if the bundled (thus loaded) glibc is old, 2.U, and you try to load a host library wants a new frobnicate@GLIBC_2_V, V > U, you lose, right? I just don’t see any way around it.

Yup. So the answer is to minimize the number of loaded host libraries, ideally to zero. If that cannot be done, the builder of that host library will have to make sure it is backward compatible.

> But suppose the year is 2025, and a shiny new GPU architecture has come out, so groundbreaking no driver today can even lay down command buffers for it. The vendor is kind enough to provide an open-source driver that gets into every distro, and the userspace portion compiled against a distro-current Glibc ends up referencing an AVX-512 memcpy@GLIBC_3000 (or something).

> I load a flatpak using Gtk3 GtkGLArea from 2015.

Ideally, that driver would be built as an extension of the runtime your flatpak uses. I.e. everything based on org.freedesktop.Platform (or derivatives like org.gnome.Platform and org.kde.Platform) has extensions maintained with appropriate Mesa and Nvidia user-space drivers.

So a new open-source driver, if it is not part of Mesa, or if you are not using the above-mentioned runtimes, would need to be built against the 2015 runtime. The nice thing is that the platforms have corresponding SDKs, so targeting a specific/old version is not a problem.

captainmuon2 years ago

Does graphics on Linux work by loading the driver into your process? I assumed it works via writing a protocol to shared memory in case of Wayland, or over a socket (or some byzantine shared memory stuff that is only defined in the Xorg source) in case of X11.

From my experience, if you have the kernel headers and have all the required options compiled into your kernel, then you can go really far back and build a modern glibc and Gtk+ Stack, and use a modern application on an old system. If you do some tricks with Rpath, everything is self-contained. I think it should work the other way around, with old apps on a new kernel + display server, as well.

bnieuwen2 years ago

So there are two parts to this: the app producing the image in the application window and then the windowing system combining multiple windows together to form the final image you see on screen.

The former gets done in process (using e.g. GL/vulkan) and then that final image gets passed onto the windowing system which is a separate process and could run outside the container.

As an aside, with accelerated graphics you mostly pass a file descriptor to the GPU memory containing the image, rather than mucking around with traditional shared memory.

wmf2 years ago

> Does graphics on Linux work by loading the driver into your process?

Yes, it's called Direct Rendering (DRI) and it allows apps to drive GPUs with as little overhead as possible. The output of the GPU goes into shared memory so that the compositor can see it.

AshamedCaptain2 years ago

Static linking -- always the ready-to-go response for anything ABI-related. But does it really help? What use is a statically linked glibc+Xlib when your desktop no longer sets resolv.conf in the usual place and no longer speaks the X11 protocol (perhaps in the name of security)?

Beltalowda2 years ago

I guess that kind of proves the point that there is no "stable", well, anything on Linux. Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.

/etc/sysctl.conf is a good example; on some systems it just works, on some you need to enable a systemd service thingy for it, but on others the systemd thingy doesn't read /etc/sysctl.conf at all and only reads /etc/sysctl.d.

So a simple "if you're running Linux, edit /etc/sysctl.conf to make these changes persist" has now become a much more complicated story. Writing a script to work on all Linux distros is much harder than it needs to be.

vetinari2 years ago

> Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.

Apps were not supposed to open /etc/resolv.conf by themselves. If they did, they are broken. Just because the file is available, transparently, doesn't mean it is not a part of the internal implementation.

Even the Go runtime checks nsswitch.conf for a known-good configuration before using resolv.conf directly instead of thunking to glibc.
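
For completeness, the supported way for an application to resolve names is to go through the resolver API and let libc/NSS decide where the answers come from. A minimal sketch (the hostname and port are placeholders):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct addrinfo hints, *res, *p;
        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;     /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        /* Honors nsswitch.conf, resolv.conf, systemd-resolved, and so on. */
        int err = getaddrinfo("example.org", "443", &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            char host[256];
            if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof host,
                            NULL, 0, NI_NUMERICHOST) == 0)
                printf("%s\n", host);
        }
        freeaddrinfo(res);
        return 0;
    }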

gabereiser2 years ago

Even statically linked, the problems you just described are valid. The issue is that X11 isn't holding up and no one wants to change. Wayland was that promise of change, but it has taken 15+ years to develop (and is still developing).

The Linux desktop is a distro concern now, not an ecosystem concern. It long left the realm of a Linux concern when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked for it.

Deepin desktop and elementary are at the top of my list for elegance and ease of use. Apps and games need a solid ABI, and this back and forth between GNOME and KDE doesn't help.

With so many different WMs and desktop environments, X11 is still the only method of getting a window with an OpenGL context in any kind of standard way. Wayland, X12, whatever it is, we need a universal ABI for window dressing for Linux to be taken seriously on the desktop.

cogman102 years ago

With the rise of WSL, I have a real hard time justifying wanting a linux desktop.

I've got a VM with a full linux distro at my fingertips. Virtualization has gotten more than fast enough. And now, with windows 11, I get an X server integrated with my WSL instance so even if I WANTED a linux app, I can launch it just like I would if I were using linux as my host.

It does suck that the WSL1 notion of "not a vm" didn't take off, but at the same time, when the VM looks and behaves like a regular bash terminal, what more could you realistically want?

pengaru2 years ago

> Linux desktop is a distro concern now. Not an ecosystem concern. It’s long left the realm of an linux concern when MacOS went free (with paid hardware of course) and Windows was giving away free windows 10 licenses to anyone who asked for it.

You seem fixated on the Free Beer misinterpretation of Free Software.

gabereiser2 years ago

No, but it sounds that way, I guess. It's more about where the developers' focus lies, en masse. Few developers are interested in the desktop for Linux because they are supported on Windows or Mac, and during the time period I mentioned, it didn't cost them anything monetary.

There were indications that Windows and Linux may converge. Instead we got WSL2. A lot of the time we decide to develop something because of the pain of using the other thing. Sometimes we develop something as a "me too". Sometimes we develop something that is just better. Sometimes, it's worse.

My point is the fight for a foothold on the Linux desktop looked promising for a bit. SteamOS looked like it was gaining, Steam…

The reality is there are complexities at that level that people don’t want to deal with and we all have opinions on how it should work, should look, and should be called.

Red Hat (former RH’er myself) should take this on and really standardize something outside of core and server land. And no, it should not be Gnome.

layer82 years ago

…also locking in any security vulnerabilities.

rtontic2 years ago

I mean, we are talking about videogames here.

formerly_proven2 years ago

Multiplayer is a thing, where both crashing servers and attacking other clients (even in non-P2P titles) are not that uncommon. Many titles don't permit community servers any more, of course.

maeln2 years ago

Wasn't it Elden Ring or another FromSoftware game that had an RCE? This article talks about it: https://wccftech.com/dark-souls-rce-exploit-fixed-elden-ring...

A lot of games have multiplayer functionality these days. That makes them a potential target for RCE and related vulnerabilities. Granted, if you don't play video games as root, the impact should be limited, but it is still something to be aware of.

Beltalowda2 years ago

Skimming over the details, that seems like a bunch of bugs in the game code. I don't think dynamic linking would help there.

spicybright2 years ago

If you're never going to update your program and don't care about another Heartbleed affecting your product and users, then sure.

lanstin2 years ago

Would you rather run a statically linked Go or Rust binary with a native TLS/crypto implementation, or a dynamically linked OpenSSL that is easier to upgrade? (Honest question)

sylware2 years ago

The glibc libs should be ELF-clean. Namely, a plain ELF64 dynamic executable should be able to dlopen()/dynamically load any glibc lib. It may be fixed and possible in the latest glibc.

The tricky part is the SysV ABI for TLS-ized system variables, via __tls_get_addr() - for instance errno. It seems the plain ELF64 dynamic executable would have to parse the ELF headers of the dynamically loaded shared libs in order to get the "offsets" of the variables. Never actually got into this for good, though.

And in the game realm, you have C++ games (I have an extremely negative opinion of this language), and the static libstdc++ from GCC does not dlopen/dynamically load what it requires from glibc; it seems even worse, namely it depends on glibc-internal symbols.

sylware2 years ago

Then, if I got it right, for TLS-ized variables dlsym should do the trick. Namely, dlsym will return the address of a variable for the calling thread. Then this pointer can be cached the way the thread wants. On x86_64, one can "optimize" the dlsym calls by reusing the same address for all threads; namely, one call is enough.

Now the real pain is this static libstdc++ not dlopen-ing anything, or worse, expecting internal glibc symbols (C++...).
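
A small sketch of the dlopen/dlsym route for errno specifically, assuming a glibc host where errno is reached through the exported __errno_location() function (link with -ldl on glibc older than 2.34):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* On a glibc system this returns a handle to the already-loaded libc. */
        void *libc = dlopen("libc.so.6", RTLD_NOW);
        if (!libc) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* The errno macro expands to *__errno_location(): a per-thread address
           obtained through a plain function call, so no TLS relocations needed. */
        int *(*errno_loc)(void) = (int *(*)(void))dlsym(libc, "__errno_location");
        if (!errno_loc) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        printf("this thread's errno lives at %p (current value %d)\n",
               (void *)errno_loc(), *errno_loc());
        dlclose(libc);
        return 0;
    }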

blibble2 years ago

glibc != Linux

a better analogy would be targeting the latest version of MSVCRT that happens to be installed on your system (instead of bundling it)

... also which mostly works but sometimes breaks

int_19h2 years ago

Windows has switched from app-redistributed MSVC runtimes to OS-distributed "universal CRT" since Windows 10 (2015). Unlike MSVCRT, uCRT is ABI-stable.

itvision2 years ago

Nearly 99% of Linux software is linked to glibc.

What the f are you talking about?

This is not MSVCRT by a long shot.

justin662 years ago

I was going to make a joke about a.out support (and all the crazy stuff that enables, like old SCO binaries) but apparently a.out was removed in May as an option in the Linux kernel.

https://lwn.net/Articles/895969/

At least we still have WINE.

phendrenad22 years ago

One way to achieve similar results on Linux might be for the Linux kernel team to start taking control of core libraries like x11 and wayland, and to extend the same "don't break userspace" philosophy to them also. That isn't going to happen, but I can dream!

rodgerd2 years ago

There was a period where a Linux libc was maintained, but it was long-ago deprecated in favour of glibc. Perhaps that was a mistake.

yjftsjthsd-h2 years ago

My understanding is that the Linux devs like only having "Linux" only be the kernel; if they wanted to run things BSD-style (whole base system developed as a single unit) I assume they would have done that by now (it's been almost 30 years).

matheusmoreira2 years ago

I'm not sure about taking over the entire GUI ecosystem but I certainly do want more functionality in the kernel instead of user space precisely because of how stable and universal the kernel is. I want system calls, not C libraries.

kazinator2 years ago

"DT_HASH is not part of the ABI" is like saying "DNS is not part of the Internet".

adolph2 years ago

Maybe a counterpoint is "x86-64 Linux ABI Makes a Pretty Good Lingua Franca" [0] from αcτµαlly pδrταblε εxεcµταblε of Aug 2022.

0. https://justine.lol/ape.html

cowtools2 years ago

I used to disagree with this browser-as-OS mentality, but seeing as it's sandboxed and supports WebGL, Wasm, WebRTC, etc., I find it pretty convenient (if I'm forced to run Zoom, for example, I can just keep it in the browser). Just as long as website/application vendors test their stuff across different browsers.

littlestymaar2 years ago

At this point I'm pretty convinced that no one at Microsoft ever did a better job of keeping people on Windows than the maintainers of glibc are doing …

CoolCold2 years ago

Well, the WSL team did a lot, I think (including the new Terminal). WSL, WSL2, WSLg, WSA - I almost never use full Linux VMs now; my pretty simple needs are covered by it.

beebeepka2 years ago

Not news. In fact, Wine/Proton really is the preferred way of doing things.

Valve saw the light years ago, but they weren't the first. Even Carmack was saying it before the whole gaming-on-Linux thing became viable.

userbinator2 years ago

The change that caused the break would be equivalent to the PE file format changing in an incompatible way on Windows, to give an idea of how severe it is.

titzer2 years ago

Dynamically-linked userspace C is a smouldering trash heap of bad decisions.

You'd be better off distributing anything--anything--else than dynamically-linked binaries. Jar files, statically-linked binaries, C source code, Python source code, Brainfuck code ffs...

The "./configure and recompile from source" model of Linux is just too deeply entrenched. Pity.

myself2482 years ago

It's like how Excel is more stable than Windows itself: you can open a spreadsheet from the Win16 days and it'll just work....

mananaysiempre2 years ago

Personal experience: Office 2021 and Office 97 do not paginate a DOC file created (by Microsoft employees) in Office 97 the same way, so the table of contents ends up different.

croes2 years ago

Nope, had multiple that didn't even work after the latest update

pabs32 years ago

Heh, I've just been debugging an issue that is triggered by the upgrade to glibc 2.29 (Debian bullseye era).

https://github.com/pst-format/libpst/issues/7

bartwe2 years ago

As a gamedev that tried shipping to Linux: we really need some standardized minimal image to target, with an ancient glibc and such, and some guarantee that if it runs on the image, it runs on future Linux versions.

sph2 years ago

Just target Flatpak. You get a standardised runtime, and glibc is included in the container. If it works on your machine, it'll work on my machine, since the only difference will be the kernel, and Linus is pretty adamant about retaining compatibility at the syscall level.

hu32 years ago

Sidenote: I remember when Warcraft 3 ran better in Wine+Debian than on Windows. An Athlon II X2 CPU and an Nvidia GeForce 6600 GT with a whopping 256MB of VRAM. That was one hot machine. Poor coolers.

benibela2 years ago

I tried to run WoW and StarCraft 2 with Wine, and they did not install/run.

knorker2 years ago

Yeah Linus's "we don't break user space" is a joke.

Great, the kernel syscall API is stable. Who cares, if you can't run a 7-year-old binary because everything from the vDSO to libc to libstdc++ to ld-linux.so is incompatible.

Good luck. No, it's not just a matter of LD_LIBRARY_PATH and shipping the binary with a copy. That only helps with third-party libs, and only if the vDSO and ld-linux are compatible.

My 28 years of experience running Linux is that it's unbroken at the API (source code) level, but absolutely not at the ABI level.

MichaelCollins2 years ago

Linus does provide a stable ABI with Linux, it's GNU who drops the ball and doesn't. You're criticizing Linus for something he has nothing to do with. What's the point in that?

knorker2 years ago

Linus limited his scope to something that doesn't matter for users.

I think this is a valid criticism.

It's admirable to do the dishes, but the house is also on fire, so nobody will be able to enjoy the clean plates, so what's then even the point of doing the dishes?

In fact, in this analogy he could have saved the kitten instead of done the dishes.

Err, back from analogy land: ABI stability makes it harder to make things better by improving and replacing APIs. This is expected. But here we are in the worst of both worlds. Thanks to the kernel we are slowed down in improvements, and thanks to kinda-userspace (i.e. the vDSO & ld-linux) and userspace infra (libc, libstdc++, libm) we don't have ABI compatibility either.

So it's lose-lose.

phendrenad22 years ago

Linus chose to only care about the kernel. So there's possibly some fault there.

sph2 years ago

So it's Linus' fault that he didn't just write the kernel but should also have decided to be responsible for the entire userspace as well? What kind of argument is that?

phendrenad22 years ago

By that logic nothing is anyone's fault. "I did exactly what I meant to do, nothing more, nothing less. Therefore you cannot say anything negative about it. Have a nice day." I feel like there's a Peanuts comic like this: "No matter what happens any place or any time in the world, this absolves me from all blame!"

zbird2 years ago

He should have run for president.

ajuc2 years ago

I wrote a game for my master's thesis in 2008. I wrote it in C++ and targeted Linux. Recently I tried to run it, and not only did the binaries not work (that's a given), but even making it compile was a struggle, because I used some GUI libraries that were abandoned and there was no version working with a modern libc. It was easier to port the game to Windows than to make it compile on Linux again...

CameronNemo2 years ago

Proprietary devs should use static linking (with musl) or chroots/containers. What makes the author think they are the target audience of glibc?

davikr2 years ago

Thanks, but I think I'll stick with Windows: their target audience is famously everyone and for an unlimited time.

calvin_2 years ago

Have fun with libGL!

CameronNemo2 years ago

I hadn't thought of that... Flatpaks let you use specific mesa versions, though.

z3t42 years ago

Linux has a more stable Windows ABI than Windows itself. If a game stops working on Windows, it will likely still work with Wine on Linux.

evasb2 years ago

In my opinion, Valve plus the distros should fork glibc and do a glibc distribution that focuses on absolute stability.

Didn't the glibc devs say that distros have the freedom to choose what to maintain so as not to break their applications? This would just be a collaboration between the distros to maintain that stability.

zzo38computer2 years ago

I suppose that Win32 can be helpful if you want to make programs that run on both Windows and on Linux (and also ReactOS, I suppose), but might not work as well for programs with Linux specific capabilities.

(Also, I have problems installing Wine, due to package manager conflicts.)

There are other possibilities, such as .NET (although some comments in here say it is less stable, some say it works), which can also be used on Windows and on Linux. There is also HTML, which has too many of its own problems, and I do not know how stable HTML really is, either. And then, also Java. Or, you can write a program for DOS, NES/Famicom, or something else, and it can be emulated on many systems. (A program written for NES/Famicom might well run better on many systems than native code does, especially if you do not do something too tricky in the code (in which case some implementations might not be compatible).) Of course, the different ways have different advantages and disadvantages in compatibility, efficiency, capability, and other features.

IYasha2 years ago

I laughed so hard... with tears! But, to be fair, Unreal 2004 still works almost perfectly on a not-too-obsolete Ubuntu. Or did I have to do some glibc trickery?.. Can't remember for sure.

ece2 years ago

If anything, not breaking things makes you more careful about what you put in. I feel like that's not a bad rule to go by.

skyde2 years ago

What about the web browser? Isn't that also a stable ABI? Or is it not a "binary interface" because it only supports JavaScript?

What about WebAssembly?

0x4572 years ago

If web browsers had any kind of stable interface we wouldn't have https://caniuse.com/, polyfills, vendor CSS prefixes and the rest of the crutches. WASM isn't binary. But that's all irrelevant when we talk about ABI.

ABI is specifically binary interface between binary modules. For example: my hello_world binary and glibc or glibc and linux kernel or some binary and libsqlite3.

ddevault2 years ago

The kernel <> userspace API is stable, famously so. Dynamic linking to glibc is a terrible idea, statically link your binaries against musl and they'll still work in 100 years.

sylware2 years ago

game binaries need to dynamically load system libs. A statically linked binary would have to include a full ELF loader.

nibbleshifter2 years ago

Trying to statically link with glibc throws specific warnings that certain calls aren't portable.

With musl? No such problem.

Fuck, even uClibc is more portable than glibc, and it's a dead project AFAIK.

SubjectToChange2 years ago

> With musl? No such problem.

Does musl even implement the functionality glibc was warning about?

jwilk2 years ago

OK, glibc ABI stability may not be perfect, but is there any evidence that Wine is any better? That sounds implausible to me.

modeless2 years ago

The difference is if Wine breaks an application that works on Windows, it's considered a bug that should be fixed, regardless of why.

cosmin8002 years ago

Oh no, not again: kids working for big tech constantly, but randomly, deprecating, removing and breaking APIs/ABIs/features in the kernel/libraries/everywhere. I honestly believe that all relationships between big tech companies and open source are toxic and follow the Microsoft principle of embrace, extend, and extinguish.

Kukumber2 years ago

It's not, and it is super sad to hear people advocating for such a horrible idea.

Linux being infested by Windows is the beginning of its death to me. What a tragedy.

A well-deserved death after the systemd drama anyway.

phendrenad22 years ago

Why is it a horrible idea?

dvfjsdhgfv2 years ago

This is by design, and everybody should be aware of that. I don't know about glibc, but as far as the kernel is concerned, Linus has never guaranteed ABI stability. API, on the other hand, is extremely stable, and there are good reasons for that.

In Windows, software is normally distributed in a binary form, so having ABI compatibility is a must.

mort962 years ago

Uh, the kernel ABI is extremely stable. You could take a binary statically compiled in the 90s and run it on the latest release of the kernel. "Don't break userspace" is Linus's whole schtick, and he's talking about the ABI when he says that.

This is about the ABIs of userspace "system-level" libraries, glibc in particular.

josephcsible2 years ago

The kernel absolutely does guarantee a stable userspace ABI. This post is entirely about other userspace libraries.

wmf2 years ago

The Linux kernel maintains userspace API/ABI compatibility forever but inside the kernel (e.g. modules) there is no stable API/ABI.