I used to work at Adobe on the infrastructure powering big applications like Photoshop and Acrobat. One of our worst headaches was making these really powerful codebases work on desktop, web, mobile, and the cloud without having to completely rewrite them. For example, to get Lightroom and Photoshop working on the web we took a winding path through JavaScript, Google’s PNaCl, asm.js, and finally WebAssembly, all while having to rethink our GPU architecture around these devices. We even had to get single-threaded builds working and rebuild the UI around Web Components. Today the web builds work great, but it was a decade-long journey to get there!
The graphics stack continues to be one of the biggest bottlenecks in portability. One day I realized that WebAssembly (Wasm) actually held the solution to the madness. It’s runnable anywhere, embeddable into anything, and performant enough for real-time graphics. So I quit my job and dove into the adventure of creating a portable, embeddable WASM-based graphics framework from the ground up: high-level enough for app developers to easily make whatever graphics they want, and low-level enough to take full advantage of the GPU and everything else needed for a high-performance application.
I call it Renderlet to emphasize the embeddable aspect — you can make self-contained graphics modules that do just what you want, connect them together, and make them run on anything or in anything with trivial interop.
If you think of how Unity made it easy for devs to build cross-platform games, the idea is to do the same thing for all visual applications.
Somewhere along the way I got into YC as a solo founder (!) but mostly I’ve been heads-down building this thing for the last 6 months. It’s not quite ready for an open alpha release, but it’s close—close enough that I’m ready to write about it, show it off, and start getting feedback. This is the thing I dreamed of as an application developer, and I want to know what you think!
When Rive open-sourced their 2D vector engine and made a splash on HN a couple weeks ago (https://news.ycombinator.com/item?id=39766893), I was intrigued. Rive’s renderer is built as a higher-level 2D API similar to SVG, whereas the Wander renderer (the open-source runtime part of Renderlet) exposes a lower-level 3D API over the GPU. Could Renderlet use its GPU backend to run the Rive Renderer library, enabling any 3D app to have a 2D vector backend? Yes it can - I implemented it!
You can see it working here: https://vimeo.com/929416955 and there’s a deep technical dive here: https://github.com/renderlet/wander/wiki/Using-renderlet-wit.... The code for my runtime Wasm Renderer (a.k.a. Wander) is here: https://github.com/renderlet/wander.
I’ll come back and do a proper Show HN or Launch HN when the compiler is ready for anyone to use and I have the integration working on all platforms, but I hope this is interesting enough to take a look at now. I want to hear what you think of this!
A talk by OP that's a fantastic intro, with two successful demos across two platforms :)
https://www.youtube.com/watch?v=CkV-nWFXvbs
Disclaimer: I currently work at a company in the WebAssembly space that was involved with this conference
Skip the PAL step and just go right into SetupRuntime with the arguments - non-gfx devs don't know about these things, and adding extra steps to your API is unnecessary since PAL isn't used anywhere else. Other than that, I would highly recommend getting on the WebGPU train using wgpu-native or Dawn. (IPal should be a member of IRuntime and is ripe for removal once there's a WebGPU context.)
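A minimal sketch of the flattening being suggested (illustrative names and signatures only - this is not wander's actual API surface):

```typescript
// Illustrative stubs only - not wander's real API.
type WindowHandle = number;

interface IRuntime {
  frame(): string;
}

// After the suggested change: the platform-abstraction (PAL) setup is an
// internal detail of setupRuntime, so callers pass plain arguments instead
// of first constructing an IPal themselves.
function setupRuntime(win: WindowHandle, backend: "webgpu" | "opengl"): IRuntime {
  // internally: const pal = createPal(win, backend); // hidden from callers
  return { frame: () => `rendering to window ${win} via ${backend}` };
}

const rt = setupRuntime(1, "webgpu");
console.log(rt.frame()); // rendering to window 1 via webgpu
```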
Keep it up! Bookmarked.
Great suggestion, appreciate it. wgpu is coming!
Awesome project. What are you planning for text and font support? Some graphics engines don’t support all the ways you might want to display text. Will we be able to load OTF or WOFF2 files and display arbitrary strings? :-)
Thanks! I haven't looked deeply into fonts yet, but I've always been partial to HarfBuzz for shaping, so I'll probably build on top of that. It also has an experimental Wasm shaper, which certainly served as a bit of inspiration for the design of this.
There is a well-maintained Wasm build of HarfBuzz: <https://github.com/harfbuzz/harfbuzzjs>, with support for both OpenType and AAT shapers, which should be enough - but yes, you can also provide your own shaper implementation in Wasm.
We're successfully using Wasm harfbuzz to render text in a web-based design tool with relatively high usage so there should be no issues integrating it :)
Nice! Looking forward to your alpha release. (And eventual HarfBuzz integration.)
Second that - while there are a lot of graphics libs out there, text rendering support always seems to be lagging.
Decent support there would be a differentiator, in my view.
We've been doing work in Godot Engine trying to get wasm working.
How did you overcome the SharedArrayBuffer accessibility problem on Safari versus access to ad networks, which is important for online games?
We ended up calling it single-threaded vs. regular builds.
Hope to help make sure there's a diverse set of rendering kernels for everyone.
Edited: link to our work on making portable 3D graphics on the web with an editor: https://editor.godotengine.org/releases/latest/
We also collaborate with https://github.com/thorvg/thorvg.
We were impressed by your work, https://github.com/rive-app and https://graphite.rs/
Thanks! If perhaps we spoke at GDC (taking a guess given the context), it was nice meeting you! (Keavon from Graphite here.)
Hey Keavon, I'm also a big fan, been lurking in your Discord for years! The design of the render tree of Wasm nodes certainly took inspiration from Graphite's node system.
Big fan of Godot! I think it has done wonders to make graphics more accessible.
From an Adobe perspective - it doesn't. If you go to photoshop.adobe.com in Safari, you will see the answer. Things can work in a single-threaded build, but that is not production code.
I can't speak for the Safari team, but I do see this getting traction soon with the current priorities for Wasm. Seems like now the most common answer is just to use Chrome.
Whew, a game engine running on WASM + WebGPU would finally be what it takes to power browser-based AAA titles. We shouldn't have to download executables via walled-garden ecosystems that take a huge chunk of devs' revenues.
Personally, I don't see WASM/WebGPU changing anything when it comes to gaming as an industry. 3D visualizations and interactive websites? Yeah, definitely a nice improvement over WebGL 2, if years late. The OP's experience with Adobe is a great example of this.
WebGPU is pretty far behind what AAA games were using even 6 years ago. There's extra overhead and security in the WebGPU spec that AAA games do not want. Browsers do not lend themselves to downloading 300 GB of assets.
Additionally, indie devs aren't using Steam for the technical capabilities. It's purely about marketshare. Video games are a highly saturated market. The users are all on Steam, getting their recommendations from Steam, and buying games in Steam sales. Hence all the indie developers publish to Steam. I don't see a web browser being appealing as a platform, because there's no way for developers to advertise to users.
That's also only indie games. AAA games use their own launchers, because they don't _need_ the discoverability from being on Steam. So they don't, and avoid the fees. If anything users _want_ the Steam monopoly, because they like the platform, and hate the walled garden launchers from AAA companies.
EDIT: As a concrete example of the type of problems WASM games face, see this issue we discovered (you can't unload memory after you've loaded it, meaning you can never save memory by dropping the asset data after uploading assets to the GPU, unless you load your assets in a very specific, otherwise suboptimal sequence): https://github.com/bevyengine/bevy/issues/12057#issuecomment...
(I work on high end rendering features for the Bevy game engine https://bevyengine.org, and have extensive experience with WebGPU)
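The core limitation behind that class of issue can be shown in a few lines (assuming a JavaScript/TypeScript host such as Node.js): a Wasm linear memory can grow, but the spec provides no way to shrink it, so pages used to stage asset bytes stay committed after the GPU upload.

```typescript
// A Wasm page is 64 KiB. Memory can only grow, never shrink.
const mem = new WebAssembly.Memory({ initial: 1 });
console.log(mem.buffer.byteLength); // 65536 (1 page)

// Suppose the guest grows memory to hold decoded asset data...
mem.grow(15);
console.log(mem.buffer.byteLength); // 1048576 (16 pages)

// ...after uploading those assets to the GPU, there is no mem.shrink():
// the pages stay committed until the whole instance is torn down.
console.log("shrink" in mem); // false
```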
Lots of interesting points in there, and working on Bevy I'm sure you have much more extensive WebGPU expertise than me.
I agree that the feature set around WebGPU is constrained and becoming outdated tech compared to native platforms. It shouldn't have taken this long just to get compute shaders into a browser, but here we are. The lack of programmable mesh pipelines is a barrier for a lot of games, and I know that's just the beginning.
For memory, architecturally, that's why I'm treating wander as a tree of nodes, each containing Wasm functions: everything gets its own stack, and there's a strategy for managing Store sizes in wasmtime, since deleting a Store is the only way to free memory (versus a single application compiled to Wasm with one stack/heap/etc.). It's more of a data-driven visualization framework than a full engine like Bevy, which I still think is one of the most elegant ways to build browser-based games and 3D environments.
It should be noted that the reason we don't have compute shaders on WebGL was Chrome team dropping the ball on them.
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
https://www.khronos.org/webgl/public-mailing-list/public_web...
Consider also the dramatic ... ahem ... success of the attempt to launch zero-day test versions of games via, essentially, VNC-via-Java-in-the-browser.
AFAICT (I was peripherally involved with one of the companies that did this work), this really went nowhere, even though it offered "play this new game from any Java-equipped browser".
I fully agree with you - hence why most game studios on the web would rather stream from hardware where those GPU capabilities are fully available than work within constrained browser APIs.
WebGL and WebGPU are mostly fine for visualization and ecommerce, and that is about it.
Ah, and shadertoy like demos as well, probably their biggest use case.
> The users are all on Steam, getting their recommendations from Steam, and buying games in Steam sales. Hence all the indie developers publish to Steam.
Which is why Steam can charge 30%. Devs yell at Apple because they can say Apple is overcharging because of its "monopoly"; they can't blame Steam's cut on a monopoly.
With Steam, devs recognize that retailers get paid for shelf space, both as a percentage of 'retail price' the buyer pays above wholesale, and as literal payments for shelf space, inclusion in weekly mailings, posters on the windows, and more.
That these models worked like this long before digital distribution, and still work like this on platforms with no technical barrier to creating competing stores, gets ignored.
> Apple is overcharging because of its "monopoly", they can't blame the monopoly on Steam.
But that's the core of the issue. In one case (Steam), developers pay 30% because they estimate the services they're getting from Steam are worth it; in Apple's case, devs pay because they have to.
The problem is not the business model or the % cut Apple takes; the problem is that the business relies on monopolistic behavior. The solution would be simple: decouple iOS the platform from the App Store the service. If the App Store is really worth a 30% cut, the market would reconverge to that price.
> With Steam, devs recognize that retailers get paid for shelf space, both as a percentage of 'retail price' the buyer pays above wholesale, and as literal payments for shelf space, inclusion in weekly mailings, posters on the windows, and more.
> That these models worked like this long before digital distribution, and still work like this on platforms with no technical barrier to creating competing stores, gets ignored.
I would argue that digital distribution platforms are fundamentally different from brick-and-mortar retailers.
For one, the marginal cost of an app on the store vs. space on the shelves is different. My understanding is that what actually drives the cost of shelf space is competition between product manufacturers, and the price setting is closer to an auction than a set price. Nobody would have an issue if all Apple was doing was selling promotion/ad spots on the App Store.
Also, Apple's share of digital distribution is much larger than any single retail chain's in the US, giving them extreme pricing power.
taps the sign
It's Not The 30% Fee That Makes People Angry, Rather The Lack Of Competing Alternatives Holding Said Fee Accountable
Can't https://bevyengine.org/ do this?
AFAIK https://wgpu.rs/ makes this possible with Rust.
---
But this is very different than what was demonstrated in the vimeo video.
Not all that different. See these WGPU demos.[1]
As someone who's been using the Rend3/WGPU/Vulkan stack for over three years, I'd like to see some of these renderer projects ship something close to a finished product. We have too many half-finished back ends. I encourage people who want to write engines to get behind one of the existing projects and push.
That's exactly it. With renderlet, the goal is to compile the "frontend" code that's driving the rendering pipeline to WebAssembly, and provide a runtime that embeds that in any app, with the host app providing any configuration necessary to connect renderlet modules and use its canvas.
On the "backend", we will switch fully to wgpu as we retool around wasi-webgpu. I explicitly don't want to rebuild a project like wgpu, and everybody should commit upstream to that - we will likely have stuff to upstream as well.
We already have that ability, both Unity and Unreal can run in the browser (though it looks like Unreal moved their HTML5 stuff into a public plugin). I think Godot also works if you don't use C# and stick to GDScript, or use Godot 3.x instead of the current 4.x.
The problem isn't the tech to run the game, it's the marketplace - how do you actually sell the games without losing the huge customer base that buys through Steam and platform-specific stores? If you're popular enough you probably still get a lot of customers, but I doubt it's anywhere near what Steam does for you.
Oh, piracy and anti-cheat are also a problem, because you just can't have a AAA game without Denuvo and kernel backdoors anymore (greetings to the Apex Legends players out there!).
There are probably still a few issues that would have to be solved on the game engine side, but I'm willing to say that the game engine is not the problem with browser-based games.
Here are links to Unreal Engine 5 running in the browser, for anyone interested:
This right here. The web is the OS of the future - the standards are getting there, the tools are just starting to catch up.
What platform(s) do you think browsers will run on?
More of a comment on how apps are being built in the future - web-first is becoming the default.
I already see the web taking over even in things like embedded UIs, where native toolkits like Qt were historically popular.
I think something like this is inevitable and possibly great. The problems I see are the niceties of my platform of choice that I may lose when everything is rendered in a canvas tag.
For instance, I grab individual elements of the UI all the time for sharing screenshots on macOS (like an individual menu, in a transparent png with drop shadow). I have several text shortcuts that work everywhere except on electron apps. Or, for example, how would accessibility work?
Like I said, I think a future where a cross platform open source web stack becomes the standard for UI development is kind of inevitable. I just hope it’s a great stack, with the best ideas from Mac, Windows, Gnome, KDE, etc, and not a lowest common denominator, which is usually the case.
Anyway, this is extremely cool and I’ll keep an eye on the project.
Have you read this article by the lead developer of Flutter, Ian Hickson [0]? It describes using WASM just as you describe to have a fully cross platform UI framework, which is a concept that Flutter uses.
[0] https://docs.google.com/document/d/1peUSMsvFGvqD5yKh3GprskLC...
Thanks - that link doesn't appear to be openly accessible; in any case, I don't think I've seen it. I'm familiar with Flutter at a high level (Kevin Moore gave a great talk on it at Wasm I/O), and I think that, other than requiring users to work in Dart, it's probably one of the most powerful ways to do cross-platform UI today.
Worth noting that their original GPU backend was Skia, and now they're retooling around Flutter GPU (Impeller) [0], which is designed similarly: an abstract rendering interface over platform-specific GPU APIs.
I edited the link to be public; let me know if it works now.
I think the ideal in that article is that people can write components in whatever languages they want, and when they compile to WASM, they can all interoperate. It reminds me of all of those compile-to-JavaScript languages for writing micro-frontends, although there is not as much interoperability from a React boundary to, say, a ClojureScript boundary.
By the way, what are you building as a solo founder for YC? Is it related to this project? For this project, I'm curious to see how exactly WASM interoperates with the GPU directly, bypassing the platform specific APIs. Do you still have to write GPU-specific parts for each of the GPU manufacturers? I wonder if there would be an open standard called WASM-GPU in the future that abstracts over these but doesn't necessarily touch any of the OS directly.
Got it, thanks. Not what I was expecting.
To me, this reads like the intersection of "Web Components as Wasm" and "The Browser as an OS" - almost something analogous to WASI as browser APIs that are delivered via Wasm ABI instead of JS/WebIDL. It's an interesting take, and as long as it can operate alongside existing code, I'm all for that.
There are strong parallels to what we're building - small modules of Wasm graphics code that can interoperate across a common interface.
Check the repo for the GPU integration - it's like a super trimmed-down version of wgpu, where graphics data is copied out of Wasm linear memory and a host-specific API (WebGPU/OpenGL/DirectX) takes care of the upload to the GPU. There is a wasi-webgpu WebAssembly L1 proposal in the works, driven by Mendy Berger, that I'm involved with, and at some point all of this will be tooled on top of that with wgpu as a backend.
For renderlet the company, the goal is to build developer tools that make it easy to build renderlets and these kinds of applications without having to write raw Wasm code. The meta-compiler in the video is the first step in that direction! The runtime itself will always be open-source.
It reminds me of something else, actually.
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
https://news.microsoft.com/2001/10/22/massive-industry-and-d...
"The EM intermediate language is a family of intermediate languages created to facilitate the production of portable compilers."
"EM is a real programming language and could be implemented in hardware; a number of the language front-ends have libraries implemented in EM assembly language.", namely C, Pascal, Modula-2, Occam, and BASIC.
https://en.wikipedia.org/wiki/EM_intermediate_language
"The high-level instruction set (called TIMI for "Technology Independent Machine Interface" by IBM), allows application programs to take advantage of advances in hardware and software without recompilation"
https://en.wikipedia.org/wiki/IBM_AS/400#Technology_Independ...
WASM is just another take on this.
Indeed. For the record, COM is still very much a live subject for anyone doing Windows development, in case anyone thinks it's gone by now.
Related - I’ve written a Flutter package to wrap the Filament PBR rendering package and I hacked together a WASM implementation so I could build 3D apps in Flutter for web.
It’s still just experimental (I’m waiting for some upstream Dart fixes to land around WASM FFI, and shared memory support would be nice in Flutter too) but I think it’s promising. Bundle size is a bit of an issue at the moment too.
This is awesome! I'm not fluent with Flutter/Dart but would like to dig in to how the build / Wasm packaging works.
The state of shared memory for Wasm is not great, although raw SharedArrayBuffers work OK in a browser for running multiple guests. Getting multi-memory properly working through LLVM is likely a better solution.
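For reference, the SharedArrayBuffer path mentioned above looks like this (assuming a Node.js host or a cross-origin-isolated page; shared Wasm memory requires a declared maximum):

```typescript
// A shared Wasm memory is backed by a SharedArrayBuffer, which multiple
// workers (or multiple guest instances) can view simultaneously.
const shared = new WebAssembly.Memory({ initial: 1, maximum: 4, shared: true });
console.log(shared.buffer instanceof SharedArrayBuffer); // true

// A plain (non-shared) memory is backed by a regular ArrayBuffer.
const plain = new WebAssembly.Memory({ initial: 1 });
console.log(plain.buffer instanceof SharedArrayBuffer); // false
```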
We've got a bundle size issue as well, even with -O3. I thought it was due to the amount of templated glm SIMD code we run, but now I'm convinced it's deeper than that, in Emscripten. I haven't been able to look into it deeply yet.
Might be this article:
https://docs.google.com/document/u/0/d/1peUSMsvFGvqD5yKh3Gpr...
nope
This is the public link https://t.co/3xeGnKhwYr
This is super neat and I am very interested!
I'm in a rush so I can't look too closely now, but I have a few questions (and please forgive any stupid questions - I'm not a graphics dev, just a hobbyist):
What's the runtime like? Is there an event loop driving the rendering? (who calls the `render` on each frame? are there hooks into that? ) FFI story? Who owns the window pointer?
I'm interested in audio plugins, and VSTs (etc.) have a lot of constraints on what can be done around event loops and window management. JUCE is pretty much the de facto solution there, but it's pretty old and feels crufty.
The limits on what audio plugins can do are not a function of the drawing toolkit, but of the fact that they do not own the event loop if the GUI runs in-process with the host. And as long as it does, they will never own the event loop. In addition (and mostly related to this), the top-level window they appear in is owned by the host, which also inherently limits the plugin's role in window management.
If you want more, use the capability built into LV2, AU and VST3 for out-of-process GUIs for a plugin (LV2 has had this for more than a decade). CLAP has, I think, abandoned plans to support this based on lack of uptake elsewhere.
I'd hardly call JUCE "pretty old", but then I'm a lot older than JUCE. And it's likely only crufty if you're more used to other styles of GUI toolkits; in terms of the "regular" desktop GUI toolkits, it's really not bad at all.
Hey Paul, thanks for sharing!
Yes I think JUCE is great, It's very well made, but it drives you into a very narrow path of either using everything in the library, or leaving you to fend for yourself (which I admit may be a normal experience for C++ devs). For instance, the ValueTrees frequently used for UI state are very powerful, but they're not very type safe (or thread safe), and they feel clunky compared to more contemporary reactive state management patterns like signals.
I'm sure folks who use ValueTrees are happy, but I don't see much advancement to that pattern being shared in the JUCE forums. If y'all have some better tricks over in the Ardour project I'd love to know! (BTW, I'm a fan of y'all's work. I really enjoyed reading some of the development resources, like the essay on handling time [0]).
Great questions!
The host app owns the event loop. I don't foresee that changing even once we re-architect around WebGPU (allowing the Wasm guest to control shaders), as the host app is responsible for "driving" the render tree, including passing in state (like a timer used for animations). The host app owns the window pointer, as renderlets are always designed to be hosted in an environment (either an app or a browser). Open to feedback on this, though.
FFI is coming with the C API soon!
I don't know much about audio but I see a ton of parallels - well-defined data flow across a set of components running real-time, arbitrary code. Simulations also come to mind.
Thank you for the reply! I'm excited to watch as this project progresses, and I wish you the best of luck!
> JUCE is pretty much the de-facto solution there,
Is it though? iPlug nee wdl-ol nee iPlug2 seems pretty good too. JUCE stuff has a pretty distinct and slightly obnoxious look and feel that takes a fair bit of effort to strip out
Not much to add, just wanted to say I thought your presentation at wasm I/O in Barca a couple of weeks ago was amazing and it's great to see this work getting some attention!
Barca -> rowing boat
Barça -> the football club
Barna -> cute form of Barcelona
I guess a lot of the English-speaking world has (mis)appropriated the anglicised name of the football club?
Still, for what it's worth, b7a is my favourite city so far.
Tbh, I've always been a fan of city 17.
Oh my god, this is awesome! That's exactly what I've been dreaming about for the past few years... Wasm has a lot of potential as a portable unit of graphics/audio/multimedia computation! I'm glad you were able to take the time to build it!
I understand the appeal of Rive. However, even if their renderer is open source now, their editor isn't and their free tier is quite limited.
Have a look into supporting Ruffle/SWF content, Lottie, etc.
Also, for a renderer there is one by Mozilla called Pathfinder: https://github.com/servo/pathfinder
Can you share what you find limiting in the free tier? Would love to know more! I'm one of the founders btw
Sure. I think it's probably generous from the user-count point of view but incredibly limited in the number of files. And it seems you have to use the provided fonts in the free tier... I think Rive should offer the editor free like Unity and then charge for additional services like console support, dedicated support, troubleshooting, etc., as that's a much more common business model for game middleware. The same applies to Unreal Engine, where Switch/PS5/Xbox support is gated behind the respective access to the official dev portal and Epic's own Perforce rather than GitHub. And Perforce support, for example, should be promoted for pro tiers.
I see a banner mentioning "Rive for Game UI", which is great to see, but really the whole platform should be a Flash replacement. It shouldn't just be for doing UIs in games or animated content - it could be used to make full 2D games. Flash was so popular because of its versatility. There was middleware taking Flash content directly into game UIs (Scaleform), and there is middleware supporting WebKit for game UIs (Coherent Labs). Both of these have extensive scripting support (ActionScript and JavaScript respectively), allowing UI designers and coders to create reactive and flexible content, even procedural content like lists of things, etc.
By the way, on mobile the only way to get to the downloads link on the main site is behind the online editor login. I get why, but at first it made me think the editor was online-only.
I think that perhaps what you’re missing is that most of those tools charge for the runtime in some capacity. We took a different approach. The Rive Runtime and file format is free and open source, the editor is how we monetize. Users can have confidence that they will forever have access to the runtime and their files. Anybody can build an editor.
Regarding file limits, stay tuned for some announcements there.
Regarding Flash, yep that’s where we’re headed (and most of the use cases on the site should support that). We have some big features launching this year like audio, fluid layouts, and scripting. The banner was added because we’ve been attending game conferences and the game ui market segment is something we’re highlighting right now. Game UI is in dire need of better tools and it’s a market segment we can quickly lead with our current feature set.
Interested as well - I think you've built an incredibly productive editor with Rive - a spiritual successor to Flash!
Thanks! Lottie should be straightforward. SWF is a much higher bar, but would be useful.
Great to see more projects in the 3D graphics/WASM space! Any tips for getting into YC?
For context, my team has spent the past few years porting Unreal Engine 5 to WebGPU and WebAssembly - we have a multi-threaded renderer as well as an asset streaming system that fetches assets at runtime asynchronously (as needed), so users don't need to download an entire game/app upfront. This also avoids needing to have the whole application in memory at once. We've also built out a whole hosting platform and backend for developers to deploy their projects to online.
You can learn more about SimplyStream here:
Website: https://simplystream.com/
Blog post: https://simplystream.com/create/blog/latest
I'm probably the worst person to ask for advice about applying to YC - it just kind of happened.
I was sad when UE4 sunset HTML5 support, and glad to see a spiritual successor! There are a lot of parallels to other large in-browser apps in terms of load time for games - not just for the content but the size of game code itself. Are you able to use streaming compilation or some sort of plugin model?
I've been trying to run the demos in Firefox on Linux, with an NVidia 3070. For the ones that will start, I see "WONDER", then a "loading" screen, with about 100Mb/s download traffic for about 10 seconds. Then RAM usage increases over about two minutes to 24GB or so. Then I get "Gah. Your tab just crashed" in Firefox.
Most of the demos just kill Chrome, on latest version, running on NVidia Quadro T1000.
For CAD kernels I highly recommend Manifold (https://github.com/elalish/manifold) to embed in your app.
Manifold is awesome! Would love to get that integration going. I've implemented a lot of procedural geometry functions, but that is a long way from an actual CAD kernel.
Cool project!
Looks like it supports geometry and textures now, any plans to support shaders?
Yes! There are a few different approaches to making that work - one would be an intermediate shader representation generated from Wasm and compiled to native platform shaders for the host graphics API. Longer-term, we'll likely expose WebGPU WGSL shaders to Wasm directly.
Readme says it's a C++ library. Any plans to support higher level languages such as Go or even Python?
Yes! It's kind of a pain to build now, so we'll probably shift to shipping a .so/.dll with a raw C API in a future version. With that, it should be easy to generate bindings for any language - the host API footprint is minimal.
I don't follow this part. If the lib is shipped as a .so/.dll, how can it be compiled into Wasm?
There is the host API - wander, which contains the Wasm runtime and interfaces with the GPU. The actual graphics code is always compiled to Wasm.
Yes! Not in the open-source repo yet (because it's currently broken :) ) but you can see it in the video, and will have it working again soon.
That's exactly the goal - one wasm binary with defined input/outputs that can be loaded either in a browser or running in any app outside of a browser.
Great work Sean, this is quite impressive. It seems you were previously an EM, but clearly very technical. I'm currently an EM, and technical as well. I'm wondering what made you take the jump and decide to build this on your own. I'm in a different domain, but my industry seems stuck in a local minimum that just sort of works well enough, and I have a lot of thoughts about what-if things were built a different way. Can you share a bit more about your story, and any advice on making this sort of jump successfully? I'm happy to take it offline (my contact is in my profile).
Is there any example project utilizing one of the available WASM runtimes that can load, instantiate, and run a WASM module on Android at near-native speed, not interpreted (and not in a web browser)?
I recently investigated a few WASM runtimes and honestly could not manage to achieve this. The only suggestion I got from people was to load a bunch of packages using the Termux package manager and operate in a shell environment on Android to compile and run example projects.
I would appreciate a link to some project that results in an APK which (as part of its work) calls a WASM function in non-interpreted mode on Android (ARM/x86).
What exactly is in a renderlet, or what assumptions does a renderlet make?
For example, if I wanted to LoadFromFile() + Render() the building renderlet into a deferred rendering pipeline, would I be able to do that?
Not much yet! :)
The renderlet is a bundle of WebAssembly code that handles data flow for graphics objects. Input is just function parameters; output is serialized data written to a specific place in Wasm linear memory. With the Wasm Component Model, in the future it will be able to use much more complex types as input and output.
LoadFromFile() - Instantiates the Wasm module
Render() - runs the code in the module, wander uploads the output data to the GPU
Functions on the render tree - do things with the uploaded GPU data - like bind a texture to a slot, or ID3D11DeviceContext::Draw, for example.
There's some nuance about shading. In the current version, the host app is still responsible for attaching a shader, so there should be no issue using the data in a deferred shading pipeline. In the future, the renderlet needs to be able to attach its own shaders, in which case it would have to be configured to use the host app's deferred shading pipeline. I think it is possible, but complicated, to build an API for this, where the host and then the renderlet are both involved in a lighting pass.
Of course, if all shading is handled within the renderlet, it entirely encapsulates the concept of deferred shading, and this becomes an easier problem to solve.
Unreal Engine 5 just got a WebGPU/WASM port:
Isn't that what https://github.com/gfx-rs/wgpu is providing?
Partially. wgpu can translate graphics API calls like WebGPU to platform-specific APIs - this is something I had to implement on the backend. In the future, we will likely build on top of wgpu on the backend as wasi-gfx (WebGPU) becomes a reality.
What we do on top of that is compile the graphics code to wasm and provide a well-defined interface around it, so it can run/work inside any application.
First off, congrats!
Can you elaborate on what the “graphics code” might be in this case? Many Rust graphics engines seem to cover the same ground by having asset loading cfg’d on the target (wasm vs native). What does your project provide that a dev wouldn’t get with Rust + a wasm compatible engine?
What do you think of sokol in comparison?
sokol is great - I think of it as more of an "STB for apps". With renderlet/wander the goal is more to express graphics using higher-level constructs and automatically generate the runtime code behind it. For example, with the Wasm build of sokol you could build a canvas-style app that runs directly in a browser, whereas with renderlet you can build a function that parametrically renders grids and can run in (any) app like that.
TBH I would love an extended WASI standard with 'media apis' (window, 3D, audio, input) to run sokol code compiled to WASM in without having to compile/distribute per-platform native apps.
Deno seems to work on that idea [0], but having a WASI like standard would be better of course.
[0] https://github.com/deno-windowing
PS: How much work was it to "port" the Rive renderer? Would be great to see a blog post or similar about how you approached that and about any difficulties on the way :)
Yes! wasi-webgpu is coming, as well as more APIs with wasi-gfx.
Getting rive-renderer working was not hard because in the demo it's running on the host side, not in Wasm yet, although compiling for Windows/DX11 took some minor changes. Getting it fully working in Wasm outside of the browser looks to be non-trivial but doable, and will likely require upstream changes.
Are you involved at all with the Bytecode Alliance or the decisions around the WASI proposals/standards? It feels like your take on these things would be super valuable given all the work you've done. They're a very open minded group.
Also, with the solid foundation and simple API footprint you've built for APIs like sokol_audio, would be interesting to see if they could be expressed in WIT and used as a basis for something like a wasi-audio.
+1 for Audio and lots of other APIs necessary to make WASI more like a true OS. With Preview 2 / Component model, hoping the pace of contributions rapidly increases.
So kind of like a graphics transpiler?
Yes, that's a good mental model. Input: a high-level description of graphics; output: low-level graphics code running in Wasm.
What’s the story with threading on the web these days? My impression was that the browsers have purposely and permanently handicapped some things necessary for performance in order to prevent things like rowhammer and speculative execution exploits. But I haven’t paid super close attention.
It works well now! You use a SharedArrayBuffer to communicate between Web Workers. Was indeed a dark couple of years where the meltdown stuff disabled that, but there are now specific cross-origin headers that are used to isolate the SharedArrayBuffer from outside access.
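For reference, cross-origin isolation is enabled by serving the page with these two response headers; once they are set, `crossOriginIsolated` is true in the page and SharedArrayBuffer becomes available:

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```

The trade-off is that every embedded cross-origin resource then has to opt in (via CORS or `Cross-Origin-Resource-Policy`), which is the isolation that made re-enabling SharedArrayBuffer safe after the Spectre/Meltdown era.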
Classic inner platform effect.
WASM has taken 7 years to get to where desktop C++ was 20 years ago. Or rather, within approximate pseudo-spitting-distance of desktop.
Then add the distance between 3D Web APIs and desktop 3D API and we're gold.
Also add WASM versus MSIL into the mix regarding runtime capabilities.
When you launch, include WASM in the title for more traction.
I put it in this one too. Thanks!
do I still need to write platform specific shaders using this library?
The long-term goal is no - wander should handle shader compilation automatically. In the current version, the host app has to attach a shader using the host's shader API to render arbitrary geometry and textures.
I was looking at several different approaches to this, one of which was cross-compiling Wasm to SPIR-V. Most likely we will expose a higher-level shader API (think Shadertoy) and have wander compile it to the platform backend. We will also be able to run WGSL shaders directly through Wasm with wasi-gfx support.
or maybe support https://github.com/shader-slang/slang
Yes! Will look into that
Could this be the foundation of an Electron replacement?
This is a natural next step of this kind of tech (and WebAssembly tech in general) -- but that seems to not be the direction that renderlet is going... No reason they couldn't do it, but actually rendering 2D/3D and specializing in making GUI application development easier are similar but not quite the same.
Someone could definitely build another "last" cross platform application development toolkit with WebAssembly right now, and have it actually work reasonably well, and be slightly more desirable than flutter (and it could absolute use flutter/skia underneath) since you could build without the Dart (for those who don't necessarily prefer Dart).
Or port QT to WASM.
Well the great thing about WebAssembly is that you can port QT or anything else to be at a layer below -- thanks to WebAssembly Interface Types[0] and the Component Model specification that works underneath that.
To over-simplify, the Component Model manages language interop, and WIT constrains the boundaries with interfaces.
IMO the problem here is defining a 90% solution for most window, tab, button, etc management, then building embeddings in QT, Flutter/Skia, and other lower level engines. Getting a good cross-platform way of doing data passing, triggering re-renders, serializing window state is probably the meat of the interesting work.
On top of that, you really need great UX. This is normally where projects fall short -- why should I use this solution instead of something like Tauri[2] which is excellent or Electron?
[0]: https://github.com/WebAssembly/component-model/blob/main/des...
[1]: https://github.com/WebAssembly/component-model/blob/main/des...
[2]: https://tauri.app/
Also known as re-inventing the Common Language Specification and Common Type System from .NET.
https://www.linkedin.com/pulse/what-ctscls-fcl-bcl-crl-net-f...
Electron replacements have existed for decades since Active Desktop and XUL were a thing, either use the system browser with a daemon/service, or make use of Webwidgets.
i've been using sokol.h and it just works :)
Listen to some of the others in this thread. Don't waste your time with kid stuff. Stick to Unreal or Unity:
https://news.ycombinator.com/item?id=33452920
You can get your game/app ideas across far faster by building skills in Unreal/Unity than by using some bespoke little engine. Collaborate with more people, too.
Dismissing someone's well thought out hard work as "kid stuff" is really quite rude. Do better please.
You and the others are missing the whole point. This isn’t meant to be an Unreal or Unity replacement.
And besides that point, what is wrong with the “kid” stuff. A bunch of masterpieces have been created in such kid stuff. Celeste, Hotline Miami, and Dead Cells come to mind. I can’t wait for the day that actual kids are building their own cross platform game engines on a better tech foundation.
Cuda is nasty and implementing your own engine is great. One of those two is in the process of falling apart and they're both very complicated, no deal for me.
Hmm… I do mainly web stuff, but I read in a blog somewhere that the V8 runtime compiler optimizes code better than writing the same thing in AssemblyScript when working directly on top of typed arrays. Is this true?
Deserves a "Show HN".
Appreciate it. It's coming! I want to get the compiler working and generally available and everything working seamlessly for Web first. Stay tuned!
Is it limited to wasmtime or can it run in a web browser ?
Just wander itself is almost working in a browser - kind of hacked together right now, but will be pushed to the open-source soon. The same Wasm payload should be able to run in the browser or through wasmtime.
For the rive-renderer / 2D integration, it is going to be a much longer path to get working in a browser together with wander.
Is there much of a performance hit with a stereo view?
I haven't worked with stereo setups in this codebase yet, but as it is just wrapping the underlying platform-specific GPU APIs, it should have a similar performance profile.
On average, running the Wasm guest code is about 80% of the speed of a native build I use. That is both dependent on what is running in Wasm and not a very scientific measurement - wander needs better benchmarks. We think that performance profile is sufficient for anything that needs a GPU except the highest-performance 3D games.
Totally off-topic, but in that video you have some preview view on the right-side of your code window in VS where you can scroll visually through a large file. How the hell do you switch that on? I've been using VS for 30 years and never seen that, but it would be really helpful for the shitty apps that I write for my own use which are single-file monsters.
Right-click the scroll bar and choose "Scroll Bar Options", then under "Behavior" select the second option, with the source overview set to "Wide".
Thank you! :) I'd gone through every menu and right-clicked everywhere except the scroll-bar.
For anyone else that ends up here, I also had to click the radio button for "Map mode" too.
This. I think it used to be part of the Productivity Power Tools extension that they since incorporated into the core editor.
> If you think of how Unity made it easy for devs to build cross-platform games, the idea is to do the same thing for all visual applications.
But why wouldn't I "just" use Unity?
I agree with you. Nobody cares about the platform specific details anymore, and people are willing to pay a little bit of money for an end-all-be-all middleware. I have gone my whole life not paying attention to a single Apple-specific API, and every single time, someone has written a better, more robust, cross-platform abstraction.
But Unity is already this middleware. I can already make a whole art application on top of Unity (or Unreal). People do. Sometimes people build whole platforms on top of Unity and are successful (Niantic) and some are not (Improbable). You're one guy. You are promising to create a whole game engine - you're going to get hung up on not using the words "game engine", but to be intellectually honest, it is a game engine - which a lot of people 1,000x better capitalized than you have promised, and those people have been unable to reach parity with Unity after years of product development. So while I want you to succeed, I feel like a lot of Y Combinator guys have this attitude of "We make no mistakes, especially not strategic mistakes." It's going to be a long 3 years!
Without going into the motivations for building a startup and doing Y Combinator, I do agree with many of your points.
People can use Unity to build games and non-games. I personally don't think it fits a lot of different use-cases or application models and that it tends to be most successful in specific gaming verticals, but if it works well for you, by all means use it!
I'm strategically betting both on the lines between what is viewed as a game and not blurring, as well as developers needing a friendlier, more flexible way of building this kind of interactive content. I'm by no means under the illusion that strategic mistakes won't be made, or that this won't be a 10-year+ journey - realistically many (most?) successful companies have a very nonlinear path, including Unity themselves.
I agree Unreal and Unity are not appropriate, but I do wonder about Godot. It's early enough that it doesn't have the strong connotations of being a game engine yet. I've seen some cool applications made in it too (https://www.youtube.com/watch?v=9kKp0oguzr8). So I wonder if you could apply your energy to making it more cross-platform using WASM (if that's even necessary) and extending it with your own UI language instead of rolling your own?
I think Godot is the closest thing to this today, and I agree, would love to work with them! Particularly on the Wasm and packaging side of things.
Unity is so much larger and more complex than a graphics middleware.
It comes with physics engines, telemetry, networking, a c# runtime and probably even more.
I don’t think that any of the adobe suite would ever be built in unity bc why do they need to ship a physics engine with their photo editor.
Not to mention that unity is backed by an, imo, untrustworthy company who’s obviously willing to change pricing structure on a dime and retroactively.
I agree, I think "game engine" is a misnomer for what are better termed "game (dev) studio", like Unity. They include a sophisticated game engine but also a lot of supporting tools and GUIs.
> ever be built in unity bc why do they need to ship a physics engine with their photo editor.
I know you narrowly mean "rigidbody physics for the purpose of videogames." But Adobe did ship a physics engine with their photo editor! They discontinued their "3D" support, and raytracing is most definitely physics with a capital P, but they were shipping it for a long time. If you have an even more normal definition of physics, to include optical physics, well they have a CV solution for many features like camera RAW processing, removing distortion, etc.
> It comes with physics engines, telemetry, networking, a c# runtime and probably even more.
Because that is what people need to make multimedia applications.
There absolutely is a need for a robust cross-platform rendering/multimedia solution, more in a similar vein to SDL than Unity or Unreal. The offering of Unity, Unreal, and perhaps Godot is just abysmal when considering that for all of the man hours put into the game development space, that is basically all we got. There should be hundreds of viable cross platform game engines catering to a wide variety niches that continually stretch the bounds of what a game actually is and how it can be represented. Game libraries such as Monogame, Heaps, Raylib, Love2D, etc just wouldn't be that popular if Unity and Unreal are the be all and end all. Adobe Air was once a popular choice (a very large number of top 50 app store games were built with Adobe Air) and I'd wager still would be if it didn't collapse under its technical weight.
Currently it is the low level, cross platform layer that is the most complex and the biggest hurdle towards making a game engine viable. If it wasn't so insanely complex, and the technical barrier towards making your own engine is reduced, the tired cliche of "don't build an engine" wouldn't hold as much weight, and it opens the doors to building a bespoke, fit for purpose engine for every game you create. Don't underestimate what an individual or small teams can produce if they are operating on a solid platform that facilitates a rich ecosystem of tools.
> Currently it is the low level, cross platform layer that is the most complex and the biggest hurdle towards making a game engine viable
I couldn't agree more. My goal is not to simply build "a better game engine", but to make this kind of low-level tech accessible at a higher level and with much better dev tools to a broader class of developers and applications
> Don't underestimate what an individual or small teams can produce if they are operating on a solid platform
This gets into my motivations for building a company - larger companies have the resources to build moats, but often can't quickly realign themselves to go after novel technical opportunities. It's not either / or - both models exist for very valid reasons.
I am glad people are working on it!!
Have you seen Kha by any chance? It has similar goals. I find it quite awesome, but it won't gain mass adoption for a bunch of reasons. https://github.com/Kode/Kha
Someone built an immediate mode renderer on top https://github.com/armory3d/zui, which is utilised by ArmorPaint https://armorpaint.org. I also use Zui for my own bespoke 2D game engine.
I find this tech and tooling really quite amazing (just look at how little source code Zui has) given just how small the ecosystem around it is. I think Kha really illustrates what can be achieved if the lower levels have robust but simple APIs, exposing just the bare minimum as a standard for others to build upon. I really suggest taking a look at the graphics2 (2D canvas-like) API.
For the kind of projects I work on (mostly 2D games), I think it would be really awesome if your framework also supported low-level audio and a variety of inputs such as keyboards, mice, and gamepads. If it also had decent text rendering support, it would basically be my dream library/framework.
> The point of Haxe seems to be as a meta-compiler to generate code for a bunch of different languages/compilers?
That's basically correct, although there is also a cross-platform runtime called HashLink, which is unsupported by Kha.
> Game libraries such as Monogame, Heaps, Raylib, Love2D, etc just wouldn't be that popular if Unity and Unreal are the be all and end all.
Just because it happens, doesn't mean it makes sense.
Anyway, people write their own game engines, and programming languages for game engines, because it is intellectually stimulating to do so, and something you spend 100h/wk to yield 1h of gameplay is still giving you more gameplay than something boring you spend 0h/wk on.
Then, the people who use those engines you are naming, they end up porting to Unity anyway. If you want to deploy on iOS and Switch with one codebase, it is the only game in town. And that's sometimes 60% of revenue.
> Don't underestimate what an individual or small teams can produce if they are operating on a solid platform that facilitates a rich ecosystem of tools.
Unity fits this bill exactly. I too want more competition. But in the real world I live in, if someone were to ask me, "what solid platform should I choose to make my multimedia application, as a small team, that also has a rich ecosystem of tools, and will enable me to make pretty much anything I can think of?" I would say, use Unity. Because I want them to succeed.
Two high profile examples, Celeste and Deadcells use Monogame and Heaps (Haxe). They weren’t ported to Unity. Celeste and Towerfall Ascension are two examples of very high quality games that bucked the trend and just used a low level renderer (Monogame). IMO they are better because of that.
But besides that point, the very reason why many games are ported from their niche library to Unity or Unreal is mostly just for cross platform support. Not because the game creator has a preference for Unity or Unreal. They are forced into it through lack of choice if they want cross platform. If Love2D, Phaser, Flixel, and any other niche 2D game library had an easy way to target consoles they would get a whole lot more use, but they don’t because the lower levels are extremely complex and engine/framework/library developers can’t support it. WebGPU appears to offer a path forward in that regard.
> the very reason why many games are ported from their niche library to Unity or Unreal is mostly just for cross platform support.
Indeed.
> They are forced into it through lack of choice if they want cross platform... because the lower levels are extremely complex
That is what I am saying. There are countless indie game engines of great repute, but because they are "1 guy," they cannot reach feature parity with Unity. They never will.
The Blizzard that developed the Overwatch engine had immense experience and a notoriously frugal (in terms of employee pay) culture. It still cost them about $100m and many years to develop a great engine for 3 platforms (Windows/Xbox, Switch, PS5). What hope does Godot have with $8m, or MonoGame with kopeks?
Nobody can vibes their way past the math problem of multi-platform game engines.
This is only controversial because Unity received so much ill will, and because the indie games business and social media are very sensitive to vibes.
> WebGPU appears to offer a path forward in that regard [like supporting the Nintendo Switch].
While I would love for this to be true, it is significantly more aspirational than saying that because of Game Porting Toolkit, DirectX offers "a path forward" on macOS.
Lomiri.
There is (surprisingly) no high-level commentary on what this actually is, just people banging on about how nice it would be to have a high-level cross-platform GPU-accelerated library.
...but this..?
> Graphics data and code can be developed together in the same environment, packaged together into a WebAssembly module called a renderlet, and rendered onto any canvas. With WebAssembly, we compile graphics code to portable bytecode that allows it to safely run on any processor and GPU
So what is a renderlet?
> The renderlet compiler is currently in closed preview - please contact us for more information.
Hm... what this seems to be is a C++ library that lets you take compiled WASM and run it to generate and render graphics.
Which, I think it is fair to say, is surprising, because you can already render graphics using C++.
Only here, you can render graphics using an external WASM binary.
So, why?
Specifically, if you're already using C++:
1) Why use WASM?
2) Why use renderlet instead of webGPU, which is already a high level cross platform abstraction including shader definitions?
What is this even for?
> wander is designed to be a rendering engine for any high-performance application. It primarily is designed as the runtime to run renderlet bundles
...but, why would I use a renderlet, if I already need to be writing C++?
I. Get. It. A cross platform GPU accelerated rendering library you can use from any platform / browser / app would be great. ...but that is not what this is.
This is a C++ library runtime that you can use to run graphics in any circumstance where you can currently use C++.
...but, in circumstances where I can use C++, I have many other options for rendering graphics.
Look at the workflow:
Rendering code -> Renderlet compiler -> renderlet binary
App -> load renderlet binary -> start renderlet runtime -> execute binary on runtime -> rendered
vs. App -> rendering code (WebGPU) -> rendered
or, if you're writing a new cross-platform API over the top of webGPU: App -> Fancy API -> WebGPU -> rendered
I had a good read of the docs, but I honestly fail to see how this is more useful than just having a C++ library that renders graphics cross-platform, like SDL. Shaders? Well, we also already have a good cross-platform rendering library in webGPU; it already runs on desktop and browsers (maybe even some mobile devices); it already has a cross-platform shader pipeline; it's already usable from C++.
I'm not going to deny the webGPU API is kind of frustrating to use, and the tooling for building WASM binaries is too, but... it does actually exist.
Is this like an 'alternative to webGPU' with a different API / easy-mode tooling?
...or, have I missed it completely and there's something more to this?
Keeping it high level -
No, the goal is not to create a C++ API to give you GPU functions.
The C++ API for wander is used to embed the WebAssembly module of graphics code into the application. The API footprint is very small - load a file, pass parameters to it, iterate through the tree it produces.
This could be viewed as logically equivalent to programmatically loading a flash/swf file. Or similar to what Rive has built with a .riv, although this is static content, not code.
> 1) Why use WASM?
You're loading arbitrary, third-party code into an app - that is the renderlet. The benefit is to have a sandboxed environment to run code to put data on the GPU.
> 2) Why use renderlet instead of webGPU, which is already a high level cross platform abstraction including shader definitions?
WebGPU is a low-level API. If you are a graphics programmer, and want to build an app around WebGPU, go for it! A renderlet is more of a graphics plugin system than an entire first-party app.
> The renderlet compiler is currently in closed preview - please contact us for more information.
This is the system to build the renderlet. This is not writing raw C++ code to talk to WebGPU; these can be higher-level functions (build a grid, perform a geometric extrusion, generate a gradient) - you can see in the video it is a YAML specification. The compiler generates the necessary commands, vertex buffers, textures, etc. (and soon, shaders), and builds a Wasm module out of it.
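For a rough sense of what a declarative spec over higher-level functions could look like, here is a purely made-up illustration - this is not the actual renderlet format (which is in closed preview); every key and value below is hypothetical:

```yaml
# Hypothetical sketch only - not the real renderlet spec format
renderlet:
  name: terrain-demo
  inputs:
    - { name: rows, type: i32 }
    - { name: cols, type: i32 }
  pipeline:
    - grid:     { rows: $rows, cols: $cols, cell-size: 1.0 }
    - extrude:  { height: 0.5 }
    - gradient: { from: "#204080", to: "#80c0ff", axis: y }
```

A compiler for something like this would lower each pipeline stage into vertex-buffer and command generation code, then package the result as a Wasm module with `rows`/`cols` as its exported parameters.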
> Is this like a 'alternative to webGPU' with a different API / easy mode tooling?
I certainly wouldn't describe it as an alternative to WebGPU, but easy(er) tooling to build graphics, yes.
> What is the use-case for 'I've compiled a part of my application only into a cross platform binary renderlet and I can now run that cross platform ... after I've compiled the rest of my application into a platform specific binary for the platform I'm running it on?'
Let's take an example - Temporal Anti-Aliasing. There are libraries that implement this, or you can implement it through raw code. It requires structural changes to your pipeline - to your render targets, additional outputs from your shaders, running additional shaders, etc. Wouldn't it be nice to easily connect a module to your graphics pipeline that contains the code for this, and the shader stages, and works across platforms and graphics APIs, with data-driven configuration? That is the vision.
> ... rest of your application into WASM/platform native code... is that not strange? It seems strange to me
There is not really such a thing as a standalone Wasm application. Wasm has seen great success as a data-driven plugin model. In a browser, it is hosted by and interacts with JavaScript. Even built for pure WASI, as a standalone app where everything is compiled into a single module, there is still a runtime/host environment.
Does that help clarify?
> A renderlet is more of a graphics plugin system than an entire first-party app.
I see.
So this is basically flash?
A high level API to build binary application bundles (aka .swf files, ie. renderlets) and a runtime that lets you execute arbitrary applications in a sandbox.
renderlet = .swf file
wander = flash runtime
renderlet compiler = magic sauce, macromedia flash editor
yeah?
> Let's take an example - Temporal Anti-Aliasing. There are libraries that exist to implement this, or you can implement it through raw code.
Mhm. You can certainly do it in a cross platform way using webGPU, but I suppose I can see the vision of 'just download this random binary and it'll add SMAA' but it sounds a lot like "and then we'll have a marketplace where people can buy and sell GPU plugins" or "if you're building a web browser" rather than "and this is something that is useful to someone developing a visualization application from scratch".
The majority of these features could exist with just a C++ library and no requirement to 'pre-compile' some of your code into a renderlet... hosting external arbitrary 3rd party binaries in your application seems... niche.
Really, the only reason you would normally ever not just do it from source as a monolithic part of your application was if you didn't have the source code for some reason (eg. because you bought it as a WASM binary from someone).
Smells like Flash, and I'm not sure I like that, but I guess I can see the vision now, thanks for explaining.
top G cool project.
Saved. This is the sort of project that would be an amazing canvas for a nice widget kit and interaction model to make cross-platform GUIs with. The C/C++ backend and WASM target means people could build FFIs in almost any language. I'm sure I'm saying nothing new, but this is promising.
how about 1D?
He draws the line at 2D.
Great talk! Same one I was referring to in my comment below.