Making a 3D modeler in C in a week (danielchasehooper.com)
netule 4 days ago [-]
I agree entirely with the author on the limitations of Raylib. I'm currently working on a tower-defense style game that I started in Raylib, but I'm running into many of the same limitations (and more). Things such as toggling fullscreen not working consistently across platforms, not being able to enumerate screen modes, toggle rendering features at runtime, or save compiled shaders, etc. Having said that, I appreciate Ray's work on this library and will continue to sponsor him. Raylib is great for quickly banging out a prototype, but not much beyond that unless you're okay with living with severe limitations.

Lesson learned, for sure, but I'm too far into the development to swap all of the Raylib stuff out for SDL (or something else) now.

oersted 4 days ago [-]
Quick appreciation for the detail that Raylib is named after the creator's name, Ray, and not ray-tracing. Fun.

Things Unexpectedly Named After People: https://notes.rolandcrosby.com/posts/unexpectedly-eponymous/

TrainedMonkey 4 days ago [-]
How do you know Ray was not named after ray-tracing?
dymk 4 days ago [-]
The author's name is the first hint, and the lack of ray tracing the second
oersted 4 days ago [-]
I choose to interpret it as: How do you know that *the author* was not named after ray-tracing?

Which is amusing :)

linkdd 4 days ago [-]
You missed the joke, so let me ruin it by explaining it:

What if Ray the person was named after ray-tracing by his parents?

MenhirMike 4 days ago [-]
Plot twist: Ray Tracing was the name of a person that was very important to both of them and unfortunately passed away, so they named their son Ray as a tribute.
miuramxciii 3 days ago [-]
I can confirm that. And Ms. Litre went to the same school and was best friends with Mr. Tracing. [https://en.wikipedia.org/wiki/Claude_%C3%89mile_Jean-Baptist...]
dymk 3 days ago [-]
Ah shit, I was That Guy on the internet, sorry. I guess it happens to everyone eventually.
mohas 2 days ago [-]
plot twist: that Ray person was an implementation of optical ray tracing made by his parents in a week
airstrike 4 days ago [-]
Such a good list.. worth a submission of its own IMHO
oersted 4 days ago [-]
It got good traction a couple times before, many more fun examples in the comments.

https://news.ycombinator.com/item?id=39462516

https://news.ycombinator.com/item?id=23888725

WJW 4 days ago [-]
I wish they'd add French drains.
rwbt 4 days ago [-]
Raylib is easy to get started with, but once the project gets a little complex it bites back. SDL, on the other hand, takes more time to set everything up but scales extremely well as the project gets bigger and bigger. Also, SDL is exceptionally well written code.
visil 4 days ago [-]
And exceptionally well written documentation, too! One of the first big-ish projects of mine was a raytracer I wrote in C with SDL.
throwaway2046 3 days ago [-]
Got a link to that raytracer?
z3phyr 3 days ago [-]
It would be like a regular raytracer, but instead of writing pixels to a file, you write them to your buffer/texture.
vsuperpower2020 4 days ago [-]
Raylib has a lot of issues that are never going to be fixed, but I wouldn't blame fullscreen on it. Fullscreen is just absolutely unusably FUBARed on Windows and has been for decades. It's probably the same for other platforms. The modern strategy is to just do borderless windowed and pretend true fullscreen doesn't exist.
duxup 4 days ago [-]
I have no knowledge about such things but…

> The modern strategy is to just do borderless windowed and pretend true fullscreen doesn't exist.

This explains a lot as to why I experienced some interesting inconsistency going full screen in different applications in Windows ;)

sph 3 days ago [-]
In Linux/Wayland, Windows-style fullscreen does not exist and has no reason to exist either.

In a composited environment, fullscreen windows are just maximized windows without a border, which triggers some heuristic to unredirect windows (i.e. does not have to pay the price of compositing) when they are the sole thing that is drawn on the screen.

So, on Linux, fullscreen and maximized borderless are the same. Given that Windows is a fully compositing desktop as well, I wonder why the difference still exists.

Joker_vD 3 days ago [-]
But what does "Windows-style fullscreen" actually do? It feels like it switches into a different video mode (even if the resolution is the same), and back when my monitor was a CRT, I could even hear a faint click inside it whenever a game entered/exited the full-screen mode. So there was definitely something special going on, but what?
lelandbatey 3 days ago [-]
Windows style fullscreen made it so that the game controlled the resolution being sent "over the wire" to the display, and as I understand it, basically gave exclusive display access to the game. This had the interesting effect of delegating all resolution scaling to the display, which was great for CRTs and often very bad on digital displays which (usually) do very muddy bilinear scaling even if being sent a resolution that was an exact integer division of the total display resolution.

The current day approach is to instead do the "large borderless window" and then do software scaling of the image in order to match the display resolution. One cool result of this is you can get much better scaling than your display would do naively. Low-res games really benefit from software integer scaling instead of "whatever scaling the display firmware does, usually making all the edges blurry".

Rescaling in software is so fast now that some graphically intense games can do per-frame scaling in order to keep per-frame latencies below a particular goal threshold, at the cost of your game losing fidelity (getting blurrier). It's so common that most big-name titles now do this.
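
(For the curious: the integer-scaling math mentioned above is tiny. A sketch in plain C, not tied to any particular engine; all names are invented.)

    // Largest whole-number scale of a gameW x gameH image that fits the
    // display, plus centering offsets for the letterbox bars.
    void integer_scale(int gameW, int gameH, int dispW, int dispH,
                       int *scale, int *offX, int *offY)
    {
        int sx = dispW / gameW, sy = dispH / gameH;  // integer division
        *scale = (sx < sy) ? sx : sy;                // must fit both axes
        if (*scale < 1) *scale = 1;                  // never shrink below 1:1
        *offX = (dispW - gameW * *scale) / 2;        // center horizontally
        *offY = (dispH - gameH * *scale) / 2;        // center vertically
    }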

easyThrowaway 3 days ago [-]
> I wonder why the difference still exists.

Some legacy apps (that is, games) react very badly when they're not in full control of their window sizing, priority, input and device polling, etc.

spookie 4 days ago [-]
At least on Xorg it isn't an issue, but to ensure cross-compatibility your solution trumps all. I cannot comment on other platforms.
Maken 3 days ago [-]
In Xorg it is definitely an issue. Virtually all games use a borderless window because actual X11 fullscreen is awful: it captures all input events and changes the system's screen resolution to whatever the application is running at.
badsectoracula 3 days ago [-]
There is no all-encompassing "actual X11 fullscreen", X11 does not have an API for applications to have control over the entire screen like you'd find in, e.g., DirectX 9 (what most people think of with "real fullscreen" in Windows). What games (and SDL1) ages ago did was to use APIs that changed the video mode and APIs that captured input, both of which are completely separate (they're even from different sources: the video mode API is from an extension, the capture API is from the core protocol) - and then created a borderless window the normal way.

There are three different aspects here; each one can be done differently, and all can be (and have been) mixed or ignored:

1. Set the video mode. You can either do that or not, and use whatever the desktop is running at. You may want to change the video mode not just for resolution but for using a different refresh rate. Both native games (e.g. Quake source ports using SDL2) and Wine still do that (you can disable the modesetting in Wine via the registry so that games see the desktop resolution as the only available one).

2. Create a fullscreen window. There are two ways to do that: either create a window that bypasses redirection ("redirection" here means that the window manager won't manage it, it has nothing to do with compositing - if a compositor is used - though some compositors also use it as a hint for that) and have it cover the entire desktop area (you may want to use randr to figure out the monitor area for multiple monitors), or alternatively use the fullscreen hint on the window and let the window manager handle this. Note that some window managers may not support this or have bugs with it.

3. Handle input. X11 provides an API to capture mouse and/or keyboard input, though strictly speaking this is not really required and TBH I do not see why games ever did that (I can only think of conflicts with other programs, but I'd expect this to be something for the user to deal with). In any case, it is rare for games to do that these days. You can just handle input "normally" like any other program. You can provide a (by default disabled) option if you really want to, though, as the API for capturing is trivial.

For my last game engine I did #1 (with an option to use whatever the desktop resolution is) and #2 (using the fullscreen flag by default, with the redirect flag as an alternative for window managers that had issues with the fullscreen flag), but I didn't bother with #3.
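
(For illustration, the fullscreen-hint route from #2 boils down to setting one EWMH property with raw Xlib. A sketch: it assumes the window is not yet mapped; an already-mapped window would need a _NET_WM_STATE ClientMessage instead.)

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    /* Ask the window manager (via the EWMH fullscreen hint) to make
       win fullscreen, letting the WM handle geometry and stacking. */
    void set_fullscreen_hint(Display *dpy, Window win)
    {
        Atom wm_state   = XInternAtom(dpy, "_NET_WM_STATE", False);
        Atom fullscreen = XInternAtom(dpy, "_NET_WM_STATE_FULLSCREEN", False);
        XChangeProperty(dpy, win, wm_state, XA_ATOM, 32,
                        PropModeReplace, (unsigned char *)&fullscreen, 1);
    }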

keyle 4 days ago [-]
I do the same. I do wonder, however, if there are performance issues at the OS level from running (on a 4K screen) 4K borderless vs. 4K fullscreen.

Like, would the whole "give the application maximum priority" thing work the same?

Just_Harry 3 days ago [-]
Whether there's a performance detriment from borderless-fullscreen vs exclusive-fullscreen depends on how a program presents its frames (and on the OS, of course).

On Windows, under ideal circumstances borderless-fullscreen performs identically to exclusive-fullscreen as Windows will let the program skip the compositor and present its frames more-or-less directly to the display. (Under really ideal circumstances the same applies to bordered non-fullscreen windows.)

If the compositor can't be skipped, borderless-fullscreen can be a bit brutal on performance: on a 4K 160Hz screen I've experienced an additional 40-milliseconds+ of frame-latency purely from borderless-fullscreen being used.

The Special K wiki has some pages that go into more detail about the situation on Windows: https://wiki.special-k.info/SwapChain, https://wiki.special-k.info/Presentation_Model

ImHereToVote 3 days ago [-]
This is absolutely bonkers.
netule 4 days ago [-]
True, it’s just one of the many issues that came to mind while writing this.

My solution to the issue is also full screen borderless combined with resolution scaling.

spacechild1 4 days ago [-]
> just do borderless windowed and pretend true fullscreen doesn't exist.

Hah, good to know I am not the only one.

diath 4 days ago [-]
Kind of share the same feeling. I started a project about 2 months ago and chose to use Raylib, and while the basic stuff is really simple to get going, the more you use it, the more random minor inconveniences you run into; at this point, though, I've invested too much into this project to back out of using Raylib. My biggest issue with it right now is the font handling and text rendering; I think I'll have to switch from TTF fonts to pre-baked bitmap fonts (which will suck for localization later). The two biggest features I'm missing after switching from Love2D are being able to render multi-color text (with Raylib, you have to manually split the text into chunks based on color markup, then apply a width offset, then call the draw function for each chunk, while also taking into account things like line breaks; it also seems to tank the FPS when you try to draw a lot of text on the screen, perhaps the draw call batching is broken for text?) and being able to easily chop up textures and make them wrap or tile; Raylib used to have a tiled texture draw function in the past, but for some reason they removed it.
abnercoimbre 4 days ago [-]
I'm the organizer for the conference [0] mentioned in TFA.

We had a professional UI/UX designer react to ShapeUp [1], and one of the things she commented on was the font being hard to visually parse.

I laughed a little when the author yelled "raylib!" to make sure blame was assigned appropriately XD. I'm currently the top GitHub sponsor for raylib, so there's no hate, but I wish he'd change some of his defaults.

[0] https://handmadecities.com/seattle

[1] https://vimeo.com/887532756/2972a82e55#t=49m58s (timestamped)

edflsafoiewq 3 days ago [-]
Why did you switch from Love2D?
diath 3 days ago [-]
I need multithreading, async TCP, and have to implement systems that benefit from type safety. I do still use Lua (via sol2) for interface modules, but I miss the features that Love2D provided at the framework level.
sgt 4 days ago [-]
This made me want to look at raylib. It comes with some cute examples that run using WebAssembly: https://www.raylib.com/examples.html

One thing that's always bothered me about Wasm and browser 3d/2d graphics is that I often find minor issues such as scrolling. Look at the example called "Background scrolling & parallax" here: https://www.raylib.com/examples.html

I've tested on several devices and it's definitely not smooth scrolling, unless there's something wrong with my eyes.

How can 2D smooth scrolling not be a solved problem in 2024?

modeless 4 days ago [-]
In that sample the foreground scrolls perfectly smoothly for me, but the background looks jittery. This indicates to me that it's not a platform issue at all. That sample is just doing something weird with the background.
joeld42 4 days ago [-]
This jitteriness is because the sample doesn't have antialiasing enabled (since it's pixel art) and the background scrolling is 0.1 pixels per frame, which means every 10 frames it snaps 1 pixel. The scrolling is also updated by a fixed amount per frame instead of looking at deltaTime, so if there are lags or small differences in frame time, this might look choppy.

But I think it's more meant to demonstrate drawing parallax layers rather than subpixel scrolling.
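
(Roughly what "looking at deltaTime" means in raylib terms; `background` is an assumed Texture2D, and note that for pixel art you'd still need to decide what to do with the fractional position.)

    float scrollX = 0.0f;
    const float scrollSpeed = 6.0f;  // pixels per SECOND (0.1 px/frame at 60 fps)

    // each frame: advance by elapsed time instead of a fixed per-frame step
    scrollX += scrollSpeed * GetFrameTime();
    if (scrollX >= (float)background.width) scrollX -= (float)background.width;

    // draw at the fractional position; rounding here would reintroduce
    // the 1-pixel snapping described above
    DrawTextureEx(background, (Vector2){ -scrollX, 0.0f }, 0.0f, 2.0f, WHITE);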

sgt 3 days ago [-]
So by modifying it to look at delta time it would be smooth?
sgt 4 days ago [-]
Yes, the background is odd but the foreground is definitely not smooth. I see small little jitters occasionally. At one point I had to wait 15 seconds for it to jitter, though.
jay_kyburz 4 days ago [-]
Yes, the back- and foreground are quite jittery for me on Firefox, and I'm almost certain it's the browser's own requestAnimationFrame that's the problem.

Update: Although, having a closer look at the scene, I see it's pixel art, so I bet the author is snapping floating point positions to a pixel point to prevent sub-pixel blurring.

Another small update: I was sure requestAnimationFrame was locked to 60 fps, but I noticed on Chrome the other day it was 144 Hz, the full speed of my monitor.

kragen 3 days ago [-]
yeah, http://canonical.org/~kragen/sw/dev3/qvaders is unplayable on 120-hertz monitors because i'm running the game physics from raf ;)
sgt 2 days ago [-]
Also noting that it's choppy on iPhone, but it's really barely noticeable. Cool little game!
kragen 2 days ago [-]
thanks! the choppiness is for the same reason; if the frame rate drops temporarily due to cpu load or whatever, the vaders move slower
dekhn 4 days ago [-]
the answer to your last question is "inner platform effect" and "second system effect".
flohofwoe 3 days ago [-]
Because it's a surprisingly tricky topic on modern operating systems [1], and even trickier in web browsers (smooth scrolling was actually much easier to achieve on hard-realtime systems like the 8- and 16-bit home computers of the 80's and early 90s).

TL;DR: if you base your animation- or scroll-speed on the 'raw' measured time between two frames, you'll get micro-stutter because it's pretty much impossible to obtain a non-jittery frame duration on modern operating systems or web browsers, all you can do is try to remove the noise via filtering, or 'align' your measured frame duration with the display refresh interval, which on some platforms cannot be queried.

In web browsers the most important problem is that you can't measure the exact frame duration (which is fallout from Spectre/Meltdown), or obtain a precise 'presentation timestamp', or even query the display refresh frequency.

Even in the native OS APIs that provide a presentation timestamp (like DXGI on Windows or CVDisplayLink on macOS) that timestamp has considerable jitter and has not much to do with the actual presentation time when the frame becomes visible to the user.

And as soon as you base your animation timings on such a jittery timestamp you'll get micro-stutter (the easiest way to get smooth animation is actually to assume a fixed frame duration, but then your animation speed will be tied to the display refresh rate).

It's often possible to eliminate the timing jitter with 'noise removal' filters or just tracking an average over the last couple dozen frames, but those then may behave funny in situations where the frame duration changes drastically (such as moving a window between displays with different refresh rate, or when rendering stops and then resumes because the window or browser tab is fully obscured and then becomes visible again).

PS: Raylib's frame pacing code on the web is also a bit on the crude side [2].

...e.g. it just sleeps for 16 milliseconds, and relies on ASYNCIFY to enable a traditional render loop in browsers. It would actually be better to use a frame callback via requestAnimationFrame (or the Emscripten wrapper function emscripten_request_animation_frame), but this means giving up the cross-platform 'own the game loop' application model. Not that requestAnimationFrame alone solves any of the above mentioned time jitter problems though.

[1] https://medium.com/@alen.ladavac/the-elusive-frame-timing-16...

[2] https://github.com/raysan5/raylib/blob/f1007554a0a8145060797...
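
(A sketch of the "average over the last couple dozen frames" filter described above, including a crude reset for drastic changes. All names are invented; this is not raylib or browser API.)

    #define HISTORY 32

    static double history[HISTORY];
    static int histLen = 0, histPos = 0;

    double smoothed_frame_duration(double rawDt)
    {
        if (histLen > 0) {
            double avg = 0.0;
            for (int i = 0; i < histLen; i++) avg += history[i];
            avg /= histLen;
            // drastic change (different-refresh display? tab was hidden?):
            // the old history is now misleading, so drop it
            if (rawDt > avg * 2.0 || rawDt < avg * 0.5) histLen = histPos = 0;
        }
        history[histPos] = rawDt;
        histPos = (histPos + 1) % HISTORY;
        if (histLen < HISTORY) histLen++;

        double sum = 0.0;
        for (int i = 0; i < histLen; i++) sum += history[i];
        return sum / histLen;  // use this, not rawDt, to advance animations
    }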

samiv 3 days ago [-]
Thanks for this comment, this thing you mentioned "micro stutter" is something that has had me really scratching my head in my game engine project!

Do you have any comment on whether frame blending (actually using a mix of two frame states to produce the rendering) would be a workable solution?

https://github.com/ensisoft/detonator

sgt 3 days ago [-]
This is a great answer, by the way. I now have better insight into what happens in these animation loops.
jakubtomsu 3 days ago [-]
I switched from raylib to Sokol some time ago and it's the best. Super simple and single header C code, no dependencies, cross platform code and shaders, etc etc. I've shipped a steam game and I plan to continue using Sokol in the future.
user432678 3 days ago [-]
Do you mind sharing a link or the game's name? Despite all the advice out there to pick up a game engine, I too am pursuing the goal of making a game from scratch, using simple parts of C++ and SFML for now, but was also looking at SDL and Sokol. What was your experience? Are you a solo dev? Was it worth it?
wilberton 3 days ago [-]
Not the parent, but I also use sokol as the base for my engine. I've shipped 3 3d web games with it, and honestly it was the best technical decision I ever made! There's just so much to be said for owning all your dependencies (as far as possible, sokol is still a dependency but it's way more manageable than a 3rd party game engine)
user432678 3 days ago [-]
Do you mind sharing your games' details? No judging, I promise :) I just need examples from people like me and not AAA studios, for motivation's sake. I literally was called an idiot for not choosing Unity or Unreal, and was turned down for collaboration by non-tech people just for that. The game dev landscape is crazy these days.
wilberton 3 days ago [-]
Whether using an existing engine or rolling your own is the best option totally depends on the type of game you want to make. This is one of the games I've made with my engine: https://poki.com/en/g/super-tunnel-rush

My one tip: only make enough of the engine to support the game you're writing, and write the game at the same time as the engine (don't think you need to finish the engine before you can start writing the game, that route doesn't work out well :)
user432678 3 days ago [-]
This looks amazing, and I was surprised it worked really well on mobile too!
nextaccountic 4 days ago [-]
> I agree entirely with the author on the limitations of Raylib.

Wow this is kind of insane. About this

> Raylib doesn’t do basic parameter validation, by design. This function segfaults when dataSize is null: (...)

The developer answered this

> For most of the raylib functions is up to the user to validate the inputs, if raylib should consider all possible bad-use scenarios it would require reviewing most of the library functions and it will increase source-code complexity.

lelanthran 3 days ago [-]
I think the word 'insane' is going too far to describe the behaviour of the specified function.

It returns an array of bytes. If you, the programmer, wrote a line that called that function, on the very next line you are going to try to use the array, realise that you don't know the length, and realise that the `NULL` you passed in on the line above is probably the output for the length!

In order to actually write a call with `NULL` for the dataSize argument, the programmer needs to be clueless about how to write a for loop.

So, no, I can't easily see a situation where a programmer accidentally uses a `dataSize` parameter of `NULL`, because that would mean they don't know that arrays in C have no length information, which is C 101.

uecker 3 days ago [-]
Arrays in C have length information. Pointers do not.
shric 3 days ago [-]
> Arrays in C have length information. Pointers do not.

This is 100% correct. The parent of the parent is confused (or is just using terminology incorrectly) as are the current siblings to my comment.

One post claims pointers carry length information so that free works. This has nothing to do with pointers; it has to do with the implementation of malloc and friends. It's only a promise of a successful malloc that you can free, nothing to do with the pointer other than that it was returned by malloc. Pointers can point to all sorts of things; the only information they intrinsically have is the type of the object they point to and its address.

Two posts claim a function can return an array of bytes. This is impossible in C. You can't return an array of bytes. You can return a pointer that might point into a contiguous sequence of bytes. You could also return a struct whose member is an array, but that's rarely done. When you return (or pass) what looks like an array syntactically, it is converted to a pointer to its first element.

Arrays, on the other hand, always carry length information via the sizeof operator. A special hand-waving case would be that you could define a pointer to an array (not a pointer to the first element of an array), so in effect the length is contained within the pointer's type. E.g. char (*foo)[42] declares foo as a pointer to an array of char with length 42.
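
(A small self-contained demonstration of the above, assuming nothing beyond standard C:)

    #include <stdio.h>

    void takes_array(char a[42])       // parameter adjusts to char *a
    {
        printf("%zu\n", sizeof a);     // size of a pointer; compilers warn here
    }

    int main(void)
    {
        char arr[42];
        char (*parr)[42] = &arr;       // pointer to an array of 42 char
        printf("%zu\n", sizeof arr);   // 42: the array carries its length
        printf("%zu\n", sizeof *parr); // 42: length lives in the pointer's type
        takes_array(arr);              // decays to a pointer; length is gone
        return 0;
    }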

lelanthran 3 days ago [-]
> Arrays in C have length information. Pointers do not.

Sure, but they don't carry that length information when being returned from a function.

pjmlp 2 days ago [-]
Pity that they decay into pointers all the time.

So that is pretty much worthless outside the current scope where they were declared.

SJC_Hacker 3 days ago [-]
> Arrays in C have length information.

Not in any sense that is actually useful to the programmer, at runtime at least. You cannot ask an array how big it is.

Pointers actually do have length information as well, otherwise free() would not work. But it's the same deal with arrays - there is no way to access it outside of compile-time constants, if those are even present.

jstimpfle 2 days ago [-]
Not every pointer works with free(). Most don't.
kragen 3 days ago [-]
in c and c++ this is not just normal but almost unavoidable. if your function takes a t* and someone casts a random int to a t* and passes it in, it is gonna segfault. no possible way to validate it, though in theory you could open /proc/self/maps and iterate through it to catch the segfault cases, or install a segfault handler
pjmlp 2 days ago [-]
Even that isn't foolproof.

Microsoft has deprecated all pointer validation functions on Windows, because there were plenty of corner cases regarding pointer validation.

kragen 2 days ago [-]
aha, thanks. yeah obviously /proc/self/maps won't help much there
lelanthran 3 days ago [-]
> if your function takes a t* and someone casts a random int to a t* and pass it in, it is gonna segfault.

True, that is not something a function ought to defend against.

However, the complaint was not about that; it was about not checking if the dataSize parameter is NULL.

I don't really have a problem with functions that segfault when given NULL pointer parameters as long as this is clearly documented!

Sometimes, however, you just need to be familiar with enough projects in C that common sense gets built. In the specified example:

     unsigned char *LoadFileData(const char *fileName, int *dataSize);
I'd expect that a function that returns an array needs to tell the caller how large that array is. This specific function is not one I would complain about.

These things are usually documented in the headers. In this case it says:

      // Load file data as byte array (read)
So, yeah, it's pretty clear to me that the number of bytes has to be returned somewhere, and there's only one parameter called `dataSize`, so this isn't something I consider to be a valid complaint.

[EDIT: Escaped pointers, and added last paragraph]
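
To make the intended call pattern concrete (a sketch; the file name is a placeholder):

    int dataSize = 0;
    unsigned char *data = LoadFileData("assets/thing.bin", &dataSize);
    if (data != NULL) {
        for (int i = 0; i < dataSize; i++) {
            // ...consume data[i]; without dataSize there is no loop bound
        }
        UnloadFileData(data);
    }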

naasking 3 days ago [-]
> Sometime, however, you just need to be familiar with enough projects in C that common-sense gets built.

And if every C project takes this same philosophy to poor documentation, how exactly is a new person learning C supposed to build that common sense? Brushing off poor documentation with "you just need more experience" is not a valid response. The documentation for that function could be much clearer without being much longer.

lelanthran 3 days ago [-]
My point is not that it was good or acceptable, my point is that, while not perfect, it isn't bad enough to warrant the moniker of `insane`.
kragen 3 days ago [-]
yeah. unlike an arbitrary garbage pointer, a null pointer is something the callee could detect. but on linux or windows the operating system will reliably detect it for you, and in a way that makes it a lot easier to debug: it segfaults, making it obvious that there's a bug, and your debugger will show you the stack trace leading up to the LoadFileData call with the null pointer parameter. and, as you point out in your other comment, in the very next line after your call to LoadFileData, you need the data size, and if you passed in a statically null pointer, you don't have it, so you realize you're going to need to pass in a pointer to an actual int

so, while the documentation could and should be more explicit than the single telegraphic line you quote, the function's behavior is fine, and detecting and trying to handle the null pointer would make its behavior worse and harder to debug (except on like an arduino or ms-dos or something)

not every c api description needs to be an introductory tutorial for c

defrost 3 days ago [-]
> unsigned char

WARNING unpaired | unescaped *'s !! :-)

grrr, markup

lelanthran 3 days ago [-]
Thanks, fixed.
pests 4 days ago [-]
IIRC it defines some common words too like all the color names and uses a lot of names that should be prefixed. Good otherwise.
diath 4 days ago [-]
At least they're all-caps, but as somebody that writes C++ and uses Raylib, I just wrapped it in a namespace in my project that I include, like so (note that cstdio must be included before raylib if you're using it from C++):

    #pragma once

    #include <cstdio>

    namespace raylib {
        #include <raylib.h>
    }
SoKamil 4 days ago [-]
> The Shapes are kept in a statically allocated array [...] Can’t fail to allocate, can’t be leaked, no fluff. Lovely. The 100 shape limit wasn’t limiting in practice. With very little time to optimize the renderer, the framerate would drop before you even got to 100 shapes.

That's the best example of avoiding premature optimization I've seen in a while.
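
(The pattern from the quote is only a few lines of C. A sketch; the Shape fields here are invented, the article's actual struct differs.)

    #define MAX_SHAPES 100

    typedef struct { float x, y, z, radius; } Shape;  // invented fields

    static Shape shapes[MAX_SHAPES];  // lives in the data segment: no malloc
    static int shapeCount = 0;

    Shape *add_shape(void)
    {
        if (shapeCount >= MAX_SHAPES) return NULL;  // the one failure case
        return &shapes[shapeCount++];               // nothing to free, ever
    }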

Narishma 3 days ago [-]
I think this is the opposite. It's avoiding premature abstraction/generalization.
bruce343434 3 days ago [-]
But finding abstractions and generalizations optimizes scratching that itch inside our heads
HumblyTossed 3 days ago [-]
Best example of those who do vs. those who sit around bickering over HOW to do.
runevault 4 days ago [-]
Super interesting post, and I appreciate him talking about the various decisions, like his handling of memory (and the issues he ran into with raylib). As someone who's finally diving into part 2 of Crafting Interpreters (and using it to refresh myself on C), being reminded of what C does well is great.
ttul 2 days ago [-]
Long ago, I worked on the operating system for a desk phone. With only 64K of RAM, there was no dynamic memory management whatsoever. We made heavy use of static variables and let the compiler figure out how to allocate everything at compile time.

It’s easy to forget that many applications probably don’t need dynamic memory management at all. You can often get away with allocating a few fixed size buffers and just handling the edge cases nicely when those buffers are full.

And in such a context, C is indeed a whole lot safer. No memory leaks. Your only concern is buffer overflows, which can be managed through careful use of sizeof when all of your variables are statically allocated. I’m not saying Rust and Go aren’t great options these days, but humble old C still works and doesn’t have to be nightmarishly complex.
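
(A sketch of that style in plain C, assuming only the standard library; `lineBuf` and `set_line` are invented names.)

    #include <stdio.h>

    static char lineBuf[128];  // fixed capacity, decided at compile time

    void set_line(const char *input)
    {
        // sizeof gives the real capacity because lineBuf is a true array in
        // scope; snprintf truncates instead of overflowing on long input,
        // which is the "handle the buffer-full edge case" part
        snprintf(lineBuf, sizeof lineBuf, "%s", input);
    }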

fallingsquirrel 4 days ago [-]
I really love the live demonstrations in the video. Forget building the app, I couldn't even produce that video in a week if I tried.
dhooper 4 days ago [-]
The video took me longer to make than the app! I don't know how youtubers do it so regularly.
an_aparallel 4 days ago [-]
They have a team :)
movedx 4 days ago [-]
Not all of us do. If you can get to an average of 300k+ views per video then you can hire an editor and maybe someone else on a contract basis to assist. Most of us don't have a team though.
djmips 4 days ago [-]
Yeah, sometimes they eventually hire an editor, but I think for channels like Cutting Edge Engineering it's a full-time job for one of the two people to do the filming and editing.
drtgh 4 days ago [-]
Off-Topic:

Glad to see for the first time a WebAssembly interface where the text does not look blurry. I repeat, it is the first time.

This extends to programs and some operating systems (such as Windows): in the past few years there has been a pervasive issue with the text rasterization methods that have become a common trend and default setting.

Unfortunately, users often do not have the option of turning off anti-aliasing to get sharp text, and in the rare cases where this option is available, the interface (menus, etc.) still uses anti-aliasing.

themerone 3 days ago [-]
The crisp text is unimpressive. The typeface has no curves and there is no smoothing. That text will be crisp and blocky at any resolution.
flohofwoe 3 days ago [-]
> The crisp text is unimpressive.

Not if you actually tried to achieve that in practice across all browsers and display configurations ;)

For instance, Safari had for the longest time a hardwired (i.e. not fixable via CSS) linear filter applied when a WebGL canvas had to be upscaled. It only got fixed last year or so.

Also, unfiltered upscaling breaks down when fractional scaling happens outside the browser; the result will generally look horrible and there is no good way to fix it. The only workaround is to hide the scaling artifacts behind a filter, which then makes everything look blurry.

Browsers running on top of Wayland most likely still have all those issues.

drtgh 3 days ago [-]
The crisp text may be unimpressive in the sense that a graphics library targeted at games is being used, so it's using something homologous to pixel fonts. But it is not the norm.

I'm simply increasing awareness among the development community about the problems with text rasterization in applications and OS interfaces. I'm glad not to see blurry text.

TachyonicBytes 3 days ago [-]
Why is there a link between WebAssembly and rasterization? Genuinely curious, it seems interesting
kuon 3 days ago [-]
I really like this kind of project. I still like the low level of C. Now I work with Rust a lot, and Elixir/Erlang, but I often miss the simplicity and explicitness of C. For this, I use Zig a lot too. It is a very nice improvement over C while keeping a lot of its philosophy.
nesarkvechnep 2 days ago [-]
Erlang is a pretty simple language too.
rkagerer 3 days ago [-]
You could certainly make it harder on yourself by malloc-ing each Shape individually and storing those pointers in a dynamic array. Using a language like C# ... would force that allocation structure.

What's stopping you from using a fixed array of structs in C#, just as the author has done in C?

kronovecta 3 days ago [-]
Nothing. In C# it's not unusual to use struct arrays in this way.
OskarS 3 days ago [-]
Probably confusing it with Java, where the only value types are the basic ones (unless you store each individual coord in a different array).
icoder 3 days ago [-]
Still, in Java you could avoid the (under-the-hood) memory allocation and garbage collection by reusing the same objects. If you wanted to, of course; it probably takes a bit more effort/caution on the developer's side, but may provide some improvements in performance in very specific cases/situations.
OskarS 3 days ago [-]
The primary concern with this isn't necessarily the extra allocation and GC (though that plays a part too), it's the fact that you're chasing a pointer for every object. That's murder for the cache. MUCH faster to just have the data in-line in hot loops like this.
pjmlp 2 days ago [-]
If Valhalla ever lands, you will be able to do as in C#, D, ...

Until then, the best way currently, is to make use of Panama to create a C struct like memory layout, and have accessor methods for the low level details.

neonsunset 2 days ago [-]
Trivially solved by C# - you can write an algorithm implementation with the same data structures that would perform the same as in C, C++ or Rust but often with more convenience.
antirez 4 days ago [-]
I hope somebody will continue this project. It's a few months away from being a serious alternative to Blender / FreeCAD for certain use cases, with a much softer learning curve.
OskarS 3 days ago [-]
You should check out MagicaCSG, which is a more sophisticated version (though still free!) of this: https://ephtracy.github.io/index.html?page=magicacsg#ss-caro...
captainhorst 3 days ago [-]
There's also https://womp.com as an SDF modeler
turtledragonfly 4 days ago [-]
EDIT aha, the program already supports exporting to a mesh via marching cubes; see the youtube video on the site. I hadn't realized that (:

----

Be aware that since it fundamentally works with SDFs, it is a somewhat different modeling experience (and stores different data) than traditional meshes with triangles, verts, etc.

Transforming it from SDFs into meshes could be done with marching cubes or similar, but you'd likely need to "clean up" such data afterwards in a Blender-style app anyway.

SDFs are great though, if your renderer is SDF-based, too (most are not).

[sorry if you knew this already, wasn't sure]

antirez 4 days ago [-]
Yep, the export is already done! I used TinkerCAD a ton for things way more complex than it should be used for, so even when I use more advanced CADs at this point I tend to think in SDF terms. For many things it's faster and more natural than extruding, rotating, ... But the fact is, the engine behind TinkerCAD is quite good; there is just little interest for Autodesk to compete with its own Fusion 360, so TinkerCAD is left forever as a children's / beginners' tool, without the more advanced stuff that it could implement.
johnnyanmac 2 days ago [-]
>SDFs are great though, if your renderer is SDF-based, too (most are not).

Thanks for reminding me that one of the best SDF renderers in the industry is stuck behind Sony, deemed a failure because Sony didn't want to publish it to PC (where all the dev scene is).

fsloth 3 days ago [-]
If you like SDFs, Womp is pretty nice for starting out. Tinkercad is another pretty good beginner "cad".
moarinfoszszz 4 days ago [-]
Check out Dune3D and Salome-Platform
adastra22 4 days ago [-]
Blender maybe, but not CAD work unfortunately.
nineteen999 4 days ago [-]
Not even Blender. Not even close.

No disrespect to the author, it's impressive, but Blender gives you extreme control over mesh topology and this doesn't.

adastra22 4 days ago [-]
Sometimes that’s what you want though. This is good for “digital clay modeling.”
nineteen999 3 days ago [-]
So is ZBrush, which is why it's the industry standard for sculpting for VFX and games.

You still have to get decent topology out of the modelling package, and marching cubes aren't going to give you that. If you have to retopologize your sculpted model by hand to make it viable for use in a rendering pipeline you might as well have just started with good topology in the first place and save yourself the time.

naasking 3 days ago [-]
Meshes in Blender have finite resolution, SDFs do not. They can scale to an extent that simply isn't possible with meshes.
dahart 2 days ago [-]
I’m not sure that resolution helps with topology control at all. If true, more resolution doesn’t make SDFs more controllable.

But I don’t think it’s true. Why do you say SDFs don’t have finite resolution? There are no infinite precision SDF implementations. Meshes as a concept have no resolution limits either, you could in theory store verts in infinite precision, but in practice people usually use floats. It is the same with SDFs. In practice, the resolution limits are identical - it’s whatever you can store in a 32 bit float. It’s extremely common with SDF ray marchers to have a much higher threshold than the smallest fp32 delta, so I would say in practice, meshes probably have higher resolution than SDFs most of the time. And whatever, because it never matters. Nobody is pushing either SDFs or meshes to their resolution limits, and nobody wants that because you get visible quantization artifacts with both meshes and SDFs when you come anywhere close to the resolution limits.

naasking 2 days ago [-]
> There are no infinite precision SDF implementations.

But it's relatively trivial to make one. SDFs are technically polymorphic over the concrete number type used. They are intrinsically infinitely scalable and of infinite resolution, and you then compute an approximation that can be rendered according to the output constraints/requirements.

Meshes on the other hand are intrinsically finite. You've already baked in a finite precision at design time. If you're outputting a mesh, you're throwing away information.

It's like the difference between vector and raster graphics. Vector graphics clearly have superior properties when it comes to scaling and other transformations. Raster images still find more use, but vector graphics are starting to see more use for good reasons.

> whatever, because it never matters.

It's starting to matter more, particularly in CAD modelling where model reuse, parameterization by finite element analysis and other physics models, etc. are starting to align better with SDF.

I won't pretend to know as much about games or CGI in film, but I expect as the tools mature, it will just make sense to start adopting them there too.

dahart 22 hours ago [-]
I don’t think you understood my point. You are comparing the implementation of meshes to the ideal unimplemented theoretical concept of SDFs. It’s not a useful or accurate comparison. In practice, SDFs always have finite precision, just like meshes do.

> It’s like the difference between vector and raster graphics.

I get what you’re trying to say, but this analogy isn’t great. The difference between vector and raster is more like the difference between an SDF and a voxel grid. A mesh is really, literally, a type of 3d vector graphics, and so is an SDF. The difference between meshes and SDFs is the difference between vector graphics with line segments and vector graphics with CSG and round shapes. There is no inherent quantization of space with either meshes or SDFs like there is with raster images or voxel grids.

SDFs are not intrinsically infinitely scalable any more than meshes are. If you write an SDF to disk, the numbers representing the locations and sizes of its components will be rounded to fp32 just like the verts of a mesh typically are.

Meshes are not intrinsically finite, no information is being thrown away during the output process. The implementation of a mesh modeler does usually use floats, and so precision does have limits, but there is nothing about the concept of a mesh that requires using floats. Just think about it: one can, if one wishes, use infinite precision numbers to represent a mesh, there is nothing about the idea of a mesh that precludes infinite precision. The very same is true of SDFs. One can use infinite precision, but it in fact has never yet happened, all SDF implementations so far have used finite precision numbers (both to represent and to render SDFs).

> It’s starting to matter more

You started talking there about something unrelated to my point, and aren’t responding directly to what I said. I wasn’t arguing about the usefulness of SDFs at all, and SDFs having finite resolution doesn’t compromise their utility. SDFs aren’t useful because they have higher resolution, they are useful because they are a different kind of primitive with different properties, they have tradeoffs and so can meet certain goals better than other primitives.

nineteen999 3 days ago [-]
Wake me up when the deformable assets in VFX and video games etc. are represented by SDFs instead of meshes.

How do you rig an SDF for animation (eg. skeletal, for a humanoid figure)?

johnnyanmac 2 days ago [-]
To be fair, I don't exactly think the mesh system was made with deformation in mind either. We kind of just hammered on rigs/skinning on top of it, then learned all the ways not to model for that workflow.

Shame that NURBS never really took off in the gaming scene. I get it from a hardware standpoint (GPUs like triangles, and we can cut quads into triangles), but NURBS solve so many problems that triangular meshes just hack their way around.

>How do you rig an SDF for animation

I imagine with the most unholy mathematical models imaginable. Dreams did it, so it's not impossible. But the results seen definitely don't measure up to a AAA standard (at least in Dreams).

nineteen999 2 days ago [-]
Right, but the skeletal mesh workflow has been in production use and being developed since the late 1990's. It's extremely mature, and a mind-boggling amount of $$$ has been spent on optimizing it and designing hardware around it since that time.

I appreciate all the tangential discussions, but the two key points I'm trying to make are:

1) SDFs are not likely to supplant detailed hand-modelled/cleaned-up meshes any time soon, but will continue to be used alongside and in conjunction for all kinds of other cool stuff (soft shadows, ambient occlusion, fonts, etc., as seen in UE4/UE5)

2) ergo, a modeller based around SDFs is not going to replace mesh-based modellers like Blender, Maya etc. anytime soon, even to a small degree.

johnnyanmac 2 days ago [-]
So, sunk cost fallacy? I suppose.

I don't think anyone fundamentally disagrees with you, but an author making a toy 3d modeler in a week usually doesn't have the goal to disrupt an entire industry. It's a nice place to consider on a fundamental level what tools like these could and couldn't do that current work flows cannot.

naasking 3 days ago [-]
The signed distance function defining a solid can be parameterized by anything. Common ones are results of a finite element analysis, thereby allowing you to make the structure thinner where it doesn't need strength.

In the case of animation, parameterize it by a global clock, and each frame the objects are in slightly different positions dictated by a rough physics model (like a person's skeleton).
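
(A toy illustration of that idea in C: a sphere SDF whose center is a function of a global clock t. The circular motion here is a stand-in for whatever the physics model would output.)

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    // signed distance from point p to a unit sphere whose center moves with t
    float sphere_sdf(Vec3 p, float t)
    {
        Vec3 c = { sinf(t), 0.0f, cosf(t) };  // animated center
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        return sqrtf(dx*dx + dy*dy + dz*dz) - 1.0f;  // radius 1
    }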

nineteen999 3 days ago [-]
Bad topology generated by marching cubes from an SDF is not going to replace hand tuned topology for animated meshes in VFX and video games in the near future. Even highly dense Nanite meshes in UE5 don't support skeletal deformation yet.
freecodyx 4 days ago [-]
I sometimes think, that c is all we need
yazzku 4 days ago [-]
I write C all day and enjoy every second of it. I assume there are many like myself. There's just nothing new to make noise about, and that's a good thing, other than small quality-of-life improvements in the standard.
lionkor 3 days ago [-]
you don't need that comma fyi

While I fully support liking one language, it wouldn't feel right for me not to mention the benefits other languages bring, such as GC for large dev teams of lower-skilled devs, languages with built-in unit tests, languages with templating or less bad macros, etc.

ngcc_hk 4 days ago [-]
Really agree with his assertion about C. More so his "Its syntax doesn't hide complex operations. It's simple enough that I don't have to constantly look things up", and further, if you need to look up something about C, it is very easy and very informative. A simple and old language has its benefits.
ederamen 4 days ago [-]
Just started using Raylib, bummed to hear about the limitations!

As a novice C programmer, the simplicity and immediacy of results opened my eyes to how C can feel as productive as higher level languages with robust standard libs.

lelanthran 3 days ago [-]
> As a novice C programmer, the simplicity and immediacy of results opened my eyes to how C can feel as productive as higher level languages with robust standard libs.

TBH, once you have halfway-good libraries for dealing with `char *` strings as-is, dynamic arrays and hashmaps, you are not going to be much more slowed down using C than using a higher-level language.

You even get much stronger isolation guarantees than most other high-level languages, while getting much more compatibility[1] with any other language you may wish to interface to: https://www.lelanthran.com/chap9/content.html

[1] I did a little Go project, and it annoyed me slightly when I wanted to do performant FFI. For Go, I think the situation has improved since I last checked, though.
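
(For a flavor of what such a "halfway-good" dynamic array might look like, a minimal sketch; a real library would add error reporting and type-safe wrappers.)

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *data;
        size_t len, cap, elem;  // element count, capacity, element size
    } Vec;

    void vec_init(Vec *v, size_t elem) { memset(v, 0, sizeof *v); v->elem = elem; }

    int vec_push(Vec *v, const void *item)
    {
        if (v->len == v->cap) {  // grow geometrically when full
            size_t cap = v->cap ? v->cap * 2 : 8;
            char *p = realloc(v->data, cap * v->elem);
            if (!p) return -1;   // leave the vector intact on failure
            v->data = p; v->cap = cap;
        }
        memcpy(v->data + v->len * v->elem, item, v->elem);
        v->len++;
        return 0;
    }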

neonsunset 3 days ago [-]
You can trivially replace Go with C# in order to get almost zero-cost FFI (you can make it fully zero-cost with additional effort but even the baseline is better than the alternatives, hell you can statically link other .lib/.a's into AOT compiled .NET binaries).
naasking 3 days ago [-]
> TBH, once you have halfway-good libraries for dealing with `char *` strings as-is, dynamic arrays and hashmaps, you are not going to be much slowed-down using C than using a higher-level language.

That can't possibly be true. Not having to even think about object lifecycles and ownership because all memory is GC'd saves a lot of time all by itself, not even getting into debugging issues when you get it wrong.

lelanthran 3 days ago [-]
> That can't possibly be true. Not having to even think about object lifecycles and ownership because all memory is GC'd saves a lot of time all by itself, not even getting into debugging issues when you get it wrong.

I think perhaps the context may make it clearer: it was about simplicity.

>>> the simplicity and immediacy of results opened my eyes to how C can feel as productive as higher level languages with robust standard libs.

So, sure, no one is saying that you'll be faster in C, but with such a small cognitive footprint, you can be faster than you'd think.

When programming in C, I don't spend much time thinking about the language, I think about the problem more than the language. I don't think about complex relationships between language features; about what might happen if I use a reference in a lambda. I don't need to remember what the `this` keyword refers to depending on how the function was created. I don't need to puzzle my way out of a painted corner due to colored functions.

It's the simplicity that I was responding to. You go faster than you would expect.

As far as object lifecycles go, there's a small number of idiomatic ways to mitigate the problems. Not foolproof, but with such a simple language, whatever valgrind reports can be quickly fixed.

Regarding ownership: I'm not really aware of how GC languages, by way of being GC, help there. I'm pretty certain they don't. If you pass an object to a method in Java, C#, whatever, and that method starts a new thread with it, you're still going to be boned if the callee starts modifying it at the same time.

Whatever ownership issues you have in C, you'll have in most other GC languages as well.

naasking 3 days ago [-]
> So, sure, no one is saying that you'll be faster in C, but with such a small cognitive footprint, you can be faster than you'd think.

I would agree that you can be faster than you'd think on problems that C is reasonably good for. This is a fairly small subset of problems though, where your original comment was phrased like a general statement for any sort of problem / general purpose programming. That's what I take issue with.

If you're going to do any kind of programming that depends on interfacing with the world, UTF, protobufs, even rendering to a screen as with this article, you're going to be pulling in those same sorts of dependencies that you denounce from all of those other languages.

> Whatever ownership issues you have in C, you'll have in most other GC languages as well.

I agree you have similar thread safety issues; the ownership issues I was referring to were about managing lifetimes, leading to double-frees or leaks. Yes, there are some idioms that almost work, but "almost work" is exactly the point; in GC'd languages they actually do just work.

I understand the appeal behind the economy of C but we just shouldn't pretend it's something that it's not.

neonsunset 3 days ago [-]
> If you pass an object to a method in Java, C#, whatever, and that method starts a new thread with it, you're still going to be boned if the callee starts modifying it at the same time.

Which is why the concept of thread-safe types exists. The documentation usually explicitly states if a particular container/service/anything else is safe to call from multiple threads concurrently or not, and the standard library goes to great lengths to either make the types that are expected to be used in multi-threaded scenarios thread-safe or to offer thread-safe alternatives (like ConcurrentDictionary<K, V>).

Worst case, most types are engineered in such a way that should thread-safety be violated, it would either lead to an exception or an invalid state but not to memory safety issues.

jesse__ 3 days ago [-]
It's definitely true.

Memory allocation takes a very small percentage of my time. I'd guess < 1%. Debugging memory issues used to take me more time, but these days it's basically inconsequential, too.

Having written in interpreted languages before jumping to C & C++ (>10y programming exp), I can say there's a massive cost to using interpreted and/or GC'd languages that comments like this one never seem to acknowledge: random package bit-rot, high-difficulty memory leaks, high-difficulty AND high-consequence manual memory management (to avoid the GC), poor performance tooling, nearly completely opaque performance characteristics, inability to optimize performance past a surface level... etc.

If you're building anything more complex than simple web apps, this shit all adds up to a lot. I've worked at a couple shops that these issues hit like a ton of bricks.

pjmlp 2 days ago [-]
The joy of debugging memory corruption issues in production, while helpdesk keeps pinging the dev team several times a day.

Yep, I have multiple scars and gray hairs due to them.

lelanthran 20 hours ago [-]
> Yep, I have multiple scars and gray hairs due to them.

Look on the bright side: at least you have your hair :-)

I swore off C++ because (maybe coincidentally, maybe not) I started losing hair around the time I was a f/time C++ developer.

I'm liking Go and C# (and some Java and Kotlin, sometimes) these days for programs that are not suitable to be written in plain C.

I've tried liking Python/PHP-type languages, but getting a type error only when it is encountered in production gives me the heebie-jeebies.

neonsunset 3 days ago [-]
What kind of GC-based languages were the source of issues for these use cases? Was C# among them (in any recent time)?
lelanthran 3 days ago [-]
I've not used C# in recent years.

While I actually liked the language, its complexity is increasing with diminishing returns.

There's no point in getting to a complexity level of, for example, C++ for any language - people who want such levels of complexity will be happy to use C++.

neonsunset 3 days ago [-]
While it certainly will eventually die C++'s death (can't remove language features, only add), it's luckily far from that predicament today.

I don't think you could reasonably compare the two in the amount of tacit knowledge one has to possess to avoid all kinds of footguns and get the best results. The rule of thumb today for C# is to go with the simplest way to do something and not ignore IDE/analyzers' suggestions or warnings. A lot of focus has been put on terseness and simplicity.

jesse__ 3 days ago [-]
The experience I was referring to was with JS, Python and Ruby.
naasking 3 days ago [-]
Literally all of the costs you list apply to C/C++ as well, except you have the additional hazards of having to worry about memory safety and leaks all of the time rather than only once every 5 years. Sorry, I don't find your claims plausible at all. It's just too easy to forget what you actually spend your time on.

Edit: and the most significant evidence for this is in comparing all the CVEs for C/C++ vs. memory safe languages like C#/Java.

> I've worked at a couple shops that these issues hit like a ton of bricks.

What you're missing is that 99% those shops wouldn't have existed at all if they had tried to go the C/C++ route because their products just wouldn't have gotten to a viable state. What your experience shows is that working in memory safe languages is so much easier that even average or mediocre programmers can get a viable product.

jesse__ 3 days ago [-]
In my experience, GC'd languages leak much more frequently because people figure 'oh, the GC will take care of it for me.'

There are excellent tools for detecting memory leaks/safety issues in C, and you can even write all your own allocators for your own edification / amusement / sanity, but in GC'd languages you're pretty much fucked across the board. There's some tooling, but it pales in comparison to the tools available for C.

I would also like to acknowledge the topic of CVEs you brought up. Yes, mistakes in mission critical systems happen. And for those systems, maybe something with better memory safety features is more productive in the long run. The original comment I replied to suggested C can be surprisingly productive with just a few tools, which I stand by supporting.

> What you're missing is that 99% those shops wouldn't have existed at all if they had tried to go the C/C++ route [...] average or mediocre programmers can get a viable product

Hard disagree. The two places I have in mind hired average/mediocre people to do somewhat challenging graphics work. Both had interesting products that may have actually been viable (think matterport, figma) but both failed because the UX sucked .. due to what can only be described as UI jank.

Lastly, it is easy to forget what you spend time on. I've been tracking all my bugs that took more than 30 minutes for the last 10 years. The vast majority are graphics bugs due to API misuse. Very few are memory safety bugs, especially recently.

EDIT: also, half the costs I listed were related to performance. How the fuck do you justify the statement that those apply to C? What language would you pick to have more control over the assembly the machine is running.. other than assembly I guess..

naasking 2 days ago [-]
> In my experience, GC'd languages leak much more frequently because people figure 'oh, the GC will take care of it for me.'

"Frequent memory leaks" has never happened to me in 20 years of programming in GC'd languages.

> There are excellent tools for detecting memory leaks/safety issues in C

A process you don't even need in GC'd languages. I think I've had maybe a couple of non-critical leaks in those 20 years due to finalizer bugs.

> also, half the costs I listed were related to performance. How the fuck do you justify the statement that those apply to C?

The vast majority of performance issues are related to algorithmic choices. With the right choice of algorithms and data structures, any language will likely get within a constant factor of C.

Sometimes that constant factor matters; most often it does not, given the added costs of eliminating it, e.g. in dev time and the risk of introducing bugs or security vulnerabilities. And even where it does matter, you're almost certainly better off writing the performance-critical kernel in C and calling into it from a higher-level language, as is common in machine learning.
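
As a sketch of that split (file and function names are hypothetical): a hot loop like this, built with cc -O2 -shared -fPIC -o libdot.so dot.c, can be called from Python via ctypes or from C# via P/Invoke, while everything around it stays in the high-level language.

    /* dot.c - the only part that needs to be C */
    double dot(const double *a, const double *b, long n) {
        double sum = 0.0;
        for (long i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }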

lelanthran 3 days ago [-]
> Literally all of the costs you list apply to C/C++ as well,

But we aren't talking about C/C++.

At least, we weren't, but your comments make a lot more sense in the context of C++.

> Edit: and the most significant evidence for this is in comparing all the CVEs for C/C++ vs. memory safe languages like C#/Java.

Wasn't the most expensive RCE the world has ever seen written in Java?

ffitch 4 days ago [-]
> The project is 2024 lines of C

got to appreciate the effort to make the irony possible : )

poopicus 4 days ago [-]
As someone who has difficulty in detecting irony, could you explain the irony in this statement?
booleandilemma 4 days ago [-]
2024 is the current year and it's the same as the number of lines of code. I don't think describing it as ironic is correct though.
roland35 4 days ago [-]
It's about as ironic as rain on your wedding day.
lelanthran 3 days ago [-]
> It's about ironic as rain on your wedding day.

Ah!!! It's the Alanis Morissette meaning of irony, not the dictionary one!

vsuperpower2020 4 days ago [-]
That certainly is gregarious!
ngcc_hk 4 days ago [-]
Never got that. I'm bad with literature. Thanks.
enumjorge 3 days ago [-]
The other response is correct that this is not ironic. Roughly speaking, irony is when something happens that is the opposite of what you'd expect. A firefighter's home burning down is ironic. Sometimes irony is related to unfortunate or funny coincidences/timing, and it's easy to confuse the two. Alanis's song Ironic famously has a lot of examples of this. Rain on your wedding day--is that ironic? Maybe? You certainly hope there is no rain on your wedding day, but I don't think there's an expectation that there won't be rain. Now if your parents decided to get a divorce on your wedding day, I think that's ironic.

But the parent commenter dilutes the definition further. A project with 2024 lines of code in 2024 is just an amusing coincidence. There's no reason why you'd expect a project in 2024 to not have 2024 lines of code.

parasti 4 days ago [-]
There's something really powerful about taking the tools that you know very well and just making something cool with them. Really enjoyed this writeup, thanks.
gorkermann 4 days ago [-]
To get a look at SDF rendering in a game, check out the blue clouds on the ground in Solar Ash:

https://youtu.be/HqQpYSQDIZQ?si=vMKplmGJIGvUn_LT

koushik 4 days ago [-]
This looks cool! After 3 years in the financial technology industry working on C/C++ projects, I'm in the process of revisiting textbooks and relearning computer science fundamentals. Added raylib to my ‘explore’ list.

I love this idea of reinventing the wheel with such an explicit goal (even if it sounds counterintuitive to some). We get to rethink initial assumptions: best case, we come out with better implementations and paradigms than the existing standards; worst case, we learn the internals of the tools and techniques we use daily!

jasonjmcghee 4 days ago [-]
This looks like such a fun jam - wish I'd known about it!

When's the next one?

bvisness 4 days ago [-]
The next Wheel Reinvention Jam will be in September! We're just finalizing our plans for that jam and our smaller Visibility Jam in July. If you're interested in participating, then join the Handmade Network Discord server (link on our home page at https://handmade.network/).
neonsunset 3 days ago [-]
"Using a language like C#, Javascript, or Python would force that allocation structure."

No. C# structs are like C structs: a Shape[], Span<Shape>, or Shape* is a contiguous block of Shape values, not an array of pointers. Any of the following would work:

    var fromHeap = new Shape[100];                    // GC heap, contiguous values
    var fromStack = (stackalloc Shape[100]);          // stack, typed as Span<Shape>
    var fromPool = ArrayPool<Shape>.Shared.Rent(100); // pooled; may be longer than 100
    var fromMalloc = (Shape*)NativeMemory.Alloc((nuint)(sizeof(Shape) * 100)); // unmanaged heap
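(This assumes Shape is a plain unmanaged struct. The last line also needs an unsafe context, and NativeMemory.Alloc takes its byte count as a nuint, hence the cast.)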
Jach 4 days ago [-]
Really cool and the video was great too. Indie devs should probably consider doing this sort of thing more often, building these simple tools in service of a game rather than just using the industry standard tool. It can be a great way to add artistic character as well as rigorously enforce some limitations if that's what you're after.
dhooper 4 days ago [-]
Thanks for the share!
swiftcoder 4 days ago [-]
That's some impressive development speed. Really enjoyed the explainer video too!
naasking 3 days ago [-]
Nice, I was considering a project like this myself. Signed distance fields are awesome. Everyone into modelling should check out ImplicitCAD, which is built on SDFs.
kewp 3 days ago [-]
I wish he had instead tried to do the faster subset of TypeScript; that's a pet peeve of mine, and I'd love to see how it would be done!
cnity 3 days ago [-]
AssemblyScript?
gromneer 2 days ago [-]
getting tired of the underhanded shilling for c and procedural style programming. the people doing this still pretend they are the underdogs but their point of view is over saturated.
TacticalCoder 2 days ago [-]
I don't know why you got that feeling, TFA says this:

> Some view C as a language so simple and raw that you’ll spend all your time working around the language’s lack of built in data structures, and fixing pointer bugs. The truth is that C’s simplicity is a strength. It compiles quickly. Its syntax doesn’t hide complex operations. It’s simple enough that I don’t have to constantly look things up. And I can easily compile it to both native and web assembly. While C has its share of quirks, I avoid them by habits developed over 22 years of use.

Which is nothing unreasonable.

I code mostly in a Lisp dialect but I don't take offense at, say, the Linux kernel being mostly C.

gromneer 2 days ago [-]
i am not assessing the truthfulness, the reasonableness, the correctness, or the veracity of the claims made in the article in any way. i simply do not care because the claims are purely subjective, untestable, unmeasurable. it is an opinion piece and one that is dulled out ad nauseam in the programming world today. enough to be annoying now. one opinion for another.
layer8 2 days ago [-]
I’m getting tired of people not using the Shift key.
CamperBob2 2 days ago [-]
Other sites beckon.
syphiant 4 days ago [-]
Impossible! Wow! This is absolutely mind-blowing.
sandwichukulele 4 days ago [-]
Is the source code available? I looked through the blog post and linked videos but couldn't find a GitHub repo or anything similar.
dhooper 4 days ago [-]
jbritton 4 days ago [-]
How did you create / obtain the example shapes? Is there a standard format your code parses?
dhooper 4 days ago [-]
The example files in the repo were made using the macOS build of ShapeUp and saved (the web build doesn't have saving).
TruthWillHurt 4 days ago [-]
Or just being able to save/load creations would be nice :)
dhooper 4 days ago [-]
Yeah, I never implemented saving/loading for the web. That's one example of how raylib doesn't totally abstract the underlying platform for you.
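
For what it's worth, one rough way to do it in an Emscripten build (a sketch only; web_save is a hypothetical helper, and on recent Emscripten versions UTF8ToString may need to be exported with -sEXPORTED_RUNTIME_METHODS):

    #include <emscripten.h>

    /* Offer `size` bytes at `data` to the browser as a file download.
       A native build would just fwrite() to a path instead. */
    static void web_save(const void *data, int size, const char *name) {
        EM_ASM({
            var bytes = HEAPU8.slice($0, $0 + $1);   /* copy out of wasm memory */
            var a = document.createElement("a");
            a.href = URL.createObjectURL(new Blob([bytes]));
            a.download = UTF8ToString($2);
            a.click();
        }, data, size, name);
    }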
rationalfaith 4 days ago [-]
Good stuff! As a C/C++ coder maintaining my own game engine (for commercial and hobby purposes), this is always good to see!
RamiAwar 4 days ago [-]
Amazing write-up, thanks! Really enjoyed it. I miss working on C/C++ apps from scratch and having full control.
xixixao 4 days ago [-]
Super impressive for getting this done in a week. Being able to make pretty demo models definitely helps too! :)
JabavuAdams 3 days ago [-]
This is really great! Nice work, and thanks for sharing.
emmanueloga_ 4 days ago [-]
What is an established 3D modeler that uses the same kind of modeling as this one?
Lichtso 4 days ago [-]
Many 3D content creation tools such as Blender [0] have SDF [1] (e.g. metaballs in Blender) and CSG [2] (e.g. boolean modifier in Blender) features. But these are rarely used as they can only define volumes, not surfaces. And we are usually interested in surfaces for assigning textures and materials. Thus, polygons / meshes and curves / splines dominate the industry.

[0]: https://www.blender.org/ [1]: https://en.wikipedia.org/wiki/Signed_distance_function [2]: https://en.wikipedia.org/wiki/Constructive_solid_geometry
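
To make that concrete: an SDF is just a function returning the signed distance from a point to a shape (negative inside the volume, zero on the surface, positive outside), and the CSG booleans become min/max of distances. A minimal C sketch:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Distance from p to a sphere of radius r at the origin:
       < 0 inside, 0 on the surface, > 0 outside. */
    static float sd_sphere(Vec3 p, float r) {
        return sqrtf(p.x * p.x + p.y * p.y + p.z * p.z) - r;
    }

    /* CSG union of two SDF volumes: keep whichever surface is closer. */
    static float sd_union(float a, float b) {
        return a < b ? a : b;
    }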

dantondwa 4 days ago [-]
The two best SDF modelers in existence are MagicaCSG [1] and Adobe Substance Modeler [2]. There are also a few others, like Womp, but those two are the most feature-complete. Blender is also adding SDFs as part of geometry nodes, and there's an add-on in the works that adds SDF support for hard-surface modeling.

[1] https://ephtracy.github.io/index.html?page=magicacsg & https://www.patreon.com/magicavoxel

[2] https://www.adobe.com/products/substance3d/apps/modeler.html

joeld42 4 days ago [-]
Blender's geometry nodes support SDF modelling. It's just not a widely known or used technique, but it's super powerful.
tcfunk 3 days ago [-]
Plasticity (https://www.plasticity.xyz/) is a new-ish one that looks neat, but I wish it offered a lower-price entry point.
starmole 4 days ago [-]
Adobe/Substance3D Modeler
antirez 4 days ago [-]
TinkerCAD.
anthk 3 days ago [-]
Can I compile it with GNUStep?