Also known as @VeeSilverball

  • 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Some of my own thoughts, which rebut the article in parts:

    1. Godot does have “barbell performance” - you can make it go fast if you drop to C++ and do low-level engine work to add new nodes, resources, etc. You can also make it go fast when you use the premade nodes without a great deal of script in between (and the nodes are, FWIW, pretty flexible and composable). What it doesn’t do at present is the thing Unity users are used to, which is “fast scripting”. Fast scripting still means working around the garbage collector and the overhead of crossing between native code and a managed runtime. C# is a kind of flytrap for the needs of high-end games, and Unity has only seemingly surmounted the issues by doing a lot of custom engineering for their use case. That is, you don’t really code standard C# in Unity, you code Unity’s C#, which is nearly as bespoke as GDScript.
    2. Pointing out that the engine is coded in a naive way is not as smart a criticism as it seems, because there’s a maintenance cost to always doing things in exactly the most optimal way. The target for what is fastest changes every time the platform changes. For a project that was, until recently, relatively small, it’s overall better that the engine stay easy to build and straightforward to modify, which is what it’s done. The path it’s taken has helped it stay “lightweight”. The price is that it sometimes doesn’t even take low-hanging fruit that would be a win for 90% of users.
    3. The 3D in Godot 4 is capable of good test scenes, but everyone seems to agree that it’s not really ready for production for speed reasons. Any specific point on this just backs that up. And that’s disappointing in one sense, but pretty okay in others. If you need high-end graphics, Unreal will welcome you for the time being.
    4. On that note, developing for console always comes with fussy limitations, at minimum just meeting TRC/TCR/lot check; that’s why professional porting is a thing. Engine devs usually end up in the position of maintaining these multiple-API abstractions because it’s necessary for porting. It’s the same deal with the audio code, the persistent storage, the controllers, the system prompts, it just goes on and on like that. So, rewriting the rendering bindings to do things in the D3D way and not the Vulkan way is actually a bit of a whatever; it’s more rendering code. It changes some assumptions about what binds to what. But it accesses the same kind of hardware, running the same kind of shaders. A lot of ports in the not-so-distant past basically had to start over because the graphics hardware lacked such a common denominator.

    The author’s bio says that they have been doing this professionally for about 5 years, which, at face value, means they haven’t seen the kinds of transitions that have taken place in the past, or how widely game scope can vary. The way Godot does things has some wisdom-of-age in it, and even in its years as a proprietary engine (which you can learn something of by looking at Juan’s Mobygames credits), the games it was shipping were aiming for the bottom of the market in scope and hardware spec: a PSP game, a Wii game, an Android game. The luxury of small scope is that you never end up in a place where optimization is some broad problem that needs to be solved globally; it’s always one specific thing that needs to be fast. Optimizing for something bigger needs production scenes to provide profiling data. It’s not something you want to approach by saying “I know what the best practice is” and immediately architecting for it based on a shot in the dark. Being in a space where the engine just does the simple thing every time instead means it’s easy to make the changes needed to ship.


  • Arch is always “latest and greatest” for every package, including the kernel. It lets you tinker, and it’s always up to date. However, a rolling release introduces more ways to break your system - things start conflicting under the hood in ways that you weren’t aware of, configurations that worked don’t any longer, etc.

    This is in contrast to everything built on Debian, which Mint is one example of - Mint adds a bunch of conveniences on top, but the underlying “how it all fits together” is still Debian. What Debian does is set a target for stable releases and ship a complete set of known-stable packages. This makes it great for set-and-forget uses, servers that you want to just work and such. And it was very important back in the ’90s when it was hard to get Internet connectivity. But it also means that it stays behind the curve with application software releases, by periods of months to a year or more. And the original workaround for that - “just add this other package repository” - can, like Arch, eventually break your system by accident.

    But neither disadvantage is as much of a problem now as it used to be. More of the software is relatively stable, and the stuff you need the absolute latest version of, you can often find as a Flatpak, Snap, or AppImage - formats that are more self-contained and don’t rely on the dependencies you have installed, just “download and run.”

    Most popular distros now are Arch or Debian flavored - same system, different veneer. Debian itself has become a better option for desktop in recent years just because of improvements to the installer.

    I’ve been using Solus 4.4 lately, which has its own rolling-release package system. Less software, but the experience is tightly designed for the desktop, and it doesn’t push me to open a terminal to do things, unlike the more classical Unix designs that guide Arch and Debian. The problem both of those face as desktops is that they assume up front that you may only have a terminal, so the “correct way” of doing everything tends to start and end with the terminal, and the desktop is kind of glued on - it works for some things but not others.


    To me, a big difference is in the lengthy prelude, which follows the model of TOS, just with updated production. First the synths layered with strings, which are very ’80s wonder-music (it could be right out of the score for Flight of the Navigator or The Goonies), and then the French horns come in playing a round, which adds a Wagnerian element.

    The percussive “march music” elements quoting TMP are subdued in TNG’s arrangement - it’s a less compressed, “punchy” sound, and I believe the mic has been set farther back or they’ve EQ’d out some higher frequencies. Those decisions, plus a few choices of instrumentation like the harp glissandos, tone down the bombastic energy and add a gliding, romantic quality. Again, more like TOS, but updated.


  • My favorite example of “weird camera” is Journey to the Planets. It’s an Atari 800 game with graphics that are more 2600-esque. It’s mostly side view, but the proportions are abstract, like a child’s drawing: the spaceship is about 1/3rd the size of the player sprite, but then as you lift off it shows zoomed out terrain and the sprite is the same size. The game is based around solving adventure game puzzles with objects that are mostly just glowing rectangles, but your way of interacting with the puzzles involves a lot of shooting. Even though there’s so little detail, every room feels “hand-crafted”.

    I’m pretty sure the game permanently altered my sense of aesthetics.


    Not dead, just sleeping. It’s a tougher, higher interest-rate market, which cuts out a lot of the gambling behavior. I remain invested, but my guiding principle has shifted away from financial and trad-economic terms to this:

    Blockchains are valuable where they secure valuable information. Therefore, if a blockchain adds more valuable information, it becomes more valuable.

    And that’s it. You don’t have to introduce markets and trading to make the point; framed this way, those elements play a supporting role, and the framing gets at one of the most pressing issues of today: where should our sources of truth online start? Blockchains can’t solve the problems of false sensation, reasoning, or belief, but they fill in certain technical gaps where we currently rely on handing custody to someone’s database and hoping nothing happens, or that they’re too big to fail. It’s just a matter of aligning the applications towards the role of public good, and the air is clear for that right now.


    I think it’s reasonable for some instances, where there’s good alignment. There was a thread I replied in a few days back about how/if TTRPG creators (who are mostly small enthusiasts themselves) could advertise in related magazines, and legitimizing that business wouldn’t really pose a conflict for the hobby - that’s how it was built in the first place! It’s just a matter of finding a place for it and defining the technical solutions.

    As a general “let in all the advertisers and promise riches for someone” measure, it does cause known problems. There is some freedom to figure out what works in a specific case here; it’s not defined top-down, since it isn’t centralized.


  • Drawing gets a lot easier if you approach it as a muscle-memory skill like calisthenics or juggling - if you can write letters neatly, you can also learn to draw shapes you’ve practiced. The early exercises in books like Keys to Drawing (Dodson) or The Natural Way to Draw (Nicolaides) introduce ways to practice those skills, and then the rest is “find subjects you want to draw”, which can be as simple as watching a video, pausing it, and quickly using that for the exercise. Do that for a few minutes a day for a few weeks and drawing skills will magically emerge.

    There are tons of “how to draw” tutorials that don’t explain any of this, only speak about it conceptually, and tell you to go draw a thousand cubes, which will make you better at drawing…cubes. (There is some point to that kind of technical skill, but it’s not the thing to invest in if you just want to use images to tell a story.)


    The thing about larger-scale architecture is that you can be correct, in any specific instance, that it’s more than you need - but when you actually try to build the thing across a development team, you end up there anyway, because the code reflects the organization, and having it broken up like that lets you more easily rewrite your previous decisions.

    At the small scale this occurs when you notice that the way you have to approach a feature is linguistically different - it needs conversion to a substantially different data structure, or an interface that compiles imperative commands from a definition. The whole idea of the database having a general-purpose structure and its own query language emerges from that - it lets you defer the question of exactly how you want to use the data (there’s a small sketch of that shift at the end of this comment). The more configuration you add, the more of those layers you need. When you start supporting enterprise-grade flexibility it gets out of control and you end up with a configuration language that resembles a general-purpose programming environment, but worse.

    Casey Muratori talks about this kind of thing in some depth.

    In the end, the point of the code is to help you “arrive in the future” in some sense - it’s instrumental, and the point of automating it is to improve the quality of your result by some metric (e.g. fewer errors). For a lot of computations, that means you should just use a spreadsheet - it aids the data entry task, it automates enough of the detail that you can get things done, but it also gets out of the way instead of turning into a professionalized project.
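
    To make the “defer how you use the data” point above concrete, here’s a minimal Python sketch - the names (total_owed, run_query) are invented for illustration, not taken from any particular library. The first version bakes one access pattern into the code; the second pulls out a tiny generic query layer, which is the same move a database’s query language makes at a much larger scale.

      # Version 1: the access pattern is baked into the code.
      def total_owed(invoices):
          return sum(i["amount"] for i in invoices if not i["paid"])

      # Version 2: a tiny generic "query" layer that defers how the data
      # will be used - the caller supplies the filter, projection, and
      # aggregation instead of the function hard-coding them.
      def run_query(rows, where, select, reduce):
          return reduce(select(r) for r in rows if where(r))

      invoices = [
          {"amount": 120, "paid": False},
          {"amount": 80, "paid": True},
          {"amount": 40, "paid": False},
      ]

      print(total_owed(invoices))          # 160
      print(run_query(invoices,
                      where=lambda r: not r["paid"],
                      select=lambda r: r["amount"],
                      reduce=sum))         # also 160

    Every extra knob added to run_query is a step toward the configuration-language-that’s-a-worse-programming-language problem described above.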



    With respect to how it works in the microblogging corner of Fedi, the tendency is to be actively collaborative and to pool some moderation resources, sometimes through backchannels, other times through a tag like #FediBlock - all of which carry political implications that have fueled years-long meta discussions. The emphasis, at least among instances that want to moderate heavily, is on letting users feel undisturbed in their own space and not challenged on literally everything they say, while still expanding that space where it makes sense.

    I’m not sure the exact same dynamic will take place over here. The existence of many distinct spaces on the same instance mitigates a major problem Mastodon faced in its early waves: when you literally put everyone leaving Twitter on the same public timeline, old grudges spark up and people start campaigns to harass each other off the platform. That’s how Mastodon ended up with a ton of user privacy features and, over the years, with instances warring over ideology and trying to colonize each other, which of course ends in mutual blocking.

    In our case I think there’s a good chance for small aggregator instances that just “do one thing well” to thrive and see a lot of external traffic, while not having to moderate their entire comments section, since you can opt to not federate that - not your site, not your concern.




    It will never show a consistent number. The way ActivityPub operates is “you see what you’re subscribed to”, and that occurs in a technical/political sense of “these instances have agreed to federate”, and in many cases they don’t federate everything that happens. So if someone on instance A upvotes something posted on instance B, but instance C is not subscribed to instance A, then A and B will see the upvote and C won’t. (There’s a toy sketch of this at the end of this comment.)

    You don’t have to give up on your clout-chasing dreams, but the numbers won’t tell the whole story.
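
    As a toy illustration of that visibility rule, here’s a small Python sketch. It deliberately simplifies real ActivityPub delivery (which goes through actor inboxes, follower collections, and Announce forwarding); the instance names and the shape of the “like” record are made up for the example.

      class Instance:
          def __init__(self, name):
              self.name = name
              self.follows = set()    # names of instances this one is subscribed to
              self.seen_likes = []    # like activities that actually reached this instance

          def receive(self, like):
              self.seen_likes.append(like)

      def deliver(origin, like, instances):
          # Simplified rule: the activity reaches the origin instance, the
          # instance hosting the post, and any instance subscribed to the origin.
          for inst in instances:
              if inst is origin or inst.name == like["post_home"] or origin.name in inst.follows:
                  inst.receive(like)

      a, b, c = Instance("A"), Instance("B"), Instance("C")
      b.follows.add("A")    # B federates with A; C never subscribed to A

      like = {"type": "Like", "actor": "someone@A", "post_home": "B"}
      deliver(a, like, [a, b, c])

      for inst in (a, b, c):
          print(inst.name, len(inst.seen_likes))    # A 1, B 1, C 0

    Each instance counts only what reached its own copy of the post, so A and B show one upvote while C shows zero - and none of them is “wrong”, they just have different views.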


    I’ve had some thoughts on, essentially, doing more of what has historically worked: a mix of “archival quality materials” and “incentives for enthusiasts”. If we only focus on accumulating data like IA does, that’s valuable, but we soak up a lot of spam in the process, and that creates some overwhelming costs.

    The materials aspect generally means pushing for lower-fidelity, uncomplicated formats, but this runs up against what I call the “terrarium problem”: to preserve a precious rare flower exactly as it is, you can’t just take a picture, you have to package up the entire jungle. Like, we have emulators for old computing platforms, and they work, but someone has to maintain them, and if you wanted to write something new for those platforms, you are most likely dealing with a “rest of the software ecosystem” that is decades out of date. So I believe part of the answer is encoding valuable information in such a way that it can be meaningful without requiring the jungle - e.g. viewing text outside of its original presentation. That tracks with humanity’s oldest stories and how they contain some facts that survived generations of retellings.

    The incentives part is tricky. I am crypto- and NFT-adjacent, and use this identity to participate in that unabashedly. But my view of what it’s good for has shifted from the market framing towards an examination of historical art markets, curation, and communal memory. Having a story be retold is our primary way of preserving it - and putting information on-chain (like, actually on-chain; the state of the art can secure a few megabytes) creates a long-term incentive for the chain to “retell its stories” as a way of justifying its valuation. It’s the same reason why museums are more than “boring old stuff”.

    When you go to a museum you’re experiencing a combination of incentives: the circumstances that built the collection, the business behind exhibiting it to the public, and the careers of the staff and curators. A blockchain’s data is a huge collection - essentially a museum in the making, with the market element as a social construct that incentivizes preservation. So I believe archival is a thing blockchains could be very good at, given the right framing. If you like something and want it to stay around, that’s a medium that will be happy to take payment to do so.


    Ask anyone about the popular reputation of philosophers and it’s basically the same as programmers’. Socrates would definitely piss a few people off in code review meetings. Programming as a pursuit is very prone to sophistry because it’s unclear even how to start defining the problem space, and there are always categories of problem where, when encountered, everyone either solves it with the exact same falsehood or reaches for the one dependency that actually solves it. And then in the end the software ends up not being used, so the wrong problem was solved.