
> C has no pattern for breaking out of multiple for loops at the same time.

True, but it is cleaner to reorganize the code into several functions and use the return value to propagate across layers if needed. Performance should be the same.

> it's not applicable here

Why? If there is a new/delete pair anywhere, it should have been an object.

> Game developers have a long-seated distrust of std::vector, and for good reasons.

Which reasons? std::vector (and std::unique_ptr) is universally useful (unlike many other std data structures), and codegen should be on par with a new/delete pair.

If the std is completely broken on some of the console platforms they need to support (likely, from what I hear here), then it can also be done with a custom heap array wrapper.

So I don’t see the reason why that shouldn’t have been an object.


> True, but it is cleaner to reorganize the code into several functions and use the return value to propagate across layers if needed.

I disagree. Having to jump to another function definition which is inline is a bigger mental block than following a goto. The large amount of arguments you'd need to pass might also be a barrier, as is the mental overhead of checking to see if this function might be called from elsewhere. But reasonable people can disagree on this point.

I suggest you try to clean up the function yourself and see if your function-ed version is in any way improved.

> Why? If there is a new/delete pair anywhere, it should have been an object.

The case we want is that we have a large array of pairwise collisions. This is a temporary array used inside the method. If we ever have more than this, we need a new (contiguous) array. Are you suggesting that the code should have done something like this?

    template<typename T> class TempArray {
        T* storage = nullptr;
        void resize(int n) { delete[] storage; storage = new T[n]; }
        ~TempArray() { delete[] storage; }
    };

I mean, sure, it's a very minor cleanup. It changes like, two lines though, and makes us have to access the array through an awkward ->storage pointer. Not ideal IMO.

> codegen should be on par as a new/delete pair.

Here's modern MSVC. Let's play "spot the difference": https://gcc.godbolt.org/z/KqR-LN

I'm not even doing anything but allocating the vector / array. It took me 2 minutes to navigate to godbolt and type this in. Don't just say "should be"... test it yourself!


If you enable /Ox, the codegen basically drops to what you would expect: the vector version generates essentially identical code to the new/delete version (modulo a memset to enforce the clear-to-zero condition).

It is a good illustration of why debug stl builds are such hot garbage though...


It is a good illustration for why using the STL is not always a good idea: you can't blindly take the perf hit from that kind of overhead in a debuggable build of a game that you still want to run at reasonably interactive frame rates.


False, most standard libraries out there allow you to configure whether you want extra checks or not.


But at least in the case of MSVC, that configuration option leads to binary incompatibilities that make it an all-or-nothing option for everything that gets linked together statically. And the checks are so heavy that the "all" option becomes unbearably slow quite quickly. If your project hits a reasonable size, you end up requiring some clever solutions.


> Having to jump to another function definition which is inline is a bigger mental block than following a goto.

Local lambdas are ideal for this.

> Are you suggesting that the code should have done something like this?

Yes, but you can manage the array inside too.

> I mean, sure, it's a very minor cleanup. It changes like, two lines though

The point is that TempArray can be reused everywhere. This is a typical class that many projects use (stack if small, heap is bigger than threshold).

> Here's modern MSVC. Let's play "spot the difference"

The optimizer has been asked to leave everything as it is, so that is the expected result.

BTW, MSVC is not what you should be using if you want performance.

> Don't just say "should be"... test it yourself!

I always test codegen for all abstractions I use! So I agree.


> Local lambdas are ideal for this.

The problem with lambdas is that you can't mark their operator() with the __forceinline or __attribute__((always_inline)) attributes. For this reason, when writing high-performance manually vectorized code, lambdas are borderline useless.

> MSVC is not what you should be using if you want performance.

Security and compatibility have higher priority. gcc and clang don't deliver their C runtime libraries with Windows updates. Also, debugging and crash diagnostics are much easier with MSVC.

It's the same on Linux, BTW, only with gcc.


> when writing high-performance manually vectorized code

The point was not about manually vectorized loops in particular. Why is that a problem if you are manually doing it, though?

> Security and compatibility has higher priority.

In commercial games, not really.

As for "compatibility", I am not sure what you mean.

> gcc and clang don't deliver their C runtime libraries with windows updates.

AFAIK you can use Windows libraries just fine. No need for using a different libc.

> Also, debugging and crash diagnostic is much easier with MSVC.

AFAIK, Clang can produce debugging info that you can use with VS.

I don't work on the environment, but it is what I have read here.


> Why is that a problem if you are manually doing it, though?

Here are a couple of examples: https://github.com/Const-me/DtsDecoder/blob/master/Utils/App... https://github.com/Const-me/SimdIntroArticle/blob/master/Flo... I would like to use lambdas instead of classes, but can’t, due to that defect of C++.

> In commercial games, not really.

In commercial games too. While they don’t care about security, they do care about compatibility and crash report diagnostics.

> As for "compatibility", I am not sure what you mean.

A Windows update shouldn’t break stuff. Software or a game should run on a Windows released at least 10 years in the future.

> Clang can produce debugging info that you can use with VS.

According to marketing announcements. This page however https://clang.llvm.org/docs/MSVCCompatibility.html only says “mostly complete” and also “Work to teach lld about CodeView and PDBs is ongoing”. PDB support is not about VC compatibility, it’s about Windows compatibility really: the debugger engine is a component of OS, even of the OS kernel. WinDbg is merely a GUI client for that engine, visual studio is another one.

Overall, in my experience, the platform-default compilers cause the least amount of issues. On Windows this means msvc, on Linux gcc, on OSX clang. Technically gcc and clang are very portable. Practically, when you’re using a non-default toolset, you’re a minority of users of that toolset, e.g. the bugs you’ll find will be de-prioritized.


I'm still getting up to speed on C++ (& ASM) so forgive the ignorance - are the differences you're referring to all the cleanup code that the vector does in the destructor?

As a side note, you're not calling delete[] on the array code, but I suppose it makes little difference if you take the vector cleanup into account.


Unfortunately, MSVC produces hot garbage asm; clang and gcc produce pretty much the same asm output using your example.


It is not perfectly explainable by us, though.

And it may not actually be perfectly explainable, ever. Nothing guarantees we can get a perfect understanding of physics.


Hanging or crashing without feedback is pretty much "not handling errors".


> cheaper cloud bills

That remains to be seen.


Graviton2 instances are available on AWS. They claim 40% better performance than their x86 peers - https://aws.amazon.com/ec2/instance-types/m6/


> Those who "pirate" will never convert to paid customers

That's not true. Many people won’t pay for something if they can get it for free reasonably easily.


A pointer does not contain what it points to, it just points to it.

The content (the value) of a pointer is an address.


I guess most games cap velocities for most objects if not all, so I doubt that is an issue.


That's not practical. If you cap velocities, you'll also need to enforce a minimum size for all objects. If you don't do this, it will still be possible for objects to pass through each other. So you can either have really small but really slow objects, or really large but really fast objects.

What most games do is enable continuous collision detection for important fast objects only, like the player or projectiles.


RAII can be used without constructors and without exceptions. The acronym is misleading.


Yes, but if you have a large existing codebase that throws exceptions from constructors you can't just use it unchanged.


It is spot on. Macs are less than 5% of the Steam audience, software now needs notarization, developer tools cost money in the form of hardware, GPU drivers are buggy (unless you use Metal and buy into the Apple ecosystem), modern OpenGL was abandoned years ago, etc. So developers only target macOS if the game gets big enough.

From the point of view of gamers, it is annoying. The recent 32-bit support drop killed a huge amount of macOS games. Now I have to use Boot Camp to play some of my favorite games. The GPU performance is also quite bad due to their fight with NVIDIA.



Those were on Intel Metal Drivers.

Hopefully, driver quality will improve once Apple takes full control of their GPUs as well.


I'm wondering if any of this will change now that Apple is going all-in on vertical integration, basically unifying their platforms so you can write a game once and run it on Mac, AppleTV, iPad and iPhone with (almost) no changes. Their hardware will all run comparable Apple CPU's & GPU's, same API's, appear to fully support game controllers etc, and the development and deployment platform are pretty much the same thing. This makes a small step towards a unified platform for light gaming.

I get that MacBooks and AppleTV's will never be anywhere near the performance of PC, Xbox or PlayStation, but Nintendo has shown that doesn't have to matter. I'm pretty sure any Apple SoC used in the last ~2 years already beats the Nintendo Switch for example. All Apple needs to do is put some effort towards getting more 'real' games in their store, ie: games like you would play on switch with traditional controls, and not glorified mobile games that also have to work with the ridiculous AppleTV remote.

My feeling has been for a long time already that Apple is really leaving a huge opportunity on the table here. But the vibe I got from WWDC keynote and platform state of the union was that they seem to have more focus on this now, showing unity, maya and some other tools that can be used for game dev, running on their Apple Silicon. So who knows what they are planning...


> the vibe I got from WWDC keynote and platform state of the union was that they seem to have more focus on this now

It doesn't matter - I don't think they really understand the dynamics of the traditional gaming market. They are bewildered that their strong position in the mobile-game arena, which is due almost entirely to the economic characteristics of their customers' demographic, is doing nothing to improve their position in the desktop/AAA game market, which plays by a different set of rules and appeals to very different constituencies.

It's a bit like little chains of trendy indie cinemas trying to get Marvel or Warner to make films with them in mind: it ain't going to happen, they just don't care, their business works on a different level, both in quality and quantity.


> I'm wondering if any of this will change now that Apple is going all-in on vertical integration, basically unifying their platforms so you can write a game once and run it on Mac, AppleTV, iPad and iPhone

Nothing will change. Mobile games will continue being mobile games. Hardly anyone else will bring their games to the "unified platform" for the simple fact that you cannot run the same game on desktop and mobile.


I think that currently there are some games in Apple Arcade that work on macOS, iOS and Apple TV; that said, we’re not talking about the usual AAA games, of course.


> I think that currently there are some games in Apple Arcade that work on

Mobile games will continue being mobile games.

;)

That's the only type of game that can work on both desktop and mobile.


If you can run iOS apps on mac in the future, that includes games too.


Of course, but it doesn't factor in the interface and input differences. It is next to impossible to port a game optimized for PC to mobile. And nobody cares about mobile games making their way onto desktops.


With the recent move to ARM, you won't be able to use bootcamp for games anymore either.

I think this move was the final nail in the coffin for Mac gaming.

I've always found it frustrating that a 3k Mac has a barely acceptable GPU compared to its PC counterparts.


> The recent 32-bit support drop killed a huge amount of macOS games. Now I have to bootcamp to play some of my favorite games.

I just see no reason to upgrade the OS to the version that dropped 32-bit support. High Sierra runs everything and feels great.


Xcode 11.5+ runs only on Catalina. If you want to use it, you’re mostly forced to.


This is one of the real problems: Apple knows that a significant chunk of desktop/laptop users are iOS devs, and while they started to use Xcode as a carrot, it’s now very clearly a warship-mounted harpoon, dragging everyone along into waters infested with piranhas (biting into freedom of speech, the ability to use 3rd-party tools, the ability to hack away at the hardware of our choice, and many others).


> Macs are less than 5% of the Steam audience

Isn’t this one a bit of a chicken/egg situation?


If you're only 5% of the market, you can't afford to try vendor lock-in tactics. But that's exactly what Apple is doing with Metal.


Maybe, but Apple's choices (described by other replies ITT) only made it worse. Deprecating OpenGL and dropping 32-bit compatibility, for instance.


As a developer, you don't have much incentive to fix it yourself.


In fact, one could argue that you’re actually penalized for trying to fix it yourself.


> it is not the notorious "write only" language that many troll it to be

I dispute that.

I have worked with many languages, and Perl has always been one of the hardest to remember due to non-standard symbol abuse and a few strange semantics.

There is a reason it has that reputation.

Even writing it isn’t easy. I can describe the Python syntax after years of not using it. Perl? No way I remember it.


Understood. I stated that to say my problems with Perl are not with the language itself. To say it is the "community" is not right either; basically, "what is left of the community" is the issue. Languages like Python, Ruby, Perl, etc. are only good (if you want to freelance and make money, at least) at being a "skin" around a database. That also entails knowledge of frontend frameworks and workflows as well. Yet the words "database", "javascript", or "webpack" are very rarely mentioned at any of the Perl conferences in the last 5 years! In the case of webpack, that's probably never been mentioned.


Well, I've given talks on databases (Postgres), javascript (vue) and modern devops (k8s etc) at various Perl conferences in the last years, so I cannot agree.

In fact, I'm giving a small talk on Docker today at the Perl Conference in the Cloud, and also a lightning talk about a ~50 line tool to provide an async web server to post to Matrix. https://tpc20cic.sched.com/

Yes, there are a few people entrenched in old ways (and sometimes it pays off to not always use the newest tech), but there are also a lot of Perl devs doing current stuff.


> I have worked with many languages, and Perl has always been one of the hardest to remember due to non-standard symbol abuse and a few strange semantics.

For example?


I mean, the combinations of $%@ are really astoundingly bad. $ alone has so many different meanings and operations depending on context. You won't find that sort of thing in any other language. Not even PHP, which for some bizarre reason brought $ along for the ride.

Here's a fun exercise, write a dictionary of arrays. Now write a dictionary of 2d arrays.

In any other language, that's hardly a challenge to both read and write such a structure. Not so with perl.


> Here's a fun exercise, write a dictionary of arrays.

Umm … ok:

    my %dict = (
        foo => [ 1, 2, 3 ],
        bar => [ 4, 5, 6 ],
    );

> Now write a dictionary of 2d arrays.

    my %dict2d = (
        foo => [ [1, 2, 3], [4, 5, 6] ],
        bar => [],
    );

It’s pretty much identical to JS. Not sure how that’s supposed to exemplify how hard it is to write Perl.


I'm not the original comment author, neither am I fluent in Perl. But I suspect they meant nested hashes and not a hash whose values are arrays of arrays. As you point out, the latter is trivial.

To be clear, even a nested hash is all well and good as long as the depth and the keys are literals (as in your example). It is when keys are dynamic that things get unruly. And god forbid the _depth_ of the nesting is dynamic. All of these are trivial to implement in Python, Ruby, JS, PHP, ...


    %nested = (foo => { bar => { baz => "quux" } });
Access the leaf with

    $nested->{foo}{bar}{baz}
If the keys are in variables, it becomes

    $nested->{$x}{$y}{$z}
For dynamic depth, do you have a specific use case in mind? If I have a tree, I’m probably going to search it rather than using a hardcoded path.


This is why we use strict! The extra arrow is unneeded, as you're working on the actual hash, not a reference to the hash. This is a compiler error with use strict though, so it's not something you would generally be bitten with when writing real code, just internet comments. :)


Whoops, good catch! I had references on the brain. For completeness, that makes it either

    $nested{$x}{$y}{$z}
or after initializing with curly brackets rather than parentheses to construct a new reference to an anonymous hash

    $nested = { … };
and then use the arrow to dereference as in the grandparent.


I think the point is not that Perl's rules don't make sense -- it's that they're much less obvious than those in many competing languages. In Python, lists are `l = [1, 2, 3]` and dicts are `d = {"key": val}`. Both structures can be nested without any change to the semantics.


> $ alone has so many different meanings and operations depending on context.

Used as a sigil, $ always tags a scalar, a single value.

    $foo = 3;
A reference to something else is a single value.

    $bar = [qw/ apple orange banana /];
The special variable $$ is the pid of the current process, borrowed from the shell. Special variables with punctuation names have length at most one. The leading $ is the scalar sigil, and the second character is its name.

Dereferencing a reference to scalar is rarely used and looks like either

    $$baz = $quux;
or

    ${$baz} = $quux;
In a regex, $ broadly means end of line, but the exact meaning is context sensitive: possibly end of buffer, just before a newline at the end of a buffer, or end of logical line within a buffer. Where it matters, the anchors \z and \Z have less flexible meanings.

Of those, the kinda fuzzy one is $ inside a regex. The rest follow consistent, simple rules. I’m not sure what you mean by different operations on $. Am I skipping a difficult case you have in mind?


I think shells did $ for vars first, then Perl added @ and % for arrays and hashes (and then $ was refs)


$string was in BASIC. I'm sure it's even older, though (the oldest BASIC I've used is GWBASIC).


I thought BASIC had $ at the end, not at the start? It has been a long time since those days, though.


> I mean, the combinations of $%@ are really astoundingly bad. $ alone has so many different meanings and operations depending on context. You won't find that sort of thing in any other language.

I kind of felt lost first getting started with Scala where I felt like there was symbol overload.


A classic (hope I didn't get it wrong):

    print "[", +( join '', map { "-> $_"  } @$_ ), "]" for @{$ref}


The concatenation operator is . in Perl and ~ in Raku. You could write a normal, braced foreach loop with another one inside and not use $_.


> print "[", +( join '', map { "-> $_" } @$_ ), "]" for @{$ref}

Because computers are so slow, memory is so limited and disk space is so expensive the programmer absolutely had to write it in a single line in the most convoluted way possible? Yeah, that's definitely a language fault.


This is pretty idiomatic Perl, and I've seen similar constructs countless times at work. You could indent the map body but it normally wouldn't be done for such a short expression. There is not much room for maneuver there if you don't want intermediate variables.

And this is a quick sample, there is much, much hairier stuff to be found in real code. "Knowing the language well" is seen as a virtue for many.


The problem is that it takes a lot of skill to write elegant Perl. The easy way is the one that produces write-only crap.


The problem with any language like this is that you have to work with others, and if others produce write-only code, that's your problem too.

