Hacker News | whobre's comments

It’s fine. Why fix something that isn’t broken?

I know it was just an example, but out of curiosity, why 1982 for home computers? Commodore 64?

Not sure why it was listed as '82, but if it was the C64, that makes sense. It's the best-selling single model of personal computer ever.

https://en.wikipedia.org/wiki/Commodore_64

Nuance around this fact would be completely lost in 100 years, let alone 1000 from now.


I’d probably put it at 1978, but it doesn’t make much difference whether you say it was the TRS-80, the Apple II, or the C64. The thing happened about then.

This is way worse than the dotcom craziness. Databricks has been losing more and more money, and its valuation just keeps rising.

It’s not slow at all. The problem with 11 is the built-in commercials. Disable that and the OS is fine.

Dreadfully sorry but I don't understand this - I'm not a Windows user. Don't you have to buy Windows 11? It's not ad supported (like US broadcast TV) is it?


Wow, as a non-Windows person, I'm kind of shocked that Windows users put up with ads and "promoted" content. If you pay for Windows, it should be yours to do with as you wish.


This is from 2019, prior to the finalization of modules in the standard. I'd be interested in how many of these issues remained unaddressed in the version that finally shipped.

There isn't much of a final version shipped. It's pretty well understood that modules are underspecified and their implementation across MSVC, clang, and GCC is mostly just ad-hoc based on an informal understanding among the people involved in their implementation. Even ignoring the usual complexity and ambiguity of the C++ standard, modules are on a whole different level in terms of lacking a suitable formal specification that could be used to come close to independently implementing the feature.

And this is ignoring the fact that none of GCC, clang, or MSVC have a remotely good implementation of modules that would be worth using for anything outside of a hobby project.

I agree with the other commenter who said modules are a failure of a feature; the only question left is whether the standards committee will learn from this mistake and refrain from ever standardizing a feature without a solid proof of concept and tangible use cases.


You should get in there and put all your expertise to work.

I did, prior to 2017. I realized the committee was about 75% politics (people with a lot of time and devotion pushing their pet projects) and only about 25% addressing actual issues faced by professional engineers, and decided it was no longer worth the time, effort, and money.

The committee is full of very smart and talented people, no dispute about that, but it's also very siloed: people just work on one particular niche or another based on their personal interests, and then they trade support with each other. In discussions it's almost never the case that features are added with any consideration for the broader C++ audience.


You implemented modules in 2017 and they didn't use it?

Today I learned that Office is a hobby project.

You learned nothing because the extent of your knowledge tends to be rather superficial when it comes to C++.

Office does not use C++ modules; what Office did was make use of a non-standard MSVC feature [1] which reinterprets #include preprocessor directives as header units. Absolutely no changes to source code are needed to make use of this compiler feature.

This is not the same as using C++20 modules, which would require an absolutely astronomical amount of effort.

In the future, read more than just the headline of a blog post if you wish to actually understand a topic well enough to converse in it.

[1] https://learn.microsoft.com/en-us/cpp/build/reference/transl...
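To make the distinction concrete, here is a rough sketch. The file layout and the `.ixx` extension follow MSVC conventions; treat it as illustrative, not as a buildable project.

```cpp
// --- Header units (what the Office build does) ---
// legacy.cpp, compiled with MSVC's /translateInclude flag:
#include <vector>   // the compiler treats this directive as the
                    // header-unit import "import <vector>;",
                    // with no source changes required.

// --- Named C++20 modules (a real migration) ---
// math.ixx: a module interface unit; sources must be restructured.
export module math;
export int square(int x) { return x * x; }

// consumer.cpp: only names exported by the module are visible.
import math;
int main() { return square(3) == 9 ? 0 : 1; }
```

The point being that the first form is a per-compiler build setting, while the second changes how every translation unit in the codebase is organized.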


My dear, I have written more C++20 modules code than you ever will.

Feel free to roam around my GitHub account.

Also go read the C++ mailings regarding what is standard or not in modules.


At least my coworkers usually don’t hallucinate.

Are you sure? I've been confidently wrong about stuff before. Embarrassing, but it happens. And I've worked with many people who are sometimes wrong about stuff too. With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain "a mistake".

True, but people can use qualifiers like "I think …" or "Wasn't there this thing …", which let you judge their certainty about the answer.

LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.


Yeah, for the most part. But I've had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but about stuff that is more work to verify or not quite as timeless: cache optimization issues, or even the suitability of certain algorithms for certain problems. The world is changing a lot, and sometimes people don't notice and stick to what was state of the art a decade ago.

But I think the point of the article is that you should have measures in place that make hallucinations not matter, because they will be noticed in CI and tests.


It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.

For the most part, yes, because people usually read the docs and test things on their own.

But I remember a few people long ago telling me confidently how to do this or that in e.g. "git", only to find out during testing that it didn't quite work like that. Or telling me how some subsystem could be tested, when it didn't work like that at all. Because they operated from memory instead of checking, or confused one tool or system with another.

LLMs can and should verify their assumptions too. The blog article is about that. That should keep most hallucinations and mistakes people make from doing any real harm.

If you let an LLM do that, it won't be much of a problem either. I usually link the LLM to an online source for an API I want to use, or tell it to just look things up, so it is less likely to make such mistakes. It helps.


Again, with people it is a rare occurrence; an LLM does it regularly. I just can’t believe anything it says.

I do agree. I still think the article articulates a very interesting thought: the better the input for a problem, the better the output. This applies to LLMs as well as to colleagues.

When, not if.

I remember how computer enthusiasts had a hard time explaining to “regular” people what personal computers were good for. They would often mention things like cooking recipes and balancing checkbooks, which really wasn’t convincing at all…

> The first TRS-80 ad was similarly scattershot, promising to be everything to everyone: “[p]rogram it to handle your personal finances, small business accounting, teaching functions, kitchen computations, innumerable games

Innumerable games sounds very compelling (though the Apple II was more solidly in the video game business, with support for color graphics and game controllers; Apple's Mac later yielded many nice black-and-white games, aided by the Mac's sharp, though tiny, display).

Nice article, though, showing a spreadsheet (two versions of VisiCalc), two word processors (Electric Pencil and WordStar, of George R.R. Martin fame), and not just games (MicroChess) but also a rather interesting, if primitive, abstract animation program (Electric Paintbrush).

Digital art and creativity are, I think, an underappreciated application area for computers, though programs like {Mac,MS,Deluxe}Paint and Photoshop were milestones, and demoscene software formed its own art practice and culture. Processing is perhaps a modern heir to Electric Paintbrush.


I am actually surprised that 3% allegedly can…

Ugh. I know all that, and I am still sick of hearing about memory safety. My teammates spend way more time fixing security issues in “safe” languages than in C/C++/whatever. It simply doesn’t matter…

It's hard to compare the two. A low-level memory safety issue can intersect with security. So can a flaw in logic that touches on security, but is reproducible and not undefined in any way.

The latter can often be more easily exploited than the former, but the former can remain undetected longer, affect more components and installations, and be harder to reproduce in order to identify and resolve.

As an example of "more easily exploited": say you have a web application that generates session cookies that are easy to forge, leading to session hijacking. Not much skill is needed to do that, compared to exploiting a memory safety problem (particularly if the platform has some layered defenses against it: address space randomization, non-executable stacks, and whatnot).


What security issues are biting you in safe languages that wouldn't also appear in C/C++ ?

Perhaps he's talking about risk compensation (https://en.wikipedia.org/wiki/Risk_compensation) --- e.g. maybe safe languages structurally excluding memory corruption and concurrency problems tempts developers to let their guard down with respect to correctness generally and produce security vulnerabilities that wouldn't occur in a language with C's need for rigor.

Doubtless there is some of that going on. I doubt the risk compensation erases the benefit of memory safety, but let's not kid ourselves.


A large number of real world security issues are attacks on humans not software. No programming language can solve social engineering problems.
