Hacker News | LittleCloud's comments

> There's a divide between people who enjoy the physical experience of the work and people who enjoy the mental experience of the work. If the thinking bit is your favorite part, AI allows you to spend nearly all of your time there if you wish, from concept through troubleshooting. But if you like the doing, the typing, fiddling with knobs and configs, etc etc, all AI does is take the good part away.

I don't know... that seems like a false dichotomy to me. I think I could enjoy both, but it depends on the kind of work. I did start using AI for one project recently: I do most of the thinking and planning, and for the things that are enjoyable to implement I still write the majority of the code.

But for tests, build-system integration, and the like? Well, that's usually very repetitive, low-entropy code that we've all seen a thousand times before. Usually not intellectually interesting, so why not outsource that to the AI.

And even the planning part of a project can involve a lot of grunt work. Haven't you had the frustrating experience of attempting a refactoring and finding out midway that it doesn't work because of some edge case? Sometimes the edge case is interesting and points to a deeper issue in the design, but sometimes not. Either way, it sure would be nice to get a hint beforehand. In my experience, though, AIs aren't yet at a stage where they can reason about such issues upfront (no surprise, since it's difficult for humans too). Of course, it helps if your software has an oracle for whether the attempted changes are correct, i.e. it is statically typed and/or has thorough tests.


> Usually not intellectually interesting, so why not outsource that to the AI.

Because it still needs to be correct, and AI still isn't producing correct code.


I happen to be a shareholder in EA as part of a diversified portfolio. Not that my vote matters, but I'll definitely be taking the PE money. +20% in one day is nothing to sneeze at.

From the other comments about EA's games, it's not as if EA is that special a company. There's always going to be some other company (not necessarily an AAA games maker) worth putting your capital in that ends up doing as well as the hypothetical EA could have done had it not been taken private. (Obviously, finding such a market outperformer isn't easy, but by the same argument I'm not convinced that EA would obviously be that outperformer either.)


Speaking as a C# developer who had in the past wanted to learn F# but never got very far, here's what discourages me every time:

- C#'s good enough. Nothing's stopping you from writing functionally-oriented code in C# (and I do prefer that over traditional "enterprisey" object-orientation.)

- It's relatively difficult to have a codebase that is partly C# and partly F# for incrementally trying things out. (I understand this is not really F#'s fault, owing to the .NET compilation model where the C# and F# compilers each produce their own assemblies.) Also, Microsoft hardly cares about F#, and the tooling leaves a lot to be desired; admittedly I'm spoiled by C# tooling.

- F# having its own implementations of concepts like async and option types obviously introduces friction with C#. I get that F# async is more powerful in some ways, but then again... F#'s option type is a reference type, unlike C#'s Nullable<> value type, and it's hard to see any advantage in that other than worse performance. One almost gets the impression that the F# designers don't care about performance (while the C# designers have, in the past few years, with additions to the ecosystem like Span<T>). This makes it hard to justify coding infrastructure libraries (which is what I often do) in F#.


Lack of expressions is the main C# deficiency that will never get resolved. The other big advantage of F# is that it can be very easy to understand: file ordering means that you can read through from start to end to understand the code.

Ordinarily in a mixed codebase, the F# comes nearer the start, with the C# depending on the F#. That's because C# has lots of glue code connecting to the outside world (XAML, cshtml, EF...), and this is less straightforward to migrate to F#. The only problem with mixing languages is when you want F# in the middle of a project, where it depends on some parts of the project and other parts of the project depend on it. But if you can identify something independent in a C# project and extract it out, you have already made the project simpler.

You can ignore async and use task. You can use async in the (very rare) cases where you don't want a hot task. You can also ignore Option and use ValueOption all the time. The struct types are newer and mean that F# no longer has a performance deficit.

ValueOption is just better than Nullable<>, since Nullable<> is restricted to value types only. That makes Nullable compose terribly and requires ad-hoc code everywhere.
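To make the composition point concrete by analogy: Java's Optional is, like F#'s Option, a reference type rather than a value-type-only wrapper, so it nests and chains freely, whereas C# forbids Nullable<Nullable<T>> outright. A rough Java sketch (the nickname() lookup is made up for illustration):

```java
import java.util.Optional;

public class OptionalComposeDemo {
    // Hypothetical lookup that may fail: maps a user id to an optional nickname.
    static Optional<String> nickname(int id) {
        return id == 1 ? Optional.of("ada") : Optional.<String>empty();
    }

    public static void main(String[] args) {
        // Option-style reference types nest without restriction...
        Optional<Optional<String>> nested = Optional.of(nickname(1));
        System.out.println(nested.isPresent()); // prints true

        // ...and flatMap collapses the nesting with no ad-hoc code.
        String result = Optional.of(1)
                .flatMap(OptionalComposeDemo::nickname)
                .orElse("(none)");
        System.out.println(result); // prints ada
    }
}
```

The trade-off the thread mentions still applies: a reference-type option means an allocation per Some, which is exactly what F#'s newer struct ValueOption avoids.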


C# 1.0 did not have generics, period. So the standard dictionary type (Hashtable†) took keys and values typed as System.Object. As seen in the linked documentation, this class still exists in the latest .NET to this day.

Occasionally one still encounters non-generic classes like this when working with older frameworks/libraries, which causes a fair bit of impedance mismatch with modern styles of coding. (It's also one of the causes of some people's complaint that C# has too many ways of doing things; the old ways have to be retained for backwards compatibility, of course.)

https://learn.microsoft.com/en-us/dotnet/api/system.collecti...

The paper that the other commentator was referring to might be this: https://www.microsoft.com/en-us/research/wp-content/uploads/...
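The impedance mismatch is the same one Java programmers hit with that language's own pre-generics collections. A small Java sketch, using the raw java.util.Hashtable (which likewise survives to this day), of what Object-typed containers force on the caller:

```java
import java.util.Hashtable;

public class RawHashtableDemo {
    public static void main(String[] args) {
        // Raw (non-generic) Hashtable: keys and values are just Object,
        // much like .NET's System.Collections.Hashtable.
        Hashtable table = new Hashtable();
        table.put("answer", 42); // the int is boxed to Integer

        // Every read requires a downcast the compiler cannot check...
        Integer answer = (Integer) table.get("answer");
        System.out.println(answer + 1); // prints 43

        // ...and a wrong cast only fails at runtime.
        try {
            String oops = (String) table.get("answer");
            System.out.println(oops);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at runtime");
        }
    }
}
```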


Yeah, that does not mean type erasure. There were just no generics; that is just making use of polymorphism in the language. At no point is there a difference between what the language knows a container can contain and what the runtime knows it can contain. Nothing is erased.
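For contrast, actual type erasure looks like Java's generics: the language tracks element types at compile time, but after erasure the runtime sees a single raw class. A minimal sketch of exactly that language-versus-runtime gap:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // The language knows the element types differ, but the runtime
        // sees one and the same raw ArrayList class for both:
        System.out.println(strings.getClass() == ints.getClass()); // prints true

        // And a raw reference can smuggle in anything, because the
        // runtime performs no check on the erased element type:
        List raw = strings;
        raw.add(42);
        System.out.println(strings.size()); // prints 1
    }
}
```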


> On the other hand I cannot not help thinking that this is similar to the arguments brought forward when the internet was new. How could correctness and plausibility be established if you don't have established trustworthy institutions, authors and editors behind everything?

Not long ago I held the same viewpoint as you! But thinking back now (it dates me, but I definitely lived a childhood without Internet access), the optimistic belief before our age of "misinformation" was probably that, in the marketplace of ideas, the truth usually wins. That goes along with "information wants to be free"; remember that slogan?

For those of us who grew up learning things "the hard way", so to speak, that made perfect sense: each of us should have the capability to discern what is good or bad, as individual, independent thinkers. Therefore, for any piece of information, there should be a high probability, in the aggregate, that it is classified correctly as to its truth and utility.

I think that was, to some extent, even a mainstream view. Here's what Bill Clinton said in 2000, advocating admitting China to the WTO (https://archive.nytimes.com/www.nytimes.com/library/world/as...):

> Now there's no question China has been trying to crack down on the Internet. Good luck! That's sort of like trying to nail jello to the wall. (Laughter.) But I would argue to you that their effort to do that just proves how real these changes are and how much they threaten the status quo. It's not an argument for slowing down the effort to bring China into the world, it's an argument for accelerating that effort. In the knowledge economy, economic innovation and political empowerment, whether anyone likes it or not, will inevitably go hand in hand.

I would say that what we have learned in the 20-odd years since is that, in the marketplace of ideas, the most charitable thing we can say is that the memes with the "best value" win. "Best value" does not necessarily mean the highest quality; rather, there can be a trade-off between cost and product quality. Clearly ChatGPT produces informational content at a pretty low cost. The same can be said for junk food compared to fresh food: the overall cost of the former is low. Junk food does not actively, directly harm you, but you are certainly better off not eating too much of it. It is low quality but has been deemed acceptable.

There are examples where we can be less charitable, of course. We all complain about dangerous, poorly manufactured items (e.g. electronics with inadequate shielding, etc.) listed on Amazon, but clearly people still buy them anyway. And then, in the realm of politics, needless to say, there are many actors bent on pushing the memes they want you to have, regardless of their veracity. Some people in the marketplace of ideas "buy" them owing to network effects (e.g. whether they are acceptable according to political identity, etc.), in the same way that corporations continue to use Microsoft Windows because of network effects. We can also probably say nowadays that Clinton has ultimately been proven wrong by the government of China.

Survival of the "fittest" memes if you like: evolution does not make value judgements.

If you ask me, maybe our assumption of decentralized truth-seeking was itself not an absolute truth to begin with. But it took years to unravel, as humans, collectively speaking, let our research and critical-thinking skills atrophy from disuse once technology dropped the barriers to entry for producing and consuming information.


Just wanted to add some color here. The various written forms for native words (和語) like 聞く and 聴く don't really come from different words; they were most likely the same word originally. It's only because Japanese adopted Chinese-character writing that it started developing these stylistic distinctions.

Native Japanese speakers also don't always distinguish them properly in writing, e.g. using 聞く generically even when the more specialized meaning of 聴く would apply.

This particular kind of distinction is called 異字同訓 (の漢字の使い分け) in Japanese.

I think a better example is 話す versus 離す which I just discovered yesterday also seem to have the same pitch accent!


I'm sure what I'm going to write isn't a new insight, but to add to your point: generally we expect to consume information much more quickly through reading than through hearing. When reading, say, English, we don't really sound out each word but recognize its shape, as you say. More concretely, it's probably our brains' pattern matching on frequently occurring clusters/patterns of letters: "psy" in "psychology", for example; if you spelt that "phonetically" as "saikolojee" I doubt people would recognize it.

I don't know Korean, but I've been told that it has plenty of Chinese loanwords (or Chinese-derived character-compound "words"), like Japanese, yet because it is less homophonous than Japanese, Korean people nowadays are well able to recognize such words by their shape when written in Hangul.

As a native Chinese speaker (Cantonese, to be precise), my opinion is probably biased, but I find Japanese to be so homophonous (for Sino-Japanese words 漢語, but even for native words 和語 to a degree) that even if Japanese adopted a more "efficient" syllabary like Hangul, words would still be difficult to decipher, spaces included. Pitch accent only helps so much (and of course it isn't written down in kana); the typical examples are 紙 ("kamì": paper), 髪 ("kamì": hair) and 神 ("kàmi": god). I'm indicating the pitch accent in an ad-hoc manner with the accented letters; note that according to my (native) Japanese dictionary, 紙 and 髪 have the same pitch accent. So I think a more phonetic writing system for Japanese that remains efficient for reading would end up annotating words with semantic or etymological hints, like the idiosyncratic spellings of Latin and Greek root words in English.

Also, the texts we read are allowed to use more sophisticated, more literary words, with more complicated sentence structures. So while phonetic spelling ought to work for representing ordinary speech, that's not necessarily the case for general written expression. In practice, I find 漢語 in Japanese speech surprisingly difficult to comprehend, in the sense that words are often too indistinct for me to pick up by ear if I haven't heard them in speech before, even if I have seen them in text and already know the characters (including their readings, 音読み, in Japanese).

That's different for me as a Cantonese speaker, where I can pick up new literary compound words if their constituent characters are ones I know from other words/compounds, even rather infrequent ones. I would say it's because the sound system of Cantonese has 6 tones, mapping nearly one-to-one onto the tones plus the voiced/voiceless distinction of Middle Chinese, which in turn was a much more monosyllabic language whose characters had quite distinct sound values compared to any of the modern Chinese languages.

Incidentally, I have a harder time picking up new terms in Mandarin by ear, where many characters that are distinct in Cantonese sound the same; it's widely agreed that Mandarin has evolved more disyllabic words to compensate. Again, yes, even when I already know the compounds through writing, learning to pattern-match for them by ear in real time has been, unfortunately, for me at least, a different story.

(Edit: added below)

I also believe that both Chinese and Japanese became more homophonous because they could get away with it (i.e. people got lazy about pronouncing the more intricate sounds) when there were 漢字 to distinguish characters/words when needed. So there's certainly a feedback loop there. If, over those two to three thousand years of history, there had been no writing in ideographic characters, there would have been evolutionary linguistic pressure against so many homophones in the languages.


> When reading, say English, we don't really sound through each word but recognize the shape as you say

Apparently, some people do and some people don't; and each category is surprised the other exists. I'd be curious to know if the "visual" category is more prevalent for ideogram-based writing systems.

> my opinion is probably biased but I find Japanese to be so homophonous

It's difficult to evaluate what "so" means here, but it seems to me that homophony is made a bigger problem in this thread than it really is.

For instance, in French, we have words with many homophones: "vert" (green), "ver" (worm), "verre" (glass), "vair" (a type of squirrel fur), "vers" (the preposition "toward"), "vers" (verse), and "verrent"/"verre" (conjugations of a very rare verb that means "to pounce"; probably most French speakers don't even know it, and if you use it in a sentence their speech-recognition module will probably segfault).

There are also common homophones: "père" (father), "paire" (pair) and "pair" (peer as in P2P); "mère" (mother) and "mer" (sea); "serre" (greenhouse), "serre" (talon) and "sert" (conjugation of serve); "je suis" ("I am") and "je suis" ("I follow").

Some of these homophones double as homographs, as you can see. They are all "strict" homophones BTW, as we don't have distinctions between short and long vowels like English has, for instance.

But in both written and spoken language, grammar and context usually disambiguate the meaning, if any. Based on that observation, it seems to me it wouldn't be more difficult to figure out which "kami" it is than which "vers" it is (worm, preposition or verse), unless the sentence is specifically designed for that purpose.

Amusingly, the homophones "vair" and "verre" led to a small quarrel in the 19th century about what Cinderella's shoes were made of: fur or glass? People who want to show off sometimes bring that up, because "vair" is a rarely used word [1].

[1] https://en.wiktionary.org/wiki/vair


Interesting examples from that wicked language, thank you! (Studied French before but unfortunately lost interest.)

You're right that context and grammar are usually sufficient to disambiguate. I thought about the examples 話す ("hanâsu"; to talk) and 離す ("hanâsu"; to separate) that I gave earlier, and I don't remember ever confusing the two in spoken dialogue. But it's probably that there are enough non-homophonous near-synonyms for these words in Japanese that would get used in practice in contexts where a word could conceivably be confused with a homophone, e.g. 言う【いう】, 喋る【しゃべる】, 語る【かたる】 in the case of 話す【はなす】.

The above words are "native" Japanese words 和語. Definitely the problem of homophones is way less serious for those words than for Sino-Japanese vocabulary, 漢語.

I think Japanese is atypical in that it's a language with a limited repertoire of syllables that adopted words from a language with a much richer sound system, Chinese, trying to map the (compound) words character by character. By the way, that probably relates to why most learners find the pronunciation of any variety of Chinese difficult: the language needs to make the necessary aural distinctions, including (famously) tone, which are apparently subtle to non-native speakers.

There are countless two-syllable compound words in Chinese, a good portion of them used in common speech, but many of the ones adopted into Japanese turn into essentially two "syllables" as well. (Actually, some modern terms are back-borrowings from Japanese coinages through writing, just so I'm being fair and historically accurate in this comment.)

Of these terms there are at least a few that in Japanese would be confused in speech, and in practice the pronunciation gets mangled to disambiguate. Here are some examples:

私立 ("privately established (institution, organization, etc.)") versus 市立 ("municipally established") has the following disambiguation in common speech:

  私立【しりつ】→【わたくしりつ】
  市立【しりつ】→【いちりつ】

科学 ("science") versus 化学 ("chemistry") has the disambiguation:

  化学【かがく】→【ばけがく】 (as if the word were 化け学, though it's never actually written that way)

Basically, the native Japanese reading of a character (訓読み) is substituted for the Sino-Japanese reading (音読み) in speech, even though the latter is the proper, original reading of that character in the word in question.

For the reader's curiosity, there's also one case of pronunciation mangling that I know of in Mandarin Chinese:

炎 ("inflammation") versus 癌 ("cancer"):

  In Mandarin we have 炎 yán but 癌 ái, even though the sound value of 癌 ought to also have been yán, homophonous with 岩, according to the rules of sound change over the centuries. Contrast with the following:

  Cantonese: 炎 jim4, 癌 ngaam4, 岩 ngaam4
  Japanese: 炎 en エン, 癌 gan ガン, 岩 gan ガン

Basically it's the consequence of packing too much information onto individual syllables/characters, whereas Western languages would simply devise longer words (with more syllables) for more sophisticated/technical concepts.


> Interesting examples from that wicked language, thank you! (Studied French before but unfortunately lost interest.)

I don't blame you; that was my own thought as I wrote it.

