Hacker News | throwaway4good's comments

"Company President Brad Smith also revealed that China accounts for only 1.5% of Microsoft’s total revenue at a recent congressional hearing, raising questions about its future in the country."

1.5% sounds like a number massaged to please the particular audience.


I think it is hard to make a for-profit general-purpose language today, if by that you mean that developers pay for it directly. The business model has moved on; builders of languages and platforms have found other ways of charging people.


Funny. I get 50 results for "facebook" - ending with:

In order to show you the most relevant results, we have omitted some entries very similar to the 50 already displayed.

"Infinite" scroll until I hit that message (no pagination).

If I search for "foo" - it will show a "More results v" button after about 50 and keep on doing that.

I am in Germany.


Now I’m wondering what the missing 6 (from the OP 56) are compared to German results?!


I cannot tell if you are joking or not. But it is obvious she is litigating in public until she gets the payoff she wants:

https://mastodon.social/@ashleygjovik

Of course a big corp cannot give in easily to behaviour like that as it would just open the flood gates.


A big corp can't give in, because then they would have to install proper exhaust filtration and monitoring, which would probably cost tens of millions of dollars and require a lot of additional staff who would have to know what Apple is doing in those facilities.

They prefer keeping their skunkworks small, and the easiest way to do that is to just install a charcoal filter and call it a day.


So what about US companies like Apple, Tesla and Microsoft that operate in China and want to make their AI offerings available there, while being bound by Chinese requirements to run cloud services in the country? Wouldn't they be caught by this and thus be at a disadvantage vs local offerings?


It looks like a modern statically typed multi-paradigm language with OO, exceptions, higher-order functions, pattern matching, annotations, concurrency primitives etc., sitting on top of LLVM and something called CVJM. I guess it is in the same family as modern C# and perhaps Swift; it has a somewhat similar syntax to Swift.

There are also some instructions for running it on Ubuntu in the pdf.


This programming language reminds me a lot of ML (meta-language) styles: function overloading based on pattern matching, immutable variables, and a `spawn` keyword for lightweight threads – a practice also employed by Erlang. Perhaps owing to Huawei's origins as a telecommunications equipment company, where many within the organization are familiar with Erlang, this new language exhibits numerous resemblances to it.


Could it be that you never tried to understand react and why the approach makes sense to millions of developers?


We could waste each other's time getting into the specifics, but I can summarize it.

It has affordances and design patterns that inexperienced, novice programmers find attractive and tempting, but that come with all of the warts and issues surrounding intuitive patterns. That's why it takes over 10 years to get good at programming – counterintuitive insights are the clarifying beacons on any sufficiently complex project.

People are "react programmers" instead of actually understanding the specs, standards, javascript, protocols or how things actually work.

It's about 100 times easier to find a mediocre junior programmer than a competent one. React allows, at a great cost of time, resources, and money, a large team of mediocre junior programmers to eventually create what once required a competent engineer, and that is why it's popular.

Your offshore outsourced $15/hr programming firm can now make a half-assed website or app and you don't actually need to find someone who knows what they are doing.

If you've actually worked in the industry, actually dealt with code, been put on projects that are off the rails, you know this is true. You know this is exactly what is happening.

Then management is unwilling to pony up the $250k to attract the right talent so you get stuck with the job of making heroic efforts to prevent the house of spaghetti from collapsing.

You've been there. You've done this. You know exactly what I'm talking about.


I don’t think it’s fair to blame React for the flood of shitty people. It’s a symptom of the problem, not the cause.

If I had to manage the collapsing house in Rust I imagine it’d be much, much worse (though I can’t say I ever had that experience, so who knows?).


What you write about has got nothing to do with react.


I'm sorry. I can't respond to such flippantly rude insincerity without violating the Terms of Service


I do not mean to be rude but what you describe is an organizational problem.


This is perhaps my least favorite line of reasoning: "you're holding it wrong." There will always be organizational issues; Conway's Law exists. The tooling matters because it encourages certain patterns and makes certain things ergonomic.

I hear this in my circles around Python, which encourages spaghetti code and reaching into the private parts of others' code. Yes, you can prevent this with better libs/modules, solid leadership, and steadfast resolve backed by adequate tooling. Or you can use a tool that makes spaghetti harder to write and pick something like .NET or Go.


They're inherently connected. https://en.wikipedia.org/wiki/Conway%27s_law

There's an extension to this - a codependency. The system imposes structure back up the chain.

These choices can become good organizational fits because they reflect the separation of concerns of a project decided by management. It's almost invariably a management that doesn't know how to code or build software.


I will give you this: the emergence of mature, powerful web browsers made it possible to move a lot of functionality to the end-user's device, and that gave rise to the frontend/backend developer split. But you are pointing to a specific frontend technology (React), and that is not the culprit here.


> Could it be that you never tried to understand react and why the approach makes sense to millions of developers?

Could you summarise it for me? Sort of along the lines of:

<this is what is needed>:<this is what React provides>


It was the "view as a (pure) function of your model" approach that was architecturally new in react.

This article explains it using a bit of react jargon (action, store, dispatcher) which is kind of unnecessary - but that's what I got from a quick google:

https://medium.com/@asif-ali/react-architecture-vs-mvc-unidi....

React Architecture vs. MVC: Unidirectional Flow Advantage.


What a silly rant. There is a huge world of different approaches to front-end and lots of them do cool and interesting things that would both up your productivity and give you completely new opportunities ... if you bothered instead of just being an old fart.


It would be cool if all of the fancy front-end people would learn how to use HTML properly before lecturing the “old farts.” The <ol> example comes to mind.


The way some people code "HTML" these days is like an apprentice carpenter swaggering up to a house building site, and then banging in nails with the butt of their power-drill, and telling the "old fart" carpenters what losers they are for bothering to use a hammer.

Use the right tool for the job. HTML is a toolbox, not a bucket of <divs> and <spans>.


You can be old and experienced and still have an attitude of being open to new things ... at least to the point where you understand them before you embark on a rant.


We didn't have that in the article?


It is a strange thing. We are not anywhere near “super intelligence” yet the priority is safety.

If the Wright brothers had focused on safety, I am not sure they would have flown very far.


The consequences of actual AI are far-reaching compared to a plane crash.


No they are not.

Not unless you connect a machine gun to your neural net.

Otherwise, we are talking about a chat bot. Yes, if there is no safety, it will say something racist, or implement a virus for you that you would otherwise have had to search 4chan for.

None of this is any more dangerous than what you can find on the far corners of the internet.

And it will not bring the end of the world.


If it is sufficiently intelligent, then it will be able to arrange the hooking up of machine guns just via chat. You seem to fail to note that superintelligence has an inherent value prop. It can use its better understanding of the world to generate value in a near vacuum, and use that value to bribe, coerce, blackmail, and otherwise manipulate whatever humans can read its output.

Imagine a chatbot that can reward people with becoming a billionaire via the stock market if you just do what it says. Imagine a chatbot that can pay your colleagues in solved bitcoin block hashes to kill your children if you don’t do what it says to connect it to more systems. Imagine a superintelligence that can inductively reason about gaps in a missile defense system and dangle that informational carrot to generals in exchange for an unfiltered ethernet port.

There is a great A24 movie called “Ex Machina” that explores this concept.

Superintelligence is inherently dangerous to the human status quo. It may be impossible to develop it “safely” in the sense that most people mean by that. It may also be impossible to avoid developing it. This might be the singularity everyone’s been talking about, just without the outcome that everyone hoped for.


There's just so many new attack vectors with advanced AI, many of which we haven't imagined yet. It's a new world, and we have a lot to learn.

Anyone claiming 100% certainty about how safe or unsafe it will be is either delusional or trying to sell something.


I am still struggling to understand “safe”. What is it we need to be kept safe from? What would happen if it is unsafe?


If you had a superintelligence, it could basically manipulate people, so even if you don't connect it to the internet it could still try to further its goal. There is also a chance that its goals are not the same as ours.

So we need a way to discern intent from AGI and above, and the ability to align its goals with ours.

With that said I'm not sure if we ever get to them being self driven and having goals.


A "safe" AI is one that allows humans freedom/self actualization while solving all intelligence/production problems. An "unsafe" AI is one that kills all humans while solving all intelligence/production problems.

They're trying to birth a god. They hope they can birth a benevolent god.

This isn't about AI that spreads/doesn't spread misinformation/etc, this is about control of the light cone, who gets to live to see it happen, and in what state do they get to live.


From competition.

