
> Google will eventually copy…

Weird take, given that Google basically invented the modern deep learning stack that everyone else builds on, and released it through well-written papers and open-source software.

Google was being dissed because they had failed to make any product and were increasingly looking like a Kodak/Xerox-style one-trick pony. It seems they have woken up from whatever slumber they were in.


They didn't entirely drop the ball, since they did develop TPUs in anticipation of heavy ML workloads. They tripped over themselves getting an LLM out, but recovered quickly, primarily because they didn't have to run to Nvidia and beg for chips like everyone else in the field is stuck doing.


Like MS, Google is ubiquitous - search is much like Office, and DOS before that. Anything OpenAI or the other AI competitors create would normally be protected by patents, for instance. Not so with AI models. Google has the clout and know-how to respond with similar technology, adding it to their ubiquitous search. People are both lazy and cheap. They will always go with cheaper and good enough.


Google invented the technology.

https://en.m.wikipedia.org/wiki/Attention_Is_All_You_Need

OpenAI was the copycat.

If Google had patented this technique, OpenAI wouldn’t have existed.


How do you patent it? What specific "practical, real-world application" does AGI purport to solve? All these algorithms work by using massive amounts of data, and they all do it the same way, or close to it.

"Algorithms can be patented when integrated into specific technical applications or physical implementations. The U.S. Patent Office assesses algorithm-based patent applications based on their practical benefits and technological contributions.

Pure mathematical formulas or abstract algorithms cannot be patented. To be eligible, an algorithm must address a tangible technical challenge or enhance computational performance measurably.

Patenting an AI algorithm means protecting how it transforms data into a practical, real-world application. Although pure mathematical formulas or abstract ideas aren’t eligible for patents, algorithms can be embedded in a specific process or device." [1]

[1] https://patentlawyer.io/can-you-patent-an-algorithm/#:~:text...


Why? Social security is the young supporting the old.


Yes, 100% of employed workers supporting 100% of persons older than 65.

These statistics are invariant with respect to the economy.


That’s pretty much what the British did in Bengal.


“Divesting” Chrome does not even make much sense. Divestment means the owner must sell the assets to an unrelated party, so $$$ must trade hands in exchange for something of value.

Here’s me “thinking aloud”: I don’t have that kind of money to buy Chrome.

Assuming someone has the funds to buy Chrome, they will end up buying the IP and the brand. The IP is the copyright on the code (which is mostly open source) plus the proprietary Google services integration bits.

I can see value in buying the patents & copyright to the codebase and taking it private to build a proprietary state-of-the-art browser to sell, or to bundle for free with ads.

Nobody is going to pay for a browser in 2025 so Mritun’s Chrome (my brand - I can’t call it Google Chrome anymore) is going to be ad supported. I might have to think of paid V8 runtimes like Mritun’s Electron and Mritun’s NodeJS to extract maximum value from my investment.

All in, in this high interest-rate environment, I do not see someone buying Chrome to fund it as a free product - however, a lot of ecosystems depend on Chrome, so the best bet would be to use the copyright and patents to go after the users. If I had the money, I could totally see buying Chrome for approx $100M and then going after monetization.


A lot of the value of Chrome is in the expertise of its developers - so whoever buys it had better get those devs, or they're not going to get far.

Might not be necessary, though. A quick transfusion of cash to our sitting President will bring that action to a screeching halt.


There is soooo much scummy surveillance and data stealing that a nefarious owner of Chrome could achieve. People say Google is "evil", but that isn't a fraction of the total tracking and wholesale private-content harvesting a different owner would monetize.


Frankly, there isn’t much money in surveillance - with the “Google” and the four colors gone, I doubt the “Chrome” brand is going to be worth much. “Blackrock Private Equity LLC Chrome” just does not have the same ring.

The money is in the Chrome patents and copyright. One could go to town and blitzkrieg the NodeJS ecosystem, which has super deep pockets. SCO will look like baby talk.


> investigators did not receive a response from Raoult, the corresponding author. To date, 32 papers published by IHU authors have been retracted, 28 of them co-authored by Raoult, and 243 have expressions of concern.

I am not a scientist, but if of 32 faulty widgets, 28 were made by a guy called Raoult, then even blue-collar workers would know Raoult is an idiot with no business making widgets. If that does not happen, ACME Widgets will eventually go out of business.

This is how science gets discredited - by allowing idiots to do “science”. Here Raoult does not get kicked out, but is Director of ACME - this is how the entire field of medical research got tainted.


> This is how science gets discredited - by allowing idiots to do “science”.

Not really - science has to be open to all as otherwise we risk having a "priesthood" of scientists. What we need are better systems to deal with rogue researchers and retracted papers.

The scientific principle is fine, it's just our implementation that is lacking.


The biggest issue, I think, is what happens after a paper is released; it gets spread, reinterpreted, diluted, popularised, and editorialised through three or four layers of media (university press room -> serious news/science outlets -> popular news/science outlets, going from "these numbers indicate with 89.1346% certainty that this exoplanet may contain traces of H2O" to "EXTRATERRESTRIAL LIFE FOUND PACK YOUR STUFF WE'RE GONNA COLONISE IT"), and then onto social media.

The Wakefield paper linking autism to vaccines has become so mainstream in certain communities it's impossible to undo the damage even though it was (finally) retracted in 2010 and Wakefield himself was struck off the register. At this point only through a very long, slow and arduous process can you get this idea out of people's heads, thanks to constant repetition, reinterpretation, scaremongering, and a whole community forming. It's going to be the same with this paper and the idea that dewormer is effective against the 'rona, or any quack 'rona countermeasures for that matter.


Yes, but why do you believe publications should be optimized towards a better public opinion (whatever that means) and not the progress of science?

For the progress of science everyone should be able to share their results so that others can try to reproduce them.

This IS the real question: whether it's reproducible or not. By adding more barriers to publishing (like stricter peer reviews) you're not actually getting closer to the answer. The opposite can be true because there's a chance of censoring reproducible results that don't fit the current consensus.


I think it would be fun to see if it is possible to get the very popular notion that the Wakefield paper is the only source of all concerns about vaccines and autism out of the minds of "rational" Normatives.

I would be surprised if anyone could do it.


Funnily enough, the idea that hydroxychloroquine helps (as in Raoult's paper) was replaced in cranks' heads by the dewormer (ivermectin).


There is probably a reasonable number of retractions to expect over a career - if you work for 40 years doing research and publish a couple of papers per year, then I will allow that a researcher might have become involved in a study that was deeply flawed and whose papers need to be retracted. But to stay in good standing with the community, the researchers should admit the flaws (as some of the authors of this paper did).

Didier Raoult is on a whole other level. He refuses to admit any issues with the study. He has been under criminal investigation for this. This isn't the first time this has been a big deal (scroll through here to see the issues he's had for the last few years https://retractionwatch.com/?s=Didier+Raoult).


This misses the context that Raoult alone has published more than 3000 papers and is one of the most highly cited researchers in his field. Take the combined sum of all the IHU authors and it's likely in the tens of thousands.

This is also why people increasingly believe whatever they want. Nobody is honest, everything is framed in the most exaggerated way - which then makes it easy to undermine - and there's mass corruption everywhere on top of all of this, especially in covid-related matters, where you have geopolitics, politics, and hundreds of billions of dollars in profiteering stewing in one giant, and quite toxic, pot.


people have always believed whatever they want. hence, it has been a game of persuasion for eons. the most persuasive person was always listened to, regardless of what they said.

science changed the game by insisting that it is not about what we believe, but about what's out there. But it doesn't come naturally to most of us. we still love narratives, and are easily fooled.


I think it is very interesting how people talk about science as if it is (near) perfect, and that this perfection passes on to those who are believers in it.

I wonder what the causality is behind so many people ending up with this same conceptualization within their minds. It has an eerie resemblance to the behavior of people in other ideological groups, like religion.


regardless of where you are in the world, your race, religion, language, whether you lean left or right politically,

you are better off believing in Newton's theories. they will help you build tools to navigate the world better and weapons to protect you.


If this applies at the individual level, I would love to read even an attempt at a proof.

One interesting aspect of science is that so much of what's true within it requires no proof, like religion. A lot of people try to get around this by saying that people aren't a part of science, but then they always seem to lose track of the thread when asked how science accomplishes anything without having either people involved, or the supernatural (which "doesn't exist" dontcha know).


My personal view is that science is our human “interpretation” of what’s “out there”, using a language (math) and a method (the scientific method) that leave no room for doubt.

It is purely a human endeavor much like other arts we pursue - except in this art, we settle opinion/belief with a specific method. For a belief to be inducted into science, it has to endure rigorous tests.

C.S. Peirce attempted a proof here: https://www.peirce.org/writings/p107.html

At an individual level, I suspect, we fear the uncertainty that comes with the “irritation of doubt”, and we also fear being a social outcast. Therefore, to optimize happiness, we would rather stick with the beliefs of our tribe - however unscientific they may be - because they serve as a mental anchor. Hence the prevalence of religions and beliefs that are tribal.


Scientists are not superhuman; even in the golden era of science it was Max Planck who remarked, 'Science advances one funeral at a time.' And things have rather worsened since his era.

It's not science that really changed anything, but rather humanity freeing itself from bias and opening itself to discourse and debate on practically any issue, without consequence - accepting all the discomfort that entails.

But such eras are brief and can fade rapidly. The Islamic world was once a global leader in the sciences. Algebra = al jabr, Arabic numerals, alongside countless contributions to astronomy and other fields. Oh but how fast that candle of learning can be snuffed out when it becomes inconvenient.


> Raoult alone has published more than 3000 papers

Hum...

There's something very wrong with a figure like that.


I've heard he has published 300+ a year since 2009, when the French research minister of the time changed how grant attribution worked. The majority of them appeared in two journals directed by two of his friends/coworkers.


> This is how science gets discredited - by allowing idiots to do “science”. Here Raoult does not get kicked out, but is Director of ACME - this is how the entire field of medical research got tainted.

It should be noted that a few months after Raoult's paper was published, another one contradicting it appeared:

* https://www.nejm.org/doi/full/10.1056/NEJMoa2022926

It's just that the contradiction may not have been as widely known, especially by those who aren't involved in the field.

Isn't this how science is (at least partly) done? Claim (which is falsifiable) and counter-claim (verification/repudiation).


Based on my experience as an analyst of Looney Tunes, it would appear that being a Director of ACME Widgets is right up his alley.


He is under criminal investigation and has been banned from practicing medicine for two years - I think that substantially undercuts your argument.


> by allowing idiots to do “science”.

Anyone can "do science". No permission is required. The question is who publishes what garbage.


it has always been an "attention"-economy rather than a "science"-economy. and scientists are human too, with their biases.

there are two kinds of people. the majority of us have made up our minds and find evidence for it. the rare few listen to whats out there with an open mind. once someone has put their name on something, they fight hard to protect that.

entire populations have always been and still are driven by pseudo-science (astrology etc).


It's the same thing that has cooks and gardeners beholden to non-compete agreements in the land of the free!

Greed is universal. Slavery is one of the outcomes.


Non-determinism doesn’t mean chaos.

Record the seed for all RNGs used (if you’re using more than one in your SkipList implementation) and then your skip lists are no harder to debug than a linked list.
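
A minimal sketch of the idea (hypothetical names; libc rand() stands in for whatever RNG your skip list actually uses):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define MAX_LEVEL 16

    /* The coin-flip level generator is typically the only source of
       nondeterminism in a skip list. */
    static int random_level(void) {
        int level = 1;
        while (level < MAX_LEVEL && (rand() & 1))
            level++;
        return level;
    }

    int main(int argc, char **argv) {
        /* Accept a seed on the command line to replay a failing run;
           otherwise pick one and log it so it can go in the bug report. */
        unsigned seed = (argc > 1) ? (unsigned)strtoul(argv[1], NULL, 10)
                                   : (unsigned)time(NULL);
        fprintf(stderr, "skip-list RNG seed: %u\n", seed);
        srand(seed);

        for (int i = 0; i < 8; i++)
            printf("insert %d -> level %d\n", i, random_level());
        return 0;
    }

Re-run with the logged seed and the list takes exactly the same shape every time.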


That mitigation is neither sufficient for every situation, nor was it mentioned in the article even for the cases where it is adequate.

Again: the point of the comment wasn't "tell people not to do this", it was "give people enough information to make an informed decision".


The RP2040 isn’t slow - it’s literally one of the fastest on the market. It’s a dual-core microcontroller that can easily run both cores at 133-250+ MHz.

https://github.com/Wren6991/PicoDVI

The GPIO on the RPi is not very useful for precision work, and you’re limited to using SPI (usually to talk to an auxiliary microcontroller). The GPIO on the RP2040 is so good that you can use it as a 24-channel 100 Msps logic analyzer in a pinch.

https://github.com/gusmanb/logicanalyzer
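
To give a feel for the plain-GPIO speed (before the PIO even enters the picture), here's a minimal sketch assuming the standard pico-sdk; gpio_xor_mask() is a single write to the SIO block, so the loop toggles the pin at a large fraction of the system clock:

    #include "pico/stdlib.h"

    int main(void) {
        const uint PIN = 2;          /* any free GPIO; chosen arbitrarily */
        gpio_init(PIN);
        gpio_set_dir(PIN, GPIO_OUT);
        while (true)                 /* each toggle is one SIO register write */
            gpio_xor_mask(1u << PIN);
    }

For anything timing-critical you'd hand the pin to a PIO state machine instead, which runs its own deterministic program independent of the cores.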


Yeah anyone calling the RP2040 slow for GPIO hasn't used it in anger. The PIO is amazing.


Smaller CPU aside, how do its GPIOs compare performance-wise to the older PRUs in the TI Sitara CPUs used in BeagleBoards? Many complained about the small number of channels on the TI processors (only 4, if memory serves), so I wonder if it could be considered a successor for cases where very fast digital I/O is needed but the power and memory to run Linux are not, because it's hosted and driven by a bigger nearby CPU.


The RP2040 is a microcontroller and does not have an MMU. It cannot natively run any OS that relies on an MMU, and that includes NetBSD. One can of course write an emulator that does, run that emulator on the RP2040, and run NetBSD on the emulator.

edit: Emulators, of course!


This project is an emulator that is able to run NetBSD (and Ultrix, and Linux, and potentially others)!


All this isn’t exactly a secret. ARM maintains and provides extensive documentation and so does Apple. Is there anything specific you think is being hidden or obfuscated in the documentation?


There are a lot of undocumented parts of the Apple CPUs, for instance AMX. All such undocumented features can normally be exploited only by the libraries and applications provided by Apple themselves, but not by the applications and libraries written by other parties, which are disadvantaged.

This is the same mechanism by which Microsoft eliminated the competition for Microsoft Office, which used undocumented Windows APIs so that the products of other vendors could not keep up with it, especially after the launch of any new Windows version.

Now one can find some more complete documentation for the Apple CPUs as the result of reverse engineering work done by various people, but after each introduction of a new Apple CPU model the reverse engineering work may need to be done again.

Examples:

https://github.com/name99-org/AArch64-Explore

https://github.com/corsix/amx


Probably because they don't want anyone to depend on AMX, and they want to be free to remove or change it in the future. On the M4, for example, the AMX features are accessible through SME, which is an official ARM extension.


AMX and SME are independent


Do they really not share any execution hardware?


I'm sure the register file and execution units are shared to some extent


Anything capable of fast C interop (so no Go and Java for you, good riddance) is free to use Accelerate. The reason Apple went with AMX first was that SME was not ready at the time, and they did want to have that capability. Once SME became available, they readily exposed it, as can be seen on the M4, using the same hardware blocks underneath.
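
For illustration, a sketch of what "using Accelerate" looks like in practice - a plain CBLAS call, one of the standard interfaces Accelerate exposes, and the sanctioned route to the matrix hardware:

    #include <stdio.h>
    /* build with: clang matmul.c -framework Accelerate */
    #include <Accelerate/Accelerate.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4};   /* 2x2, row-major */
        float b[4] = {5, 6, 7, 8};
        float c[4] = {0};

        /* C = 1.0 * A * B + 0.0 * C */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,         /* M, N, K */
                    1.0f, a, 2,      /* alpha, A, lda */
                    b, 2,            /* B, ldb */
                    0.0f, c, 2);     /* beta, C, ldc */

        printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);
        return 0;
    }

Whether that runs on AMX, SME, or plain NEON is Apple's call, which is precisely the point: the dispatch is their business, the interface is public.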

I'm not here to defend other anti-competitive practices by Apple but as far as just their CPUs go, there are none in that area.


Apple aren't allowed to publicly support unofficial extensions of the ARM ISA.


Apple writes the libraries for you to use AMX. They aren’t giving themselves preferential treatment here.


If that's the case, then why does the GPU portion have to be reverse-engineered for Asahi Linux? Of course I knew about the ARM portion; there are lots of ARM chips licensed from ARM Holdings, it's not exactly a secret. But the "apple silicon" chip in its entirety is not completely documented.


Are any competitive GPU architectures any better? I don't think Nvidia, AMD, Intel, or PowerVR openly publish the internals of their graphics products either.


AMD and Intel publish detailed GPU documentation.



The API for programming the GPU is Metal.


Peripherals are not the ISA or CPU architecture: they are usually made by numerous parties.


Apple has designed their own GPUs since they stopped using PowerVR with A11


What does that have to do with ARM64 assembly? The ISA and CPU architecture are orthogonal to all peripherals.

These peripherals are accessed with memory-mapped IO using the same instructions any other program uses.

Documentation about ARM64 assembly shouldn't and doesn't contain specific peripheral access info. ISA docs contain info common to all CPUs implementing the spec.
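
A sketch of what that looks like in C (every address and bit below is made up for illustration; the real values come from the SoC vendor's register documentation, not from the ARM ISA manual):

    #include <stdint.h>

    /* Hypothetical UART registers at a made-up base address. */
    #define UART_BASE  0x40001000u
    #define UART_DR    (*(volatile uint32_t *)(UART_BASE + 0x00))
    #define UART_FLAGS (*(volatile uint32_t *)(UART_BASE + 0x18))
    #define TX_FULL    (1u << 5)     /* hypothetical "TX FIFO full" bit */

    static void uart_putc(char c) {
        while (UART_FLAGS & TX_FULL) /* an ordinary load, in a loop */
            ;
        UART_DR = (uint32_t)c;       /* an ordinary store - no special opcodes */
    }

The only ISA-level concern is ordinary load/store semantics (plus memory attributes and barriers); which address does what is entirely the SoC's affair.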

