The early history of AI/cybernetics seems poorly documented. There are a few books, some articles and some oral histories about what was going on with McCulloch and Pitts. It makes one wonder what might have been with a lot of things, including what might have happened if Pitts had lived longer, had managed to get out of the rut he found himself in at the end (to put it mildly), and hadn't burned his PhD dissertation. But perhaps one of the more interesting comments directly relevant to all this lies in this fragment from a “New Scientist” article[1]:
> Worse, it seems other researchers deliberately stayed away. John McCarthy, who coined the term “artificial intelligence”, told Piccinini that when he and fellow AI founder Marvin Minsky got started, they chose to do their own thing rather than follow McCulloch because they didn’t want to be subsumed into his orbit.
> The early history of AI/cybernetics seems poorly documented.
I guess it depends on what you mean by "documented". If you're talking about a historical retrospective, written after the fact by a documentarian / historian, then you're probably correct.
But in terms of primary sources, I'd say it's fairly well documented. A lot of the original documents from the early days of AI are readily available[1]. And there are at least a few books from years ago that provide a sort of overview of the field at that moment in time. In aggregate, they provide at least moderate coverage of the history of the field.
Consider also that "History of Artificial Intelligence" has its own Wikipedia page[2], which strikes me as reasonably comprehensive.
[1]: Here I refer to things like MIT CSAIL "AI Memo series"[3] and related[4][5], the Proceedings of the International Joint Conference on AI[6], the CMU AI Repository[7], etc.
> We thus frame molecular docking as a generative modeling problem—given a ligand and target protein structure, we learn a distribution over ligand poses.
I just hope work on these very valid use cases doesn’t get negatively impacted when the AI bubble inevitably bursts.
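Purely to make the quoted framing concrete: "learning a distribution over ligand poses" means you can sample many candidate poses and rank them, rather than regressing a single answer. Below is a minimal, hypothetical sketch of a Langevin/diffusion-style sampler over pose parameters; `score_model`, the 6-dimensional pose encoding, and the step schedule are stand-ins I made up for illustration, not anything from the paper.

```python
# Toy sketch (not from the paper): docking as sampling from a learned pose
# distribution. A "pose" here is just 6 parameters (translation + rotation)
# of the ligand relative to the protein.
import numpy as np

def score_model(pose, t):
    # Placeholder for the learned score/denoising network; this dummy version
    # simply pulls poses toward the origin so the loop runs end to end.
    return -pose

def sample_pose(n_steps=50, step_size=0.05, dim=6, rng=None):
    rng = rng or np.random.default_rng(0)
    pose = rng.normal(size=dim)                    # start from a random pose
    for t in range(n_steps, 0, -1):                # Langevin-style refinement
        noise = rng.normal(size=dim) * np.sqrt(2 * step_size)
        pose = pose + step_size * score_model(pose, t) + noise
    return pose

# The practical upshot of the generative framing: draw several candidate
# poses and score/rank them, instead of predicting one "best" pose.
candidates = [sample_pose(rng=np.random.default_rng(i)) for i in range(5)]
```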
A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.
Even in Europe, copyright has only been protected for roughly the last 250 years, and over the last 100 years it's been constantly updated to take new technologies into consideration.
The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.
Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.
Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.
The world is asking for US big tech companies to be regulated more now than ever.
Facebook's power comes from how it gathered and monetised data, how it acquired rivals like Instagram and WhatsApp, and how it locked in network effects.
If regulators had blocked those acquisitions or enforced stricter antitrust and data privacy rules, there's a chance the social media landscape today would be more competitive. Politicians and regulators probably either received some kind of incentive or simply didn't get it. They didn't see how dangerous Zuck's greedy algorithms would become. They thought it was just a social site. They had no idea what Facebook employees were building behind the scenes. By the time they realised, it was already too late.
China was the only one that acted. The US and EU looked the other way. If they'd stepped in back in 2009 with rules on privacy, neutrality, and transparency, today's internet could've been a lot more open and competitive.
To be fair, "copy"right has only been needed for as long as it's been possible to copy things. In the grand scheme of human history, that technology is relatively new.
Copyright predates mechanical copying. However, people used to have to petition a King or similar to be granted a monopoly on a work, and the monopoly was specific to that work.
The Statute of Anne, the first recognisable copyright law in anything remotely like the modern sense, dates to 1709, long after the invention of movable type. Mechanical in the sense of printing with a press using movable type, not anything highly automated.
Having to petition for monopoly rights on an individual basis is nothing like copyright, where the entire point is to avoid having to ask for exceptions by creating a right.
I’m actually curious how exactly they were able to filter some of their less promising impulses.
Jony Ive famously wanted the Apple Watch to be a standalone luxury product.
> Jony Ive envisioned the future of the Apple Watch as a luxury product. Not only did he want to build a $25 million lavish white tent to promote the first Watch, but he “regarded a rave from Vogue as more important than any tech reviewer’s opinion.” According to Mickle, “the tent was critical to making the event as glamorous as a high-end fashion show.”
Meanwhile Jobs always seemed to have an obsession with cubes (NeXTcube, Power Mac G4 Cube), no fans and nobody touching his products (the original iPhone “SDK” announcement was a badly received joke).
> Oracle, to my knowledge, does not profit at all off of the JavaScript name or brand.
Not at this time, but their ownership and past behavior indicate that if Deno or anyone else tries to have a paid offering, there's a non-zero chance Oracle will come sniffing around for low-effort money.
There are some draft PDFs of the standard floating around that are easily discoverable. It appears to be incredibly vague, and it's difficult to escape the sense that ISO just wants to jump on the AI bandwagon. There are no bright-line rules or anything. It looks to be little more than weak scaffolding to which a certified organization applies its own controls.
Just like the GDPR, there is no way to know for sure what is actually acceptable or not. Huge chilling effect though and a lot of time wasted on unnecessary compliance.
"1 The following AI practices shall be prohibited: (...)
"f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons"
See recital 44 for a rationale. [1]
I don't think this is "hilarious". It seems a very reasonable, thoughtful restriction, which does not prevent usage for personal use or research purposes. What exactly is the problem with such legislation?
It effectively bans the educational use of ChatGPT and Claude because they can and do respond to the emotional expression of students. That’s what is hilarious! Do these tools actually violate the act? No one knows. It isn’t clear. Meanwhile, my university is worried enough to sit on their hands.
And this is the whole danger/challenge of the AI Act. Of course it seems reasonable to forbid emotion-detecting AI in the workplace, or at least it seemed reasonable 5 years ago when the ideas were discussed. But now that all major AI systems can detect emotions and infer intent (via paralinguistic features, not just a user stating their emotions), this kind of precaution puts Europe strategically behind. It is very hard to be an AI company in Europe. The AI Act does not appear to be beneficial for anyone, except that I'm sure it will support regulatory capture by large firms.
Calling me pathological doesn’t really strengthen the argument.
One can certainly imagine a textbook QA tool that doesn’t infer emotions. If one were introduced to the market with the ability to do so, it would seem to run afoul of the law, regardless of whether it was marketed as such.
The fact is that any textbook QA system based on a current frontier model CAN infer emotions.
If they were so forward thinking, why ban emotion detection and not emotional abuse?
And embrace the future of e.g. AI models deciding if you get healthcare or government services or a loan or if you're a fraud, or not, with zero oversight, accountability or responsibility for the organisation deploying them? Check out the Post Office scandal in the UK to see what can happen when "computer says so" is the only argument to imprison people, with no accountability for the company that sold the very wrong computers and systems, nor the organisation that bought them and blindly trusted them.
Hard pass. The EU is in the right and ahead of everyone else here, as they were with data privacy.
Their sales pitch when they released the M1 was that the architecture would scale linearly and so far this appears to be true.
It seems like they bump the base frequency of the CPU cores with every revision to get some easy performance gains (the M1 was 3.2 GHz and the M3 is now 4.1 GHz for the performance cores), but it looks like this comes at the cost of not being able to sustain that performance; some M3 reviews noted that the system starts throttling much earlier than an M1.
I disagree, it's really damaging to refuse to let go of the reins.
Bob Iger at Disney never really took succession seriously and then never really left, which has left the company somewhat rudderless, because the ship now depends on a person rather than on culture/policy.
I don't think Iger is comparable; all his groomed successors left before he did, so Chapek ended up with the hot potato to terrible results.
Apple has not lost a single potential successor to Tim Cook. And don't count Jony Ive; it's ridiculous to assume he could run that company. In 2004, maybe, but certainly not the giant of 2024.