Hacker News | Zafira's comments

Aptos has been the default font for Microsoft Word since 2023.

With all the fanfare made over Calibri back when it was announced, TIL about Aptos.

Aptos is slightly wider and taller but looks very, very similar to Calibri, especially Calibri set a point larger.

The early history of AI/cybernetics seems poorly documented. There are a few books, some articles, and some oral histories about what was going on with McCulloch and Pitts. It makes one wonder what might have been: if Pitts had lived longer, been able to get out of the rut he found himself in at the end (to put it mildly), and hadn’t burned his PhD dissertation. But perhaps one of the more interesting comments directly relevant to all this lies in this fragment from a “New Scientist” article[1]:

> Worse, it seems other researchers deliberately stayed away. John McCarthy, who coined the term “artificial intelligence”, told Piccinini that when he and fellow AI founder Marvin Minsky got started, they chose to do their own thing rather than follow McCulloch because they didn’t want to be subsumed into his orbit.

[1] https://www.newscientist.com/article/mg23831800-300-how-a-fr...


The early history of AI/cybernetics seems poorly documented.

I guess it depends on what you mean by "documented". If you're talking about a historical retrospective, written after the fact by a documentarian/historian, then you're probably correct.

But in terms of primary sources, I'd say it's fairly well documented. A lot of the original documents related to the earlier days of AI are readily available[1]. And there are at least a few books from years ago that provide a sort of overview of the field at that moment in time. In aggregate, they provide at least moderate coverage of the history of the field.

Consider also that the term "History of Artificial Intelligence" has its own Wikipedia page[2], which strikes me as reasonably comprehensive.

[1]: Here I refer to things like MIT CSAIL "AI Memo series"[3] and related[4][5], the Proceedings of the International Joint Conference on AI[6], the CMU AI Repository[7], etc.

[2]: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

[3]: https://dspace.mit.edu/handle/1721.1/5460/browse?type=dateis...

[4]: https://dspace.mit.edu/handle/1721.1/39813

[5]: https://dspace.mit.edu/handle/1721.1/5461

[6]: https://www.ijcai.org/all_proceedings

[7]: https://www.cs.cmu.edu/Groups/AI/html/rep_info/intro.html


Based on the paper for DiffDock (https://arxiv.org/abs/2210.01776) it looks like it was a great use case for a diffusion model.

> We thus frame molecular docking as a generative modeling problem—given a ligand and target protein structure, we learn a distribution over ligand poses.
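
For anyone curious what that framing looks like mechanically, here is a minimal, hypothetical sketch in Python: plain annealed-Langevin-style sampling over a flat pose vector, with score_model as a toy stand-in for DiffDock's trained, protein-conditioned network (the real model diffuses over translations, rotations, and torsion angles, not a flat vector):

    import numpy as np

    def score_model(pose: np.ndarray, t: float) -> np.ndarray:
        # Toy stand-in: in DiffDock this is a learned network that scores
        # perturbations of the ligand pose given the protein structure.
        # Here we just pull the pose toward the origin so the sketch runs.
        return -pose

    def sample_pose(dim: int = 6, steps: int = 50, seed: int = 0) -> np.ndarray:
        """Draw one ligand pose (e.g., 3 translation + 3 rotation params)
        from the learned distribution by annealing noise from high to low."""
        rng = np.random.default_rng(seed)
        pose = rng.normal(size=dim)          # start from pure noise
        for step in range(steps):
            t = 1.0 - step / steps           # noise level, annealed 1 -> 0
            step_size = 0.05 * t
            noise = rng.normal(size=dim) * np.sqrt(2 * step_size) * t
            pose = pose + step_size * score_model(pose, t) + noise
        return pose

    # Sampling repeatedly yields a distribution over poses, which a separate
    # confidence model can then rank, as the paper describes.
    poses = [sample_pose(seed=i) for i in range(8)]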

I just hope work on these very valid use cases doesn’t get negatively impacted when the AI bubble inevitably bursts.


A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.


Even in Europe, copyright has only been protected for the last 250 years, and over the last 100 years it's been constantly updated to take new technologies into account.


The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.

Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.

Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.

The world is asking for US big tech companies to be regulated more now than ever.


Regulating FB earlier wouldn't have helped much, I think; it would have grown just as fast in other, mostly US, markets and would be just as powerful today.


I don't agree with this.

Facebook's power comes from how it gathered and monetised data, how it acquired rivals like Instagram and WhatsApp, and how it locked in network effects.

If regulators had blocked those acquisitions or enforced stricter antitrust and data privacy rules, there's a chance the social media landscape today would be more competitive. Politicians and regulators were probably either given some kind of incentive or simply didn't get it. They didn't see how dangerous Zuck's greedy algorithms would become. They thought it was just a social site. They had no idea what Facebook employees were building behind the scenes. By the time they realised, it was already too late.

China was the only one that acted. The US and EU looked the other way. If they'd stepped in back in 2009 with rules on privacy, neutrality, and transparency, today's internet could've been a lot more open and competitive.


Coincidentally that’s about when we discovered god-given rights (John Locke died in 1704), so that makes sense.


To be fair, "copy"right has only been needed for as long as it's been possible to copy things. In the grand scheme of human history, that technology is relatively new.


Copying was a thing for a very long time before the Statute of Anne, just not mechanical copying. Copyright coincided with the rise of mechanical copying.


Copyright predates mechanical copying. However, people used to have to petition a King or similar to be granted a monopoly on a work, and the monopoly was specific to that work.


The Statute of Anne, the first recognisable copyright law in anything remotely like the modern sense, dates to 1709, long after the invention of movable type. Mechanical in the sense of printing with a press using movable type, not anything highly automated.

Having to petition for monopoly rights on an individual basis is nothing like copyright, where the entire point is to avoid having to ask for exceptions by creating a right.


I’m actually curious exactly how they were able to filter some of their less promising impulses.

Ive famously wanted the Apple Watch to be a standalone luxury product.

> Jony Ive envisioned the future of the Apple Watch as a luxury product. Not only did he want to build a $25 million lavish white tent to promote the first Watch, but he “regarded a rave from Vogue as more important than any tech reviewer’s opinion.” According to Mickle, “the tent was critical to making the event as glamorous as a high-end fashion show.”

Meanwhile, Jobs always seemed to have an obsession with cubes (NeXTcube, Power Mac G4 Cube), no fans, and nobody touching his products (the original iPhone “SDK” announcement was a badly received joke).


> Oracle, to my knowledge, does not profit at all off of the JavaScript name or brand.

True at this time, but their ownership and past behavior indicate that if Deno or anyone else tries to have a paid offering, there's a non-zero chance Oracle will come sniffing around for low-effort money.


There are some draft PDFs of the standard floating around that are easily discoverable. It appears to be incredibly vague, and it's difficult to escape the sense that ISO just wants to jump on the AI bandwagon. There are no bright-line rules or anything. It looks to be little more than weak scaffolding on top of which a certified organization applies its own controls.


Sadly, ISO 42001 certification doesn't ensure compliance with the EU AI Act.

Since this is European legislation, it would be beneficial if certifications actually guaranteed regulatory compliance.

For example, while ISO 27001 compliance does establish a strong foundation for many compliance requirements, it doesn't by itself guarantee compliance with regulations like the GDPR.


The AI Act is hilarious. It makes emotion detection the highest level of risk, which makes any frontier model potentially in violation.

Most frontier models now let you take a picture of your face, then assess your emotions and give advice, and that appears to be a direct violation.

https://www.twobirds.com/en/insights/2024/global/what-is-an-...

Just like with the GDPR, there is no way to know for sure what is actually acceptable. There's a huge chilling effect, though, and a lot of time wasted on unnecessary compliance.


Are you referring to Article 5(1)(f)?

"1 The following AI practices shall be prohibited: (...)

"f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons"

See recital 44 for the rationale.[1] I don't think this is "hilarious". It seems a very reasonable, thoughtful restriction, one which does not prevent personal or research use. What exactly is the problem with such legislation?

[1]: https://artificialintelligenceact.eu/recital/44/


It effectively bans the educational use of ChatGPT and Claude, because they can and do respond to the emotional expression of students. That's what is hilarious! Do these tools actually violate the act? No one knows. It isn't clear. Meanwhile, my university is worried enough to sit on its hands.

And this is the whole danger/challenge of the AI Act. Of course it seems reasonable to forbid emotion-detecting AI in the workplace, or it would have seemed so 5 years ago when these ideas were discussed. But now that all major AI systems can detect emotions and infer intent (via paralinguistic features, not just a user stating their emotions), this kind of precaution puts Europe strategically behind. It is very hard to be an AI company in Europe. The AI Act does not appear to be beneficial for anyone, except that I'm sure it will support regulatory capture by large firms.


Seems like you’re reading this rather broadly. Pathologically so.

An AI textbook QA tool may be able to infer emotions, but it’s not a function of that system.

> The AI Act does not appear to be beneficial for anyone

It’s an attempt to be forward thinking. Imagine a fleet of emotionally abusive AI peers or administrators meant to shame students into studying more.

Hyperbolic example, sure, but that's what the law seems to try to prevent.


Calling me pathological doesn’t really strengthen the argument.

One can certainly imagine a textbook QA tool that doesn’t infer emotions. If one were introduced to the market with the ability to do so, it would seem to run afoul of the law, regardless of whether it was marketed as such.

The fact is that any textbook QA system based on a current frontier model CAN infer emotions.

If they were so forward-thinking, why ban emotion detection and not emotional abuse?


The rest of the world should simply stop bothering with European silliness tbh.


And embrace a future of, say, AI models deciding whether you get healthcare, government services, or a loan, or whether you're a fraud, with zero oversight, accountability, or responsibility for the organisation deploying them? Check out the Post Office scandal in the UK to see what can happen when "computer says so" is the only argument needed to imprison people, with no accountability for the company that sold the very wrong computers and systems, nor for the organisation that bought and blindly trusted them.

Hard pass. The EU is in the right and ahead of everyone else here, as they were with data privacy.


Their sales pitch when they released the M1 was that the architecture would scale linearly and so far this appears to be true.

It seems like they bump the base frequency of the CPU cores with every revision to get some easy performance gains (the M1's performance cores ran at 3.2 GHz; the M3's now run at 4.1 GHz), but this appears to come at the cost of not being able to sustain that performance: some M3 reviews noted that the system starts throttling much earlier than an M1.
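
As a rough back-of-the-envelope check (using the figures quoted above, not official spec sheets), the clock bump alone accounts for a sizable share of the generational gains:

    # Clock figures as quoted above (assumed, not verified against Apple specs).
    m1_pcore_ghz, m3_pcore_ghz = 3.2, 4.1
    gain = m3_pcore_ghz / m1_pcore_ghz - 1
    print(f"M1 -> M3 P-core clock increase: {gain:.0%}")  # ~28% over two generations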


Isn't Italy also the poster child for an unstable parliamentary system?


I disagree, it's really damaging to refuse to let go of the reins.

Bob Iger at Disney never took succession seriously and then never really left, which has left the company somewhat rudderless, because the ship now depends on a person rather than on culture/policy.


Lack of succession planning and an overdominant CEO are concepts utterly distinct from the approach being described here.


One usually begets the other.


Quite the opposite. Lack of succession planning is unlikely to be correlated with the kind of effort required to organise staff retention the way Apple does.

And an overly dominant CEO generally doesn't want senior staff who might provide counsel to stick around.


I don't think Iger is comparable; all his groomed successors left before he did, so Chapek ended up holding the hot potato, with terrible results.

Apple has not lost a single potential successor to Tim Cook. And don't count Jony Ive; it's ridiculous to assume he could run that company. In 2004, maybe, but certainly not the giant of 2024.


Do you feel it's more that people depend on culture, more that culture depends on people, or that the two are equally interdependent?

