What about the 'openness' of AI development? By 'openness' I mean research papers, spying, former employees, etc. Wouldn't that mean that a few months to years after AGI is discovered, the other country would also discover AGI by obtaining the knowledge from the other side? Similar to how the Soviets conducted their first nuclear test less than 5 years after the US did theirs, due in large part to spying. The point is: wouldn't the country that spends less on AI development actually have an advantage over the country that spends more, since it will obtain that knowledge quickly for less money? Also, the time of discovery of AGI may matter less than which country first implements the benefits of AGI.
This is actually an interesting question! If you look at OpenAI's change in behavior, I think that's going to be the pattern for venture-backed AI: piggyback on open science, then burn a bunch of capital on closed tech to gain a temporary advantage, and hope to parlay that into something bigger.
I believe China's open source focus is in part a play for legitimacy and in part a way to devalue our closed AI efforts. They want to be the dominant power not just by force but by mandate. They're also uniquely well positioned to take full advantage of AI proliferation, as I mentioned, so in this case a rising tide raises some boats more than others.
The AI labs have settled on a definition of AGI: "AI that can do the vast majority of economically valuable work at or above the level of humans."
They don't heavily advertise this definition because investors expect AGI to mean the computer from Her, and it's not gonna be that. They want to be able to tell investors without lying that they're on target for AGI in 3 years, and they're riding on pre-existing expectations.
If there is a downturn in AI use due to a bubble, then the countries that have built up their energy infrastructure using renewable energy and nuclear (both have decade-long returns after the initial investment) will have cheaper electricity, which will lead to a future competitive advantage. Gas-powered plants, on the other hand, require a constant supply of gas to convert to electricity. The price of gas effectively sets the price of electricity regardless, so there's very little advantage.
-AI is leading to cost optimizations for running existing companies; this will lead to less employment and potentially cheaper products. Fewer people employed, at least temporarily, will change demand-side economics; cheaper operating costs will reduce the supply/cost side.
-The focus should not just be on LLMs (as in the article). I think LLMs have shown what artificial neural networks are capable of: material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just creating a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, much as the invention of the microcontroller did.
-The barrier to starting businesses is lower. A programmer not good at making art can use genAI to make a game. More temporary unemployment from existing companies reducing costs by automating existing workflows may mean that more people start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention, time, etc. are limited, and there may be less money around with less employment, but the products themselves should cost less.
-I think people still underestimate what last generation's LLMs and AI models are capable of and what opportunities they open up. Open source models (even if not as good as the latest gen), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, independent of the latest-gen model providers. Much like people tinkering with microcontrollers in their garages in the early days, as the article mentioned.
Based on the points above alone, while certain industries (think phone call centers) will be in the Red Queen race scenario the OP described, new industries no one has thought of yet will open up, creating new wealth for many people.
Red Queen Race scenario is already in effect for a lot of businesses, especially video games. GenAI making it easier to make games will ultimately make it harder to succeed in games, not easier. We’re already at a point where the market is so saturated with high quality games that new entrants find it extremely hard to gain traction.
Imagine a giant trawling net scooping up the last two to three decades of undeprecated work in the web/data/game/operating-system space and cutting out the people who did all that work. What do you think is going to happen to progress in those areas? I guess it was "done"? An LLM is only as good as its input, and as far as I can tell there is no reason to believe any of its second-order outputs. RLHF is an interesting plug for that hole, but it's only as good as the human feedback, and even then those things taken to second order aren't going to be any good. This collapses the barrier to entry for existing products, i.e. those people are going to be swamped with new competition.
> AI is leading to cost optimizations for running existing companies, this will lead to less employment and potentially cheaper products.
There's zero chance that cost optimizations for existing companies will lead to cheaper products. It will only result in higher profits, while companies continue to charge as much as they possibly can for their products and deliver as little as they can possibly get away with.
Seeing that it works on an ESP32 chip, I would say it's very likely to work on a smartphone's Wi-Fi chip, though the article didn't say. Many people carry phones with them everywhere, all the time. You could build a very impressive profile of a person. It could be used to see when they get excited, scared, angry, etc., depending on what they view on the phone, the phone calls they receive, where they are physically located, who they are around (by looking at the identities of other phones near them), and probably other things I haven't thought of.
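To illustrate the kind of signal involved, here's a minimal sketch of the basic idea: flagging nearby movement from variance jumps in per-packet Wi-Fi channel amplitudes. The data here is synthetic and the window size and threshold are hypothetical; real channel-state readings would come from Wi-Fi sensing firmware, not this simulation.

```python
import random
import statistics

def motion_detected(amplitudes, window=50, threshold=2.0):
    """Flag each window whose amplitude spread exceeds a (hypothetical)
    threshold -- a crude proxy for movement disturbing the channel."""
    flags = []
    for i in range(0, len(amplitudes) - window + 1, window):
        chunk = amplitudes[i:i + window]
        flags.append(statistics.pstdev(chunk) > threshold)
    return flags

# Synthetic stream: a still room (low variance), then movement (high variance).
random.seed(0)
still = [20 + random.gauss(0, 0.5) for _ in range(100)]
moving = [20 + random.gauss(0, 5.0) for _ in range(100)]
print(motion_detected(still + moving))
```

Real profiling would of course go far beyond this toy thresholding, but the point stands: the raw material is just a stream of numbers the radio already produces.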
This time is different. A fact right now is that software engineers can orchestrate LLMs and agents to write software. The role of the software engineers who do this is quality control, compliance, software architecture, and some out-of-the-box thinking for when LLMs don't cut it. What makes you think advances in AI won't take care of these tasks that LLMs do not do well currently? My point is that once these tasks are taken care of, a CS graduate won't be doing the tasks they learnt to do in their degree. What people need to learn is how to think of customers' needs in abstract ways, communicate this to AI, and judge the output the way someone judges a painting.
> CS graduate won't be doing tasks that they learnt to do in their degrees
How is that different from the previous decade(s)? How often do you invert a red-black tree in your daily programming/engineering job?
A CS degree is a degree for thinking computationally, using mathematics as a basis. It's got some science too (i.e., use evidence and falsifiability to work out truths rather than relying on pure intuition). It's got some critical thinking attached, if your university is any good at building undergraduate courses.
A CS degree is not a boot camp, nor is it meant to make you ready for a job. While I did learn how to use git at uni, it was never required nor asked for; it was purely my own curiosity, which a CS degree is meant to foster.
The original comment advised people to enroll in CS to capture the potential shortage of CS grads in the workforce. You're saying no, people shouldn't be doing CS to make themselves job-ready. The comment you replied to takes a similar stance, i.e., no, CS doesn't make people job-ready.
You might think you’re disagreeing with the parent comment but in fact you’re disagreeing with the top level comment.
Human experience (play in the real world) is multimodal: vision, sound, touch, pressure, muscle feedback, gravity, etc. It's extremely rich in data. It's also not a set of discrete data points; it's a continuous stream of information. I would also bet that humans synthesize data at the same time: every time we run multiple scenarios in our mind before choosing the one we execute, without even thinking about it, we are synthesizing data. Humans also dream, which is another form of data synthesis. Allowing AI to interact with the real world is definitely a way to go.
That's true, but still, a single individual or small group living isolated in the real world will, over a lifetime, learn only a tiny fraction of what we can learn from the written knowledge accumulated over millennia.
Having said that, I tend to agree that having AI interact with the world may be key: for one thing, I'm not sure whether there is any sense in which LLMs understand that most of the information content of language is about an external world.
I can imagine cameras connected to AI. The AI detects a crime in action (a beating or something) and initiates a hologram that says "cease now, the police are coming." It might prevent a crime from becoming bigger.
I hope this is not the reason. I think x86 is a dead-end technology. ARM's energy superiority makes it a better choice. x86 is only still used due to legacy/backwards compatibility, but that's changing. Apple moved completely away from x86. More and more ARM-based Windows computers are being sold. There are no x86 chips in phones.
Same here. The problem in this region is that they are too restrictive. Libraries have strict rules, like no noise in some areas and security guards telling you to take your feet off low tables (which made reading impractical); parks have so many rules, including which sports can and cannot be played. At least in the region where we live, it's not a lack of facilities but a culture and rule system that makes public areas useless.