Tyre is clearly the correct spelling. I quickly tire of anyone claiming otherwise. Gaol is an archaic spelling which I've found incredibly confusing ever since I was a small kid a long long time ago.
All that fertiliser that gets put onto cattle grazing lands increases methane and other emissions, so your read on this is to a significant extent incorrect.
I've been following the whole thing low key since the 2nd wave of neural networks in the mid 90s - and back then made a very, very minor contribution to the field, one that still has applications these days.
My observation is that every wave of neural networks has resulted in a dead end. In my view, this is in large part caused by the (inevitable) brute-force mathematical approach used and the fact that this cannot map to any kind of mechanistic explanation of what the ANN is doing in a way that can facilitate intuition. Or as the article puts it, "Current AI systems have no internal structure that relates meaningfully to their functionality". This is the most important thing. Maybe layers of indirection can fix that, but I kind of doubt it.
I am however quite excited about what LLMs can do to make semantic search much easier, and impressed at how much better they've made the tooling around natural language processing. Nonetheless, I feel I can already see the dead end pretty close ahead.
I didn’t see this at first, and I was fairly shaken by the potential impact on the world if their progress didn’t stop. A couple generations showed meaningful improvements, but now it seems like you’re probably correct. I’ve used these for years quite intensively to aid my work and while it’s a useful rubber duck, it doesn’t seem to yield much more beyond that. I worry a lot less about my career now. It really is a tool that creates more work for me rather than less.
Would this still hold true in your opinion if models like o3 become super cheap and a bit better over time? I don't know much about the AI space, but as a vanilla backend dev I also worry about the future :)
I was helping a relative still in college with a project, and I was struck by how lackadaisical they are about cut-and-pasting huge chunks of code from ChatGPT into whatever module they are building, without thinking about why, or what it does, or where it fits, as long as it works. It doesn't help that it's all relatively same-looking JavaScript, so frontend and backend are kinda mixed together. The troubleshooting help I provided was basically untangling the mess by going from first principles and figuring out what goes where. I can tell you I did not feel threatened by the AI there at all; if anything I felt bad for the juniors, and felt like this is what we old people are going to end up having to support very soon.
Not sure how accurate these numbers are, but on https://openrouter.ai/ the highest-used "apps" basically can auto-accept generated code and apply it to the project. I was recently looking at top performers on https://www.swebench.com/ and noticed OpenHands basically does the same thing, or something similar. I think the trend is going to get much worse, and I don't think Moore's Law is going to save us from the resulting chaos.
We know that OpenAI is very good at at least one thing: generating hype. When Sora was announced, everyone thought it would be revolutionary. Look at what it looks like in production. Same when they started floating rumours that they have some AGI prototype in their labs.
They are the Tesla of the IT world, overpromise and under deliver.
It's a brilliant marketing model. Humans are inherently highly interested in anything which could be a threat to their well-being. Everything they put out is a tacit promise that the viewer will soon be economically valueless.
I hope people will come to the realisation that we have created a good plagiarizer at best. The "intelligence" originates from the human beings who created the training data for these LLMs. The hype will die when reality hits.
Hype is very interesting. The concept of Hyperstition describes fictions that make themselves real. In this sense, hype is an essential part of capitalism:
"Capitalization is [...] indistinguishable from a commercialization of potentials, through which modern history is slanted (teleoplexically) in the direction of ever greater virtualization, operationalizing science fiction scenarios as integral components of production systems." [0]
"Within capitalist futures markets, the non-actual has effective currency. It is not an "imaginary" but an integral part of the virtual body of capital, an operationalized realization of the future." [1]
This corresponds to the idea that virtual is opposed to actual, not real.
Religion too. Those who are told a prophecy is to come have a lot of incentive to fulfill that prophecy. Human belief systems are strange and interesting because (IMO) of the entanglement of beliefs with identity.
Generally speaking, I think it would. I’m open to being wrong. I think there is a non-trivial amount of hype around o3, and while it would certainly be interesting if it were cheap, I don’t think it would address the important issues around recognizing and using context, which current AI doesn’t even seem to begin to accommodate.
For example, I have little to no expectation that it will handle software architecture well. Especially refactoring legacy code, where two enormous contexts need to be held in mind at once.
I'm really curious about something, and would love for an OpenAI subscriber to weigh in here.
What is the jump to o1 like, compared to GPT-4/Claude 3.5? I distinctly remember the same (if not even greater) buzz around the announcement of o1, but I don't hear people singing its praises in practice these days.
I gave up on GPT-4/Claude 3.5 about 6 months ago as not very helpful, producing plausible but wrong code.
On the other hand, I do have an o3-mini model available to me, and I'm very impressed with its fast, succinct, correct answers while tooling around in zsh on my Mac: what things are called, why they exist, why macports is installing db48, etc. It still fails to write simple bash one-liners (I wanted to pipe the output of ffmpeg into a column of --enabled-features and it just couldn't do it).
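For reference, here's roughly what I was after, done by hand in Python instead of a shell one-liner (this assumes the flags I wanted are the ones in the configuration line that `ffmpeg -version` prints; details from memory):

```python
# Rough sketch: pull the --enable-* build flags out of `ffmpeg -version`
# and print them one per line (assumes ffmpeg is on PATH).
import subprocess

banner = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout
flags = [tok for tok in banner.split() if tok.startswith("--enable")]
print("\n".join(flags))
```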
It's a very helpful rubber duck but still not going to suffice as an agent; even so, I think it's worth a subscription. I wanted to do everything local and self-hosted and briefly owned a $3000 Mac Studio to run llama3.3-70B, but it was only as good as GPT-4 and too slow to be useful, so I returned it. In that context even $200/month is relatively cheap.
I don't know how to code in any meaningful way. I work at a company where the bureaucracy is so thick that it is easier to use a web scraper to port a client's website blog than to just move the files over. GPT-4 couldn't write me a working scraper to do what I needed. o1 did it with minimal prodding. It then suggested and wrote me an ffmpeg front-end to handle certain repetitive tasks with client videos, again with no problem. GPT-4 would often miss the mark and then write bad code when presented with such challenges.
>I worry a lot less about my career now. It really is a tool that creates more work for me rather than less.
When I was a team/project leader, the largest part of my work was talking to my reports about what needs to be implemented, how they are going to implement it, the current progress of the implementation, how to interface the pieces, what the issues are and how to approach the troubleshooting, what the next steps are, etc., with occasional looking into/reviewing the code. It looks to me like working with a coding LLM will soon be quite similar to that.
Many of the major harms of these things were neglected and downplayed; even to this day people don't recognize just how much the world has changed. The mere delusion that AI will replace work has been used to justify mass layoffs.
The persistence of indistinct ghost jobs, generated by computer for pennies to flood and bind with prospective job seekers (similar to RNA interference), has resulted in severe brain drain in many fields. Worse, the fact that these people have often been forced into poverty as a result will have a lasting impact. You might have planned for up to a year out of work pre-AI and had the financial resources, but now how long does it take? Conversion ratios for the first step have changed by two orders of magnitude (from x100 to x10,000). What are the odds of these people finding a job given their finite time, and submission requirements that can't be automated? Nil. The media keeps claiming that everything is getting better, the stats say so (while neglecting the fact that the stats are being manipulated to the point of uselessness, or fabricated), but a third of welfare payouts in California (in the US) now go to these people, just for basic food.
When you can't find work, you go where the work is, abandoning the bad economic investment and choice you made regardless of how competent you were. It is a psychologically sticky decision. When there is no chance of finding work, you get desperate, and many desperate people turn to crime and unrest. This was foreseen by a number of very intelligent people many decades ago, and ignored, following business as usual.
The mere demonstration that we are unable to react in time is what gave engineers such great pause to write about these things, as far back as the 70s. Hysteresis is a lagging-time problem where you can't react fast enough to avert catastrophic failure given chaotic conditions, leaving survival up to chance. It's the worst type of engineering problem, with real consequences.
Given how western society is structured around dependence on labor exchange, it's a perfect weapon of chaos and debasement of the value of labor, one that effectively destroys half of the underlying economic structure (factor markets). This forces sieving conditions of wealth that become spinodal, eventually faltering under their constraints and spiraling into deflationary trends over time.
Business wins so much that it loses everything. It's quite a disadvantaged environment, and the general trend is that everyone is ignoring the pink elephant. Actions (and inaction) have consequences. When people don't listen and take appropriate action, consequences get dire, and it hits the fan.
I agree that our analyses often miss the true materialised impact of expectations, by focusing instead on the validity of said expectations. Organisations, even if not laying off, are pausing hiring plans with a conviction that AI will replace some of the workers. It then becomes a self-fulfilling prophecy to some extent. It doesn’t matter if it can; what matters is if it will. And to assume that people won’t place a bet is futile, as everyone does, and even if it’s wrong, the market will allocate the losses to the baseline.
There are two aspects of your line of reasoning that don't jibe for me.
People can choose not to place a bet by not participating in the economy or not tying physical assets to it. In other words: de-banking, off-grid farming, unemployment on welfare (not their money; a printed, guaranteed loss absorbed by the baseline).
The assumption that the market can always allocate the losses to the baseline has already been shown to be foundationally flawed. It depends upon whether the baseline can absorb the losses to keep the market going, not the other way around. Those who believe in MMT don't pay heed to the fact that money printing has caused societies to fail many times in the distant past; in the ever-quoted phrase, "it will be different this time".
When the economic engine stalls, so too does order, and money printing/debt issuance (without fractional reserve) drives this as a sieve (which we've seen over the past several decades in the form of bailouts, market-share concentration, and consolidation).
Central banks set reserve allocations to 0% in 2020, adopting a capital reserve, risk-weighted system based in fiat that is opaque and stock-market tied (Basel III modified). Value is subjective, and fiat may have store of value, right up until it doesn't.
Of particular note, societal order is required to produce enough food for 8bn globally; without order and its now-brittle dependencies, we can only feed 4bn globally. Malthus has a lot to say about population dynamics in ecological overshoot.
TL;DR Half of all people die when modern chemical production (Haber-Bosch fertilizer) and other food dependencies (climate) fail.
AI drives chaos and disruption. It's like throwing a wrench into a gear system: maybe it stalls, maybe the wrench gets thrown out (still slowing it), maybe it runs rough, wearing faster and further degrading the system towards failure.
When the baseline cannot absorb the cost in terms of purchasing power, it absorbs the cost from the resulting chaos in lives.
Intelligent people pay attention to history because the outcomes that repeat in history occur as a result of dynamics that repeat, and in matters where lives or survival are on the line risk management shifts from permissive to restrictive (where the requirements of proof are flipped).
Thank you for the thoughtful answer. There is a certain amount of cynicism in my post, to match the cynicism of reality, unfortunately. Your arguments may be valid, but who cares to rationally think and act when they can easily observe and react? A collapse akin to your description would disproportionately affect the people who don’t have the power to do either; they just accept and suffer. In history, do we have any example where that was not the case, except revolution? Even with revolutions the respite is only perceived, and comes from the shuffle to reach the new decision structures.
I'm in agreement that self-fulfilling dynamics occur regularly over longer time horizons; I viewed what you said as pragmatism rather than cynicism.
As for "who cares to rationally think and act when they can easily observe and react?":
The problem with the latter is that it's a false choice. The latter simply isn't possible in any effective sense. Certain systems and dynamics become a hysteresis problem, where the indicator lags the underlying event; by the time you see the indicator the problem can be perceived, but it is ultimately impossible to react to in time.
There are also simultaneous issues with rational thought being broadly deprived through induction of psychological stress using sophisticated mental coercion and torture (which isn't physical). Rational thought is the first thing to vanish, and these methods act on society the way HIV acts on cellular systems: where HIV destroys the memory of the immune system, making it unable to act, these methods destroy perception, blinding people.
For some reason these things remind me of the Tower of Babel story in the Book of Genesis. It makes God out to be the bad guy, when it seems far more likely that the dynamics became destructive. All of humanity has psychological blind spots that can be used to manipulate people collectively and through unity. Pride often lends itself to delusion, and blindness. Destruction usually follows, and confusion occurs naturally when delusion breaks towards a witnessed reality (as survivors, where others kept dying).
It seems like the translation is off: instead of God, they meant the inescapable forces of reality. Albeit this is getting a bit into the weeds, it's an interesting perspective.
Getting back to things: the major difference today, when comparing to history with regard to revolution, is that we are in extreme ecological overshoot (globally).
Breakdown of order translates to famine so severe that half the global population dies from starvation. To make matters worse, nearly every economic system on the planet is controlled indirectly by one nation through money printing, and the distortions created are chaotic (fundamentally it shares many characteristics with an n-body astrophysics system that is immeasurable and has limited visibility).
When these things happened in the past, they were largely in isolation, and outside the affected geographical areas assistance could be leveraged for survival. This is no longer the case. If these things collapse, it all happens to everyone at the same time. Not enough resources exist to resolve the failure, and there are no tools that would allow correcting the situation after the dynamics have passed a point of no return.
Thinking about these things rationally, and preparing while we can (before it happens), is the only tool that might allow long-term survival for a few. It's important that survivors know what happened and how it happened, or it will happen again given sufficient time, and that requires a foundation.
Needless to say, we have many dark times ahead.
A line appears, the order wanes, the empire falls, and chaos reigns.
I do not envy those who would have to somehow live through chaos, where nuclear weapons might be used by the delusional or insane.
I enjoy your thinking and your use of analogy. We agree more than is evident, but you think on an horizon that eludes most, myself included unfortunately. As you say, the psychological torture of mere lifestyle survival overshadows the rational concern for true survival. To some extent, we live through chaos, but don’t have the wherewithal to accept it as such and cling to a normal that is increasingly not normal at all.
> The mere delusion that AI will replace work has been used to justify mass layoffs.
AI might be the excuse but the reason is the end of zero interest rates and blitzscaling along with resentment among business leadership that some members of the labor force were actually getting a good deal for once.
You can't claim wage earners have received a good deal when they are unable to support themselves with basic necessities, let alone a wife and three children (required for risk-managing one surviving to have children themselves). This is largely why we have a problem with birth rates today, with the old crowding out all opportunities for the young.
The problem you mention doesn't really have to do with AI. It comes down to purchasing power in the economy, not wages, and business has shown over decades they will not or cannot be flexible when it comes to profit.
Additionally, money printing puts both parties at each other's throats through debasement of the currency. When currency debasement (inflation) exceeds profit, legitimate businesses not tied to a money printer leave the market (no competition is possible).
When the only entities left in some proposed market cooperate, the market isn't a market; it's non-market socialism without the requirements for economic calculation. This fails.
Neither party, in my opinion, is getting a reasonable deal. Who's to blame? The cohorts of people printing money from nothing who call themselves central bankers.
Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered. Machine learning had a couple of niche applications but for most things it didn't work.
This time it's different. The naysayers are wrong.
LLMs today can already automate many desk jobs. They already massively boost productivity for people like us on HN. LLMs will certainly get better, faster and cheaper in the coming years. It will take time for society to adapt and for people to realize how to take advantage of AI, but this will happen. It doesn't matter whether you can "test AI in part" or whether you can do "exhaustive whole system testing". It doesn't matter whether AIs are capable of real reasoning or are just good enough at faking it. AI is already incredibly powerful and with improved tooling the limitations will matter much less.
> Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered.
“Previous generations of cars were useless because one guy rode a bike to work.” Pre-transformer neural nets were obviously useful. CNNs and RNNs were SOTA in most vision and audio processing tasks.
Language translation, object detection and segmentation for autonomous driving, surveillance, medical imaging... Indeed, there are plenty of fields where NNs are indispensable.
Yeah, give 'em small constrained jobs where the lack of coherent internal representation is not a problem.
I was involved in ANN and equivalent based face recognition (not on the computational side, on the psychophysics side) briefly. Face recognition is one of these bigger more difficult jobs, but still more constrained than the things ANNs are useful for.
As far as I understand, none of the face recognition algorithms in use these days are ANN-based; instead they are computationally efficient versions of the brute-force-the-maths implementations.
From what I have seen, most of the jobs that LLMs can do are jobs that didn't need to be done at all. We should turn them over to computers, and then turn the computers off.
But here reliability comes in again. Calculators are different since the output is correct as long as the input is correct.
LLMs do not guarantee any quality in the output even when processing text, and should in my opinion be verified before used in any serious applications.
> Calculators are different since the output is correct as long as the input is correct.
That isn't really true.[0] The application of calculators to a subject matter is something that does need to be considered in some use cases.
LLMs also have accuracy considerations, and although it may be to a different degree, the subject matter to which they're applicable has a broad range of acceptable accuracies. While some textual subject matter demands a very specific answer, some doesn't: For example, there may be hundreds or thousands of various ways to summarize a text that could be accurate for a particular application.
I think your point stands, but your example shows that anyone using those calculators daily should not be concerned. Those that need precision to the 6+ decimal places for complex equations should know not to fully trust consumer-grade calculators.
The issue with LLMs is that they can be so unpredictable in their behaviour. Take a prompt that asks GPT-4o mini to validate the response to "calculate 2+3+5 and only display the result": it contradicts itself, which is not something one would expect for something we believe to be extremely simple. However, if you ask it to validate the response to "calculate 2+3+5", it will get it right.
Well, not every tool is a hammer and not every problem is a nail.
If I ask my TI-89 to "Summarize the plot in Harry Potter and the Chamber of Secrets" it responds "ERR"! :D
LLMs are good text processors, pocket calculators are good number processors. Both have limitations, and neither are good at problem sets that are outside of their design strengths. The biggest problem with LLMs aren't that they are bad at a lot of things, it's that they look like they are good at things they aren't good at.
I agree LLMs are good at text processing and I believe they will obsolete jobs that really should be obsoleted. Unless OpenAI, Anthropic and other AI companies come up with a breakthrough on reliability, I think it will be fair to say they will only be players and not leaders. If they can't figure something out, it will be Microsoft, Amazon and Google (distributors of diverse models) that will benefit the most.
I've personally found it extremely unlikely for multiple good LLMs to fail at the same time, so if you want to process text and be confident in the results, I would just run the same task across 5 good models; if you have a super-majority, you can be confident it was done right.
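To make that concrete, a minimal sketch of the super-majority check I have in mind (the `ask` callback stands in for whatever LLM client you use; the model names and the 2/3 threshold are just illustrative):

```python
from collections import Counter
from typing import Callable

def majority_answer(
    task: str,
    models: list[str],
    ask: Callable[[str, str], str],   # ask(model_name, task) -> answer; plug in your own client
    threshold: float = 2 / 3,
) -> str | None:
    """Run the same task across several models; return an answer only when a
    super-majority agree, otherwise return None so a human can take a look."""
    answers = [ask(model, task) for model in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes / len(models) >= threshold else None

# Toy usage with a fake `ask` that just returns canned answers:
canned = {"m1": "yes", "m2": "yes", "m3": "yes", "m4": "no", "m5": "yes"}
print(majority_answer("is this text in English?", list(canned), lambda m, t: canned[m]))  # "yes"
```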
Neither are humans; that's why we have proofreaders and editors. That doesn't make them any less useful. And a translator will not write the exact same translation for a text longer than a couple of sentences; that does not mean translation is a dead end. Ironically, it's LLMs that made translation a dead end.
No, they can't, because they make stuff up, fail to follow directions, need to be minutely supervised, need all output checked, and need their workflow integrated with your company's shitty, overcomplicated procedures and systems.
This makes them suitable at best as an assistant to your current worker, or more likely an input for your foo-as-a-service, which will be consumed by your current worker. In the ideal case this helps increase the output of your workers and means you will need fewer of them.
An even greater likelihood is that someone dishonest at some company will convince someone stupid at your company that it will be more efficacious and less expensive than it will ultimately be, leading your company to spend a mint trying to save money. They will spend more than they save, with the expectation of being able to lay off some of their workers, with the net result of increasing the workload on workers and shifting money upward to the firms exploiting executives too stupid to recognize snake oil.
See: outsourcing to underperforming overseas workers, because the desirable workers who could have ably done the work are A) in management because it pays more, B) in-country or working remotely for real money, or C) cost almost as much as locals once the increased costs of doing it externally are factored in.
> No they can't because they make stuff up, fail to follow directions, need to be minutely supervised, all output checked and workflow integrated with your companies shitty over complicated procedures and systems.
What’s the difference between what you describe and what’s needed for a fresh hire off the street, especially one just starting their career?
Real talk? The human can be made to suffer consequences.
We don't mention this in techie circles, probably because it is gauche. However you can hold a person responsible, and there is a chance you can figure out what they got wrong and ensure they are trained.
I can’t do squat to OpenAI if a bot gets something wrong, nor could I figure out why it got it wrong in the first place.
The difference is that a LLM is like hiring a worst-case scenario fresh hire that lied to you during the interview process, has a fake resume and isn't actually named John Programmer.
Boy, do I love being in the same industry as people like you… :) While you are writing silly stuff like this, those of us who do shit have automated 40-50% of what we used to do and now have extra time to do more amazing shit :)
> Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered.
Do you have a source on this? Spotify also seems to employ a few different recommendation algorithms, for example Discover Weekly vs. continuing to play after a playlist ends. I'd be surprised if Discover Weekly didn't employ some sort of ML, as it often recommends songs I have never heard before.
It's from the book by Carlsson and Leijonhufvud. Perhaps Spotify uses ML today, but the key insight from the book was that no ML was needed to build a recommender system. You can just show people songs from custom playlists curated by powerusers. So when your playlist ends you find other high quality playlists that overlap with the music you just listened to. Then you blend those playlists and enqueue new tracks. This is from memory so I might have gotten the details wrong, but I remember that this approach worked like magic and solved the issues with the ML system (bland or too random recommendations). No reason to use ML when you already have millions of manually curated playlists.
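If I remember the mechanism right, it was something like this toy sketch (in-memory sets standing in for curated playlists; no real Spotify data or API here, and the overlap weighting is my own guess):

```python
from collections import Counter

def recommend(listened: set[str], curated_playlists: list[set[str]], k: int = 10) -> list[str]:
    """Score tracks from user-curated playlists that overlap with what was just played:
    the more a playlist overlaps, the more weight its unheard tracks get."""
    scores: Counter[str] = Counter()
    for playlist in curated_playlists:
        overlap = len(playlist & listened)
        if overlap == 0:
            continue                        # skip playlists with nothing in common
        for track in playlist - listened:   # only recommend tracks not already heard
            scores[track] += overlap
    return [track for track, _ in scores.most_common(k)]

playlists = [{"a", "b", "c"}, {"b", "c", "d", "e"}, {"x", "y"}]
print(recommend({"b", "c"}, playlists))     # ['a', 'd', 'e'] (ties keep first-seen order)
```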
If you had to bet a large amount of your own money on a scenario where you have a 3200 word text and you ask ChatGPT to change a single sentence, would you bet on or against that it would change something other than what you asked it to change? I would bet that it would, every time (even with ChatGPT's new document feature). There aren't a lot of employers who are okay with persistent randomness in their output.
If there's a job that can be entirely replaced by AI, it was already outsourced to an emerging market with meager labor costs (which at this point, is likely still cheaper than a fully automated AI).
gizmo says>LLMs today can already automate many desk jobs.
I call: show me five actual "desk jobs" that LLMs have "already automated". Not merely tasks, but desk jobs - jobs with titles, pay scales, retirement plans, etc. in real companies.
I know an immigration agent who simply stopped using professional translators because ChatGPT is more than good enough for his purposes. In many ways it is actually better, especially if instructed to use the specific style and terminology required by the law.
If you think about it, human calculators (the job title!) were entirely replaced by digital electronic calculators. Translators are simply "language calculators" that perform mechanical transformations, the ideal scenario for something like an LLM to replace.
That’s professional negligence. Have the LLM prepare a draft for a human translator to review, sure. But taking the human out of the loop and letting in undetectable hallucinations? In a legal proceeding?
But it is not all or nothing here. We replaced real programmers (backend, frontend, embedded) with it, though obviously (I guess) not all of them; we have needed only about 1/5th of those roles since around the beginning of this year. There are a lot more 'low level' jobs in tons of companies where we see the same happening, because suddenly the automation is trivial to build instead of being 'a project'. It will take time for the bigger ones, and it won't 'eliminate' all jobs of the same type (maybe it will in time), but it will eliminate most people doing that job, as one person can now do the work of 5 or more.
I guess we will see the actual difference in 5-10 years in the stats. Big companies are mostly still evaluating and waiting. Maybe it will remain just a few blips and it'll fizzle out, or maybe, and this is what I expect, the effect will be a lot larger, moving many into other roles and many completely out of work.
On a small scale (we see inside many companies, but 'many' is relative, of course), the real-life examples I see being replaced now are translators, programmers, SEO/marketing writers, and data entry (copying content from PDF to Excel, human web scraping, etc.).
We work with some small outsourcing outfits (a few hundred people each) and they noted sharp drops in business from the west, where the stated reason is AI, but it's not really easy to say or see whether that's real or just the current market.
Imagine the face of a guy who needs to do the work of 5 solo now... He is probably the happiest employee now and his salary raised 5-fold, surely yeah?
Yeah, the internal representations of organic neural networks are also weird - check out the signal processing that occurs between the retina and the various parts of the visual cortex before any decent information can emerge from the signal; David Marr's 1980s book Vision is a mathematically chewy treatise on this. This leads me to start thinking that human intuition may well be caused by different neural network subsystems feeding processed data into other subsystems where consciousness, and thus intuition and explanation, emerge.
Organic neural networks are pretty energy efficient in comparison - although still decently inefficient compared to other body systems - so there is the capacity to build things out to the scale required, assuming my read on what's going on there is correct, that is. So it's not clear to me that the energy inefficiency of ANNs can be sufficiently resolved to enable these multiple quasi-independent subsystems to be built at the scale required. Not even if these interesting-looking ternary neural nets, which are based on matrix addition rather than multiplication, come to dominate the ANN scene.
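To illustrate why those addition-based (ternary-weight) nets are interesting, a toy sketch of the idea as I understand it, not any particular published scheme: with weights restricted to {-1, 0, +1}, a matrix-vector product needs no multiplications at all.

```python
# Toy illustration: with weights in {-1, 0, +1}, a matrix-vector product
# reduces to additions and subtractions, no multiplications required.
def ternary_matvec(weights: list[list[int]], x: list[float]) -> list[float]:
    out = []
    for row in weights:              # each row holds only -1, 0 or +1
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi            # add instead of multiply
            elif w == -1:
                acc -= xi            # subtract instead of multiply
            # w == 0 contributes nothing
        out.append(acc)
    return out

print(ternary_matvec([[1, -1, 0], [0, 1, 1]], [0.5, 2.0, -1.0]))  # [-1.5, 1.0]
```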
While I was thinking this comment through, I realised there's a possible interpretation wherein human-activity-induced climate change is an emergent property of the relative energy inefficiency of neural architecture.
I mean, the matrices obviously change during training. I take it your point is that LLMs are trained once and then frozen, whereas humans continuously learn and adapt to their environment. I agree that this is a critical distinction. But it has nothing to do with “meaningful internal structure.”
The reasoning is quite subtle, and because I'm not a very coherent guy I have problems expressing it. In the LLM space there are a whole bunch of pitfalls around overfitting (largely solvable with pretty standard statistical methods) and inherent bias in the training material, which is a much harder problem to solve. The fact that the internal representation gives you zero information on how to handle this bias means the tool itself cannot be used to detect or resolve the problem.
I found this episode of the nature podcast - "How AI works is often a mystery — that's a problem": https://www.nature.com/articles/d41586-023-04154-4 - very useful in a 'thank goodness someone else has done the work of being coherent so I don't have to' way.
AlphaGo had an artificial neural network that was specifically trained in best moves and winning percentages. An LLM trained on text has some data on what constitutes winning at go, but internally doesn't have a ANN specifically for the game of go.
> AlphaGo had an artificial neural network that was specifically trained in best moves and winning percentages. An LLM trained on text has some data on what constitutes winning at go, but internally doesn't have a ANN specifically for the game of go.
This isn't addressing what the original commenter was referring to.
Do your kids have internal structures which relate meaningfully to their functionality, which allow a mechanistic explanation of what they learned in school?
Not sure if this is satirical, but absolutely yes.
Heck, we have everything from fields of study to professions that cover this: neurology, psychology, counseling, and teaching, among others.
All things being equal, if a kid didn’t pick up a concept, I can sit with them and figure out what happened, and we can both work towards making sure it’s cleared up.
“...the fact that this cannot map to any kind of mechanistic explanation of what the ANN is doing in a way that can facilitate intuition”
Will remain true imho. We will never fully intuit AI or understand it outside of some brute force abstraction like a token predictor or best fit curve.
What are your thoughts on neuro-symbolic integration (combining the pattern-recognition capabilities of neural networks with the reasoning and knowledge representation of symbolic AI) ?
I’m not an AI expert, but from my armchair I might draw a comparison between functional (symbolic rule- and logic-based AI) and declarative (LLM) programming languages
Given you just mentioned semantic search (a term I haven’t heard in over 15 years) and the other breadcrumbs in this comment, you wouldn’t by chance be an English lecturer living in Ireland would you?
Me? No. Ex-trainee neuropsychologist and failed academic who was in the right place at the right time back in the mid 90s, and who didn't pick up computers for professional interest until the mid-to-late 2000s, after getting excited by Neal Stephenson's Cryptonomicon when I was looking for a career change. These days I identify as an international computer hacker, but mainly to take the piss (due to the tiny element of truth sitting underneath).
Aligning the computer's and the humans' thinking processes. Cognitive load is exceptionally important - one of the few incontrovertible facts in human psychology is that healthy human short-term memory has a capacity of seven items, plus or minus two. So, reliably, five. And thus the maximum number of thinking balls you should be juggling at one time.
Which then leads to thinking about designs that support the management of cognitive load - thus the nature of the balls changes due to things like chunking and context, which are theoretical constructs that came out of that memory research.
So yes, this is pretty much principal zero - cognitive load and understanding the theory underneath it are the most important thing - and are closely related to the two hard problems in computer science (cache invalidation, naming things and off by one errors).
I've had some C# code inflicted on me recently that follows the pile-of-garbage design pattern. Just some offshore guys fulfilling the poorly expressed spec with as little brain work as possible. The amount of almost-duplicate boilerplate kicking around is one of the problems. Yeah, it looks like the language design encourages this lowest-common-denominator approach, and it has led to the supplier providing code that needs substantial refactoring in order to be able to create automated tests, as the entry points ignore separation of concerns and abuse private vs public members to give the pretense of best practices while in reality delivering worst-practice, modify-this-code-at-your-peril instead. It's very annoying because I could have used that budget to do something actually useful, but on the other hand it improves my job security for now.
I think the lack of discussion of perl is in part because from a bird's eye view, python and perl do exactly the same stuff in exactly the same way, but python has the mindshare. Because of perl's depth and flexibility, for the average developer team it can be really hard to manage.
There's a similar story around JavaScript - note how ES5 introduced `"use strict"`. Old-school JavaScript always struck me as a bit like crippled perl with really good vendor support.
I meant the company did not participate in its creation, nor did they fund it in any way. If they had, they certainly would not have approved of it being released under an open source license. Also, Perl 1.0 was released using Larry's NASA email address and not any other corporate one.
I believe it was NSA, not NASA - in his hubristic style, Larry Wall described perl as being born from "a secret project in a secret laboratory".
I'd say the biggest influence on my programming work is recognising how that style of humour can help deal with situations that can sometimes get to quite high pressure - e.g. "please rescue this failing 9-month-long $x·10^6 project before it goes live in the next fortnight".