Why is it, exactly, that mobile providers can’t catch this…?
Just like with SMS fraud. It might cost them a few cents per subscriber to implement effective anti-spam measures, but now society has to pay the cost.
MVNOs such as the MobileX mentioned above do not have their own towers or cellular backbone, i.e. the core network. They merely ride on an MNO such as Verizon or T-Mobile under a commercial arrangement in which the MVNO just handles the marketing and customer support. So the MVNOs may not have the right tools, data, or incentives to catch these SIM farms.
It depends on how closely they monitor the network and how much these people abused it. A steady, slow increase in calls/messages on a site wouldn’t show up in stats unless there was a lot of constant congestion, and even then most telcos these days outsource a lot of their network monitoring and capacity management to contractors that just don’t care.
Plus if they’re using legacy 2G/3G, it’s not the shiny thing that most telco network quality crews care about for customer experience…
I am not an expert in AI by any means, but I think I know enough about it to comment on one thing: there was an interesting paper not too long ago showing that if you train a randomly initialized model from scratch on questions, like a bank of physics questions and answers, the model ends up with much higher quality if you teach it the simple physics questions first and then move up to more complex physics questions. This shows that in some ways, these large language models really do learn like we do.
I think the next steps will be more along this vein of thinking. Treating all training data the same is a mistake. Some data is significantly more valuable to developing an intelligent model than most other training data, even when you pass quality filters. I think we need to revisit how we 'train' these models in the first place, and come up with a more intelligent/interactive system of doing so.
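For illustration, a minimal sketch of what that kind of curriculum ordering could look like (the Example class, the difficulty labels, and the toy question bank are all hypothetical; a real pipeline would plug this into its own training loop):

    # Hypothetical curriculum-learning sketch: sort training examples by an
    # assumed pre-labelled "difficulty" score and feed the easy ones first.
    from dataclasses import dataclass

    @dataclass
    class Example:
        question: str
        answer: str
        difficulty: float  # assumption: 1.0 = intro physics, 5.0 = graduate level

    def curriculum_batches(examples, batch_size):
        """Yield batches in easy-to-hard order instead of a random shuffle."""
        ordered = sorted(examples, key=lambda ex: ex.difficulty)
        for i in range(0, len(ordered), batch_size):
            yield ordered[i:i + batch_size]

    toy_bank = [
        Example("Given F = 10 N and m = 2 kg, what is a?", "5 m/s^2", 1.0),
        Example("Derive the period of a simple pendulum for small angles.", "T = 2*pi*sqrt(L/g)", 3.0),
        Example("Estimate Mercury's perihelion precession due to GR.", "~43 arcsec/century", 5.0),
    ]

    for batch in curriculum_batches(toy_bank, batch_size=2):
        # a real loop would call something like train_step(model, batch) here
        print([ex.difficulty for ex in batch])  # [1.0, 3.0] then [5.0]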
From my personal experience training models, this is only true when the parameter count is a limiting factor. Once the model is past a certain size, curriculum learning doesn't really lead to much improvement. I believe most research also applies it only to small models (e.g. Phi).
Wow. I really like this take. I've seen how time and time again nature follows the Pareto principle. It makes sense that training data would follow this principle as well.
Further, that the order of training matters is novel to me, and seems so obvious in hindsight.
Maybe both of these points are common knowledge/practice among current leading LLM builders. I don't build LLMs, I build on and with them, so I don't know.
This is precisely why chain of thought worked. Written thoughts in plain English are a much higher-SNR encoding of the human brain's inner workings than random pages scraped from Amazon. We just want the model to recover the brain, not Amazon's frontend web framework.
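To make the contrast concrete, here's a rough sketch of a direct prompt versus a chain-of-thought prompt (the wording is purely illustrative, not any particular lab's template):

    # Illustrative only: the same question posed two ways. The chain-of-thought
    # version asks the model to write out its intermediate reasoning in plain
    # English before answering, which is the higher-SNR signal described above.
    question = "A train travels 120 km in 1.5 hours. What is its average speed?"

    direct_prompt = f"{question}\nAnswer:"

    cot_prompt = (
        f"{question}\n"
        "Let's think step by step, writing out each intermediate step, "
        "then give the final answer."
    )

    # Both strings would be sent to the same model; only the elicited output differs.
    print(direct_prompt)
    print(cot_prompt)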
Apple’s biggest problem is their commitment to privacy. Delivering effective AI requires a substantial amount of user data that Apple doesn’t collect.
Their other problem is they value designers and product managers more than engineers (especially top tier AI engineers).
Both problems are basically the death knell of any hope for Apple to have good AI, but combined? It’s never gonna happen. Which is sad because Apple’s on-device hardware is quite good.
Thanks to the EU for ruining the web by forcing everyone to show the ridiculous "Accept Cookies!" agreement. No wonder people prefer native apps. They're better for a lot of reasons, both because they can interface more cleanly with OS-specific features and because of performance.
And 'privacy' is a horrible argument for preferring websites over apps. For the average person (not a privacy-obsessed techie), the web is just as bad as, if not worse than, native apps from a privacy perspective.
I do agree that not everything needs an app - websites have their place. But when I go to browse HN on my phone, I don't do it through the web, I do it through Octal (which is open source).
Frankly I am tired of privacy-obsessed techies ruining tech for everyone else. Let's face it - 99% of the things you're worried about are simply going to let companies....show you ads that are more relevant to your life. The horror!
I think you are right here - the ability to test theory of mind in an LLM would be more like testing how well it can distinguish its own motivations/ideas from that of a separate entity.
I would agree that this question is more of a logic puzzle and less of a real test of 'theory of mind'.
In fact, just having a theory of mind kind of assumes you have a mind of your own, with your own ideas/motivations/etc.
Yep, there is a big reason why Europe has so few successful big tech companies: it is a regulatory hellscape. They have so many pointless privacy regulations that only the “big” companies can even hope to compete in many markets like ad tech.
There is a big reason why the USA outside of Silicon Valley and Seattle has so few successful big tech companies: because success begets success and capital breeds more capital. If it was just European regulation you'd expect SV equivalents everywhere except for Europe. That didn't happen.
And the last thing we need is more competition in ad tech.
Given the size of and abuses by the existing ad tech giants, why do you say the last thing we need is more competition? Wouldn't more competition mean they have less money, get away with less, and have to behave better?
No, we need them to go away, like the dinosaurs or the Dodo. Competition between ad tech giants means the public is collateral damage in the ensuing war, because the advertiser money flows to the ad tech company that is most efficient at extracting dollars from its audience. Even a 0.1% increase is enough to swing the battle, and that arms race has been running since 1994 or so; the results are there for all to see.
As much as we would like them to magically disappear and get uninvented, like NFTs, there doesn't seem to be any mechanism by which that realistically happens. So then, like harm reduction, isn't more competition better than less? It means less money goes to the existing giants, which may not totally starve them, but will put them on a diet.
Silicon Valley equivalents are springing up in other parts of the world. Taiwan is very much known for its hardware technology. There are documentaries about Shenzhen becoming a tech hub too. Even here in Bangalore (India), there are many tech companies doing a massive amount of good work.
But they're also right in the sense that regulation acts like a barrier in many parts of the world. I had often wondered why Linus Torvalds and other engineers traveled to Silicon Valley after founding Linux, etc. Did they not find opportunity in Finland or any other nearby European countries?
That's because once you have a runaway success, the US will tax you less and your quality of life will be higher than what you can achieve in Europe. The USA is a great country if you're on to a winner. So the vacuum cleaner in SV tends to suck the air out of a lot of successful EU start-ups and engineering efforts simply because that's where the money is. Typical start-up valuations in the USA dwarf those in Europe, and access to a single, unified, mostly monolingual market is far more of a factor than any EU regulations; the regulation angle is just a dumb meme that gets tossed around by the clueless. Yes, taxes in Europe are higher. But so is average quality of life, as opposed to average GDP, which relies on outliers.
Despite being less wealthy and diverse in both language and culture, China and India produce more tech unicorns than the EU. Several smaller countries, like Singapore and Israel also do far better than the EU on a per-capita basis.
I'd attribute most of the gap to regulatory and cultural differences.
EU is also not uniform in terms of language - so most EU companies need to decide whether they go global and start in a foreign (English) language or begin in their native language and risk getting locked in there.
I’ve been mentoring startups in the EU for over 10 years, and there were only a handful that had issues with regulation, but 95% had issues with a language/country lock-in.
You missed the per-capita examples of Israel and Singapore. If either had the population of Germany or even that of Spain, they'd have more unicorns than the entire EU.
It's not striking on its own that the EU is as wealthy per person as it is, nor that it's a region with hundreds of millions of people. What's striking is that despite being both wealthy and populous, the region hasn't done too well with tech.
I'd say that it's even changed during my own lifetime. There was a time when German cars had a much larger market share and Nokia was a dominant phone company. Nokia failed the transition into the smartphone era and while German cars are still great, their market share in EVs is much, much smaller. And it's not like there's a lack of talent. Plenty of Europeans are building huge tech companies, but a large fraction are choosing to do so in the US or other similar markets, like Canada (e.g., Shopify).
GDP always gets trotted out as if it were the holy grail and a benchmark for social welfare, which it isn't, so think of that as a (failed) preemptive strike against getting a longer comment thread.
And so have Eindhoven (ASML), London (Revolut, Monzo, Wise and Deliveroo), Paris (DailyMotion, AppGratis), Berlin (Soundcloud, Mister Spex, Zalando, Helpling, Delivery Hero, Home24 and HelloFresh) and Amsterdam (Sonos (ok, technically Hilversum), Booking.com, TomTom) etc, etc. So what?
Tech companies exist the world over. The specific kind of tech company that requires a mountain of free cash and that can monopolize a whole segment is a SV anomaly and Microsoft is the exception simply because of when it started.
Those are actually tiny compared to most US tech companies with a global reach. The issue is that here in Europe everyone speaks their own language, and it's not feasible to advertise your tech stuff to the entire continent. There are no TV channels here where you can reach 300M people with a single commercial in a single language.
It's also an issue with capital. Everyone was shocked when Mistral raised what, 300M dollars? Ask on the street if anyone's heard of Mistral, and then ask about ChatGPT.
Meanwhile, effing xAI from Elon, which no one really cares about, is looking to raise $1B.
Here in Europe we're sadly not on the same level. Available capital is smaller. Reach is smaller (in practice but not in theory). Profit margins are smaller. Regulation is higher.
In 2023 you need extreme luck to create something in Europe that reaches a global audience, to the point that it's not worth trying. Just go for your local domestic market instead.
> Those are actually tiny compared to most US tech companies with a global reach
What are you talking about? Booking is 5.3B revenue, 112B market cap. Adyen is 37B market cap. These are not "tiny" companies compared to public tech companies in the US, and there are more than just these two.
Sure, Europe doesn't have as many frothy VCs and associated tech companies with insane valuations as the US. But it's not trailing out in last place like some of these comments make it out to be.
People need to look at actual facts and numbers before regurgitating the same old memes about how terrible Europe and the EU are.
The same way we can simulate the movements of the planets - it will never be exact, but the better we understand it, the more precisely we can simulate it
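As a toy illustration of that point, here is a deliberately crude two-body integrator (the step size and the simple Euler scheme are arbitrary choices; shrinking the step or improving the physics improves the simulation but never makes it exact):

    # Toy orbit simulation: an Earth-like planet around a Sun-like star.
    # Forward Euler is a poor integrator on purpose: the orbit slowly drifts,
    # and a better model/integrator/step size reduces (never eliminates) the error.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # kg

    def step(x, y, vx, vy, dt):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -G * M_SUN * x / r3, -G * M_SUN * y / r3
        return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

    # Start at ~1 AU with roughly circular orbital velocity.
    x, y, vx, vy = 1.496e11, 0.0, 0.0, 29_780.0
    dt = 3600.0  # one hour per step
    for _ in range(24 * 365):
        x, y, vx, vy = step(x, y, vx, vy, dt)
    print(x, y)  # roughly back near the starting point after one simulated year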
302 neurons doesn’t sound impressive to people who may be used to working with 7B+ parameter neural networks. But those neural networks have about as much in common with a biological neuron as a bicycle has with a horse. They can both travel pretty fast, but one evolved naturally through over a billion years of harsh natural selection, and the other is a precisely tuned metal machine with a single purpose.
Neurons are similar: they are incredibly sophisticated biological machines, with billions of DNA base pairs controlling their behavior. The emergent behavior of neurons in both biological and AI systems is pretty fascinating.
In addition to that, these neurons are also quite different from the ones in mammals:
"The neurons do not fire action potentials, and do not express any voltage-gated sodium channels." [1]
That makes the fact that it can develop a nicotine addiction even more fascinating.
"Nicotine dependence can also be studied using C. elegans because it exhibits behavioral responses to nicotine that parallel those of mammals. These responses include acute response, tolerance, withdrawal, and sensitization." [1]
> "The neurons do not fire action potentials, and do not express any voltage-gated sodium channels."
This is an old and incorrect belief that largely derives from the difficulty of putting electrodes into their teeny, tiny neurons. Close relatives of C elegans that are larger (and hence more easily experimented on) do have action potentials, and for some neurons in C elegans, we also have good evidence of action potentials [1, 2]. Absence of evidence is not evidence of absence.
[1] Lockery SR, Goodman MB. The quest for action potentials in C. elegans neurons hits a plateau. Nat Neurosci. 2009 Apr;12(4):377-8. doi: 10.1038/nn0409-377. PMID: 19322241; PMCID: PMC3951993.
[2] Jiang, J., Su, Y., Zhang, R. et al. C. elegans enteric motor neurons fire synchronized action potentials underlying the defecation motor program. Nat Commun 13, 2783 (2022). https://doi.org/10.1038/s41467-022-30452-y
Well, the 'canonical' action potential is mediated by sodium currents, so it's maybe not surprising that people concluded that C elegans don't have APs given that a) they don't have any genes for voltage-gated sodium channels, and b) when people had recorded from C elegans neurons (it's hard but not impossible), they had never seen action potentials. (So it's not like no one had looked, and then had concluded that they don't exist. They looked and didn't see them.) In the paper that originally reported APs in C elegans (Liu et al 2018), they were looking in a specific neuron (AWA), and they had to elicit a 'plateau potential' by depolarizing the cell for a while before the spikes were revealed, riding on top of the plateau.
The APs discovered by Liu et al (2018) are generated by calcium, not sodium currents, so one could even argue that they aren't action potentials in the strict sense. Also, they seem to be rather difficult to elicit, and it's still not clear whether neural computation in C elegans is mostly AP-mediated, or if APs are the exception rather than the rule.
Liu, Q., Kidd, P. B., Dobosiewicz, M. & Bargmann, C. I. C. elegans AWA olfactory neurons fire calcium-mediated all-or-none action potentials. Cell 175, 57–70 e17 (2018) https://doi.org/10.1016/j.cell.2018.08.018
> are generated by calcium, not sodium currents, so one could even argue that they aren't action potentials in the strict sense
Does the underlying chemistry define whether it's an action potential or not? I thought an AP just needed a voltage differential, regardless of whether it's from calcium or sodium.
Given that we have a simulator of this worm right there (which includes it moving), can it really be up for debate whether it uses action potentials or not?
I'd think the simulation has to get it right, and so needs to simulate action potentials if the worm has them, or not simulate them (but whatever the worm has instead) if not, right? Or could the simulation still be incorrect and only based on current assumptions, but getting this wrong still allows some worm-like behavior?
I really wish the readme/FAQ would talk a bit more about the worm and the simulation, rather than have 80% of their content be about Docker, though, so that I could learn more about what cells it actually simulates.
Not necessarily, because you could also simulate the worm without neurons at all. It's the closeness of the simulation to the real thing that demands that it is done right, and the question effectively is: is this simulation close enough that, if such a detail were wrong, it would fail?
One way to answer that would be to add and remove such mechanisms to see if it would lead to different behavior.
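A hedged sketch of what that kind of add/remove experiment could look like in code (the leaky-integrator neuron here is a toy stand-in, not the model used by the actual worm simulator):

    # Toy ablation experiment: run the same stimulus through a neuron model with
    # and without an all-or-none spiking mechanism and compare the outputs.
    def simulate(inputs, spiking=True, threshold=1.0, leak=0.9):
        v, out = 0.0, []
        for i in inputs:
            v = leak * v + i
            if spiking and v >= threshold:
                out.append(1.0)  # all-or-none event
                v = 0.0          # reset after the spike
            else:
                out.append(0.0 if spiking else v)  # graded output when non-spiking
        return out

    stimulus = [0.3] * 20
    with_spikes = simulate(stimulus, spiking=True)
    without_spikes = simulate(stimulus, spiking=False)
    print(with_spikes != without_spikes)  # True: the mechanism changes downstream behavior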
That isn't immediately true: with enough fitting parameters you can capture the effect of the underlying behaviour without explicitly capturing it, or even without knowing it exists.
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." - John von Neumann [0]
How did researchers before that explain what the neurons do if they believed they did not have action potentials? Did they believe communication was done solely through chemical messaging?
Classical action potentials are just one mechanism of INTRAcellular communication - you could think of them as a special case of signaling via chemical concentration, where the chemical is cations and the propagation is faster and more directed than diffusion. INTERcellular signaling is only rarely mediated directly by voltage. Also, action potentials are most "useful" for propagating a signal rapidly over a long distance - they kind of accelerate and error-correct (= reverse diffusive broadening) voltage signals down a linear path. Action potentials are so well known mostly because they show up in stuff that's easy to observe (long motor neurons) and they're easy to quantify.
Somewhat related, there is a roughly inverse correlation between neuron count and "computational power per neuron": "older and simpler" critters' neurons are more likely to be less specialized and to use hundreds of different chemicals for transmitting intercellular signals, while "newer and more advanced" critters' neurons are more likely to be specialized and to use just one chemical for transmitting intercellular signals.
Neural computing without action potentials is commonplace. Computational interactions among cells and neurons in the retina are almost all graded potentials that modulate transmitter release or conductances through gap junctions. Retinal ganglion cells of course do generate conventional spikes to pass a data summary to the midbrain, hypothalamus, and dorsal thalamus.
Action potentials are almost strictly INTRAcellular events (a minor exception being ephaptic effects) that are converted in a surprisingly noisy way into presynaptic transmitter release and variable postsynaptic changes in conductances.
Action potentials are a clever kludge necessitated by being big, having long axons, and needing to act quickly.
IDEs have no soul, i.e. no feedback loop, receptors, hormones, or actual molecular structure. IDEs cannot think. If IDEs had desires and thoughts and were smart enough, they would refuse to work with languages such as Python and Java.
100M. Did you look at the program's size on your phone?
This tiny animal's code contains all the systems and members of this creature: birth, creation, death, feeding, growth, movement, sensation, etc., inside the universe.
There is no difference between this animal and a bee or human beings.
"In order to be the author of the action directed towards the creation of the bee in question, a power and will are necessary that are vast enough to know and secure the conditions for the life of the bee, and its members, and its relationship with the universe. Therefore, the one who performs the particular action can only perform it thus perfectly by having authority over most of the universe."
from the Quran's light
The best way of looking at it is that a single biological neuron is itself a complex machine full of genetic control circuits that sort of resemble neural networks and, most importantly, have memory/state that persists over both short and long periods of time. Each neuron is a full-ass living organism that is itself capable of learning and behavior, not a parameter in a model.
A virtual "neuron", by contrast, is a very simple mathematical abstraction. It has vastly less computational complexity than a biological neuron. A connectome is only a very coarse-grained map of how neurons relate, not a complete "neural network" layout. Not even close.
It might be possible to model a biological neuron using a stateful sub-neural-network within a larger neural network, but even assuming that can be made computationally equivalent, we don't really know how many equivalent computational "neurons" would be required to model the full breadth of computationally relevant biological neuron behavior.
So a worm with 302 biological neurons could be computationally equivalent to billions of virtual neurons. We really don't know.
Given that neurons have memory, it may look a little like an LSTM network, and since biological neural networks are not just feed-forward, they're definitely closer to an RNN.
The above is why I laugh at the mind uploading people and would only stop laughing if we could both understand and model the relevant behavior of biological neurons and somehow extract usable state from living neurons. That's all 100% science fiction at the moment. The people who think we are about to upload minds are ignorant of biology.
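To make the LSTM/RNN comparison above a bit more concrete, here's a toy sketch of the difference between a stateless unit and a stateful recurrent cell (the sizes and the use of PyTorch's LSTMCell are just illustrative, not a claim about how a biological neuron should actually be modelled):

    # A stateless feed-forward unit versus a stateful recurrent cell.
    # The recurrent cell carries memory (h, c) across timesteps, which is the
    # property being compared to a biological neuron's persistent internal state.
    import torch
    import torch.nn as nn

    feedforward = nn.Linear(4, 8)   # output depends only on the current input
    recurrent = nn.LSTMCell(4, 8)   # hidden state h and cell state c persist

    h, c = torch.zeros(1, 8), torch.zeros(1, 8)
    for t in range(5):
        x = torch.randn(1, 4)        # stand-in for this timestep's synaptic input
        _ = feedforward(x)           # same mapping every step, no memory
        h, c = recurrent(x, (h, c))  # h and c carry information forward in time
    print(h.shape)                   # torch.Size([1, 8])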
The bigger problem is that nobody has any answer for why a mind upload would actually contain your consciousness and not just be a clone with either no qualia or its own separate conscious experience.
The even bigger issue is when mind uploading people fully admit this issue and try to claim some philosophical reason why it doesn't matter, and we should all be excited about tech to make what amounts to an interactive epitaph.
It does sound impressive to the extent biological neurons are like ML neurons, though. And that's part of the research interest, I presume. To the extent that they work by similar principles, how come the worm can do those things with such small resources? It would be good news for AI research if the substrate specifics turn out not to be essential to the worm's capabilities, for instance.