I am a PhD biophysicist working in the field of biological imaging. Professionally, my team (successfully) uses deep learning and GANs for a variety of imaging tasks, such as segmentation, registration, and predictive protein/transcriptomics. It’s good stuff, a game changer in many ways. In no way, however, does it represent generalized AI, and nobody in the field makes that claim, even though the output of these algorithms matches or outperforms humans in some cases.
LLMs are no different. Like DL models that are very good at outputting images that mimic biological signatures, LLMs are very good at outputting texts that eerily mimic human language.
However, and this is a point on which programmers are woefully and comically ignorant: human language and reason are two separate things. Tech bros wholly confuse the two, and thus make outlandish claims that we have achieved, or are on the brink of achieving, actual AI systems.
In other words, while LLMs and DL in general can perform specific tasks well, they do not represent a breakthrough in artificial intelligence, and thus will have a much narrower application space than actual AI.
If you've been in the field, you really should know that the term AI has been used to describe things for decades in the academic world. My degree was in AI back before RBMs and Hinton's big reveal about making things 100,000 times faster (do the main step just once, not 100 times, and take 17 years to figure that out).
You're talking more about AGI.
We need "that's not AI" discussions like we need more "serverless? It's still on some server!!" discussions.
I think it's even incomparable to server vs serverless discussions.
It's about the meaning of intelligence. These people have no problem claiming that ants or dolphins are intelligent, but suddenly, for machines to be classified as artificial intelligence, they must be on exactly the same level as humans.
Intelligence is just the ability to solve problems. There's no implication that, in order for something to be intelligent, it has to perform on at least the same level as the top people in that field in the world.
It just has to be beyond a simple algorithm and be able to solve some sort of problem. You have AIs in video games that are just bare logic spaghetti computation with no neural networks.
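For what it's worth, game "AI" of that sort really is just hand-written rules. A minimal hypothetical sketch (the function name and every threshold are invented for illustration):

```python
# A classic video-game "AI": pure hand-written rules.
# No neural networks, no learning, just branching logic.
def enemy_action(health: int, distance: float, has_ammo: bool) -> str:
    if health < 20:
        return "flee"                # self-preservation beats everything
    if not has_ammo:
        # close enough to swing, otherwise go restock
        return "melee" if distance < 2.0 else "search_for_ammo"
    if distance < 10.0:
        return "shoot"               # target in range with ammo
    return "patrol"                  # default idle behaviour

print(enemy_action(health=80, distance=5.0, has_ammo=True))  # shoot
```

Spaghetti or not, players happily call this "the AI", which is exactly the point about how loosely the term has always been used.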
Or you're using AI as a term differently from the people in the field. SVMs are extremely simple, and two-layer perceptrons are things you can work out by hand!
Just stop trying to redefine AI as a term, you'll lose against the old hands and you'll lose against the marketing dept and you'll lose against the tech bros and nobody who you actually need to explain it to will care. Use AGI or some other common term for what you're clearly talking about.
So, the ‘revolutionary’, ‘earth-shattering’, ‘soon-to-make-humans-obsolete’ talk about ChatGPT is all bullshit, and this is just another regular, run-of-the-mill development with the label of ‘AI’ slapped on somewhere, just like all the others from the last 40 years? What in the hell is even your point, then? Is ChatGPT a revolutionary precursor to AGI, if not AGI already? I say it’s not.
This is true, but only up to the point where mimicking, and more broadly statistically imitating data, are understood in a more generalized way.
LLMs statistically imitate real-world text. To reach a certain threshold of accuracy, it turns out they need to imitate the underlying Turing machine/program/logic that runs in our brains when we understand and react to text. That is no longer in the realm of old-school data-as-data statistics, I would say.
The problem with this kind of criticism of any AI-related technology is that it is an unfalsifiable argument akin to saying that it can't be "proper" intelligence unless God breathed a soul into the machine.
The method is irrelevant. The output is what matters.
This is like a bunch of intelligent robots arguing that "mere meat" cannot possibly be intelligent!
> LLMs are very good at outputting texts that eerily mimic human language.
What a bizarre claim. If LLMs are not actually outputting language, why can I read what they output then? Why can I converse with it?
It's one thing to claim LLMs aren't reasoning, which is what you later do, but you're disconnected from reality if you think they aren't actually outputting language.
Is there a block button? Or a filter setting? You are so unaware of, and uninquisitive about, actual human language that you cannot see the gross assumptions you are making.
Looks about the same as other evergreens like Politics and the English Language and The Story of Mel to me. The greenest (i.e. earliest) evergreen is almost certainly The Story of Mel; the most "ever" (most posted) I'm not sure about, but I want to say I've seen bigger ones than either of those.
They are. Whenever anyone trots out this line, it’s very clear that HN is in many ways a group of know-it-all college guys sitting on beanbags and passing around the wizard bong.
Usually it’s followed by some misinterpretation of what “fiduciary responsibility” is, something about “shareholders”, and an implication that literally any software developer pulled off the streets of the Bay Area knows more about effectively running a business than literally anyone with an MBA.
Companies are organisations. Organisations are run by people. People are more than capable of acting ethically. I have never worked for an organisation that I haven’t seen put something ahead of the bottom line at some point, and not in some silly CSR way. In fact, I routinely make these decisions on behalf of my employer, within the scope of my role, and on occasion they are quite material.
I’ve never been clear on how this can be reconciled with the utterly childish view that there’s some invisible hand that requires growth at all costs as soon as a “company” is involved.
It really speaks to how much of a bubble a lot of people here are in. I suppose if you paint all organisations with the same brush, it makes it easier to work for Meta or Palantir or whatever.
1: yes
2: build for yourself first. Make it feel great for yourself, then for your friends, or anyone you can keep bothering without worrying whether you are bothering them: passionate users, or friends and family.
What kind of planning do you need though, like, "how to build it", or "what would users want"?
I as a potential user would want to see:
Is it actively maintained?
License
Category/compatibility (e.g. the ability to search for things compatible with Remix or React Three Fiber, but not the latest version)
Amazon-style product reviews, split between ease of use and bugs
Excellent work! I plan to use it with existing LLMs tbh, but great to see it working locally also! Thank you so much for sharing. I love the architecture.
> The following investigation is organized according to the six chronological stages of the Israeli army’s highly automated target production in the early weeks of the Gaza war. First, we explain the Lavender machine itself, which marked tens of thousands of Palestinians using AI. Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes. Third, we describe how “dumb” bombs were chosen to strike these homes.
> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target. Fifth, we note how automated software inaccurately calculated the amount of non-combatants in each household. And sixth, we show how on several occasions, when a home was struck, usually at night, the individual target was sometimes not inside at all, because military officers did not verify the information in real time.
Tbh this feels like making a machine that points at a random point on the map by rolling two sets of dice, and then yelling "more blood for the blood god" before throwing a cluster bomb
Removing significant percentages of the Moon's mass is a whole different ballgame from climate change on Earth, where even now people will debate whether it's really anthropogenic.
There would be an immediately noticeable harm with the genuine ability to make Earth entirely uninhabitable, and unlike with climate change, where the mechanisms at work aren't directly relatable, with an orbit change, the mechanisms are pretty relatable to everyone, in the form of seasons and their total disruption.
As such, I don't think the typical ideas of people ignoring the problem in favor of keeping the money flowing apply, as those are all cases where the harm is some unclear time in the future involving the suffering of people who are not the decision makers.
>There would be an immediately noticeable harm with the genuine ability to make Earth entirely uninhabitable, and unlike with climate change, where the mechanisms at work aren't directly relatable, with an orbit change, the mechanisms are pretty relatable to everyone, in the form of seasons and their total disruption.
There is immediate, noticeable harm from oil spills, and yet you still have people arguing that we should run pipelines next to rivers because it's easier and cheaper. I think you grossly overestimate some folks' ability to have a rational discussion when they've got a financial incentive not to.
I think you're misunderstanding what I mean by immediate, noticeable harm. Running pipelines next to rivers has the potential to cause an oil spill, so the harm is not immediate; there's still room for people to convince themselves that they'll make the pipeline safe and reliable. But the mass of the Moon is so high that mining and shipping off significant portions of it would have to be very deliberate, and there's no question of 'potential' in causing a shift in the Earth's orbit.
For context, at the rate at which we mine metals on Earth, it'd take tens to hundreds of millions of years to mine even 0.1% of the Moon, and that's without considering that all that mass would have to be transported elsewhere for it to be a problem (and since other destinations, mostly by definition, have their own easier to access resources, that seems unnecessary). Mining all that material on human timescales would be a very deliberate operation dwarfing all human mining from the start of civilization.
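A quick back-of-the-envelope check of that timescale, assuming a Moon mass of roughly 7.35e22 kg and a rough global refined-metal output of about 2 billion tonnes per year (both round-number assumptions, not precise data):

```python
# Sanity-check the "tens to hundreds of millions of years" claim.
# All figures are order-of-magnitude assumptions.
moon_mass_kg = 7.35e22                 # approximate total mass of the Moon
target_fraction = 0.001                # mining 0.1% of it
metal_output_kg_per_year = 2e12        # ~2 billion tonnes of refined metal per year

mass_to_mine_kg = moon_mass_kg * target_fraction       # ~7.35e19 kg
years_needed = mass_to_mine_kg / metal_output_kg_per_year

print(f"{years_needed:.2e} years")     # on the order of tens of millions of years
```

At Earth's current refined-metal rate the answer lands in the tens of millions of years, consistent with the claim; use total ore moved instead and it drops by an order of magnitude or so, but it stays far beyond any human timescale.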
> Typically, swipe fees cost merchants 2% of the total transaction a customer makes — but can be as much as 4% for some premium rewards cards, according to the National Retail Federation. The settlement would lower those fees by at least 0.04 percentage point for a minimum of three years.
Well, ultimately, you pay for the rewards because merchants set their prices to account for what they expect to pay in payment processing, just like any other marginal cost.
Even loyalty programs like the card with stamps you might get at your local coffee shop got priced in somewhere along the way.
No, I don't pay for the rewards on my premium rewards credit card.
For the most part, my rewards are paid for by everyone else who doesn't have a premium rewards credit card.
Premium rewards cards cost merchants a lot more to accept than regular credit cards and debit cards.
Rewards cards: 5% to 10% of total volume with swipe fees of 2.5% to 4.0%
Regular credit cards: 30% to 40% of total volume with swipe fees of 1.5% to 2.5%
Debit cards: 50% to 60% of total volume with swipe fees of 0.5%
Yes, merchants build their payment costs into the prices of their products, but they generally don't charge different prices depending on what card the customer uses.
So the majority of the population that doesn't qualify for a premium rewards card is effectively paying more for everything to subsidize the more wealthy people at the top.
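Taking midpoints of the volume shares and fee bands above (rough assumptions from this comment, not real interchange data), the cross-subsidy can be sketched:

```python
# Blended swipe fee across card types, using midpoints of the ranges above.
# Volume shares and fee rates are rough assumptions, not real data.
card_mix = {
    # name: (share of total volume, swipe fee rate)
    "rewards": (0.075, 0.0325),   # 5-10% of volume at 2.5-4.0%
    "regular": (0.35,  0.02),     # 30-40% of volume at 1.5-2.5%
    "debit":   (0.55,  0.005),    # 50-60% of volume at 0.5%
}

# This blended rate is what merchants effectively bake into prices.
blended_fee = sum(share * fee for share, fee in card_mix.values())
print(f"blended fee: {blended_fee:.2%}")          # ~1.2%

# A debit user pays the blended rate through prices but only "uses" 0.5%,
# so the gap is what subsidizes rewards-card holders.
debit_subsidy = blended_fee - 0.005
print(f"debit user overpays by ~{debit_subsidy:.2%} per purchase")
```

Even with these coarse numbers, the debit-card majority ends up paying well over half a percent of every purchase toward fees they never collect rewards on.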
This is the thing that people never really seem to realize.
In the end, all costs are borne by the customer because there's really no other source of revenue.
Now since the costs are already in the price, you "give up" something by paying cash, but that's getting less and less common because more and more places are offering cash discounts (now that the feds sat on the processors).
> In the end, all costs are borne by the customer because there's really no other source of revenue.
This would be considerably more relevant if the price paid were related to the costs in any meaningful way. 50+ years of economic study demonstrates rather conclusively that it is not.
The best example of this is (was?) gas stations, where the ones taking cash or debit only would be cheaper, because the credit card processing is a significant cost in a low-margin product-interchangeable market.
The bigger companies try to make you THINK that there's a difference between their gallon of gas and that other gallon over there that came from the same depot, but there really isn't.
Merchant negotiates with their payment processor (NOT Visa/MC usually, but a middleman like Stripe or their bank or whatever). Depending on this negotiation, it may be a flat percentage, a per charge fee + a smaller percentage, or even a dynamic percentage depending on card type or transaction type, or other variations - rumor has it that Costco gets 0% fees from their processing deal with the bank for their card.
Payment processor negotiates with the network, and this is much more likely to be where "you pay for the rewards" happens - Visa/MC is the other party and they pass costs along, but usually the payment processor "bundles" everything to simplify it for the merchants.
Then Visa/MC (and somewhat Amex/Discover, but those are kind of special cases) negotiate with the issuing bank about what it will get (basically, the payment amount minus the Visa/MC "cut").
The bank then negotiates with YOU about what rewards and other features you get with your card.
Each layer can have a negotiator decide to absorb some of the costs for certain reasons. But looking at the above, you realize why stores push their store card so hard. They can be all four of the above if they want to. And 3% is worth it.
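The four layers above can be sketched as successive cuts on a single transaction. Every rate below is an illustrative assumption, not a real fee schedule:

```python
# Who takes what from a $100 purchase, per the four layers above.
# Every rate here is invented for illustration; real fees vary by negotiation.
amount = 100.00

merchant_fee = amount * 0.029      # processor charges merchant a bundled ~2.9%
network_cut = amount * 0.0013      # Visa/MC assessment, ~0.13%
interchange = amount * 0.021       # passed through to the issuing bank, ~2.1%
processor_keeps = merchant_fee - network_cut - interchange
rewards_paid = amount * 0.015      # bank gives the cardholder 1.5% back

merchant_receives = amount - merchant_fee
bank_keeps = interchange - rewards_paid

print(f"merchant nets   ${merchant_receives:.2f}")
print(f"processor keeps ${processor_keeps:.2f}")
print(f"network keeps   ${network_cut:.2f}")
print(f"bank nets       ${bank_keeps:.2f} after ${rewards_paid:.2f} in rewards")
```

A store card collapses all four parties into one, which is why keeping the whole ~3% is worth pushing the card so hard at checkout.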
We need national payment infra so this BS doesn't continue. I want 0% rewards and 0% merchant fees using BIP/SEPA/UPI so all this grift goes back into the economy.
Essentially. Take the Visa credit card lines, for example: Visa Infinite cards have a higher transaction fee than Visa Signature cards, and the high-end travel cards will be of the Infinite variety (Chase Sapphire Reserve).
Ok. The trick is that the "existing" fees are different for different cards. And merchants can't opt out of the fat cards without dropping all the cards from that network. So some refuse AmEx, but very few dare drop Visa.
Yeah that was a surprise to me as well, and just sounds like one of the biggest grifts I've ever seen.
e.g. As Visa/MC, I market a product that literally discounts everything you buy, and make my business partners (the merchants) pay for it. That keeps my margins the same, limits my downside risk, and raises the upside ceiling...all on someone else's dime.