
I’d argue that having the delusion that you understand another person’s point of view while not actually understanding it is far more dangerous than simply admitting that you can’t empathize with them.

For example, I can’t empathize with a homeless drug addict. The privileged folks who claim they can, well, I think they’re being dishonest with themselves, and therefore unable to make the difficult but ultimately most rational decisions.


You seem to fail to understand what empathy is. Empathy is not understanding another person’s point of view, but being able to analogize their experience into something you can understand, and therefore gain more context for what they might be experiencing.

If you can’t do that, it’s less about you being rational and far more about you having a malformed imagination, which might just be you being autistic.

— signed, an autistic


You are right, and another angle is that empathy with a homeless drug addict is less about needing to understand/analogize why the person is a drug addict, which is hard if you only do soft, socially acceptable drugs, and more about remembering that the homeless drug addict is not completely defined by that simple label. The person in front of you is a complete human who shares a lot of feelings and experiences with you. When you think about that and use those feelings to connect with that human, it lets you be kinder towards him/her.

For example, the homeless drug addict might have a dog that he/she loves deeply, and maybe oceanplexian has a dog that they love deeply. Suddenly oceanplexian can empathize with the homeless drug addict, even though they still can't understand why on earth the drug addict doesn't quit drugs to make the dog's life better. (Spoiler alert: drugs override rational behaviour. Now oceanplexian also understands the homeless drug addict.)

Does “connecting with that human” to be “kinder towards him/her”, in the way that you describe, actually improve outcomes?

The weight of evidence over the past 25 years would suggest absolutely not.


Improve outcomes? Like make the drug addict stop being a drug addict? If so, you misunderstand the point of being kind.

If you want to maximize outcomes I have a solution that guarantees 100% that the person stops being a drug addict. The U.S. is currently on its way there, and there's absolutely no empathy involved.


At a societal level, the point isn’t to be kind. The point is to be effective.

I'm having a hard time understanding what you're getting at here. Homeless drug addicts are really easy to empathize with. You just need to take some time to talk and understand their situation. We don't live in a hospitable society. It's pretty easy to fall through the cracks, and some people eventually get so low that they completely give in to addiction because they have no reason to even try anymore.

Being down and unmotivated is not that hard to empathize with. Maybe you've had experiences with different kinds of people; the homeless are not a monolith. The science is pretty clear on addiction though: improving people's conditions leads directly to sobriety. There are other issues with chronically homeless people, but I tend to see those as symptoms of a sick society. A total inability to care for vulnerable, messed-up, sick people just looks like malicious incompetence to me.


It's pretty simple: AI is now political for a lot of people. Some folks have a vested interest in downplaying it or overhyping it rather than impartially approaching it as a tool.

It’s also just not consistent. A manager who can’t code uses it to generate a React todo list and thinks it’s a 100x efficiency gain, while a senior software dev working on established apps finds it a net productivity negative.

AI coding tools seem to excel at demos and flop in the field, so the expectation disconnect between managers and actual workers is massive.


I work in FAANG, and have been there for over a decade. These tools are creating a huge amount of value, starting with Copilot but now with tools like Claude Code and Cursor. The people doing so don’t have a lot of time to comment about it on HN since we’re busy building things.

> These tools are creating a huge amount of value...

> The people doing so don’t have a lot of time to comment about it on HN since we’re busy building…

“We’re so much more productive that we don’t have time to tell you how much more productive we are”

Do you see how that sounds?


To be fair, AI isn't going to give us more time outside work. It'll just increase expectations from leadership.

I feel this, honestly. I get so much more work done (currently: building & shipping games, maintaining websites, managing APIs, releasing several mobile apps, and developing native desktop applications) managing 5 Claude instances that the majority of my time is sucked up just prompting whichever agent is done on its next task(s), and there's a real feeling of lost productivity if any agent is left idle for too long.

The only time to browse HN left is when all the agents are comfortably spinning away.


I also work for a FAANG company, and so far most employees agree that while LLMs are good for writing docs, presentations or emails, they still lack a lot when it comes to writing maintainable code (especially in Java; they supposedly do better in Go, don’t know why, not my opinion). Even simple refactorings need to be carefully checked. I really like them for doing stuff that I know nothing about though (e.g. write a script using a certain tool, tell me how to rewrite my code to use a certain library, etc.) or for reviewing changes.

I don't see how FAANG is relevant here. But the 'FAANG' I used to work at had an emergent problem of people throwing a lot of half-baked 'AI-powered' code over the wall and letting reviewers deal with it (due to incentives, not that they were malicious). In orgs like infra, where everything needs to be reviewed carefully, this is purely a burden.

I've worked at a FAANG equivalent for a decade, mostly in C++/embedded systems. I work on commercial products used by millions of people. I use the AI tools also.

When others are finding gold in rivers similar to mine, and I'm mostly finding dirt, I'm curious to ask and see how similar the rivers really are, or if the river they are panning in is actually somewhere I do find gold, but not a river I get to pan in often.

If the rivers really are similar, maybe I need to work on my panning game :)


I use agentic tools all the time but comments like this always make me feel like someone's trying to sell me their new cryptocoin or NFT.

> creating a huge amount of value

Do you write software, or work in accounting/finance/marketing?

What are the AI usage policies like at your org? Where I am, we’re severely limited.

HVAC must be one of the most dishonest professions in the US. I’m not in NYC but received similar quotes ($15-20k to do a few rooms).

Obviously, having family in South America, where there are millions of these installed by unskilled labor, I decided to DIY. So I installed 2 units with 2 heads each, including pouring the concrete pads, vacuuming the line sets, and charging them. It took me two weekends and about $4,000 in materials including the units themselves. It’s been two years, none of the BS fear-mongering issues have happened, and they have almost paid for themselves.


Private equity has been buying out HVAC companies in the US. The technicians are forced to drive up sales, so instead of repairing something, they now recommend new equipment. I saw this difference in behavior at an HVAC company I used for 10 years. The owners were retiring and private equity bought them out. You really have to go by word of mouth and seek out the smaller companies.

Similar thing happened for Veterinary care clinics.

https://www.reddit.com/r/HVAC/comments/16asntf/lets_talk_abo...


There is a very different pattern I learned to recognize with private equity electricians. It's not all negative: they have fast availability and good communication (because they have office staff), but that’s the end of the good stuff.

You call for one broken outlet and they pull out fancy branded folders and pens with checklists of every little thing that could possibly be upgraded (implying it's needed for safety), present you with a multi-$K bill, and then do a little magic 10% discount for some reason to make you think it's a good deal.

That said, I get my petty revenge by asking questions at the free consult (marketing opportunity) and then hiring local guys instead, whenever I can find them.


RAM Air in San Marcos, south of Austin. Ask for Edgar. He's self-trained and likes to rebuild boards from scratch. He'll service out to west Austin if the job is big enough. He mostly trains techs, so, if you ask, he'll happily walk you through a repair on the phone. (Most jobs are only 20–40m.) He makes his money on the big jobs with repeat customers. When he retires ... fuck me.

Many HVAC technicians are not good at service calls (troubleshoot and fix); they are salespeople in disguise. I've also heard many local HVAC/plumbing companies are owned by private equity of sorts.

They even have DIY-friendly ones now where you don't even need a vacuum pump (not that it's hard, but it is one more thing). So easy.

The part that always makes me chicken out is the electrical. Did you have any issues with that part?

The electrical is relatively easy. It's not much harder than replacing a frayed power cord on a lamp (with an extra wire if the unit is 220 rather than 110).

Managing the lineset is the scary part (though it's not that hard). You're vacuuming copper lines that you've hopefully sealed correctly. If you get that wrong and your refrigerant yeets off into the sky, you have to call in help because it's hard for an unlicensed person to get the refrigerant legally. That half-hour of work and ~$1 of materials will cost you a punitive amount of money.


It should be super-easy these days to get the license.

Ten years ago, I downloaded a free study guide and took the test in-person at an A/C supply shop for about $50.

Today, you can take the test online.


Thanks to getting ripped off by HVAC contractors, many are signing up for EPA 608 certification to get the refrigerant legally.

It’s really easy; 220V is not that hard to install, and it's the same scam run by people installing EV chargers.

You do have to go through the permitting process, which means having someone come out to view it and sign off on it, and if you state it properly it should be less than $200.


Electrical is both surprisingly easy and surprisingly hard.

The actual work involved is relatively easy and straightforward. However, the code and regulations are extremely difficult to navigate. There are a lot of non-obvious things you have to do to be code-compliant.


Some understanding of electrical circuits, split-phase motors, and control circuits (using step-down transformers, relays, contactors) is extremely helpful. HVAC systems contain at least two motors: a blower motor inside and a compressor motor outside. The control circuits are activated by a thermostat.

I can’t speak to the Tesla stuff, but I run an Epyc 7713 with a single 3090, and by creatively splitting the model between the GPU and 8 channels of DDR4 I can do about 9 tokens per second on a q4 quant.
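
A minimal sketch of that kind of split, assuming llama-cpp-python (one of several ways to do it); the model path, layer count, and thread count below are illustrative placeholders, not the actual settings:

    # pip install llama-cpp-python (built with CUDA support)
    from llama_cpp import Llama

    llm = Llama(
        model_path="model-q4_k_m.gguf",  # hypothetical q4-quantized GGUF file on disk
        n_gpu_layers=30,   # layers offloaded to the 3090's VRAM; the rest stay in system DDR4
        n_threads=32,      # CPU threads working the 8 memory channels
        n_ctx=4096,        # context window
    )

    out = llm("Explain memory bandwidth in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])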

Impressive. Is that a distillation, or the real thing?

Yeah, I was going to say: as a pilot, there is no such thing as "therapy" for pilots. You would permanently lose your medical if you even mentioned the word to your doctor.

Not everywhere

https://en.m.wikipedia.org/wiki/Germanwings_Flight_9525

"The crash was deliberately caused by the first officer, Andreas Lubitz, who had previously been treated for suicidal tendencies and declared unfit to work by his doctor. Lubitz kept this information from his employer and instead reported for duty. "


I don't see how people using these as a therapist really has any measurable impact compared to using them as agents. I'll spend a day coding with an LLM and between tool calls, passing context to the model, and iteration I'll blow through millions of tokens. I don't even think a normal person is capable of reading that much.

AGI isn't all that impactful. Millions of them already walk the Earth.

Most human beings out there with general intelligence are pumping gas or digging ditches. It seems to me there is a big delusion among the tech elites that AGI would bring about a superhuman god rather than an ethically dubious, marginally less useful computer that can't properly follow instructions.


That's remarkably short-sighted. First of all, no, millions of them don't walk the earth - the "A" stands for artificial. And secondly, most of us mere humans don't have the ability to design a next generation that is exponentially smarter and more powerful than us. Obviously the first generation of AGI isn't going to brutally conquer the world overnight. As if that's what we were worried about.

If you've got evidence proving that an AGI will never be able to design a more powerful and competent successor, then please share it; it would help me sleep better, and my ulcers might get smaller.


Burden of proof is to show that AGI can do anything. Until then, the answer is "don't know."

FWIW, it's about a 3 to 4 order-of-magnitude difference between the human brain and the largest neural networks (as gauged by counting synaptic connections: the human brain is in the trillions while the largest neural networks are in the low billions).
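
As a back-of-the-envelope check, using those ballpark figures (rough assumptions, not measured values):

    import math

    # Ballpark counts from the comparison above; both are rough assumptions.
    brain_connections = 1e13     # human brain: on the order of ~10 trillion synaptic connections
    network_connections = 5e9    # "low billions" of connections in the largest networks

    gap = math.log10(brain_connections / network_connections)
    print(f"~{gap:.1f} orders of magnitude")  # ~3.3, i.e. the 3-4 cited above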

So, what's the chance that all of the current technologies have a hard limit at less than one order of magnitude increase? What's the chance future technologies have a hard limit at two orders of magnitude increase?

Without knowing anything about those hard limits, it's like accelerating in a car from 0 to 60 in 5s. It does not imply that given 1000s you'll be going a million miles per hour. Faulty extrapolation.

It's currently just as irrational to believe that AGI will happen as it is to believe that AGI will never happen.


> Burden of proof is to show that AGI can do anything.

Yeah, if this were a courtroom or a philosophy class or debate hall. But when a bunch of tech nerds are discussing AGI among themselves, claims that true AGI wouldn't be any more powerful than humans very very much have a burden of proof. That's a shocking claim that I've honestly never heard before, and seems to fly in the face of intuition.


> That's remarkably short-sighted

I agree. Once these models get to a point of recursive self-improvement, advancement will only speed up even more exponentially than it already is...


The difference isn't so much that you can do what a human can do. The difference is that you can - once you can do it at all - do it almost arbitrarily fast by upping the clock or running things in parallel and that changes the equation considerably, especially if you can get that kind of energy coupled into some kind of feedback loop.

For now the humans are winning on two dimensions: problem complexity and power consumption. It had better stay that way.


Have you noticed the performance of the actual AI tools we are actually using?

If you actually have a point to make you should make it. Of course I've actually noticed the actual performance of the 'actual' AI tools we are 'actually' using.

That's not what this is about. Performance is the one thing in computing that has fairly consistently gone up over time. If something is human-equivalent today, or some appreciable fraction thereof - which it isn't, not yet, anyway - then you can place a pretty safe bet that in a couple of years it will be faster than that. Model efficiency is under constant development, and in a roundabout way I'm pretty happy that it is as bad as it is, because I do not think that our societies are ready to absorb the next blow against the structures that we've built. But it most likely will not stay that way, because there are several Manhattan-level projects under way to bring this about; it is our age's atomic bomb. The only difference is that with the atomic bomb we knew that it was possible, we just didn't know how small you could make one. Unfortunately it turned out that yes, you can make them, nicely packaged for delivery by missile, airplane or artillery.

If AGI is a possibility then we may well find it, quite possibly not on the basis of LLMs but it's close enough that lots of people treat it as though we're already there.


I think there are 2 interesting aspects: speed and scale.

To explain the scale: I am always fascinated by the way societies moved on when they scaled up (from tribes to cities, to nations, ...). It's sort of obvious, but when we double the number of people, we get to do more. With the internet we got to connect the whole globe, but transmitting "information" is still not perfect.

I always think of ants and how they can build their houses with zero understanding of what they do. It just somehow works because there are so many of them. (I know, people are not ants).

In that way I agree with the original take that AGI or not: the world will change. People will get AI in their pocket. It might be more stupid than us (hopefully). But things will change, because of the scale. And because of how it helps to distribute "the information" better.


To your interesting aspects: you're missing the most important one (IMHO), accuracy. All 3 are really quite important; miss any one of them and the other two are useless.

I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.

> And because of how it helps to distribute "the information" better.

This I find interesting because there is another side to the coin. Try for yourself, do a google image search for "baby owlfish".

Cute, aren't they? Well, it turns out the results are not real. Being able to mass-produce disinformation at scale changes the ballgame of information. There is now a very large number of people who have a completely incorrect belief about what a baby owlfish looks like.

AI pumping bad info onto the internet is something like the end of the information superhighway. It's no longer information when you can't tell what is true vs not.


> I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.

Sure, one can't know what they really think. But there are computer simulations showing that with simple rules for each individual, one can achieve "big things" (which are not possible to predict when looking only at an individual).

My point is merely, there is possibly interesting emergent behavior, even if LLMs are not AGI or anyhow close to human intelligence.

> To your interesting aspect, you're missing the most important (IMHO): accuracy. All 3 are really quite important, missing any one of them and the other two are useless.

Good point. Or I would add alignment in general. Even if accuracy were perfect, I would have a hard time relying completely on LLMs. I've heard arguments like "people lie as well, people are not always right, would you trust a stranger? It's the same with LLMs!"

But I find this comparison silly: 1) People are not LLMs; they have natural motivation to contribute in a meaningful way to society (of course, there are exceptions). If for nothing else, they are motivated not to go to jail / lose their job and friends. LLMs did not evolve this way. I assume they don't care if society likes them (or they probably somewhat do, thanks to reinforcement learning). 2) Obviously again: the scale and speed. I am not able to write as much nonsense in as short a time as LLMs can.


> But things will change, because of the scale

Yup!

Plus we can't ignore the inherent reflexive + emergent effects that are unpredictable.

I mean, people are already beginning to talk and/or think like ChatGPT:

https://arxiv.org/pdf/2409.01754


I find statements like this kind of funny.

If an AI assistant were the equivalent of “a dozen PhDs” at any of the places I’ve worked, you would see an 80-95% productivity reduction by using it.


>you would see an 80-95% productivity reduction by using it.

they are the equivalent.

there is already an 80-95% productivity reduction by just reading about them on Hacker News.


Yeah, we're only seeing a 20% reduction in productivity.

Does anyone know how they actually trained text rendering into these models?

To me they all seem to suffer from the same artifacts: the text looks sort of unnatural and doesn't have shadows/reflections consistent with the rest of the image. This applies to all the models I have tried, from OpenAI to Flux. Presumably they are all using the same trick?


It's on page 14 of the technical report. They generate synthetic data by putting text on top of an image, apparently without taking the original lighting into account. So that's the look the model reproduces. Garbage in, garbage out.
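
Not the report's actual pipeline, but a minimal sketch of the kind of naive overlay being described, assuming Pillow; the paths, font, and caption are made-up placeholders:

    # pip install pillow
    from PIL import Image, ImageDraw, ImageFont

    def overlay_caption(src_path: str, dst_path: str, caption: str) -> None:
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Hypothetical font file; any TTF on disk works.
        font = ImageFont.truetype("DejaVuSans.ttf", size=48)
        # Flat white text with a black outline, pasted with no regard for the scene's
        # lighting; that mismatch is exactly the look the trained model reproduces.
        draw.text((40, 40), caption, font=font, fill="white",
                  stroke_width=2, stroke_fill="black")
        img.save(dst_path)

    overlay_caption("photo.jpg", "photo_with_text.jpg", "OPEN 24 HOURS")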

Maybe in the future someone will come up with a method for putting realistic text into images so that they can generate data to train a model for putting realistic text into images.


Wouldn't it make sense to use rendered images for that?

I'm not sure it's such garbage as you suggest; surely it is helpful for generalization, yes? Kind of the point of self-supervised models.

If you think diffusing legible, precise text from pure noise is garbage, then wtf are you doing here. The arrogance of the IT crowd can be staggering at times.

They're referring to the training data being garbage, not the diffusion process.
