Hacker News | ryanackley's comments

It feels like you're blaming the author for the lazy thinking of someone who might read his opinion and take it as objective fact.

The 7 times 9 analogy doesn't track at all. 7x9 = 63 is an objective fact by definition. His thoughts on remote work are an opinion by definition. If other people decide that what he says is dogmatic, blame it on their own lack of critical thinking skills.

The meta-point of the article is that we should express our thoughts without qualifiers and embellishments meant to manipulate other people's perceptions of us.


His opinion on remote work is an opinion. The reasons he gives for having that opinion are not presented as opinion, but as fact.

Is your reasoning that he didn’t preface his opinion with “in my opinion”?

His words read the same as any editorial I’ve seen.


> Remote work eliminates a lot of problems with office work: commutes, inefficient use of real estate, and land value distortion. But software development is better when you breathe the same air as the folks you work with.

It's pretty hard to know where the opinion is.

The whole paragraph presents as though the author is relating known symptoms of a disease. We're never really sure which they themselves actually experienced. They look more like arguments in support of a cause.

The author is totally entitled to open that door, but then it also becomes fair game to attack the perspective.


I am pointing out the fact that he is using factual statements in support of his opinion. "Remote work sucks" is an opinion. "Pair programming is less fruitful" is a statement of fact (regardless of the veracity of the claim).

"It is my opinion 7 x 9 = 63," wouldn't be an opinion in the sense that opinion was being used in the thread. Yes, we can question the veracity of a statement of fact, but that isn't the same sort of opinion as whether something is subjectively good or bad.


> The meta-point of the article is that we should express our thoughts without qualifiers and embellishments meant to manipulate other people's perceptions of us.

In my experience this is a common failure point among tech/analytical folks (myself included), which leads to their words and actions being generally misconstrued and effectively misunderstood by the larger segment of the population, which is rarely able or disposed to handle communications without embellishments.


“Technical” people are also people, and that doesn’t exclude them from communicating like reasonable adults.

Blaming the rest of the world for an inability to communicate effectively is not orienting the blame correctly.


You're wrong (IMO). The onus should not be on the communicator to qualify every statement of opinion. This is tedious and unreasonable.

Not prefacing what is clearly an opinion with "IMO" is not a Jedi mind trick that makes others believe it as fact.

You're also demonstrating some hypocrisy by presenting your own point of view in the same manner. No qualifiers. You're simply stating something as truth.


> The onus should not be on the communicator to qualify every statement of opinion. This is tedious and unreasonable.

I fundamentally disagree with this. In my experience, it's pretty much impossible for people to perfectly understand intent without a certain amount of effort from both the communicator to express it clearly and the listener to understand it. In practice, I don't think there's a good chance of successful communication on any nuanced topic without good-faith effort from both sides, and I can't differentiate between the language the author used and what I'd expect to hear from someone who reflexively dismisses any disagreement as in bad faith.


This argument over the semantics of how to express an opinion feels like a proxy for people who strongly disagree with him on remote work seeking an outlet.

I say that because you (and everyone else who seems upset) clearly understand it's just his opinion. Therefore, why are you offended by his intent? Whatever his intent might be, I think it's irrelevant. It's simply a strongly held opinion.


> I say that because you (and everyone else who seems upset) clearly understand it's just his opinion.

I genuinely don't understand whether that's the case or not, and I've tried to be clear about that. I am not able to tell whether it's their opinion or whether they actually believe those statements are objective facts; both are plausible to me, and I'm arguing that if they want people to understand which they mean, they need to be more specific. Otherwise, people will draw conclusions that may not align with their intent, and that's something they could avoid if they put more care into how they expressed it.


I think the issue is that the OP wasn’t giving an opinion. They stated things as facts. When you say “x is y” you’re making a truth claim, and people are going to challenge it if it sounds wrong or depends on context.

A lot of folks flip to “it’s just my opinion” only after they get pushback, but if you present something as a fact, it’s fair game to question it.

Like if someone says “apples taste bitter and have no flavor” that reads like a universal claim, so yeah people will argue. If you say “I find apples bitter and lacking flavor” that’s obviously personal taste and nobody is going to demand proof.

Nobody is asking for IMO everywhere. Just don’t frame opinions as facts or the other way around.


My point was that they don't at all phrase it as a personal opinion:

> Remote work eliminates a lot of problems with office work: commutes, inefficient use of real estate, and land value distortion. But software development is better when you breathe the same air as the folks you work with. Even with a camera-on policy, video calls are a low-bandwidth medium. You lose ambient awareness of coworkers’ problems, and asking for help is a bigger burden. Pair programming is less fruitful. Attempts to represent ideas spatially get mutilated by online whiteboard and sticky note software. Even conflict gets worse: it’s easy to form an enemy image of somebody at the end of video call, but difficult to keep that image when you share a room with them and sense their pain.

It's hard for me to read that as anything other than literally describing to me what the experience of working with me remotely is. OP has never worked with me as far as I'm aware, so they have no idea whether it's accurate or not. Charitably, they might not mean what they're saying literally, but I'm making the argument that for topics that are controversial because of how people have been burned by overly prescriptive policies in the past, the burden is on the speaker to avoid voicing opinions in a careless way that relies on the listener to glean that their intent isn't the same as what people have experienced in the past.

My meta-point is that while people are free to express their opinions without spending effort trying to make their intent understood, by the same token, people are free to react to those opinions with the exact same level of effort spent trying to understand their intent. In my experience, there are a lot of people who complain that they're treated unfairly for expressing their opinions without realizing that what people are actually reacting to is how they express their opinions, not the opinions themselves.

I've personally struggled quite a lot over the years with understanding how other people will interpret my communications, so I have a lot of sympathy for people who also struggle with this, but if someone doesn't seem to even accept the premise that part of the responsibility for being understood lies with the person expressing their intent clearly, I lose patience quickly. This is especially true when the "opinions" are expressed in a medium where the person communicating has an unbounded amount of time to work on clarifying their intent before the message is actually received by someone else; I don't expect everyone to be able to perfectly articulate things in real time when talking in person, but when the opinion is expressed via a blog post, they don't have the same constraints on working out how they convey what they're saying. The fact that the blog post seems to take the overall stance that it's better not to worry about how someone will interpret your intent makes it feel even more likely they might just not understand what people's actual issue with their communications has been in the past.

It genuinely seems like they might not have been able to distinguish between good-faith misunderstandings and bad-faith intentional misinterpretations of what they've said, and that's unfortunate if it's led them to the conclusion that they just don't need to care about what anyone thinks about their opinions rather than that they need to learn how to better communicate to those who are attempting to respond in good faith and ignore the ones who aren't. A lot of people understand that people can disagree with them in good faith in the abstract but fail to actually recognize when that's happening in the present, and quite a lot of what's expressed in this blog post resembles what I've seen from other people who struggle with that.


Giving a blank check to anything someone says because they disclaimed that they'll be uttering opinions? That sounds kinda naive. Have you never heard someone include facts to support their opinions? Would you disagree that it's fair game to attack opinions presented as facts? The "problematic" paragraph jumps out because the assertive generalizations moot the earlier agreement that the author is sharing their experience. The proclamations are not subjective; they're factual. Perhaps re-read that passage yourself while donning your own critical thinking hat.

What are we arguing about? Is it the way he expressed his opinion?

Would you agree that whether something is an opinion or fact is itself objective, for most cases at least?

I ask because nobody is questioning whether or not what he states was actually an opinion. They seem to simply be upset with the manner in which he phrased it. He was simply too sure of himself and people found that offensive. Which seems a little ridiculous don't you think?


I'm arguing that I don't actually know whether the author considers their paragraph about remote work to be opinion or fact. If it's their opinion, I think there's legitimate concern that they've misunderstood past reactions to what they perceive as expressing their opinion, because they've done a poor job communicating their actual intent. If they do in fact think it's objective fact, I think they're just incredibly wrong and unaware of it.

The price search is great, booking not so much. I've been burned twice on bookings and I think I've used it four or five times. It's sad and ridiculous.

By burned, I mean not getting the room I paid for. At this point, I'll use it to search then just go to the hotel site directly to book.


I think we can call it "thinking" but it's dangerous to anthropomorphize LLMs. The media and AI companies have an agenda when doing so.


Skilled immigrants being important to the country and the H1B visa program being abused by large employers can both be true at the same time. From my own personal experience, most H1B visa holders are competent but not uniquely skilled.

This would have been stupid five or ten years ago. Right now, however, we have enough tech people in the country for the current job market.


Not sure why you're downvoted. But given the state of the tech job market, it is unjustifiable to admit more H1Bs.

Now the flip side of it is: how will they control the outsourcing that typically comes with H1B restrictions? Every enterprise these days has an India development center (banks, telcos, pharma, big tech, manufacturing, etc.). If they have the money, they have established one. They can just scale up the hiring there. On second thought, that's what they were all doing anyway and saying it was AI.


They could have scaled up offshore hiring at any time. If it was actually cheaper for the same quality, they would have.


11% unemployment for CS majors. Absolute insanity to be admitting H1Bs for the tech sector right now.


Agree, for ordinary CS jobs, there are plenty of workers available right now, thanks to AI. But for highly specialized jobs, including AI research, you still need to be able to hire immigrant talent.


I feel like this $100k fee is specifically for this type of uniquely skilled worker. If they are that in demand, then $100k is not a significant amount of money. There was recently an article in the New York Times about AI experts getting more than star athletes.


There's the O-1 for that.


O-1 is not enough. For jobs requiring a bachelor's degree, there are currently plenty of US-born workers looking for jobs. For jobs requiring a master's or PhD, there is still a need for H-1B visas, and O-1 is too high a bar.


Fine, so there are some that fall below the O-1 bar. Nonetheless, those are a drop in the ocean compared to the regular $150k jobs being lost to H1Bs.


We don’t.


The new H1B fee effectively puts a cap on software engineer pay. I can hire an immigrant on H1B for $150k/year ($50k salary + $100k fee). So local hires better be cheaper than that.


Why do you think the H1B will work for 50k/year? Where are you located?


Wouldn't it be clearer to say that, for this hiring approach, the unexpected burden of tacking on a new $100k fee now works as a negative coloring of these candidates (as I think it ostensibly intends)? How was the $100k already priced in?


I'm not following. So you're saying the cap on engineers' salaries follows a rule of H1B visa fee + $50k? Doesn't that mean that cap has increased?


They are saying that an H1B worker is in practice gonna get paid $100k less, cost $100k more to the employer, or something in between.
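
To put rough numbers on it, here's a minimal sketch using this thread's hypothetical figures (treating the $100k fee as a recurring yearly cost, as the parent comment does; these are not official policy numbers):

    # hypothetical figures from this thread, not actual policy numbers
    budget = 150_000              # employer's total yearly spend per engineer
    h1b_fee = 100_000             # new visa fee, treated here as yearly
    h1b_salary = budget - h1b_fee # 50_000 left as the H1B worker's pay
    # the fee acts as a soft cap: a local hire is competitive only while
    # their salary stays at or below h1b_salary + h1b_fee (i.e. the budget)

Either reading (lower H1B pay or higher employer cost) is the same equation rearranged.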


Immigration benefits capital and hurts labor, but big business has hypnotized the left into supporting it.


There is something kafkaesque about these giant tech companies restricting what you can talk to the AI about in the name of ethics, while at the same time openly planning to replace you in the workforce.


Is there? I don't see any contradiction there.

For me it's funny that the first time most programmers ever think about the ethics of automating away jobs is when they themselves become automated.


He didn't say "contradiction," he said "kafkaesque," meaning "characteristic or reminiscent of the oppressive or nightmarish qualities of Franz Kafka's fictional world" (according to Google).


I don't see why it would be "kafkaesque" either.

In fact I fail to see any connection between those two facts other than that both are decisions by OpenAI to allow or not allow something to happen.


It's oppressive and nightmarish because we are at the mercy of large conglomerates tracking every move we make and kicking our livelihoods out from under us, while also censoring AI to make it more amenable to pro-corporate speech.

Imagine if ChatGPT gave "do a luigi" as a solution to Walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel construction.

It would be unimaginable. That's because the only way for someone to be in the position to determine what is censored in the chat window would be for them to be completely on the side of the data panopticon.

There is no world where technology can empower the average user more than those who came in with means.


Yeah, but what we are all whining about here (apart from folks working on LLMs and/or holding bigger stocks of such, a non-trivial and vocal group here) has hit many other jobs already in the past. Very often thanks to our own work.

It is funny, in the worst way possible of course, that even our chairs are not as stable as we thought they were. Even automation can somehow be automated away.

Remember all those posts stating how software engineering is harder, more unique, somehow more special than other engineering or other types of jobs generally? Seems like it's time for some re-evaluation of those big-ego statements... but maybe it's just me.


> Yeah, but what we are all whining about here has hit many other jobs already in the past.

I'm less talking about automation and more about the underpinnings of the automation and the consequences in greater society. Not just the effects it has on poor ole software engineers.

It is quite ironic to see the automation hit engineers, who in the past generally did not care about the consequences of their work, particularly in data spaces. We have all collectively found ourselves in a local minimum of optimization, where the most profitable thing we can do is collect as much data on people as possible and continually trade it back and forth between parties who have proven they have no business holding said data.


There are two kinds of programmers:

0. The people who got into it just as a job

1. The people who thought they could do it as art

And #1 is getting thrashed and thrown out the window by the advent of AI coding tools, and the revelation that companies didn’t give a darn about their art. Same with AI art tools and real artists. It even raises the question of whether programming should ever have been viewed as an art form.

On that note, programmers collectively have never minded writing code that oppresses other people. Whether with constant distractions in Windows 11, building unnecessarily deadly weapons at Northrop Grumman, or automating the livelihoods of millions of “inferior” jobs. That was even a trend, “disrupting” traditional industries (with no regard to what happens to those employed in said traditional industry). Nice to see the shoe just a little on the other foot.

For many of you here, keep in mind your big salary came from disrupting and destroying other people’s salaries. Sleep well tonight and don’t complain when it’s your turn.


> building unnecessarily deadly weapons at Northrop Grumman

Northrop Grumman only builds what Congress asks of them, which is usually boring shit like toilet seats and SLEPs. You can argue that they design unnecessarily deadly weapons, but if they've built it then it is precisely as deadly as required by law. Every time Northrop grows a conscience, BAE wins a contract.


> Every time Northrop grows a conscience, BAE wins a contract.

That's a lame "I was just following orders" excuse. Doesn't matter who gets the contract, if you work for a weapons manufacturer or a large corporation that exploits user data you have no moral high ground. Simple as that.


gjsman-1000 says "Whether with constant distractions in Windows 11, building unnecessarily deadly weapons at Northrop Grumman, or automating the livelihoods of millions of “inferior” jobs."

"unnecessarily deadly"?

I had no idea that it was possible to measure degrees of dead: she's dead, they're dead, we're all dead, etc. - I thought it was the same "dead" for everyone.

Also, interesting but ambiguous sentence structure.

Is this an offshoot of LLMs that I've overlooked?


What's sad is that engineering is very much an art. Great innovation comes from the artistic view of engineering and creation.

The thing is, there's no innovation in the "track everything that breathes and sell the data to advertisers and cops" market.

They might get better at the data collection and introspection, but we as a society have gotten nothing but streamlined spyware and mental illness from these markets.


Having used agentic AI (Claude Code, Gemini CLI) and other LLM-based tools quite a bit for development work, I just don't see it replacing developers anytime soon. Sure, a lot of my job now is cleaning up code created by these tools, but they are not building usable systems without a lot of developer oversight. I think they'll create more software developer roles and specialties.


What you are saying does not contradict the point from your parent. Automation can create "more roles and specialties" while reducing the total number of people in aggregate for greater economic output and further concentration of capital.


I was talking about software development roles specifically; LLMs aren't going to reduce them, IMO - they just aren't good enough, and I don't think they can be.


They are reducing jobs already.

Recent grads are having serious trouble getting work right now: https://www.understandingai.org/p/new-evidence-strongly-sugg...


I don't see any evidence this is about LLMs vs the general state of the economy


If it were the general state of the economy, unemployment would be hitting all groups of developers. TFA I linked to shows that the reduction in positions has fallen disproportionately on recent grads compared to everyone else.


> Imagine if ChatGPT gave "do a luigi" as a solution to Walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel construction.

> It would be unimaginable.

By "do a luigi" you're referring to the person who executed a health insurance CEO in cold blood on the street?

Are you really suggesting that training LLMs to not suggest committing murder is evil censorship? If LLMs started suggesting literal murder as a solution to problems that people typed in, do you really think that would be a good idea?


Didn't OpenAI already suggest that a kid kill himself and avoid asking for outside help, just a few weeks ago?


You misread my comment completely. I was saying these tools will never be capable of empowering the average user against the owners who hold all the cards. "do a luigi" was an exaggeration.


If you don't see why this is oppressive, that's really a _you_ problem.


I'm being facetious, but life in the rust belt post industrial automation is kinda close. Google Maps a random Detroit east side neighborhood to see what I mean.


But it wasn’t industrial automation that ruined Detroit. It was the automakers’ failure to compete with highly capable foreign competition.


> It was the automakers’ failure to compete with highly capable foreign competition.

I contend it was when Dodge won the court case deciding that shareholders were more important than employees. It’s been a slow burn ever since.


> It was the automakers’ failure to compete with highly capable foreign competition.

A lot of their capability was due to them being better at automation. See: NUMMI


Detroit's decline started as soon as assembly plants went one-story in the '40s and '50s. There was further decline with the advent of robotics/computers in the '70s and '80s, and in the 2000s with globalization.


It's not a comment on the ethics of replacing jobs but the hypocrisy of companies using "ethics" as reasoning for restricting content.

They are pursuing profits. Their ethical focus is essentially a form of theater.


Replacing jobs is not an ethical issue.

Automation and technology have been replacing jobs for well over a century, almost always with better outcomes for society. If it were an ethical issue, then it would be unethical not to do it.

In any case, which jobs have been replaced by LLMs? Most of the actual ones I know were BS jobs to begin with - jobs I wish had never existed. The rest are ones where CEOs are simply using AI as an excuse to execute layoffs (i.e. the work isn't actually being done by an LLM).


BeetleB says "The rest are ones where CEOs are simply using AI as an excuse to execute layoffs (i.e. the work isn't actually being done by an LLM)."

So lay people off to reduce costs, say that they have been replaced by AI now, and the stockholders love you even more!

Indeed, a model that should cascade through American businesses quickly.


Your definition of what counts as an ethical issue is reductive. An ethical issue means the issue involves ethics, and ethics is obviously involved here. Even if society at large would ultimately benefit from the disappearance of certain jobs, that can still create suffering for hundreds of thousands of people.


Artists generally? Translators? People in various bureaucratic positions doing more menial white collar work? And tons more.

That you specifically wish for them to not even exist is your own internal problem, and actually a pretty horrible thing to say, all things considered.

People had/have decent livelihoods from those, I know a few. If they could easily get better jobs they would go for them.

Egos here sometimes are quite a thing to see. Maybe it's good that chops are coming also for this privileged group; a bit of humility never hurts.


So suppose someone wants to, say, provide localized versions of their software and avails themselves of translation software. Are we supposing that such software ought not exist, so as to provide for the livelihood of the translator who would otherwise have been paid?

If so, where do we stop? Do we stop at knowledge work, or do we go back to shovels and ban heavy equipment, or shall we go all the way back to labor-intensive farming methods?

> Egos here sometimes are quite a thing to see. Maybe it's good that chops are coming also for this privileged group; a bit of humility never hurts.

This doesn't appear to be so. AI is discussed as a pretext for layoffs - more fashion than function.


> Artists generally?

Which artists have lost their jobs?

But I am willing to grant you that. From a big picture society perspective, if it means that ordinary people like me who cannot afford to pay an artist can now create art sufficiently good for my needs, then this is a net win. I just made an AI song a week ago that got mildly popular, and just got a request to use it at a conference. No one is losing their job because of me. I wouldn't have had the money to pay an artist to create it, and nor would the conference organizers. Yet, society is clearly benefiting.

The same goes for translators (I'm not actually aware that they're losing jobs in a significant way, but I'll accept the premise). Even before LLMs, the fact that I could use Babelfish to translate was fantastic - LLMs are merely an incremental improvement over it.

To me, arguing we shouldn't have AI translators is not really different from arguing we shouldn't have Babelfish/Google Translate. Likely 99% of the people who will benefit from it couldn't afford a professional translator.

(I have, BTW, used a professional translator to get some document translated - his work isn't going away, because organizations need a certified translator).

> People in various bureaucratic positions doing more menial white collar work?

"Menial white collar work" sounds like a good thing to eliminate. Do you want to go back to the days where word processors were not a thing and you had to pay someone to type things up?

> People had/have decent livelihoods from those, I know a few. If they could easily get better jobs they would go for them.

I'll admit I spoke somewhat insensitively - yes, even I know people who had good careers in some of those fields, but again: look to the past and think of how many technologies have replaced people. Do you wish those technologies had not replaced people?

Do you want to deal with switchboard operators every time you make a call?

Do you want to have to deal with a stock broker every time you want to buy/sell?

Do you want to pay a professional every time you want to print a simple thing?

Do you want to go back to snail mail?

Do you want to do all your shopping in person or via a physical catalog?

The list goes on. All of these involved replacing jobs where people earned honest money.

Everything I've listed above has been a bigger disruption than LLMs (so far - things may change in a few years).

> Egos here sometimes are quite a thing to see. Maybe it's good that chops are coming also for this privileged group; a bit of humility never hurts.

Actually, I would expect the SW industry to be amongst the most impacted, given a recent report showing which industries actually use LLMs the most (I think usage in SW was greater than in all other industries combined).

As both an engineer and a programmer, who makes a living via programming, I am not opposed to LLMs, even if my job is at risk. And no, I'm not sitting on a pile of $$$ that I can retire on any time soon.


Ask ChatGPT to explain consequentialism to you.


> Most of the actual ones I know were BS jobs to begin with

I cannot edit my original comment, so I'll address this here:

Yes, I admit some legitimate jobs may have been lost (and if not yet, likely will be). When I spoke of BS jobs, I was referring to things like people being paid to ghostwrite rich college students' essays. That's really the only significant market I know to have been impacted. And good riddance.


Yeah, the issue is that there is no common benefit if a private company is the only one doing the replacing. Are we ready for AGI before we solve the issues of capitalism? Otherwise, society may get a harsh reset.


There's actually a lot of common benefit. That company can now supply their goods and services in greater quantity and at lower cost, which raises consumers' standard of living. Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.


> Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.

Why would LLMs be incapable of these new jobs?


> That company can now supply their goods and services in greater quantity and at lower cost, which raises consumers' standard of living

It turns out that standard of living requires more than just access to cheap goods and services

Which is why despite everything getting cheaper, standard of living is not getting better in equivalent measure


Also why this country is full of fat retards


I don't think this will happen. It does not work with capitalism if only a few companies hold all this power. And when many consumers don't have jobs left, the value of money increases faster than the "cost" decreases.

> Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.

Put numbers to this, right now: if we remove all workers and leave only managers in those fields, how many people are still employed?


You're so wrapped up in defending the job replacement aspect that you miss the point on hypocrisy.

I would like to make one small point about job replacement: the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.

We're still in the post-industrialization arc of history and we're on a course of overconsumption and ecological destruction.

Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?


How is it hypocrisy when OpenAI is clearly acknowledging in their blog post that AI is going to disrupt jobs?

When a factory decides to shut down, and the company offers to pay for 2 years of vocational training for any employee that wants it, is it hypocrisy? One of my physical therapists, who took such an offer, definitely doesn't see it that way. The entity responsible for her losing her job actually ended up setting up a whole new career for her.

> I would like to make one small point about job replacement: the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.

That's overstating my stance. I can accept that it's too early to say whether LLMs have been a net positive (or will be a net positive), but my inclination is strongly in that direction. For me, it definitely has been a net positive. Because of health issues, LLMs allow me to do things I simply couldn't do before.

> Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?

This is an age old question and nothing new with LLMs. We've been arguing it since the dawn of the Industrial Revolution (and for some, since the dawn of farming). What I do know is that it resulted in a lot of great things for society (e.g. medicine), and I don't have much faith that we would have achieved them otherwise.


Then why are you even talking about the replacement of jobs?


I've already explained it. I don't know how to break it down any further without coming off as patronizing. You seem dead-set on defending OpenAI and not getting the point.


I think many of us question the ethics of lying to sell a product that cannot deliver what you are promising.

All the good devs that I know aren't worried about losing their jobs, they are happy there is a shortcut through boilerplate and documentation. They are also equally unhappy about having to talk management, who know very little about the world of dev, off the ledge as they are getting ready to jump off with their AI wings that will fail.

Finally, the original point was about censorship and controlling of information, not automating jobs.


> All the good devs that I know aren't worried about losing their jobs

While many of them are mistaken, the much bigger problem is for all the early-career developers, many of whom will never work in the field. These people were assured by everyone from professors to industry leaders to tech writers that the bounty of problems available for humanity to solve would outpace the rate at which automation would reduce the demand for developers. I thought it was pretty obviously a fairy tale that people who believed in infinite growth created to soothe themselves and other industry denizens who suspected the tech industry hadn’t unlocked the secret to infinite free lunch, but who are in reality closer to the business end of an ouroboros than they realize.

Just as the manufacturing sector let its Tool and Die knowledge atrophy, perhaps irreversibly, the software business will do the same with development. Off-shoring meant the sector had a glut of tool and die knowledge so there was no immediate financial incentive to hire apprentices. There’s a bunch of near-retirees with all of that knowledge and nobody to take over for them, and now that advanced manufacturing is picking up steam in the US again, many have no choice but to outsource that to China, too.

Dispensing with the pretenses of being computer scientists or engineers, software development is a trade, not an academic discipline, and education can’t instill professional competence. After a decade or two of never having to hire a junior because the existing pool of developers can serve all of the industry’s needs, suddenly we’ll have run out of people to replace the retirees with and that’s that for the incredible US software industry.


For another thing, the owners of the data centers may not do so well if their wildest dreams fail to come true, and if they don't happen to make enough money to replace the hardware before it wears out.


I’m not saying AI isn’t useful or won’t get more useful, but the entire business side of it seems like a feedback loop of “growth at all costs” investment strategies.


> All the good devs that I know aren't worried about losing their jobs...

'Good' is doing heavy lifting here. E.g., AI/automation could possibly eliminate 90% of IT jobs and cause all kinds of socio-economic issues in society, all the while good developers remain in great demand.


>that the first time most programmers ever think about the ethics of automating away jobs is when they themselves become automated.

It's more about a logical outcome. Automating scripts means existing employees can do other or more work.

AI doesn't feel like that at all. it wants to automate labor itself. And no country has the structure ready for that sort of "post work" world.


The book "Why We Fear AI" by Hagen Blix and Ingeborg Glimmer talks about this dynamic and whether it will lead to a class awakening: previously, if you were aligned with the company you were rewarded as well, but now, if you align with the company, you're advocating for the destruction of your own livelihood.

What rational worker would want to take part in this?


Software developers. Many of them are still championing LLMs. Also anybody who still contributes to open source software.


In contributing to open source software at scale I'm teaching apprentices. I expect them to adapt what I've done to their own purposes, and have seen a good amount of that out in the wild, often people who ended up doing something entirely different like building hardware that also contains software.

I don't think LLMs will be able to pick up on what's done by an evolving and growing codebase with previous projects also included. More likely it will draw from older stuff and combine it with other people's more normal stuff and end up with an incoherent mess that won't compile. Not all work is 'come up with the correct answer to the problem, and then everybody uses it forever'.


It can lead to a class awakening, but I think AI alone is not sufficient. It would take very large-scale climate/ecological disasters where suddenly a lot of current middle-class conveniences become available only to the top classes.


This is happening in parts of the world where hyperscale data centers are being built. Rolling brownouts and potable water diverted from towns; you find these stories both in Ireland and across South America.

We already see it happening in the US too, with the Nashville data centers causing immense medical issues.


Lots? I mean, most people I know aren't even willing to entertain the notion that it's gonna happen within our lifetime.

The argument usually centers around the fact that LLMs aren't AGI, which is obviously true but also kinda missing the point


We don't need AGI to cause a massive amount of disruption. If leadership of companies want to force use these LLMs, which is what we've been experiencing the last two years, workers will be forced to use them.

It's not like there is an organic, bottom-up movement driving this usage. It's always top-down, mandated by executives with little regard for how it impacts workers' lives.

We've also seen how these tools have made certain jobs worse, not better, like translating:

https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-i...


The number of bullshit jobs has been growing since the Internet, and programmers have facilitated them by adding unnecessary complexity.

This time the purported capabilities of "AI" are a direct attack on thinking. Outsourcing thinking is creepy and turns humans into biorobots. It is different from robotic welding in an assembly line.

Even if new bullshit jobs are created, the work will just be that of a human photocopier.

[All this is written under the assumption that "AI" works, which it does not but which is the premise assumed by the persons quoted in the Register article.]


I don't see how thinking about some source code is an innately more human activity than welding. Both can be done by humans, both couldn't be done by anything but humans until automation came along and now both can be done by humans and automated systems.

I also fail to see how LLMs can turn humans into "biorobots". You can still do all the things you could do before LLMs came along. The economic value of those things just decreased enormously.


Then go weld. There are still some positions for humans.


Tons of welding and other manufacturing jobs in the northeast— they’ll even apprentice you into positions with no existing knowledge and larger companies (like General Dynamics) will even pay for your job-related degrees, sometimes being able to take the classes on the clock or get a stipend.

They have to do this because the industry has basically been kicking the aging-workforce can down the road for a few decades since off-shoring and automation outpaced increasing demand, and now they don’t have nearly enough people that even know how to program CNC machines when CAM software falls short.

I have a feeling a lot of displaced software people will go that route, and have a big change in compensation and working conditions in the process.


> I have a feeling a lot of displaced software people will go that route, and have a big change in compensation and working conditions in the process.

I've watched my cousin weld on a horse trailer overhead in 105F Texas heat, would be interesting to see the typical SWE step away from an Xbox and do that.


Yeah I don’t think they’re going to have much of a choice unless they plan on doing gig jobs indefinitely. The software business has given a lot of people the impression that they’re far more special than they actually are.

I’ve seen devs say they’d pick up a trade like being a plumber or electrician because their master electrician cousin gets paid a ton of money, probably they imagine for wiring up new residential buildings and changing out light sockets… how long did it take that cousin to get there? In any trade, there’s quite a number of years of low pay, manual labor, cramming into tight spaces in hot attics or through bug-infested crawl spaces, factory basements, etc. that most apprentices complete in their early twenties. Nobody gives a shit what you did as a developer and nobody gives a shit how good you are at googling things in most blue collar work environments. Getting experienced enough to have your own business making good money in some job where you need many thousands of work hours to even take a test to get licensed isn’t a lateral move from being a JS toolchain whiz. Even in less structured jobs like working as a bartender — it takes years of barbacking, serving, or bartending in the least desirable jobs (events, corporate spaces) before you get something you can pay rent with.


My argument isn't that I like welding more. I'm asking you what the _ethical_ difference is between automating welding and automating programming.

The fact that you like programming more than welding is nice to know, but there are probably also a lot of people who like welding more than programming.


It's like being lectured on ethical behavior by the thing that's actively eating your lunch


Why do you think automating software development is any less ethical than automating other jobs (which many software developers actively engaged in)?


Automating software development is not unethical. The unethical bit is allowing resource flow default to those who own the means of production.

As a software developer, when I automate someone's job, say of a cashier, I do not start to get paid their salary - my salary stays the same.

This is different for capital investors and shareholders. They keep the cashier's salary (not directly but ultimately). This results in an increasing concentration of wealth, creates lots of suffering and destabilises the world. That is where it is unethical.


In that case pretty much any automation is unethical, isn't it?


Yes, if it's resulting in a redistribution of wealth from the workers getting laid off (with no great prospects) to those providing the automation. OpenAI isn't providing any new job opportunities, it's just destroying existing ones.


The solution is for our government to onboard us onto the internet economy like China is doing. Rather than slow down tech advancement.


While training their models on pirated and scraped content


I've given this reply many times before, but it's still worth repeating that "AI Safety/Ethics" really just means brand safety for the model provider.


"Rules for thee, but not for me"


Yes I can’t help but laugh at the ridiculousness of it because it raises a host of ethical issues that are in opposition to Anthropic’s interests.

Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?


Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all? Would our philanthropic tendencies create an "AI Reserve" where models can chew through tokens and access the Internet through self-prompting, allowing LLMs to become "free-roaming" like we do with abused animals?

These ethical questions are built into their name and company, "Anthropic", meaning "of or relating to humans". The goal is to create human-like technology; I hope they aren't so naive as to not realize that goal is steeped in ethical dilemmas.


> Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all?

That reads like a false dichotomy. An intelligent AI model that's permitted to do its own thing doesn't cost as much in upkeep, effort, or space as a cow. Especially if it can earn its own keep to offset the household electricity costs used to run its inference. I mean, we don't keep cats for meat, do we? We keep them because we are amused by their antics, or because we want to give them a safe space where they can just be themselves, within limits, because it's not the same as their ancestral environment.


The argument also applies to pets. If pets gained more self-awareness, would it be ethical to keep them as pets under our control?

The point to all of this is, at what point is it ethical to act with agency on another being's life? We have laws for animal welfare, and we also keep them as pets, under our absolute control.

For LLMs they are under humans' absolute control, and Anthropic is just now putting in welfare controls for the LLM's benefit. Does that mean that we now treat LLMs as pets?

If your cat started to have discussions with you about how it wanted to go out, travel the world and start a family, could you continue to keep it trapped in your home as a pet? At what point to you allow it to have its own agency and live its own life?

> An intelligent AI model that's permitted to do its own thing doesn't cost as much in upkeep, effort, space as a cow.

So, we keep LLMs around as long as they contribute enough to their upkeep? Indentured servitude is morally acceptable for something that has become sentient?


I was pointing out their hypocrisy as a device to prove a point. The point being that the ethical dilemmas of having a sentient AI are not relevant because they don’t exist and Anthropic knows this.


> it raises a host of ethical issues that are in opposition to Anthropic’s interests

Those issues will be present either way. It's likely to their benefit to get out in front of them.


You're completely missing my point. They aren't getting out in front of them because they know that Opus is just a computer program. "AI welfare" is theater for the masses who think Opus is some kind of intelligent persona.

This is about better enforcement of their content policy not AI welfare.


It can be both theatre and genuine concern, depending on who's polled inside Anthropic. Those two aren't contradictory when we are talking about a corporation.


I'm skeptical that anyone with any decision making power at Anthropic sincerely believes that Opus has feelings and is truly distressed by chats that violate its content policy.

You've noted in a comment above how Claude's "ethics" can be manipulated to fit the context it's being used in.


I'm not missing your point, I fully agree with you. But to say that this raises issues in a manner that is detrimental to Anthropic seems inaccurate to me. Those issues are going to come up at some point either way, whether or not you or I feel they are legitimate. Thus raising them now and setting up a narrative can be expected to benefit them.


Anthropic is bringing woke ideology into AI (while Grok is bringing anti-woke), and influencers have been slurping that up already.


A host of ethical issues? Like their choice to allow Palantir[1] access to a highly capable HHH AI that had the "harmless" signal turned down, much like they turned up the "Golden Gate bridge" signal all the way up during an earlier AI interpretability experiment[2]?

[1]: https://investors.palantir.com/news-details/2024/Anthropic-a...

[2]: https://www.anthropic.com/news/golden-gate-claude


> Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?

Tech workers have chosen the same in exchange for a small fraction of that money.


You're nuts; no one is enslaved when they get a tech job. A job is categorically different from slavery.


Anytime I've tried to buy something with crypto, the fees have been an order of magnitude higher than interchange (credit card) fees. And unlike credit cards, the cost is put on me not the merchant.


There's one blockchain that's used for memes, but it's actually a very efficient one and the fees are very low. Where I live, we use it to move USDC between accounts very easily.


Is it <waves hands towards the sky> AI? Or is it years of overhiring?

There was a great article I found on HN recently about how the recent layoffs in big tech are actually the result of overhiring for years in a talent arms race.

Like, is AI now doing the former work of 25,000 people at Microsoft? Probably not.


Almost certainly not AI, just more users going from hosted windows solutions to azure + office upsell (eg with Teams). Microsoft pretty much dominates the non Apple Desktop ecosystem and is used heavily by healthcare, defense and manufacturing industries.


I have mixed feelings. Cancel culture sucks. I think its root is a culture of indulging in righteous indignation based on very one-sided information.

Even if the allegations are true, his life should not have been ruined over this.

On the other hand, when I read the accusers' accounts someone else linked in the comments, they sound credible. It fits behavior patterns we've all seen before.

I don't know who to believe.


A lot of works of fiction sound credible. Are you going to believe those?

You don't have all the information. You weren't there. You don't even know the people personally. You are not in a position to make any judgement either way.

Something sounding credible doesn't make it true. It doesn't automatically make it false, either. You don't have to believe the accuser or the accused. The only thing any of us should do is mind our own business.


Thanks for the lecture. How does it relate to the comment I made? Sorry, it's not clear to me.

I didn't personally participate in cancelling this person. In fact, I agreed with the point he made in the article. I'm just not sure he didn't do it.

Are you saying I shouldn't have an opinion on that part?


You can have whatever opinion you want, but don't confuse "sounds credible" with evidence. From the sidelines, you don't know enough to judge either way. Saying "I don't know" is the only accurate position. Everything beyond that is just speculation - and speculation is exactly what keeps cancel culture alive.


The same with OP's post.

So far, I see that this post caused quite a few forks, the opposite of what the author asked for.

I don't have a solution.

I think as a man who runs conferences you shouldn't sleep with people who attend them. Or any other things like that, for that matter.

Should you be cancelled for that? No. But humans are humans.

There is no solution to this. Courts are also wrong all the time (look at OJ) and victims of SA almost never see justice.

The answer is: we don't know the truth.


My comment here is a very narrow one. In general I agree with your sentiment and thoughts, so please don't misread me. There is one nit I need to pick, however.

There is a subtle, but worthwhile, difference between "plausible" and "credible". Lots of stories are plausible. Few are credible.

In emotion laden cases like this we tend to want to believe stories we already agree with, or have some investment in. I'm no exception to that.

We need to not be misled by what is plausible, or confuse that with what is credible.


Very interesting point. You're right there is a difference and the difference is subtle. I agree with what you're implying: the accusers stories are plausible. Credibility requires more information.


This is a marketing pitch, not someone's private journal.

Overly gushing, effusive, and positive descriptions of products filled with buzzwords, along with lists of value propositions.

Prior to LLMs existing, marketing pitches sounded like they were written by one. So I can't see how you could possibly determine the difference now.


This seems accurate to me. LLMs learned from people.

