bubbleRefuge's comments

Do they let anyone pick for free?

Plenty of orange trees in public places, plenty of abandoned groves around and a handful of eco-fincas where you can pick for next to nothing.

Got some addresses? I know Ecovinyassa, but they don’t let you pick.

It's a thing. Living in hurricane alley, I see it all the time: lines at gas stations, grocery stores, hardware stores. My strategy is to wait till the 11th hour, after the supply trucks have restocked everything, and shop in peace. Stores are open, quiet, sparsely trafficked, and usually stocked up at this point. Works every time.


Absolutely false. The worst case is the dollar going down. Interest rates are exogenous and controlled by the Fed, which can buy all the Treasuries in the world at a moment's notice. The Treasury securities held by China are their problem, not the US's.


There is no proof that higher interest rates lead to greater unemployment. In fact, macro employment kind of boomed during the referenced period. I'd posit that higher rates actually boosted macro employment stats. Why? Because higher rates = higher income to rich people via the interest income channel = higher federal budget deficits (the government is a net payer of interest) = higher GDP = lower unemployment, ceteris paribus.


This is completely backwards. When interest rates are high, the expected returns of equity investments have to be even higher to justify the risk over risk-free fixed income assets.

And that's only the indirect effect on equity funding; debt funding just directly becomes more expensive.


A tech mentor once told me that what makes a developer great is not how good or talented he is, but how good he makes those around him.


I have a girlfriend in Goiás, Brazil. She told me that she was mugged 7 times for her cell phone years ago. Then a new governor came along and began a policy of allowing police to execute dangerous criminal gang members rather than arrest them. Now the area is considered one of the safest regions. He's rumored to have presidential ambitions. Guess there is a breaking point where crime and corruption get bad enough that people have just had enough and only want results, justice be damned.


Any sources for the shoot-to-kill policies?


Loads. This wasn't a secret policy:

> “The police will do the right thing: Aim at their little heads and fire! So there is no mistake”

https://www.nytimes.com/2019/05/26/world/americas/brazil-rio...


I think the marginal cost of developing complex software goes down, thereby making it affordable to a greater market. There will still be a need for skilled software engineers to understand domains, the limitations of AI, and how to harness and curate AI to develop custom apps. Maybe software engineering for the masses. Local small businesses can now maybe afford to take on custom software projects that were previously unthinkable.


> There will still be a need for skilled software engineers to understand domains, the limitations of AI, and how to harness and curate AI to develop custom apps.

But will there be a need for fewer engineers? That's the question. And the competition among those who remain employed would be fierce, way worse than today.

Or so I fear. I hope I'm wrong.


I think it might be useful to look at this as multiple forces at play.

One force is a multiplier of a software engineer’s productivity.

Another force is the pressure of the expectation of constant, unlimited increase in profits. This pressure forces CEOs and managers to look for cheaper alternatives to expensive software engineers, ultimately to eliminate the position and the expense. The lie that this is a possibility draws huge investments.

And another force is the infinite number of applications of software, especially well-designed, truly useful software.


Yes, these are good considerations.

I'd be a hypocrite if I didn't admit I use AI daily in my job, and it's indeed a multiplier of my productivity. The tech is really cool and getting better.

I also understand AI brings the everyday Jane or Joe Doe one step closer to doing cool and useful stuff that was out of reach before.

What worries me are the capitalist, business-side forces at play, and what they will mean for my job security. Is it selfish? You bet! But if I don't advocate for myself, who will?


Jevons' paradox says that you're probably wrong. But I'm worried about the same thing. The moat around human superiority is shrinking fast. And when it's gone, we may get more software, but will we need humans involved?


AI doesn't have needs or desires; humans do. And no matter how hyped one might be about AI, we're far away from creating an artificial human. As long as that's true, AI is a tool to make humans more effective.


AI may not have desires, but corporations do. And they control more resources than humans do.

Making corporations more effective is not always in the interest of humans.


That's fair, but the question was whether AI would destroy or create jobs.

You might speculate about a one-person megacorp where everything is done by AIs that a single person runs.

What I'm saying is that we're very far from this, because the AI is not a human that can make the CEO's needs and desires their own and execute on them independently.

Humans are good at being humans because they've learned to play a complex game, which is to pursue one's needs and desires in a partially adversarial social environment.

This is not at all what AI today is being trained for.

Maybe a different way to look at it, as a sort of intuition pump: if you were that one-man company, and you had an AGI that would correctly answer any unambiguously stated question you could ask, at what point would you need to start hiring?


You're taking this to an extreme, because I don't think anyone is talking about replacing all engineers with a single AI computer doing the work for a one-person mega-corporation.

The actual question, which is much more realistic, is whether an average company of, let's say, 50 engineers will still need to hire those 50 engineers if AI turns out to be such an efficiency multiplier.

In that case, you will no longer need 10 people to complete 10 tasks in a given time unit, but perhaps only 1 engineer + AI compute to do the same. Not all businesses can continue scaling forever, so it's pretty much expected that those 9 engineers will become redundant.


You took me too literally there; that was intended as a thought experiment to explore the limits.

What I was getting at was the question: If we feel intuitively that this extreme isn't realistic, what exactly do we think is missing?

My argument is, what's missing is the human ability to play the game of being human, pursuing goals in an adversarial social context.

To your point more specifically: Yes, that 10-person team might be replaceable by a single person.

More likely than not however, the size of the team was not constrained by lack of ideas or ambition, but by capital and organizational effectiveness.

This is how it has played out with every single technology so far that has increased human productivity: they increase the demand for labor.

Put another way: Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them. The kind of team required for the digital transformation of every old fashioned industry.


In my 10-person team example, what in your opinion would the company do with the remaining 9 people once the AI proves its value in that team?

Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just shift to some other topic?

Let's say I am a business owner: I have a popular product with a backlog of 1000 bugs and a team of 10 engineers. The engineers are busy juggling new features and bug fixes at the same time. Now let's assume we have an AI model that will relieve 9 out of 10 engineers from working down the bug backlog, and we will need 1 or 2 engineers reviewing the code that the AI model spits out for us.

What concrete type of work is left at this moment for the remaining 9 engineers?

Assuming that the team, as you say, is not constrained by a lack of ideas or ambition, and the feature backlog is somewhat indefinite in that regard, I think the real question is whether there's a market for those ideas. If there's no market for those ideas, then there's no business value ($$$) created by those engineers.

In that case, they become a plain cost, so what is the business incentive to keep them?

> Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them

Not sure I follow this example. Companies will still hire engineers, but IMO at much lower capacity than what was required up until now. Your N SQL experts are now replaced by the model. Your M Python developers are now replaced by the model. Your engineer doing PR review is now replaced by the model. Heck, even your SIMD expert now seems to be replaced by the model too (https://github.com/ggerganov/llama.cpp/pull/11453/files). Those companies will no longer need M + N + ... engineers to create the business value.


> Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just shift to some other topic?

Yes, that's what I'm saying, except that this would hold over the economy as a whole rather than within every single business.

Some teams may shrink. Across industry as a whole, that is unlikely to happen.

The reason I'm confident about this is that this exact discussion has happened many times before in many different industries, but the demand for labor across the economy as a whole has only grown. (1)

"This time it's different" because the productivity tech in question is AI? That gets us back to my original point about people confusing AI with an artificial human. We don't have artificial humans, we have tools to make real humans more effective.

(1) The point seems related to this https://en.wikipedia.org/wiki/Lump_of_labour_fallacy


Hypothetically you could be right; I don't know if "this time will be different", nor am I trying to predict what will happen on the global economic scale. That's out of my reach.

My question is of a much narrower scope and much more concrete and tangible - and yet I haven't been able to find any good answer for it, or strong counter-arguments if you will. If I had to guess, my prediction would be that many engineers will need to readjust their skills or even requalify for some other type of work.


Automation improved life in the Industrial Revolution because it displaced people from spinning and weaving into higher value add professions.

What higher value add professions will humans be displaced into by AI?


It should be obvious that technology exists for the sake of humans, not the other way around, but I have already seen an argument for firing humans in favour of LLMs since the latter emit less pollution.

LLMs do not have desires, but their existence alters desires of humans, including the ones in charge of businesses.


> AI doesn't have needs or desires; humans do.

I fear that this won't age well. But to shamelessly riff on Marx, those who control the means of computation will control society.


I agree the latter part is a risk to consider, but I really think getting an AI to replace human jobs on a vast scale will take much more than just training a bit more.

You need to train on a fundamentally different task, which is to be good at the adversarial game of pursuing one's needs and desires in a social environment.

And that doesn't yet take into account that the interface to our lives is largely physical; we need bodies.

I'm seeing us on track to AGI in the sense of building a universal question answering machine, a system that will be able to answer any unambiguously stated question if given enough time and energy.

Stating questions unambiguously gets pretty difficult fast even where it's possible; often it isn't possible at all, and getting those answers is just a small part of being a successful human.

PS: Needs and desires are totally orthogonal to AI/AGI. Every animal has them, but many animals don't have high intelligence. Needs and desires are a consequence of our evolutionary history, not our intelligence. AGI does not need to mean an artificial human. Whether to pursue or not pursue that research program is up to us, it's not inevitable.


To be clear, I'm not arguing humans will stop being involved in software engineering completely. What I fear is that the pool of employable humans (as code reviewers, prompt engineers and high-level "solution architects") will shrink, because fewer will be needed, and that this will cause ripples in our industry and affect employment.

We know this isn't far-fetched. We have strong evidence to suspect that, during the big layoffs of a couple of years ago, FAANG and startups all colluded to lower engineer salaries across the board, and that their excuse ("the economy is shrinking") was flimsy at best. Now AI presents them with another powerful tool to reduce salaries even more, with a side dish of reducing the size of the cost center that is programmers and engineers.


Honestly, I wasn't even talking about jobs with that. I worry about an intelligent IoT controlled by authoritarian governments or corporate interests. Our phones have already turned society into a panopticon, and that can get much worse when AGI lands.

But yes, the job thing is concerning as well. AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today. It seems that we're heading inexorably towards dystopia.


> AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today

That's the part I really don't believe. I'm open to being wrong about this; the risk is probably large enough to warrant considering it even if the probability of it happening is low, but I do think it's quite low.

We don't actually have to build artificial humans. It's very difficult and very far away. It's a research program that is related to but not identical to the research program leading to tools that have intelligence as a feature.

We should be, and in fact we are, building tools. I'm convinced that the mental model many people here and elsewhere are applying is essentially "AGI = artificial human", simply because the human is the only kind of thing in the world that we know that appears to have general intelligence.

But that mental model is flawed. We'll be putting intelligence in all sorts of places that are not similar to a human at all, without those devices competing with us at being human.


To be clear, I'm much more concerned about the rise of techno-authoritarianism than about employment.

And further ahead, where I said your original take might not age well: I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.

And nobody needs to set out to build that. We just need to build tools. And then, one day, an AGI writes a virus and hacks the all-too-networked and all-too-insecure planet.


> I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.

I know scifi is not authoritative, and no more than human fears made into fiction, but have you read Philip K. Dick's short story "Autofac"?

It's exactly what you describe. The AI he describes isn't evil, nor does it seek our extinction. It actually wants our well-being! It's just that it has taken over all of the planet's resources and insists on producing and making everything for us, so that humans have nothing left to do. And they cannot break the cycle, because the AI is programmed to only transition power back to humans "when they can replicate Autofac output", which of course they cannot, because all the raw resources are hoarded by the AI, which is vastly more efficient!


I think that science fiction plays an important role in discourse. Science fiction authors dedicate years deeply contemplating potential future consequences of technology, and packaging such into compelling stories. This gives us a shorthand for talking about positive outcomes we want to see, and negative outcomes that we want to avoid. People who argue against scifi with a dismissal that "it's just fiction" aren't participating in good faith.

On the other hand, it's important not to pay too close attention to the details of scifi. I find myself writing a novel, and I'm definitely making decisions in support of a narrative arc. Having written the comment above... that planetary factory may very well become the third faction I need for a proper space opera. I'll have to avoid that PKD story for the moment; I don't want the influence.

Though to be clear, in this case, that potentiality arose from an examination of technological progress already underway. For example, I'd be very surprised if people aren't already training LLMs on troves of viruses, metasploit, etc. today.


Yes, okay.

I think we're talking about different time scales. I'm talking about the next few decades, maybe two or three, essentially the future of our generation specifically. I don't think what you're describing is relevant on that time scale, and possibly you don't either.

I'd add though that I feel like your dystopian scenario probably reduces to a Marxist dystopia where a big monopolist controls everything.

In other words, I'm not sure whether that Earth-spanning autonomous system really needs to be an AI or requires the development of AI or fancy new technology in general.

In practice, monopolies like that have failed to emerge, thanks to competition and regulation, and there isn't a good reason to assume it would be different with AI.

In other words, the enemies of that autonomous system would have very fancy tech available to fight it, too.


I'm not fussy about who's in control. Be it global or national; corporate or governmental; communist or fascist. But technology progresses more or less uniformly across the globe and systems are increasingly interconnected. An AGI, or even a poor simulacrum cobbled together from LLMs with internet access, can eventually hack anything that isn't airgapped. Even if it doesn't have "thoughts" or "wants" or "needs" in some philosophical sense, the result can still be an all-consuming paperclip maximizer (but GPUs, not paperclips). And every software tool and every networked automated system we make can be used by such a "mind."

And while I want to agree that we won't see this happen in the next 3 decades, networked automated cars have already been deployed on the streets of several cities, and people are eagerly integrating LLMs into seemingly any project that needs funding.


It's tempting to speculate about what might happen in the very long run. And different from the jobs question, I don't really have strong opinions on this.

But it seems to me like you might not be sufficiently taking into account that this is an adversarial game; i.e. it's not sufficient for something just to replicate, it needs to also out-compete everything else decisively.

It's not clear at all to me why an AI controlled by humans, to the benefit of humans, would be at a disadvantage to an AI working against our benefit.


Agreed on all but one detail. Not to put too fine a point on it, but I do believe that the more emergent concern is AI controlled by a small number of humans, working against the benefit of the rest of humanity.


In the AI age, those who own the problems stand to own the AI benefits. Utility is in the application layer, not the hosting or development of AI models.


Can't be sure though; there used to be way more accountants decades ago.


This is a better world. We can work a few hours a week and play tennis, golf, and argue politics with our friends and family over some good cheese and wine while the bots do the deployments.


We're already there in terms of productivity. The problem is the inordinate number of people doing nothing useful yet extracting huge amounts. Think most of finance for example.


Oh, yeah. Finance is too big. They've captured the government.


Assuming you retain a good-paying job and are not treated like a disposable commodity. That cheese and wine is not going to be free.


As long as we keep learning and keep our heads in the game, we will be fine. I worry much more for the non-techno-savvy, like scrum masters. Yikes.


If it's any consolation: if the extra productivity does happen and kills the number of SWE jobs, I don't see why this dynamic shouldn't happen in almost all white-collar jobs across the private sector (government sectors are pretty much protected no matter what happens). There'll be decreasing demand for lawyers, accountants, analysts, secretaries, HR personnel, designers, marketers, etc. Even doctors might start feeling this eventually.


No, I think more engineers, especially those who can be jacks-of-all-trades. If a software project that normally takes 1 year of custom development can be done in 2 months, then that project is affordable to a wide array of businesses that could never fund that kind of project before.


I can see more projects being deployed by smaller businesses that would otherwise not be able to.

But how will this translate to engineering jobs? Maybe there will be AI tools to automate most of the stuff a small business needs done. "Ah," you may say, "I will build those tools!". Ok. Maybe. How many engineers do you need for that? Will the current engineering job market shrink or expand, and how many non-trash, well paid jobs will there be?

I'm not saying I know for sure how it'll go, but I'm concerned.


Just had a thought: perhaps software engineers will become more like car mechanics.


That's not an encouraging thought.

By the way, car mechanics (especially independent ones, your average garage mechanic) understand less and less about what's going on inside modern cars. I don't want this to happen to us.


It would be similar to solution engineers today: you build solutions using AI. Think about all the moving parts of building a complex business app: user experience, data storage, business logic, reporting, etc. The engineer can orchestrate the AI to build the solution and validate its correctness.


I fear even this role will need way fewer people, meaning the employment pool will heavily shrink, and those competing for a job will need to accept lower paychecks.


Like someone said above, demand is infinite. Imagine a world where the local AI/engineer tech is as ubiquitous as the Uber driver. I don't think it will necessarily create smaller paychecks; hard to say. But I see demand skyrocketing for customized software that can be provided at 1/10 of today's costs.

We are far away from that, though. As an enterprise software/data engineer, AI has been great at answering questions and generating tactical code for me. Hours have turned into minutes. It even motivated me to work on side projects because they take less time. You will be fine. Embrace the change. It's good for you. It will lead to personal growth.


I'm not at all convinced demand is infinite, nor that this demand will result in employment. This feels like begging the question. This is precisely what I fear won't happen!

Also, I don't want to be a glorified uber driver. It's not good for me and not good for the profession.

> As an enterprise software/data engineer, AI has been great at answering questions and generating tactical code for me. Hours have turned into minutes.

I don't dispute this part, and it's been this way for me too. I'm talking about the future of our profession, and our job security.

> You will be fine. Embrace the change. It's good for you. It will lead to personal growth.

We're talking at cross-purposes here. I'm concerned about job security, not personal growth. This isn't about change. I've been almost three decades in this profession, I've seen change. I'm worried about this particular thing.


3 decades, me too, since '97. Maybe Uber driver was a bad example. What about a work model similar to a lawyer's, whereby one can specialize in creating certain types of business or personal apps at a high hourly rate?


Did the introduction of assemblers lead to creating more or fewer programming jobs?


I get this argument, but it feels like we cannot always reason by analogy. Some jumps are qualitatively different. We cannot always claim "this didn't happen before, therefore it won't happen now".

Of course assemblers didn't create fewer programming jobs, nor did compilers or high level languages. However, with "NO CODE" solutions (remember that fad?) there was an attempt at reducing the need for programmers (though not completely taking them out of the equation)... it's just that NO CODE wasn't good enough. What if AI is good enough?


In what AI-powered world do you think that local small software businesses will survive?


One where other businesses need help figuring out how to use AI for their own businesses.

It doesn't matter how "easy" technology gets to use; there will always be a market for helping other people figure out how best to apply it.


A bigger concern to me is if the current regime pulls the rug out from under fiscal policy. That will certainly crash the market. They seem to be doing this as we speak.


Yeah, good points. Mandelbrot wrote about this as well: that portfolio theory is based on "normal" curve returns, but crashes occur with much higher probability than a normal distribution would assume. He made a case for fractals.

Managing options is hard. When do you close the hedge after it gives you a profit? When do you put it back on?


> crashes occur with much higher probability than a normal distribution would assume.

This is probably the most intuitive way to reason about it, but there's a subtlety there that relates to the magnitude of the crashes and their probability of happening. Intuitively, if the normal distribution were correct in estimating market behavior, you'd never expect to see 10-sigma events (we are looking at a probability of something like 1/10^20). And yet, these events happen with some regularity when you try to use normal distributions for market returns.

So there are two aspects to this: one is that the normal distribution (and the people using it) underestimates how often crashes can happen, and it also underestimates how big those crashes can be. It's the latter that is arguably more dangerous: if you lose money more frequently than expected, you might end up earning less, but if you underestimate how bad a drawdown can be, you can easily be wiped out before you can react.

So why do we use the normal distribution so often in finance? Because it is convenient. It works fine for 99.99% of cases, and it is easier to deal with the tails as a special beast rather than always having a complicated model to look at. There's also an element of lottery: significant crashes happen roughly once every 10 years, so you don't need a great deal of luck to make a lot of money without seeing one :D
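
To put a number on it, here's a quick Python sanity check (just an illustration of the point above, nothing more) of how rare a 10-sigma move would be if returns really were Gaussian:

    from scipy.stats import norm

    # One-sided tail probability of a 10-sigma event under a Gaussian.
    p = norm.sf(10)
    print(f"P(move > 10 sigma) = {p:.1e}")      # on the order of 1e-23
    # At 252 trading days a year, the expected wait between such events:
    print(f"roughly {1 / p / 252:.1e} years")   # vastly older than the universe

Real markets produce moves of that size within a human lifetime, which is exactly the mismatch between the model and reality.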

> Managing options is hard. When do you close the hedge after it gives you a profit? When do you put it back on?

It is definitely quite hard. Options are nonlinear instruments with significant complexity to their behavior. This is why people pay Universa and other, lesser-known tail risk funds to handle this complexity. You also have instruments like variance swaps, which allow one to easily lock in convexity PnL, considerably removing the timing aspect. You can construct these instruments synthetically using options, but this is not for the faint of heart.

It is doable on your own, but not without understanding options in depth and having tools that allow you to manage an option portfolio. These days, a person who is decent at freshman math and knows how to use python+pandas can manage this entirely on their own with a handful of scripts, but they'll likely need to spend a few months learning the theory, and then it can take a year to build the necessary intuition.
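
For a flavor of what that python+pandas workflow looks like, here is a toy sketch of a variance swap payoff; every number below is a made-up assumption for illustration, not a trading recipe:

    import numpy as np
    import pandas as pd

    prices = pd.Series([100, 101, 99, 102, 98, 103])  # hypothetical daily closes
    log_ret = np.log(prices / prices.shift(1)).dropna()

    # Annualized realized variance from daily log returns (252 trading days).
    realized_var = (log_ret ** 2).mean() * 252

    strike_vol = 0.20                # assumed strike, quoted in vol terms
    vega_notional = 100_000          # assumed dollars per vol point
    var_notional = vega_notional / (2 * strike_vol)

    # Variance swap PnL: linear in variance, hence convex in volatility.
    pnl = var_notional * (realized_var - strike_vol ** 2)
    print(f"realized vol {np.sqrt(realized_var):.1%}, PnL {pnl:,.0f}")

The appeal is that the payoff is a pure function of realized vs. strike variance, which is what removes the hedge-timing question for the quoted leg.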


Yeah, 100%. Picking up pennies in front of the train. Thanks for the bits. So I should search on "Variance Swaps"? I do a great deal of options trading and play around with some ATS software for options trading that I've developed using the IB API. Lots of factors and complexity, as you mentioned; factors like liquidity and IV spikes are dimensions to consider. Thinking of trading high-probability 0DTE spreads with portfolio hedging in place.


In large-scale business integration platforms/apps, you have operational systems like SAP and Oracle Service Cloud generating/streaming raw or business events, which are published to message brokers in topics (orders, incidents, suppliers, logistics, etc.). There the data is validated and transformed (filtered, routed, formatted, enriched, aggregated, etc.) into other downstream topics, which can be used to egress to other apps or enterprise data stores/data lakes. Data governance apps control who has access. Elasticsearch or Splunk for data lineage and debugging. You also have observability systems sandwiched in there as well.
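
A minimal sketch of one hop in such a pipeline, using kafka-python; the broker address, topic names, and event fields are all assumptions for illustration:

    import json
    from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

    consumer = KafkaConsumer(
        "orders.raw",                                # upstream topic (assumed)
        bootstrap_servers="broker:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    for msg in consumer:
        event = msg.value
        if "order_id" not in event:                  # validate: drop malformed events
            continue
        event["region"] = "EMEA"                     # enrich, e.g. with reference data
        producer.send("orders.enriched", event)      # downstream topic (assumed)

Each validate/transform stage in the chain looks roughly like this: consume from one topic, apply the rule, publish to the next.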

