Hacker News: 40acres's comments

Someone framed the debt-to-GDP ratio like this: if your debt is 150% of your GDP, it would take a year and a half to pay off your debt if all the national income (GDP) were allocated to payments.

Now, obviously this isn't a realistic policy - but if we can have "war-time" economies, as proven in the past, it stands to reason that we could have an economy solely focused on the repayment of debt (idk, high taxes for a decade?). Am I tripping?
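As a back-of-the-envelope sketch of that framing (ignoring interest, GDP growth, and every real-world constraint; the 5% figure is just an illustration):

```python
# Toy model of the parent's framing: years to repay = (debt / GDP) divided
# by the share of GDP actually devoted to repayment. Ignores interest and
# growth entirely; purely illustrative.

def years_to_repay(debt_to_gdp: float, share_of_gdp: float = 1.0) -> float:
    """Years needed if `share_of_gdp` of national income goes to the debt."""
    return debt_to_gdp / share_of_gdp

print(years_to_repay(1.5))        # 1.5 years if literally all of GDP is used
print(years_to_repay(1.5, 0.05))  # about 30 years at a more plausible 5%
```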


Maybe a better comparison is to a post war economy? The US paid off the WW2 debt in about 17 years through ordinary taxation, but US taxes today are much lower than they were after WW2, at least for the very rich.


If one looks at the national debt clock, I think we each owe just $300k.

In these terms it seems quite manageable. My guess is that if it was an actual problem we would solve it. My guess is that it is mostly a talking point.


You're tripping if you think everyone is going to give all their output for a year to pay for debt without a revolution.

Almost half of households have less than $1000 in savings. How are they making it through the whole year without income?

How are all the old people who live off social security making it without 5% of GDP set aside to finance their lives?


What does that have to do with the debt?


I’m not sure about this one - seems like it significantly reduces the value of data, which would have a downstream impact on the entire industry. I guess that’s what you intended, but my mind quickly went to a rabbit hole with the second order effects of this one.


Arguably, the value of data in its modern version is much higher than mere “goodwill” including customer relationships. The data travels easily without customer consent, while a traditional customer relationship can be canceled by the customer at any time.

So the value of data is a modern day business windfall, that can be argued as unfair. Or fair. Depending on your viewpoint.


The market is definitely there for enterprise LLMs. Everyone is using GPT for work. I use it to provide stubs for memos and to brainstorm - but the real value comes from replacing internal "tribal knowledge" with an AI that knows your org inside and out.


"Everyone" is exaggeration at best.


It kind of boggles my mind that there are people who aren't using LLMs yet.

Sure, it's not everyone, but the people who aren't using them are signaling a major red flag IMO.

They are resistant to change, even if they don't understand the technology; what else are they resisting from their managers/leadership team? Further, I think of the people in my life who have refused to even try it, and they all seem to have a screw or two loose, even if they are successful and making $200k/yr.

All IMO of course, but in tech, I imagine something needs to be 'off' to never try it.

EDIT: Seems I'm getting criticism from people who are using it for inappropriate use cases. I don't use a screwdriver to hammer nails.


They're wildly inappropriate for most things. For similar anecdata, see blockchain fever where everybody shoehorned it in wherever they could, even when a traditional database made more sense.

Consistent conditional logic makes more sense than a risk-laden hallucinating LLM for a lot of workflows.

"Everyone" doesn't need to hammer nails because there's more than just one career and industry. The acceptable quality of the job output varies drastically too.

"It kind of boggles my mind" that people can't see beyond their own life.


I think they mean using it as a tool at work, not using it in production or as a feature in their application.


That's how I interpreted it in the first place.


I don't think you've used an LLM then.


I can't say I was impressed with ChatGPT's help when I tried it. I figured quizzing it on reading comprehension would be a great task, given that it is a language-based model and that skill is seemingly in short supply amongst my coworkers and myself. After confirming that the specifications of a standard I am implementing were within its knowledge, I tried to have it explain the difference between two parts, and it failed so miserably that its understanding of the content was below even my managers', for whom this is only something they occasionally review. Any attempt to correct it only resulted in an apology and new misunderstandings.

Outside of work, I tried using it to find an old movie, probably from the '60s, about a man refusing to shave his long beard, featuring a scene of him being chased around his home half-shaven, but it merely made up beard-shaving scenes for several other movies. Admittedly, I have not tried uploading any of my company's code to give it a less memory-based task.


I think reading comprehension is a notable weakness - asking it detailed questions about a long text comes up with lots of hallucinations in my experience.

But it's definitely good at some other things, notably writing boilerplate texts of various sorts and giving instructions on how to do certain things.

It seems to mostly synthesize common knowledge rather than learning anything. But that can be very useful, a lot of people's job involves doing things like that today.


It sounds like you were testing memorization not reading comprehension.

To test reading comprehension, the source should be in the prompt, not the training set.
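A minimal illustration of the distinction (the passage, question, and helper function here are all made up for this sketch):

```python
# Reading comprehension vs. memorization: include the passage in the prompt
# itself, so the model is graded on the text in front of it rather than on
# whatever it memorized during training. The passage below is invented.

def build_comprehension_prompt(passage: str, question: str) -> str:
    return (
        "Answer using ONLY the passage below. "
        "If the passage does not say, reply 'not stated'.\n\n"
        f"Passage:\n{passage}\n\n"
        f"Question: {question}"
    )

prompt = build_comprehension_prompt(
    passage="Section 4.2 applies to wired links; section 4.3 covers wireless links.",
    question="Which section covers wireless links?",
)
print(prompt)
```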


yeah, same boat here.

It's great for generating sample code snippets or refactoring code, but I can't paste my company's intellectual property into it.

If I could train a customized version of it on all my company's Slack messages, Jira tickets, emails, etc., it'd be insanely useful... but I don't think any big company would actually want that, since it wouldn't be able to keep secrets from anyone with access to it.


> It kind of boggles my mind that there are people who arent using LLMs yet.

Maybe it is easier to go through actual verified information than to double-check everything an AI says.

I only use LLMs to restate information that I can half piece together so I can remember the missing bits (like a math proof or derivation), or to point me to recommendations of actual resources. And even with those two things I am very wary.


>maybe it is easier to go through actual verified information than to double check everything an AI says.

Wrong tool for the job. You don't ask it information questions. Ask it brainstorming questions.


I think people have different jobs and different skill levels. For some it gives them a boost; for others it slows them down. Translating exactly what you want into English is a different way of producing something for many. Some people are really smart and don't need a calculator. No shame in using a calculator that mostly works.


So employees share their knowledge to get replaced by an AI?


One could argue that sharing knowledge was not exactly in the employee's interest even before AI.

(It makes it easier for some junior to replace you some day.)


The difference is the speed and the amount of knowledge transfer.

And the junior could leave the company, an AI won't.


Also, at least for now, "AI" will never forget.


>The market is definitely there for enterprise LLMs.

Also in the news: https://www.cnbc.com/2023/05/02/chegg-drops-more-than-40perc...

>but the real value comes from replace internal “tribal knowledge” with an AI who knows your org in and out

I bet Microsoft is already working on that.


I've seen fine-tuning attempts on internal documents; it's terrible (e.g. mixing up stuff between locations/teams, making shit up).

They are now trying to build a search index and feeding it in-context results.

Honestly not seeing much value over a search index, but hey if it makes the internal data easily searchable under the banner of AI hype it's a win.


We already have something like that developed in our company (a ~30-person, employee-owned wealth management firm). It's.... interesting.

We currently use GPT-4 combined with an internal knowledge base we've had since the beginning, and we could practically fire our chief of staff and the admin team. Just kidding, but it's made her team's work so much easier that she can devote more time to the nitty-gritty hard stuff.

The interesting part is that I added a bit of a personality touch as part of its context, so the AI's character is quite.... villainous.

Enterprise self-hosted ChatGPT is going to be huge.


I am about to release a self-hosted GPT that works with OpenAI and Azure OpenAI. It has several enterprise features, mostly around authentication/authorisation. I'll let you know when it drops, or you can be a beta tester if you like!


Noted. Just a few questions: what do you mean by "works with OpenAI"? I thought those were closed systems, so is the system basically pretrained and the weights saved? I'm pretty sure that even if it were possible, it would still be misuse per their terms and conditions?

Currently we use an initial semantic search for context injection, which is then passed to GPT for completions. If any LLaMA company were to make that second pretrained bit self-hostable for some license fee, I know a bunch of finance companies of all sizes that would readily pounce on that tool. But I'm fairly certain that's not what OpenAI wants to do.
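For anyone unfamiliar, that "semantic search, then completion" flow can be sketched roughly like this (a toy word-overlap scorer stands in for the real embedding search, the documents are invented, and the final GPT call is omitted):

```python
# Minimal retrieval-then-inject sketch: score documents against the query,
# put the top hits into the prompt as context, then (in a real system) send
# the prompt to the completion model. Purely illustrative.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved context ahead of the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Expense reports are filed through the finance portal.",
    "VPN access requires a ticket to the IT helpdesk.",
    "The cafeteria is open 8am to 3pm.",
]
prompt = build_prompt("How do I get VPN access", docs)
print(prompt)
```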


There's something like 5-6 companies in the W23 batch tackling this space.


We just ordered a white-label instance from OpenAI. It will consume some terabytes of data and hopefully be the oracle we need.


Can you point us to how you did this?


We placed a request to set this up on / via Azure through our Microsoft account executive.


Thanks!


They offer white label instance? TIL


Yes, they do. Also, these come completely without any of the safeguards that the public instances have. This is 'on the record' from a Microsoft regional CEO who was pushing this pretty hard.


Sick. So they get pretty much the "base" model? That's almost too powerful


Some would say your first paragraph describes Apple.


I was laid off in November. A few false starts here and there, but I start my new gig on the 20th. Better yet, I have $20k left from severance, which is really nice.


I assume non-startup / large company? Congrats on the role.


Thanks. It’s a startup of about 100-150 people.


JK Rowling built a billion-dollar children's franchise shortly after Dahl's death.


True. Although her audience is a little older than I was thinking. Also, I didn't think of her because I don't like her writing; I find it boring. The stories are ok, but the way she presents them is dull, to me. Obviously that's a minority opinion given her broad appeal.


I don't think it will be a minority opinion, given the test of time. I think HP's star is already waning (and no, I don't mean because of the author's views on certain subjects; I think the faddishness of HP itself is already wearing off).


A game based on her work is a top seller on Steam right now. Her books are still being sold at grocery stores.

I don't think HP will become irrelevant any time soon.


Loads and loads of very young readers have relished the HP books.


I don’t know how to quantify it but I have definitely noticed an increase in productivity. I don’t need my whole team in the office but a few key people once a week was a game changer.


If there's an increase in productivity you should be able to point to it. Otherwise you may just be interpreting "I see people talking" as being "I'm seeing collaboration happening" - essentially experiencing confirmation bias or similar.


This has been my experience as well.


Funnily enough I really like YouTube shorts. A lot of Tiktokers cross post and because of this I haven’t found a reason to download TikTok.


> I haven’t found a reason to download TikTok

It's a good business. My daughter isn't allowed TikTok, but YouTube comes with every phone and iPad already...


This is the exact same message Microsoft's CEO sent in his memo. So yes, MSFT is directing these layoffs.

