Someone framed the debt-to-GDP ratio like this: if your debt is 150% of your GDP, it would take a year and a half to pay off your debt if all the national income (GDP) were allocated to payments.
Now, obviously this isn’t a realistic policy - but if we can have “war-time” economies, as history has proven, it stands to reason that we could have an economy solely focused on the repayment of debt (idk, high taxes for a decade?). Am I tripping?
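The framing above is just arithmetic, and it's easy to sanity-check. A minimal sketch (the 25%-of-GDP repayment share is a made-up number for illustration, not anything from the thread):

```python
# Back-of-the-envelope version of the debt-to-GDP framing: a 150% ratio
# means 1.5 years of total national income to clear the debt, and
# proportionally longer if only a fraction of income is diverted.
gdp = 1.0          # normalize annual GDP to 1 unit
debt = 1.5 * gdp   # debt-to-GDP ratio of 150%

years_all_income = debt / gdp   # every unit of income goes to repayment
share = 0.25                    # hypothetical: divert a quarter of GDP/year
years_at_share = debt / (gdp * share)

print(years_all_income)  # 1.5
print(years_at_share)    # 6.0
```

This ignores interest on the debt and any effect of the repayment itself on GDP, which is exactly why it's a framing device rather than a policy.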
Maybe a better comparison is to a post war economy? The US paid off the WW2 debt in about 17 years through ordinary taxation, but US taxes today are much lower than they were after WW2, at least for the very rich.
If one looks at the national debt clock, I think we each owe just $300k.
In these terms it seems quite manageable. My guess is that if it were an actual problem, we would solve it - it's mostly a talking point.
I’m not sure about this one - it seems like it significantly reduces the value of data, which would have a downstream impact on the entire industry. I guess that’s what you intended, but my mind quickly went down a rabbit hole with the second-order effects of this one.
Arguably, the value of data in its modern form is much higher than mere “goodwill,” including customer relationships. Data travels easily without customer consent, while a traditional customer relationship can be canceled by the customer at any time.
So the value of data is a modern-day business windfall that can be argued as unfair. Or fair. Depending on your viewpoint.
The market is definitely there for enterprise LLMs. Everyone is using GPT for work. I use it to provide stubs for memos and to brainstorm - but the real value comes from replacing internal “tribal knowledge” with an AI that knows your org inside and out.
It kind of boggles my mind that there are people who aren't using LLMs yet.
Sure, it's not everyone, but the people who aren't using them are signaling a major red flag, IMO.
They are resistant to change - even if they don't understand the technology, what else are they resisting from their managers/leadership team? Further, when I think of the people in my life who have refused to even try it, they all seem to have a screw or two loose, even if they're successfully making $200k/yr.
All IMO of course, but in tech, I imagine something needs to be 'off' to never try it.
EDIT: Seems I'm getting criticism from people who are using it for inappropriate use cases. I don't use a screwdriver to hammer nails.
They're wildly inappropriate for most things. For similar anecdata, see blockchain fever, where everybody shoehorned it in wherever they could, even when a traditional database made more sense.
Consistent conditional logic makes more sense than a risk-laden hallucinating LLM for a lot of workflows.
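To make the point concrete, here's a minimal sketch of what "consistent conditional logic" looks like for a workflow - a deterministic rule table for routing support tickets. The keywords and queue names are made up for illustration:

```python
# A rule-based router: same input always yields the same output, the
# behavior is auditable line by line, and there is no hallucination risk.
def route_ticket(subject: str) -> str:
    rules = [
        ("refund", "billing"),
        ("invoice", "billing"),
        ("password", "it-support"),
        ("outage", "on-call"),
    ]
    s = subject.lower()
    for keyword, queue in rules:
        if keyword in s:
            return queue
    return "triage"  # explicit, predictable fallback

print(route_ticket("Password reset not working"))  # it-support
print(route_ticket("Random question"))             # triage
```

An LLM might handle phrasing the rules miss, but for workflows where wrong answers are costly, the boring table wins.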
"Everyone" doesn't need to hammer nails because there's more than just one career and industry. The acceptable quality of the job output varies drastically too.
"It kind of boggles my mind" that people can't see beyond their own life.
I can't say I was impressed with ChatGPT's help when I tried it. I figured quizzing it on reading comprehension would be a great task, given that it is a language-based model and that's a skill seemingly in short supply among my coworkers and myself.

After confirming that the specifications of a standard I am implementing were within its knowledge, I tried to have it explain the difference between two parts, and it failed so miserably that its understanding of the content was below even my managers', for whom this is only something they occasionally review. Any attempt to correct it only resulted in an apology and new misunderstandings.

Outside of work, I tried using it to find an old movie, probably from the '60s, about a man refusing to shave his long beard, featuring a scene of him being chased around his home half-shaven - but it merely made up beard-shaving scenes for several other movies. Admittedly, I have not tried uploading any of my company's code to give it a less memory-based task.
I think reading comprehension is a notable weakness - asking it detailed questions about a long text turns up lots of hallucinations, in my experience.
But it's definitely good at some other things - notably, writing boilerplate text of various sorts and giving instructions on how to do certain things.
It seems to mostly synthesize common knowledge rather than learning anything. But that can be very useful: a lot of people's jobs involve doing things like that today.
It's great for generating sample code snippets or refactoring code, but I can't paste my company's intellectual property into it.
If I could train a customized version of it on all my company's Slack messages, Jira tickets, e-mails, etc., it'd be insanely useful... but I don't think any big company would actually want that, since it wouldn't be able to keep secrets from anyone with access to it.
> It kind of boggles my mind that there are people who arent using LLMs yet.
Maybe it is easier to go through actual verified information than to double-check everything an AI says.
I only use LLMs to restate information that I can half piece together so I can remember the missing bits (like a math proof or derivation), or to point me to recommendations of actual resources. And even with those two things I am very wary.
I think people have different jobs and different skill levels. For some it gives them a boost; for others it slows them down. Translating exactly what you want into English is, for many, a different way of producing something. Some people are really smart and don't need a calculator. No shame in using a calculator that mostly works.
We already have something like that developed in our company (a ~30-person, employee-owned wealth management firm). It's... interesting.
We currently use GPT-4 combined with an internal knowledge base we've had since the beginning, and we could practically fire our chief of staff and the admin team. Just kidding - but it's made her team's work so much easier that she can devote more time to the nitty-gritty hard stuff.
The interesting part is that I had a bit of a personality touch added as part of its context, so the AI's character is quite... villainous.
Enterprise self-hosted ChatGPT is going to be huge.
I am about to release a self-hosted GPT that works with both OpenAI and Azure OpenAI. It has several enterprise features, mostly around authentication/authorisation. I'll let you know when it drops, or you can be a beta tester if you like!
Noted. Just a few questions: what do you mean by "works with OpenAI"? I thought those were closed systems - so is the system basically pretrained with the weights saved? I'm pretty sure that even if that were possible, it would still be misuse per their terms and conditions?
Currently we use an initial semantic search for context injection, which is then passed to GPT for completions. If any LLaMA company were to make that second pretrained bit self-hostable for some license fee, I know a bunch of finance companies of all sizes that would readily pounce on that tool. But I'm fairly certain that's not what OpenAI wants to do.
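The "semantic search for context injection, then GPT for completions" flow can be sketched in a few lines. This is a toy version: the bag-of-words embedding stands in for a real embedding model, the knowledge-base entries are invented, and in production the assembled prompt would be sent to the completions API rather than printed:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector. A real pipeline would call an
    # embedding model here instead of counting words.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    qv = embed(query)
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject the retrieved context ahead of the question; this string is
    # what would be passed to GPT for the actual completion.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Vacation requests go through the HR portal.",
    "Expense reports are due by the 5th of each month.",
    "The VPN config lives in the IT wiki.",
]
print(build_prompt("When are expense reports due?", kb))
```

The design point is that only the retrieval side lives in-house; the completion model is still the closed, hosted part - which is the bit the comment wishes were self-hostable.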
Yes, they do. Also, these are completely without any of the safeguards that the public instances have. This is 'on the record' from a Microsoft regional CEO who was pushing this pretty hard.
I was laid off in November - a few false starts here and there, but I start my new gig on the 20th. Better yet, I have $20k left from severance, which is really nice.
True. Although her audience is a little older than I was thinking. Also, I didn't think of her because I don't like her writing - I find it boring. The stories are OK, but the way she presents them is dull, to me. Obviously that's a minority opinion given her broad appeal.
I don't think it will be a minority opinion, given the test of time. I think HP's star is already waning (and no, I don't mean because of the author's views on certain subjects; I think the faddishness of HP itself is already wearing off).
I don’t know how to quantify it, but I have definitely noticed an increase in productivity. I don’t need my whole team in the office, but a few key people once a week was a game changer.
If there's an increase in productivity you should be able to point to it. Otherwise you may just be interpreting "I see people talking" as being "I'm seeing collaboration happening" - essentially experiencing confirmation bias or similar.