Have you considered giving your digital twin a jolly aspect? I've wondered if an AI video agent could be made to appear to respond in real time, despite real processing latency, if the AI were to give a hearty laugh before all of its responses.
>So Carter, what did you do this weekend?
>Hohoho, you know! I spent some time working on my pet AI projects!
I wonder if some standard set of personable mannerisms could be used to bridge the gap from 250ms to 1000ms. You don't need to understand what the user has said before you can detect that they've stopped talking. Make the AI agent laugh or hum or just say "yes!" before beginning its response.
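The plumbing for that could be tiny: start the slow model call the instant end-of-speech is detected, and play a stock emote over the wait. A toy asyncio sketch of the overlap (generate_reply and speak are made-up stand-ins for the real model call and TTS playback, with sleeps faking the latencies):

    import asyncio
    import random

    FILLERS = ["Mm-hmm.", "Yes!", "Hohoho, you know!", "Let me think..."]

    async def generate_reply(utterance: str) -> str:
        # Stand-in for the real LLM + network round trip (~250ms-1000ms).
        await asyncio.sleep(1.0)
        return f"(model reply to: {utterance!r})"

    async def speak(text: str) -> None:
        # Stand-in for TTS synthesis and audio playback.
        print(text)
        await asyncio.sleep(0.7)

    async def respond(utterance: str) -> None:
        # Start the slow call the moment end-of-speech is detected...
        reply_task = asyncio.create_task(generate_reply(utterance))
        # ...and bridge the gap with a canned emote while it runs.
        await speak(random.choice(FILLERS))
        await speak(await reply_task)

    asyncio.run(respond("So Carter, what did you do this weekend?"))

By the time the filler finishes playing, the real reply is (hopefully) ready to go.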
I think I recall that Google did exactly this with their telephone bot (Google Assistant?), sneaking in very natural-sounding "um"s here and there to mask processing/network latency.
This is definitely a good idea! I think the hard part is making it contextual and relevant to the last question/response, in which case the LLM comes into the equation again. Something we're looking at though!
Perhaps use a small, fast LLM to maintain a rolling "disposition" state, and for each of perhaps a handful of dispositions, have a handful of bridging emotes/gestures. You can have the small LLM use the next-to-last/second-most-recent user input to update the disposition asynchronously, and in moments where it's not clear, just say "That's a good question," "Let me think about that," or "I think that..." etc.
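Roughly what I'm picturing, as a toy sketch - classify_disposition here is just a keyword placeholder where the small, fast LLM would go, and every name is made up:

    import random

    # Hypothetical mapping from disposition -> bridging emotes.
    BRIDGES = {
        "amused":     ["Hohoho!", "Ha, I like that.", "You know!"],
        "thoughtful": ["Hmm, let me think about that.", "That's a good question."],
        "neutral":    ["Yes!", "Right.", "I think that..."],
    }

    def classify_disposition(utterance: str) -> str:
        # Placeholder for the small, fast LLM; a trivial heuristic keeps the sketch runnable.
        lowered = utterance.lower()
        if any(word in lowered for word in ("joke", "funny", "weekend")):
            return "amused"
        if lowered.rstrip().endswith("?"):
            return "thoughtful"
        return "neutral"

    class DispositionTracker:
        def __init__(self) -> None:
            self.history: list[str] = []
            self.disposition = "neutral"

        def on_user_utterance(self, utterance: str) -> None:
            self.history.append(utterance)
            # Classify the next-to-last input so this runs between turns,
            # off the critical path, instead of adding to response latency.
            if len(self.history) >= 2:
                self.disposition = classify_disposition(self.history[-2])

        def bridge(self) -> str:
            return random.choice(BRIDGES[self.disposition])

    tracker = DispositionTracker()
    tracker.on_user_utterance("Tell me something funny about your week.")
    tracker.on_user_utterance("So Carter, what did you do this weekend?")
    print(tracker.bridge())  # emitted instantly while the main model is still thinking

The point being that the disposition update never sits on the response path, so picking a bridge line costs essentially nothing at reply time.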
A cutoff date of April 2023 means the AI also presumably has access to about a month's worth of blogs that have been written since GPT-4 was released on March 14th. So perhaps a few "Best Practices" or "Prompt Engineering" guides might have made it into the training set.
ChatGPT can probably help users better optimize their conversations with it.
This article is so bereft of specific details that it could easily pass as a dystopian science fiction short story. Fortunately, it’s just marketing PR for a global logistics company.
If you don't release a censored model for the casual observer to tinker with, you could end up with a model that says something embarrassing or problematic. Then the news media hype cycle would be all about how you're not a responsible AI company, etc. So releasing a censored AI model seems like it should mitigate those criticisms. Anyone technical enough to need an uncensored version will be technical enough to access the uncensored version.
Besides, censoring a model is probably also a useful industry skill which can be practiced and improved, with best methods published. Some of these censorship regimes appear to have gone too far, at least in some folks' minds, so clearly there's a wrong way to do it, too. By practicing the censorship we can probably arrive at a spot almost everyone is comfortable with.
> Anyone technical enough to need an uncensored version will be technical enough to access the uncensored version.
I wasn't talking about that. I was talking about organizations who need a censored model (not an uncensored model). I was saying that even those organizations will fine-tune their own censored model instead of using Meta's censored model.
You're not wrong that you almost certainly will want to finetune it to your use case.
I'm looking at it from the perspective of the "tinkering developer" who just wants to see if they can use it somewhere and show it off to their boss as a proof-of-concept. Or even deploy it in a limited fashion. We have ~6 developers where I work, and while I could likely get approval for finetuning, I would have to show it's useful first.
On top of this, I think that for many use cases the given censored version is "good enough" - assigning IT tickets, summarizing messages, assisting search results, etc.
Given the level of "nobody knows where to use it yet" across industries, it's best that there's already an "on the rails" model to play with so you can figure out if your use case makes sense/get approval before going all-in on finetuning, etc.
There are a lot of companies that aren't "tech companies" and don't have many teams of developers - retail, wholesalers, etc. - who won't get an immediate go-ahead to really invest the time in fine-tuning first.
It’s a segmentation issue. The ad buy was for “people going to restaurants” but it might have been “people going to {type} restaurants.”
Though it’s probably not that simple. A naive ad buy might not care to target, or targeting is too expensive and you’re OK wasting some impressions because it might be cheaper all-in, or {brand} has the media budget to pay to be in front of your eyeballs all the time.
I suspect a more sophisticated chatbot will upsell the restaurant's offerings. "Would you like a bottle of champagne chilled and waiting for you? The Mushroom Bruschetta with Brie, Sage and Truffle Oil appetizer is the special of the day" that sort of thing.
Then you get there and find out it hallucinated and ordered you the cheesesteak eggrolls. But this is okay because you love cheesesteak eggrolls and come on... truffle oil? Really?
If they’re taking an Uber to the fancy restaurant and passing a McDonald’s, chances are they’ll take a very similar route on the way home. They’ll still want to stop at McDonald’s for a $4 large fry or an ice cream cone, but no coupon this time. The line is now longer, which increases the ride time and the driver’s perceived profit.
Weren't they describing a Senate that was appointed by state legislatures? They weren't describing what a government with direct election of senators would look like.
Or, if that's not what you're talking about, why not just quote a relevant portion of the FP instead of being so deliberately cryptic?
Buzzfeed News _was_ tabloid journalism. They published demonstrably fake news, most notably the infamous Steele Dossier. Buzzfeed News claimed the document was "unverified," when in reality it was verifiably false. Every other major media outlet had access to this document and refused to publish it because its claims were unsubstantiated. But not Buzzfeed News -- skepticism of this outlet's seriousness was completely warranted.
My brother's rule for his kids regarding swearing and other NSFW words is: "You'll probably hear adults use these words from time to time, but you can't repeat these words until you start paying taxes."
Soooooo I’m guessing your nieces and nephews swear whenever they want, they’re just careful not to do it when dad is around to hear it.
I tend to favour my friends’ policy about swearing with their kids: everybody feels differently about it, so you have to be careful about swearing around other people until you know whether it’s okay with them. At home? Probably fine. At school? Probably need to be more careful about it.
It’s much more realistic to teach nuance than to outright forbid the behavior imo.
(Although I suppose we should probably acknowledge: 1) these are not our kids, and 2) every kid is different. Some kids appreciate the value of choosing your words to fit your audience; some adults never learned.)
I mean, if the language is not appropriate at school, then the school should educate them. There are plenty of other behaviours expected at school - which the school teaches them - that aren't expected at home or other places.
I'm not sure which comment you're reading, but the person you're replying to is talking about their friends' policy, which explicitly states that there exist some people who you probably shouldn't swear around. It just isn't dad.
I'm 38 and I still won't swear when my parents are around (maybe it's a cultural difference because I also can't imagine what a "they can say whatever they want in front of me" household would look like).
I was in my 30s before I could let myself swear in front of my parents. Who did I learn curse words from? My father, of course.
This was more of an individual thing than cultural though. My family is stereotypical North American Protestant, of Northern European descent. And neither of my siblings showed the same restraint in using "foul" language around my parents that I did.
My kids don't routinely swear in front of me. 10yo tries it out occasionally but then feels silly and stops. So our household is probably not too different from most when it comes to language use at home. But they don't get in trouble if they do, and I don't personally care - I try to emphasize the importance of thinking about what they're going to say to achieve their longer term goals. I'm more bothered by "you're stupid" than "fuck you", for instance, as the latter is more clearly just shorthand for "I'm angry at you". Though I try to get them to verbalize the latter if they can.
adults really shouldn't pregurgitate spurious reasoning like this for children, no matter how well-meaning. instead, we should tell them the direct truth, perhaps only eliding nuance and detail as appropriate for their maturity.
swear words are exclamations, meant to indicate extreme emotion. so using a lot of swear words is like shouting all the time, or using all caps all the time in writing. it washes away the richness of expression and makes the swearer seem unable to properly contextualize and express their emotions. our brains are attuned to wash out sameness and pick out differences, especially sharp ones. that's also why selective swearing is very effective, and constant swearing is ineffective.
there's certainly much more to it than just that, but that should be the core contextualization of swearing for kids. let them experiment on their own but just like shouting all the time, let them know it won't be tolerated in most situations until they can effectively and appropriately wield this grammatical tool. don't impose some indirect, off-the-cuff age cut-off, which they'll surely understand as being arbitrary and therefore easily (and rightfully) dismissed.