That seems a bit contradicted by the fact that he reached an audience much larger than his chosen fields. He had a unique ability to explain complex matters in a very concise manner, which is an awesome presentation skill in its own right.
I mean we have ~4,000-page bills going through Congress today; it is impossible to even verify that anyone has read the entire thing. At least this will be a start.
I don't know, it always feels like we are stuck with SUPER LONG and overly descriptive TOS and contracts when simple language + trust would make things easier. Maybe something like this helps abstract away the legalese and keep bills in readable form? That would be a great world to be in: common-sense contracts/bills that still hold up against bad actors and in conflict resolution.
Yes. What's the use of having standards for document handling and guaranteeing their correctness if we then feed them through a black box and accept its interpretation at face value?
It certainly tends to be perceived that way by the person corrected. It's also debatable whether nitpicking on someone's grammar is "correcting their language"; linguists tend to think it's not: https://journals.plos.org/plosone/article?id=10.1371/journal...
I’m a non-native English speaker and I always appreciate corrections. Same is true for most non-native speakers I know personally. Most commonly I see this concern for possible offence at being corrected for bad English come from native English speakers, which is peculiar. (I don’t know if it’s true in this instance.)
I'm a native speaker and I always appreciate corrections.
Getting upset about being corrected means you've got so much hubris as to think you couldn't possibly be making any mistakes, or so much ego as to think you are above being corrected... or both.
To NOT correct people because we might "offend" them leads to a world of clueless idiots thinking they're infallible geniuses.... Crap. We're too late.
The only time I'm annoyed by corrections is when it's halting the conversation and stopping others from progressing towards the goal.
Since this forum has threaded conversations, someone can correct grammar at the same time that someone else is progressing the discussion, so I wouldn't have a problem with it here.
Well, okay, there's another time: If they're obviously being mean about it. But I think most people would dislike that, regardless of context.
Anecdotally I have seen the same. It's easy for a native speaker to be offended when corrected because it's their first (and often only) language... they should be good at it. Language-learners tend to be quite open to correction in any language.
I suspect part of this is that there's prescriptivist nitpicking and non-prescriptivist, uh, non-nitpicking.
Pointing out that "feedback" is uncountable is giving information on how to better match actual usage. A prescriptivist will agree with it, but so would a descriptivist.
Contrast this with "lion's share": traditionally, it meant "all", but nowadays is usually used to mean "most, but not all". If I convince someone that they should use it to mean "all", they'll be further from matching actual usage. I think that's what qualifies it as nitpicky and rude vs. just helpful.
I don't agree with this - it really depends on how it's done. I think the way it was done in this thread was non-accusatory, and generally I find that these suggestions are very helpful. Once you get to a certain level in a second language, your communication tends to be good enough that no-one corrects you any more, which makes it really hard to progress. I really like it when people correct me, FWIW.
That's not my experience when interacting with people communicating in a second language. They generally appreciate the opportunity to learn and improve their (in most cases) English. They also usually love to be taught idioms so that they don't stick out like a sore thumb!
It's not a bad deal at all, if that's what they stick to. It's really mostly non-violent thefts these days. That still doesn't make the area "low-crime" or "safe."
I am toying with building virtual puppet software in the style of watchmeforever. I have a number of voices I do for the stage and DnD that I would be willing to train a few models on so I could give my puppets unique voices.
Anything written can be listened to with this tech. Any news article, any short story, a draft of a piece of writing you're working on. There is too much text for human beings to read it all.
When I said there's too much text for human beings to read it all, I meant that it isn't feasible to pay people to read aloud all the text that someone might want to listen to. A random blog written by someone in their spare time probably isn't going to hire a voice actor, for example.
I think the case for having all text be listenable is pretty clear. We're all really busy, and often our hands are occupied while we're not doing anything all that mentally stimulating. That's an ideal time to listen to an audiobook, a blog, the news, or whatever else you'd like.
Generating audio for an audio book: If an author could speak for 20 minutes and then generate audio for an entire book from the book's text and the model, I think that would be very useful.
The OP mentioned that so-called "high-fidelity voice cloning" takes 20 minutes of training audio. I think a book author would want the best quality possible to reproduce their voice.
Many people prefer an audiobook version of a book to be read by the original author, which isn't always the case. If an author could make that version happen by using 20 minutes of their time + text2speech of the whole book, that would be an immensely positive value proposition on the side of this company.
But I'm not sure. Part of why I'd prefer the original author to read a book is that they vocally emphasize certain parts of the book, and I don't think these models could do that at this point.
> Many people prefer an audiobook version of a book to be read by the original author
Right, but having AI read the book in the author's voice is definitely not the author reading the work.
As you mention, the reason that people like to hear the author read it is because it's the author reading it, theoretically emphasizing and acting things out according to what was intended. It's not just to hear the author's voice.
I would consider studios taking voice actors' voices and using them to generate new content beyond their contract to be abuse. I'm sure big corporations are rubbing their hands in anticipation, but killing the VA industry will make the world just a tiny bit worse for everyone else.
Mods are more difficult to attach a moral judgement to. I don't think I'd really consider them malicious, as long as they're not sold, but there's a very thin line between a high quality mod and stealing someone's voice.
I think it will probably kill the current business model of the VA industry. Having the ability to generate as much audio content as you like, without the risk of the VA no longer being available (dead, booked out, ...), is just too good to pass up.
Instead we will probably see licenses for generated voices. And in the case of games, the developer could make the voice model freely available for mods of their game. (Mods already use assets from the game, so why not audio as well?)
On the other hand, why shouldn't voice actors benefit from this tech?
I can easily imagine a future where AI-generated impersonations are deemed by courts or new legislation to be protected by personality rights. In that world, voice actors could expand their business by offering deeply discounted rates for AI-generated work.
Alternatively, if/when tech like Play.ht is consistently good enough, maybe it just becomes a standard practice for all voice acting work to include a combination of human- and AI-generated content, like a programmer using Copilot or a writer using GPT.
Sure, why not? If you could earn more money and produce more value to society with the same amount of labor, and the legal/regulatory environment supported it, I wouldn't see a reason not to.
If you had a solo contracting business, and the technology existed to fully outsource a development project to AI based on carefully documented requirements, using it would be a cheaper alternative to subcontracting. Rather than writing every line of code by hand, you would transition to becoming an architect, project manager, code reviewer, and QA tester. Now you're one person with the resources and earning potential of an entire development shop.
I have my fair share of complaints about AI coding tools, but that isn't one of them. Maybe the increase in supply would result in a lower average software engineering income, but it wouldn't have to if demand kept pace with supply.
Furthermore, code is more fungible than a person's voice. If someone wants a particular celebrity's voice, that celebrity has a monopoly on it. Thus, it's not obvious that increasing the supply of one's voice acting work would decrease its value. (I suspect the opposite to be the case, until a point of diminishing returns.)
Although the voice acting case has a similar concern: will we get an explosion of new and/or higher-quality media, or will we see consolidation toward a smaller number of well-known voice actors taking an outsized amount of work? Another issue, if we look beyond impersonation specifically, is that human voices may become marginalized over time in favor of entirely synthetic voices. I imagine that this would start with synthetic voices playing minor roles alongside human/human-impersonated voices, but over time certain synthetic voices would organically become recognizable in their own right.
Again, I see plenty of concerns with AI in general, but more of a mixed bag than strictly negative, and there isn't anything inherently nefarious about this product in particular.
Personally, I'm optimistic about what society looks like in the long run if humanity proves to be a responsible steward of increasingly advanced AI. By the time we're at a point where 90% of people can be effectively automated out of a job, we'll have had to figure out some alternative way of distributing resources among the population, e.g. a meaningful UBI backed by continued growth of our species' collective wealth and productivity. I can easily imagine a not-too-distant world that is effectively post-scarcity, where it's not frowned upon to spend years (or lifetimes) on non-income-generating pursuits, and where the only jobs performed by humans are entrepreneur, executive, politician, judge, general, teacher, and other things that must be done by humans for one reason or another.
So am I happy that AI is encroaching on skilled labor? In the short term, not necessarily. But it's not necessarily bad either, it's the reality that we're in, and long-term I'm more optimistic than not.
Star Trek: Prodigy has already used audio from earlier movies and TV episodes to bring back to life several actors from previous series. It's not exactly the same as this, but their dialogue was taken out of context to create new scenes and story.
I think “talking” with dead relatives or friends will become real pretty soon.
If people can find comfort hearing their mom say words of encouragement in a tough situation, I think a lot of people would do it. It's tricky, though, because for some others that would mean never getting closure.
The last thing on earth I'd want is for any aspect of my dead relatives to be reanimated through technology. No. That's absolutely fucking horrific to consider. I don't need a hallucinating AI pretending to be my dead wife. That's literally shambolic.
There is vastly more potential for that to be abused by others than used in any emotionally or socially constructive way.
I would also find that very creepy and it would probably keep you from moving on.
I think there is a big difference between remembering what happened by looking at a photo or hearing an audio recording and having newly generated "content" from a deceased loved one.
There has been some media coverage on this already (e.g. [1]). An emerging concern among mental healthcare professionals is that a sufficiently convincing simulation could interfere with the progression of the stages of grief, prolonging the 'denial' stage and potentially heightening the intensity of the stages that follow.
I'm not sure that's quite true. I, like many others, am by no means a Bash expert, and will generally reach for Python if I have to build anything of reasonable complexity.
I (and many, many like me) have, however, had enough Bash exposure to do day-to-day maintenance on Bash scripts.