Hacker News
This Music Video Does Not Exist [video] (thismusicvideodoesnotexist.com)
251 points by smthngwitty on March 23, 2021 | 146 comments


Using AI this way seems like the equivalent of inventing spam for music to me. It's interesting, but basically we've figured out enough about the entropy and shape of an honest signal that we can automatically produce noise against it that cancels or subverts meaning.

The difference between simulating music and using formalisms to make discoveries about it seems like a matter of intent. As in, what's the difference between a horizon, a window viewing one, and a picture of one? The existence of an observer makes them related, and the position of the observer makes them different. That difference is probably analogous to what simulated music is to intentional music. The existence of a listener makes it music, and the relationship of the listener makes it different.

These AI-generated images are like cancelling noise for images we already associate meaning with, which seems analogous to 1/f fractal or Perlin noise at a certain level of abstraction. Not to dismiss or trivialize the accomplishment at all, but when people create tech that is overwhelming to the senses like this, sometimes a new frame of reference can help.


The syncing of the visuals to the music brings to mind the video for The Chemical Brothers’ Star Guitar: https://www.youtube.com/watch?v=0S43IwBF0uM


A really impressive achievement; I wonder how they achieved it back when the song was released.


If you like that, you can look up other music videos shot by Michel Gondry. He has incredible creativity and a DIY ethos that really set him apart.

The film "Be Kind Rewind" is kind of a "mise en abîme" in that regard: shot by Gondry and featuring characters that show the same kind of cinematic craftiness.



The different elements in the scenery represent various sounds and instruments in the song.

Another of Gondry's videos that did a similar thing was Daft Punk - Around the World (https://www.youtube.com/watch?v=LKYPYj2XX80) where each of the different types of "people" represented different sounds in the source track.


I think they did a good job choosing a scene that is somewhat easier for CGI: artists have struggled for a long time to make acceleration and general changes of momentum line up with the way objects move in real life. It also makes the motion blur shader more or less a constant, which is easy to simulate, and our brain fills in the rest :)


Star Guitar is from 2003, which was well into the era of "cheap computers are good enough now that we can manipulate each pixel" (after all, Terminator 2 and Jurassic Park were made in the early 90s, and music videos usually didn't need "cinematic" image quality because they were broadcast over TV).

If you look closely at the Star Guitar video, you can find "clipping artefacts" where the fragments are pieced together or fade in and out.


I don't disagree with your point; however, Jurassic Park was mostly animatronics, so it's not the best example to use.


Well it's just a few minutes, but definitely the most impressive few minutes of the movie ;) AFAIK most scenes where dinosaurs are jumping and running around were computer-rendered.

Remember that scene where the protagonists are running with the dino swarm around them? The paths of the dinosaurs were laid out in the real world with tennis balls, and back then we wondered "wow they must have great image processing code to remove all those tracking markers". Turns out it wasn't algorithms but humans who removed them frame by frame, pixel by pixel (it didn't occur to us computer nerds that brute force makes the most sense if you're on a tight schedule).


Some of the background buildings look completely flat when they should have some slight depth. It took me dozens of viewings before I even noticed


I recall seeing a behind-the-scenes clip somewhere (perhaps on the Directors Label series that came out in the early 2000s) where Gondry talks about his brother piecing this together from video footage they'd captured. I can't find that clip but came across this very early exploration towards the final product: https://youtu.be/GF0-wGbRqEs


Music video reminded me of "The Cure - Jumping Someone Else's Train".

https://www.youtube.com/watch?v=s1oWf07FRCw

From Wikipedia: The music video shows a speeded up view from the driver's cab of a train journey from London Victoria to Brighton Station.


It reminded me of the old Amiga demo scene. I think it's the overly strong syncing with the beat. More deviation from it would be an improvement.


Also lots of cheaply produced music videos for techno songs in the mid 90's that needed "something, anything" to run on MTV looked like that :)


Edit: It's using OpenAI Jukebox (https://openai.com/blog/jukebox/), which claims to actually produce good music.

I've heard music synthesized by neural nets made by different people; this doesn't sound like that. It may be over-fitting.

Also, watching multiple videos, all the music is similar. There is this one tune that's in every single one of them.


Oh, it's human music! Love it.

This is my favorite so far, especially the lyrics :D Had to leave the office, to not disturb anyone with my laughter. https://soundcloud.com/openai_audio/classic-pop-in-the-style...


Human Flesh For Sacrifice!


I really like it, although the music always seems to be the same 2 songs with some alterations:

- Dancer In The Dark https://www.youtube.com/watch?v=XnGv2gwk8As

- Cigarettes After Sex https://www.youtube.com/watch?v=KIKTnnsL9CU


Did you reverse search that with Shazam etc or did you just know those songs?


One was familiar, but I Shazamed them to be sure; it worked surprisingly well.


Yes.


The 1st one is very similar due to the syncopated bass being anticipated by one 16th within the 1st beat of each measure. I guess it's common in that style.


Yeah, it definitely seems to be overfitting.


The one I got seemed to be a combination of the two. It didn't sound all bad tbh.


The former music video shows the holy grail for an AI (Don't have to pay for models, or at least not that kind of model)


This is scary and what I feared would eventually happen. How long before an AI, trained on all of music and on what makes something likeable, is better than every music producer? How long before fake influencers on IG and TikTok completely dominate by having more addictive personalities and videos to follow than real people?


There are two fallacies I think you've touched on here. One is that there is an "objectively best" music. Music is so tightly coupled to cultural movements, so subjective, and so broad in scope that it just can't be the case.

The second, I feel, is that people would stay interested in AI music in a meaningful way. I think AI music has a following now because it's interesting and novel, and that gives it a story, but once it's mainstream, that story is boring and people will go back to seeking real, culturally relevant music.

Sure, the music on TikTok/IG and friends could be generated, but I'm not sure I care too much. Those platforms are almost entirely vapid and void of authenticity already; the music being fake too doesn't detract that much.


> one is that there is an "objectively best" music.

Case in point, I enjoy some shit. No way AI will replace lo-fi recorded-in-underground-bunker limited-cassette-tape-release atmospheric black metal / noise.

I mean it might, but here's a take from a different angle: art is not just the end result, it's the process and the story behind it. Nobody objectively gives a shit about a selfie from the early 1500's, but because of its provenance the Mona Lisa is considered one of the most valuable pieces of art out there. Nobody gives a shit about a database row with an ID, timestamp, user ID and the text "just setting up my twttr", but because it's the first tweet and it's put on a trading platform and it might appreciate in value, someone spent $2.5M on it.

You slag off TikTok because every generation will slag off whatever the following generation(s) do; that's normal. But the generation following you sees it differently, and in two decades they will still reference some of the more iconic clips they've seen. I mean, I do it with really bad movie voiceovers from nearly 20 years ago, as well as terrible porn intros. It's part of culture.

Anyway, there will be a place for AI generated anything alongside the handcrafted stuff. Like how there is still a market for handcrafted goods alongside the mass producing machines. Or hand-drawn art in the age of digital. Or physical valuables in the age of electronic money and cryptocurrencies.


"Process is art"


I am old enough to remember when people were afraid that Muzak would take over.

It did not.

I was at a Chinese restaurant, around a year and a half ago, and realized that the music they played was a sort of “ersatz” music. It was familiar tunes, like Scarborough Fair, or Hotel California, slightly modified, and strung together. Vocals were basically wordless humming.

There is an entire industry, based on musicians, providing “stock music,” and it’s been around for a long time. Sort of a musical equivalent of Shutterstock. Some of this music is quite good. Most is fairly boring.

Music is way too connected to our emotions to be reliably synthesized. AI would need to advance to be able to produce emotions and creativity, before it would threaten the music industry.

Also, there’s always the “tabloid” aspect of the industry. There are artists that may be fairly unremarkable musicians, but generate a lot of press coverage. Unless the world of Questionable Content becomes real, I can’t see any Star headlines of AIs beating up paparazzi.


>I am old enough to remember when people were afraid that Muzak would take over.

In the 1980s, musicians' unions wanted to kill the Fairlight CMI synthesizer and its progeny.

https://archive.macleans.ca/article/1985/7/29/a-source-of-di...


That was revolutionary, at the time. I remember when the guitarist for our band got a Korg sampler. It was amazing.


I would argue that a lot of this "stock music" is just a continuation of elevator music, which has been around for a very long time. There are musicians (often very good ones) who essentially made their whole careers doing this music; look up James Last, for example.


Exactly. In fact, there was a Romanian electronic musician that I used to like, and his catalog suddenly disappeared.

When I contacted him, to find out what happened, he informed me that all his music was now only available through a stock agency.

Can't blame him, but I was a bit sad.


It's called library music. It's much used in ads, trailers, and corporate productions. Ad agencies sometimes also commission music.

Ad use pays incredibly well, because unlike streaming you get a non-trivial fee every time it airs on TV/radio/cinema.

Some musicians make very successful careers doing this.


Always seemed like the modern analogue to the older practice of artists only being able to make a living if they could secure patronage. The days of (semi) direct payment for physical copies of recorded media are starting to seem like an aberration rather than the rule.


> I think AI music has a following now because it's interesting and novel, and that gives it a story, but once it's mainstream, that story is boring

I'd have agreed 10 years ago, but I swear every time someone drives past with their stereo bumping, it's "robot music" (as I smart-assedly refer to heavily quantized/autotuned vocals). I guess it will probably still die out eventually, but that fad has been a lot longer-lived than I'd expected.


I don't think we need "objectively best" here—merely "subjectively best". And because each person could have their own AI DJ trained on an arbitrarily rich set of preferences and experiences from their one-person audience, we should expect that AI DJ to be subjectively the best for their respective human.

Obviously, this overlooks the shared experience element of music, but that's not so relevant in the case of online music we consume solo.


Maybe we call them dopamine melodies instead of music to underscore the effect rather than the art. The fact that most synthetic music today is hot garbage doesn’t convince me that we couldn’t connect with it in a meaningful way in the future. After all, things like GPT3 are more like talking to a hive of humans than a single alien.


I'm convinced tons of this stuff already exists as things like youtube channels of 'Best Chill Piano Ambient'. I think it's become impossible to tell neural-net generated stuff from the output of humans trying hard to define a functional genre, especially one that's effectively 'background' rather than didactic or challenging.


Parent meant best in the sense of Billboard Hot 100.

Saying people will not be interested in AI music is like saying people will not be interested in movies with CGI.


That's not even a close comparison. Humans make CGI, it's still directed and created by humans. The real comparison would be seeing if people would accept that movies aren't written and directed by humans anymore.

Music outside of pop gains popularity through grassroots popularity contests, essentially. Then in the Top 40, people care way more about the personalities than the music; you could probably sneak some AI music in there, but you would still need the singer to sing.

Music is very often a cult of personality thing, even outside of pop. In those arenas authenticity is really important, someone not even making their music would be outed pretty quick.

Some examples of what I mean: People go to clubs to hear DJs play, because they perceive the DJ to be generating the atmosphere. You couldn't get people to pay a cover charge to see that DJ's mix play without them present, even though it's technically very easy and would be exactly the same music.

Similarly, you can't get people to go to a concert of a recording of a band, that would be lame. People want the band.


Yes, a lot of people don't get this. Pop is as much about fashion and branding as sound, and an AI would need a virtual personality to get anywhere with that.

Which will likely happen within 10 years or so. Hatsune Miku is already a thing, and a few upgrades from now she'll probably appear to be running her own post-TikTok account.

J- and K-pop boy/girl bands are already run like this. The individual artists are more or less interchangeable and can be dropped at any time if they lose their looks or appeal. They don't get much/any creative freedom, and the performances are all externally choreographed.

It wouldn't take much to create a virtual version. Give it some virtual sass and you have a tame monster.


It actually wouldn't be the same music. DJs typically adapt their set to how the crowd reacts.

See e.g. https://www.digitaldjtips.com/2011/05/dj-getting-people-to-d...


William Gibson's Neuromancer (1984) features a Jamaican space tug where the "righteous dub" played within is automatically generated by remixing a vast library of existing music. I was fascinated by the concept when I first read about it as a budding DJ, and it's kind of exciting to see it come to fruition, even if the single tune this particular AI seems to play is pretty insipid.


Just started reading Neuromancer. Glad to know this.


Ah man, now you can see what the past ~37 years of sci-fi films and video games have been making little nods and allusions to all this time :)


This sounds like an application I want to play with. Given a playlist, generate an endless tune based on remixes of all the tracks.


That is something that would be of interest to those working with splitting audio into 'stems': drums, bass, voice, music. Algorithm DJay can do a version of that 'on the fly'. Serato Studio is a kind of remix 'on the fly', but there are many others. Both have automix settings that work very well, especially if you line up similarly grouped tracks.


Yeah, I've not used the "auto" features to that extent, but I've DJ'ed at a few events and festivals just for fun and it's super convenient to have software that has analyzed my music library for BPM and key. I still get to play whatever I want to play, but if I'm drawing a blank or just looking for inspiration, I can sort by all tracks within "n" BPM of the current track and in the same/compatible key.

Then if I really want to play something that wouldn't normally fit with an easy transition, I can always adjust the key or tempo on the fly. There are limits before it starts to sound weird, but that can also be a neat way to mess with a song. Add a riff or a phrase from one song into the current one, but only because I'm able to halve the speed or chop up the time signature to match.

And that's just me as a barely fluent hobbyist. Experienced DJs can do some great stuff. Even a non-"mixing" DJ like a radio DJ can string together songs based on thematic connections, sneaky relations between artist or lyric or even the conditions in the room. Those can be a lot of fun and would be a lot harder to automate than simple beat matching.


I was also very hyped at first when I saw this... But it seems like there are only 14 distinct videos available which are loaded from youtube: https://www.thismusicvideodoesnotexist.com/assets/urls.json


I encourage you to read up on "Doctorin' the Tardis" and "The Manual", as well as "I Wanna 1-2-1 with You".

These were productions from the artists behind the KLF to expose how formulaic chart hits were. So I would not be surprised if we see AI produce top 40 hits sometime in the future. However, I would be very surprised if AI could replace the vast majority of music that is out there.


How do you know we're not already at that tipping point, if not there then in other forms of influencer 'culture'?

Of course you're right, but I'll tell you what next.

Redefine 'dominate'. All that has happened is we've shown mass media, mass popularity, is better suited to the unreal. Humans need not apply here.

'Mass' anything, then gets less interesting as it is self-evidently a dead-end. And not 'everybody', but significant numbers of people, pursue something else, perhaps things that are unlikeable in an interesting way. The differences will always be less 'addictive' than the 'mass' stuff, but will overperform in other, definable ways.

As someone who has deleted Twitter and Facebook I think I'm correct in this notion that the most addictive, most 'mass' media sources are not 'good' in any normal human sense other than manipulation of that very instinct in all its forms…


I for one welcome our new AI influencer overlords!


There is more to experience than "likability".

If I go to a techno club, I would soooo love to see what an AI can do to make me move my feet more.

But if I listen to some mellow song with lyrics maybe I want to feel the touch of a human artist and his emotions.

I would also love to listen to AI generated music with lyrics, but it would generate different emotions (not worse, just different).

AI-generated art will just open up another type of experience, but it will never replace human art.


Computers were DJing in 2005:

https://www.hpl.hp.com/techreports/2005/HPL-2005-88.html

In fact, it was before that: I saw a talk on this while I was a student, and I finished university in 2003, I think.

The guy who did this work went on to get the computer to do production, but I don't know how that panned out.


When it gets good I do think this will decimate the industry that cranks out generic electronic music for YouTube videos. (Does that count as an industry?) That aspect is not so bad perhaps.

THE FUTURE: Your phone will have an exploratory music feature. You dial in all sorts of weights... genre, tempo, density, melody, etc. You'll crank out a custom instrumental playlist at the tap of a button. This will probably make for great study/work music.

When it eventually overshadows normal instrumental music it's going to be a giant bummer for anyone hoping to make money doing it. Artists are going to have a lot of soul searching to do.


That's not how music works. My bet is we will see crappy music for ads replaced with AI-generated stuff, but NFTs etc. may actually mean a much more interesting market for real musicians moving forward.


If there is a way to turn this AI into an instrument then people will learn "the AI" the same way they learn "the guitar".


I remember reading 10 or 20 years ago that dj/rap turntables had outsold guitars in the UK for the first time that year. That was my "wow, people don't play music any more" moment...


Considering I like metalcore, bagpipes, Enya, Bruno Mars, Pantera, The Blues Brothers/Chicago blues in general, Daft Punk, Monty Python songs, Loreena McKennitt, Led Zeppelin, Bach, ...

I absolutely doubt that any AI will ever produce something I genuinely like. Maybe some features in isolation, hopefully not multiple genre features in some unholy mish mash...

I just don't see it.


Don't forget that playing a musical instrument or watching crowds move to your beat is an incredible experience that is not going to be enjoyed by artificial systems, at least anytime soon.


Make part of your AI DJ a reinforcement learning agent whose reward function considers the amplitude and coherence of the movement of the crowd.

This agent would enjoy the moving crowds in a much more authentic way than a human DJ would.

I can't find it now, but there was a wearable project (I believe earrings?) which gave quantitative feedback about how a music audience were moving.
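
For concreteness, here is a hypothetical sketch (my own construction, not an existing system) of the kind of reward term described above, assuming per-person 2D motion vectors have already been extracted from a crowd camera, e.g. via optical flow:

  import numpy as np

  def crowd_reward(motion_vectors):
      """motion_vectors: (n_people, 2) array, one displacement vector per person per frame
      (hypothetical input, e.g. from optical flow on a crowd camera)."""
      speeds = np.linalg.norm(motion_vectors, axis=1)
      amplitude = speeds.mean()                     # how much people are moving
      mean_direction = motion_vectors.mean(axis=0)
      # Coherence: ~1 when everyone moves the same way, ~0 when motion is random jitter.
      coherence = np.linalg.norm(mean_direction) / (speeds.mean() + 1e-8)
      return amplitude * coherence

  print(crowd_reward(np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]])))  # coherent crowd: high
  print(crowd_reward(np.random.normal(size=(50, 2))))                   # random jitter: low

The amplitude term rewards any movement at all; the coherence term is the standard order parameter for collective motion and rewards the crowd moving together.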


I guess the question is if crowd movement is differentiable.


>...watching crowds move to your beat is an incredible experience that is not going to be enjoyed by artificial systems...

Leaving aside arguments about qualia, couldn't we just make the "crowd moving" metric part of the objective function?


The public personas of celebrities were always unrealistic and unattainable for ordinary people. Maybe replacing them with explicitly fictional people would lead to a healthier culture.


This is already happening: the top 5 virtual influencers have over 11MM followers in total.

It's not clear this is healthier.


The Internet is already dominated by designed-by-committee personalities.


If it's made by AI then it has no inherent value. There was no labour associated with whatever AI produced. Open source AI trained on public models is the ultimate "hold my beer". What's the point of paying for anything when my old GTX 760 could generate it for me?


This is just a fancy version of the old Windows Media Player music graphics thing


I think you're talking about visualizers.[0] And yeah, this is a very nice visualizer, but I think it's a bit much to call it a "music video."

https://en.wikipedia.org/wiki/Music_visualization


Came here to say that. This is the Juicero of ML.


It's actually good music. The visual parts match the music pretty well, too (though it makes me anxious if I look at it longer than 3 seconds).

I do wonder, though, whether it just over-fits and takes the original music with some minor tweaks.

Another interesting point would be to know whether this music could be sold cheaply without legal troubles (or even distributed royalty-free). Most of the music I heard is pretty neat; I could imagine using it in YouTube videos, or as podcast jingles.


> though it makes me anxious if I look at it longer than 3 seconds

I've read that people with trypophobia can often have similar reactions to these GAN-generated images. You may want to look into seeing if you have a mild case of it.

https://en.wikipedia.org/wiki/Trypophobia


I have tyro-phobia and yeah those Gan images freak me out, any solution for tyro-phobia?


I don't have it to the point of phobia, but I do find images of multiple tiny holes (especially with eye-looking things inside) in animals and plants disturbing.

I think it is indeed a natural response. Animals/plants with lots of tiny holes look diseased to me.

By the way, if you do have this to the point of phobia, don't look at images of the Surinam Toad, whose young grow out of holes in the mother's back. Really, it's nightmarish.


Never doing that lol


You're afraid of cheese?


Not afraid, just triggered, disgusted and uncomfortable (I get goosebumps), plus the image stays burned in my mind for hours, constantly triggering me, kinda like when you see a graphic image.


Wait, any image of any kind of cheese? We're not talking about trypophobia here (the disgust of images with holes) but tyrophobia, a fear of cheese.


Cheese with holes in them lol trypophobia sorry spelt it wrong.


lmao


I don't know. Maybe stare at it long enough until it becomes boring?


Rolls eyes, absolutely not lol.


Dunno much about it. Just read about it once and found it interesting.


Can someone explain why this is impressive? Seems easy to correlate music features to visuals. It's been done for decades. What's special about this one?


Music generated with this https://openai.com/blog/jukebox/


I enjoyed the music and thought "wait, was this song AI generated?" then I turned Shazam to confirm it and it showed:

"k. (Klesh Remix)"

"Cigarettes After Sex"


It's done with AI.


I was disappointed to see that while the videos are hosted on YouTube, all videos on the channel are unlisted.

Luckily the video IDs can be found at https://www.thismusicvideodoesnotexist.com/assets/urls.json

List of videos:

  https://youtube.com/watch?v=Spu3eiOEJ-M
  https://youtube.com/watch?v=BE2lZ-Ti1Wc
  https://youtube.com/watch?v=lHpcYPfjiLs
  https://youtube.com/watch?v=6w2WXRFJpAE
  https://youtube.com/watch?v=jPBgu-IO6TI
  https://youtube.com/watch?v=g4px8cFR3gc
  https://youtube.com/watch?v=whD78YCQXoo
  https://youtube.com/watch?v=4iN9738uASY
  https://youtube.com/watch?v=2tSa701EftM
  https://youtube.com/watch?v=hjinBNYEkb8
  https://youtube.com/watch?v=jckJS8RNMbw
  https://youtube.com/watch?v=gq5EQtSiJiE
  https://youtube.com/watch?v=zhV3ecScgrA
  https://youtube.com/watch?v=wVnt_CX0C64


I see heavy influences from the video for Take On Me by a-ha (except that was in black and white) https://youtu.be/djV11Xbc914


I would love to see this used with music that's already extremely "algorithmic" and aesthetically lends itself to a computer-created process... IDM stuff like Access to Arasaka[0], Autechre[1], Qebrus[2] etc. Can I just take an artist's entire discography (or that of a few artists) and generate comparable material? Ahhhh can I have "more music" from my favorite artists!??! hehehe :)

[0] https://tympanikaudio.bandcamp.com/track/nypox

[1] https://autechre.bandcamp.com/track/acroyearii

[2] https://exophobiaorgqebrus.bandcamp.com/track/hmn-fshn


This makes me uncomfortable, but also very intrigued. Something about the visuals "tickles the brain" but not in any particular way I can reason about. Cool!


What is a good start to learn about generative art with neural nets? Preferably using tools which work on OSX.

Let's say I want to train the AI on classical art pictures and see what it can generate by itself.


If you are just starting with ML, a very good start IMO is "Deep Learning with Python" by Chollet. The second edition is very recent.

It starts from the basics and takes you to some complex scenarios. It's focused primarily on Keras, which is a very easy library to start with.

The book covers a lot of ML on images, so moving from there to generative art should be "easy" once you grasp the fundamentals.

I don't know, however, how feasible it is to do ML on Macs; it might be a breeze or impossible, I genuinely have no idea :-)
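
For a taste of what that looks like in practice, here is a minimal Keras sketch of a DCGAN-style generator (not taken from the book; the layer sizes are illustrative and it assumes TensorFlow is installed):

  import numpy as np
  from tensorflow import keras
  from tensorflow.keras import layers

  latent_dim = 128  # size of the random "noise" vector the generator samples from

  generator = keras.Sequential([
      keras.Input(shape=(latent_dim,)),
      layers.Dense(8 * 8 * 128),
      layers.Reshape((8, 8, 128)),
      layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),  # 16x16
      layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),   # 32x32
      layers.Conv2D(3, 5, padding="same", activation="sigmoid"),                     # RGB output
  ])

  # Sample a batch of fake 32x32 images from random latent vectors.
  z = np.random.normal(size=(4, latent_dim)).astype("float32")
  print(generator(z).shape)  # (4, 32, 32, 3)

Training something like this adversarially against a discriminator on, say, classical paintings is the usual next step once you grasp the fundamentals.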


Whatever tools you pick, if you want to run your training locally, get ready to invest in a neat CUDA GPU.


Shameless plug: You can create one of these yourself using https://wzrd.ai , with any audio you want.


I tried this recently but it's possible to get all the way to putting your card details in without it ever telling you how much it's going to charge you. I closed it.


What do you mean? Normally you should see the price at the start of checkout.

We are improving some stuff though:

- Better communication of what's included when you upgrade your project -> DONE

- Explain pricing on other pages as well, like the home page -> TODO


It only allows login with Google. Can't use it.


Regular email+pw coming very soon!


I can see it's now implemented. Thank you, acnops.


I've had my render queued up for many months, can that happen? It just shows "No renders yet :("


We had some bugs that caused this! If you could send your project name/ID via discord PM (see https://wzrd.ai/contact/), I can fix that for you!


Is there an explanation of how it works aside from just "It's a GAN"?

I'm guessing it's training a GAN on frames and navigating around its latent space (at random?) synchronized to the music, like the transition animations at https://www.thisfuckeduphomerdoesnotexist.com/ . I'm curious if there's something else to it.
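
If that guess is right, the general recipe is roughly the sketch below (an illustration of the usual technique, not this site's actual code): derive an intensity envelope from the audio and use it to scale the step size of a random walk through the generator's latent space, one step per video frame. Assumes numpy and librosa; "song.wav" is a placeholder.

  import numpy as np
  import librosa

  def audio_reactive_latents(audio_path, latent_dim=512, fps=30, seed=0):
      y, sr = librosa.load(audio_path)
      # Per-frame "intensity": onset strength resampled to the video frame rate.
      onset = librosa.onset.onset_strength(y=y, sr=sr)
      onset = onset / (onset.max() + 1e-8)
      n_frames = int(librosa.get_duration(y=y, sr=sr) * fps)
      env = np.interp(np.linspace(0, len(onset) - 1, n_frames),
                      np.arange(len(onset)), onset)

      rng = np.random.default_rng(seed)
      z = rng.standard_normal(latent_dim)
      frames = []
      for e in env:
          # Step further through latent space when the music is loud/percussive,
          # barely at all during quiet passages.
          z = z + (0.05 + 0.5 * e) * rng.standard_normal(latent_dim)
          frames.append(z.copy())
      return np.stack(frames)

  latents = audio_reactive_latents("song.wav")  # each row -> one frame from a pretrained GAN

Feeding each row to a pretrained generator such as StyleGAN2 then renders one frame, which is what gives the visuals that pulsing-on-the-beat look.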


I think the model is plagiarizing somewhat. Both https://www.youtube.com/watch?v=zhV3ecScgrA and https://www.youtube.com/watch?v=hjinBNYEkb8 contain part of the melody from Take on Me by a-ha.


Me when I saw the title: Oh fuck my petunias.

Me when I saw the video: Whew, this is much less timeline-scary than I was afraid it would be.


The petunias were like "Oh no, not again."


So the petunias are safe?


It started with just noise, but a bit later there was actual music, which is very impressive if it was indeed synthesized out of 'nothing' by a neural network. A chord progression that was something like I-V-IV-vi, a simple melody based on the same progression, a simple bassline...


I had lots of fun taking images from https://thispersondoesnotexist.com/ and feeding that to wombo.ai (converts static images of people to them singing a song)


It would be cool if videos like these can be generated in realtime to replace music visualizers like waveform or spectrograms. Even if it has no utility as a real visualizer, it is still nice to watch!


The videos created are more like visualisers. Not a bad thing, but I was expecting dancers, singers. Still, very impressive technology.


Those visuals are badass. I know the nn was trying to replicate “realistic” images, but this semi-organic nonsense vibe would actually be awesome at a concert.


Does anyone know what this is? I see a lot of people jumping to a lot of conclusions, but no explanation on the site.

EDIT: Ok, there's text at the bottom that I missed, with some cursory information and a link to http://taggartbonham.me/ . It says:

Built with OpenAI's Jukebox and NVIDIA's StyleGAN2.

Watch another or contact me


The music is made using OpenAI's Jukebox, like others have said.

My guess is that this is using this repo for making the visual part https://github.com/mikaelalafriz/lucid-sonic-dreams
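
If so, producing something similar should be roughly this short; the class and method names below are from my recollection of that repo's README, so treat the exact API as an assumption:

  # Hedged sketch: names recalled from the lucid-sonic-dreams README, not verified.
  from lucidsonicdreams import LucidSonicDream

  dream = LucidSonicDream(song="my_track.mp3",      # any local audio file (placeholder name)
                          style="abstract photos")  # one of the bundled StyleGAN2 weight sets
  dream.hallucinate(file_name="my_track.mp4")       # renders an audio-reactive video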


A GAN for music videos, in the same vein as thispersondoesnotexist.com and thisworddoesnotexist.com. Here's an index of them: https://thisxdoesnotexist.com/


I'm asking if anyone here knows anything about it beyond what's implied in the URL (I'm aware of the other websites with similar URLs). Things like: how was it made, what are the inputs, is it generated on the fly each time you go to the page?


Oh, it's just playing a YouTube video (you can click through to YouTube on the top left; the channel is "ThisChannelDoesNotExist" and the videos are unlisted).


The music portion is always the same it seems, yes?


It seems there are 2 music pieces with some minor variations.

I think maybe the GAN overfitted to these particular songs?

Or it found 2 attractors in the space of all music videos :)


I could totally believe walking into a bar playing this. Who would get the performing rights?



Beautiful, almost every frame is a perfectly plausible and very tasteful abstract painting. But it seems a bit obsessed with cars, not a very meaningful image. Would it be possible to nudge the net to other concepts?


Reminds me of Cuttlefish changing their camouflage.

https://youtu.be/pgDE2DOICuc


It seems great, but these things are not surprising anymore. I hope it will eventually turn into something more concrete than a lava lamp.


The biggest achievement here is managing to avoid a Youtube copyright violation for the two songs that were incorporated.


My first thought was wondering if I could use it as background music in a twitch stream without the vod getting muted...


Like the music it accompanies, it is probably very good.


Watching that with no sound was actually nauseating.


I don't seem to be getting a video?


Well, it does explicitly say it doesn't exist.


Every time I see one of these “X does not exist” articles, I hope that it’s just a blank page.


I get a YouTube embed [0]; it probably has a list of videos and delivers one from it at random.

[0] https://www.youtube.com/watch?v=2tSa701EftM


I got the same exact video. I wonder if that's because it's not random, or it just doesn't have enough videos.


It's picking a random video out of 14:

  ${urls[urls.length * Math.random() | 0]}

With the list loaded as an asset:

  urls = ["Spu3eiOEJ-M", "BE2lZ-Ti1Wc", "lHpcYPfjiLs", "6w2WXRFJpAE", "jPBgu-IO6TI", "g4px8cFR3gc", "whD78YCQXoo", "4iN9738uASY", "2tSa701EftM", "hjinBNYEkb8", "jckJS8RNMbw", "gq5EQtSiJiE", "zhV3ecScgrA", "wVnt_CX0C64"]


Whatever I viewed frequently morphed into car shapes, with an apparent emphasis on frontal and rear portions. A sort of pseudo-intellectual ADHD pr0n vid for kinky vehicle computers, or very peripherally specialized humans.



Cultural extrapolation


It's so organic!


gluten free, too


this looks like a new cicada 3301 riddle?


It does now.


It looks like this has involved uploading tens of thousands of videos to YouTube... (judging by the video names).

All unlisted and without ads... Presumably uploaded with the API...

That's tens of TBs of free storage provided by YouTube there... with no ads... Do you not run a risk of a ban for that?


There are only 14 videos linked to from the website (see my other comment in this thread). The video IDs could be seeds for generation, or some other identifier besides sequential numbers.



