Automatic supercuts on the command line with Videogrep (lav.io)
352 points by saaaam on May 23, 2022 | 56 comments


I've always wanted a markdown-like video editor. I want to chop up my videos and make notes about what is in them.

A text-based document would be so much easier for this than big, clunky Premiere.


Have you seen https://www.descript.com/? It transcribes video and lets you edit the transcript, and those edits are reflected in the video. You can even train it on voices if you have enough content.


No markdown but have you played with Descript?


I liked it at first, but the bugginess and cloud lag killed it for me.


Hmm, it's been pretty stable for me, but I've also only used it a handful of times. Inconsistency is definitely not a desirable trait in an NLE!


In the social sciences this is actually a common use case, so some CAQDAS tools (https://en.wikipedia.org/wiki/Computer-assisted_qualitative_...) support it. See a free one here: https://github.com/ccbogel/QualCoder/wiki/09-Coding-audio-an...



Oh man, this. I want a human-readable document (markdown fits the bill) that resembles this:

    ---
    source: <path_to_video>
    ---

    [<timecode>-<timecode>]
    ```json
    { optional metadata }
    ```
    ...notes...
And it could target different render outputs:

- Various NLE formats

- Mini web app with playback/live annotations (Maybe even with the capability to edit the underlying file)

- Simple webpage with thumbnails and notes

- An actual annotated supercut(!)

I just might build it!
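In fact, a minimal parsing sketch of that document shape in Python, just to convince myself it's doable - the front-matter key, timecode format, and field names are all placeholders I made up:

    import re

    # Hypothetical format: front matter with a `source:` line, then
    # [HH:MM:SS-HH:MM:SS] clip markers, each followed by optional notes.
    CLIP = re.compile(r"^\[(\d\d:\d\d:\d\d)-(\d\d:\d\d:\d\d)\]$")

    def parse(text):
        source, clips, current = None, [], None
        for line in text.splitlines():
            m = CLIP.match(line.strip())
            if line.startswith("source:"):
                source = line.split(":", 1)[1].strip()
            elif m:
                current = {"start": m.group(1), "end": m.group(2), "notes": []}
                clips.append(current)
            elif current and line.strip() not in ("", "---"):
                current["notes"].append(line.strip())
        return {"source": source, "clips": clips}

Each render target (NLE format, web page, supercut) would then just be a different serializer over that intermediate structure.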


I mean, you can always write an EDL. The issue with most filmic material, however, is that moving pictures have their own internal timing, movements, directional changes, and so on. Ignoring the content of shots is something you might get away with for long shots of talking heads, but in my experience literally everything else won't let you.


This would be particularly useful for speeches or presentations where the content is the important part and the visuals don't change much (or wouldn't create jarring cuts when just editing based on the transcript).


Any thoughts on providing an option to make a supercut that produces a desired output?

I.e., `videogrep --input *.mp4 --produce "I am a robot"` would find all the pieces it needs to produce the desired output?


Yes that’s possible, but the results are usually not so great!


Does it depend on the source material, though? I think I'm happy with scattered tone and pitch if it comes out funny.

Take this video of my now ex-prime minister: https://www.youtube.com/watch?v=aHsFtANY5Ro (a little bit of NSFW language)


Yes it definitely depends on the source material… I’ll see about adding it in.


Amazing! Love it so much. My brain was running wild with possibilities (like having an autocomplete from the corpus, live audio-only previews, and the above).

I didn't realise there was a GitHub repo! You should add a link to your tutorial:

https://github.com/antiboredom/videogrep

If I have the time, I'll try dipping my toes into the code.


I guess there would need to be an intermediate step. Videogrep helps to surface useful n-grams, and there would still be a manual/creative step to stitch them together in a way that works.


This is awesome! I've considered building something nearly identical over the years, as I've definitely used VTT files to aid in searching for content to edit, but I never did, because getting all the FFmpeg stuff to work made my head hurt. I'm so glad someone else has done the hard work for me and that it's been documented so well!

Love this.


Thank you! And it's using moviepy to make the cuts (which, technically speaking, is the actual hard part).


Oh awesome! Very, very cool!


If anyone else decides to give this a try on video files with multiple audio tracks, there doesn't seem to be an easy way to tell it to select a certain track.

I got it working by manually adding `-map 0:2` (`2` being the track id I'm interested in) when calling ffmpeg.

You'll have to make that edit in both `videogrep/transcribe.py` as well as `moviepy/audio/io/readers.py`.

And I'm not sure how easy adding real support for that would be, considering that moviepy doesn't currently support it (https://github.com/Zulko/moviepy/issues/1654).
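If you'd rather not patch either library, a pre-processing workaround is to extract the track you want into its own file first and transcribe against that - a sketch (the filenames and track id here are just examples):

    import subprocess

    # Pull audio stream #2 out of the source and downmix it to 16 kHz
    # mono WAV, the format vosk is happiest with.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-map", "0:2",            # select the audio track of interest
        "-ac", "1", "-ar", "16000",
        "input_track2.wav",
    ], check=True)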


The short clip showing the results of searching for 'ing' words caught me so off guard. I have the humor of a 12-year-old.


Even with your warning, I have to admit I giggled.


Exciting!

Back in 2011-12, my MFA (poetry) thesis project was a sort of poetic ~conversation between myself and (selected) poems generated by a program I wrote, using transcripts of Glenn Beck's TV show.

I really, really wanted to be able to generate video ~performances of the generated poem in each pair for my thesis reading (and for evolving the project beyond the thesis). I have to imagine videogrep could support that in some form, at least if I had the footage. (Not that I want to reheat that particular project at this point.)

Great work.


Amazing - would love to see that!


This is very cool! I wonder if Videogrep works better with videos sourced from YouTube (consistent formats, framerates, bitrates) compared to arbitrary sources.

I've used ffmpeg before to chop up video bits and merge them, with mixed results: it'd struggle to cut at exact frames, or the audio would go out of sync, or the frame rate would get messed up.

I gave up and decided to tackle the problem on the playback side. Just as players respect subtitle srt/vtt files, I wish there were a "jumplist" format (like a playlist, but intra-file) that you could place alongside video/audio files; players would then automatically play the media according to the markers in the file, managing any prebuffering etc. for smooth playback.

For a client project, I did this with the Exoplayer lib on Android, which kinda already has "evented" playback support where you can queue events on the playback timeline. A "jumplist" file is a simple .jls CSV file with the same filename as the video file.

Each line contains: <start-time>,<end-time>,<extra-features>

"extra-features" could be playback speed, pan, zoom, whatever.

Code parses the file and queues events on the playback timeline (on tick 0, jump to the first <start-time>; at each <end-time>, go to the next <start-time>).

I set it up to buffer the whole file aggressively, but that could be improved. The downside is that more data may be downloaded than is played; the upside is that multiple people can author their own jumplist files without a time-consuming re-encode of the media.
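For the curious, the parsing side is tiny - a sketch, assuming times in seconds and with `seek`/`at` as hypothetical placeholders for the player's actual event hooks:

    import csv

    def load_jumplist(path):
        """Parse a .jls file into (start, end, features) segments."""
        with open(path, newline="") as f:
            return [(float(r[0]), float(r[1]), r[2:]) for r in csv.reader(f)]

    def schedule(player, segments):
        # On tick 0, jump to the first start; at each end, jump to the
        # next segment's start.
        player.seek(segments[0][0])
        for (start, end, feats), nxt in zip(segments, segments[1:]):
            player.at(end, lambda s=nxt[0]: player.seek(s))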


This resounds with notions of Object-Oriented Ontology[1] in cinema[2][3], which is very much about picking out and possibly stitching together key items from film.

> "All of the elements of a shot’s mise en scène, all of the non-relational objects within the film frame, are figures of a sort. The figure is the likeness of a material object, whether that likeness is by-design or purely accidental. A shot is a cluster of cinematic figures, an entanglement. Actors and props are by no means the only kinds of cinematic figures—the space that they occupy and navigate is itself a figure"

And the words they say, as seen here.

[1] https://en.wikipedia.org/wiki/Object-oriented_ontology

[2] https://larvalsubjects.wordpress.com/2010/05/03/cinema-and-o...

[3] https://sullivandaniel.wordpress.com/2010/05/02/film-theory-...


Hi Sam, I'm a big fan of your work! Coincidentally, I just made a simple POC video editor that works by editing text, using this speech-to-text model: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-sel.... It might be cool to integrate into your Videogrep tool; it also works offline on CPU or GPU, and gives you word- or character-level timestamps.

https://twitter.com/radamar/status/1528660661097467904
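For anyone who wants to try it, word-level timestamps fall out of that model via the transformers library - a rough sketch of the usual recipe (double-check the current docs; the audio file name is an example):

    import torch, librosa
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    name = "facebook/wav2vec2-large-960h-lv60-self"
    processor = Wav2Vec2Processor.from_pretrained(name)
    model = Wav2Vec2ForCTC.from_pretrained(name)

    audio, _ = librosa.load("clip.wav", sr=16000)
    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits

    ids = torch.argmax(logits, dim=-1)
    out = processor.batch_decode(ids, output_word_offsets=True)
    # CTC offsets are in model frames; convert them to seconds.
    stride = model.config.inputs_to_logits_ratio / 16000
    for w in out.word_offsets[0]:
        print(w["word"], w["start_offset"] * stride, w["end_offset"] * stride)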


Thank you! I will definitely take a look at that - looks great.


Very nice project!

Would be cool if it didn't need a .srt file, but could instead scan the audio for a search phrase.

Edit: Never mind, I see that you can create transcriptions using vosk!


You can search inside videos by phrase on muse.ai -- see examples for TED talks (https://muse.ai/demo-embed-search-ted) or YC Startup School (https://muse.ai/demo-embed-search-uni).


Yes, and vosk is really amazing…


See the section on Transcribing.
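For reference, vosk's word-level output is easy to pull out directly - a rough sketch (the model and filename are examples; the audio should be 16 kHz mono WAV):

    import json, wave
    from vosk import Model, KaldiRecognizer

    wf = wave.open("audio.wav", "rb")
    rec = KaldiRecognizer(Model(lang="en-us"), wf.getframerate())
    rec.SetWords(True)  # include per-word start/end timestamps

    words = []
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if rec.AcceptWaveform(data):
            words += json.loads(rec.Result()).get("result", [])
    words += json.loads(rec.FinalResult()).get("result", [])

    for w in words:
        print(w["word"], w["start"], w["end"])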


This needs a function where you can give it a string, and it finds the longest matches from the database and then builds a video that says what the string says.

Also, it would be fun if it output a kdenlive project file, so you could easily tweak the boundaries or clip order.
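A sketch of the greedy longest-match idea, assuming you already have a word-level index in hand (the index shape and all names here are made up for illustration):

    def build_plan(target, index):
        """Greedily cover `target` with the longest phrases in `index`,
        where `index` maps word tuples to (file, start, end) clips."""
        words, plan, i = target.lower().split(), [], 0
        while i < len(words):
            # Try the longest remaining span first, then shrink.
            for j in range(len(words), i, -1):
                clip = index.get(tuple(words[i:j]))
                if clip:
                    plan.append(clip)
                    i = j
                    break
            else:
                raise ValueError(f"no clip found for {words[i]!r}")
        return plan

The resulting clip list could then be serialized to a kdenlive project (or any other NLE format) instead of being rendered directly.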


Enjoying the serendipity of finding the right tool at the right time: https://twitter.com/xn/status/1528845032438083584


Videogrep + YouTube subtitle corpus = chef kiss emoji


Great project! Since it relies heavily on subtitle files, and as an alternative to generating your own: which websites would you recommend for finding subtitles for videos that are not on YouTube, i.e. movies and series? Preferably ones with rating systems similar to guitar-tab websites - I can envisage a musical similarity in the variance and quality of user-submitted content (timing, volume, tone, punctuation, expression, improvisation, etc.), since I doubt many are composed from the actual scripts. I have never used vosk, so I'm also wondering whether it would be quicker and more reliable than filtering and spot-checking, say, a few subtitle files per video.


I just started playing around with the transcription part after seeing this blog post. Consider giving it a try.

I'm not sure how well most subtitle sources will work with this. I don't think they'll generally embed the word timings needed for picking out fragments (just line timings). The blog post mentions this being the case for `.srt` specifically. Not 100% sure - someone with a better understanding of the subtitle formats would be able to correct me.

FWIW, I'm finding the video transcription works quite well (and I even decided to use Japanese-speaking media because I wanted to see how well vosk handles it).

It might be my system, but the transcription is unfortunately a bit slow/single-threaded. I quickly added GNU `parallel` in front of the transcription step to speed up processing an entire season.
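Roughly what that looks like expressed in Python rather than a shell one-liner, assuming videogrep's `--transcribe` flag and one process per file (the paths are examples):

    import glob, subprocess
    from multiprocessing import Pool

    def transcribe(path):
        # One process per file, since each transcription run is
        # effectively single-threaded.
        subprocess.run(["videogrep", "--input", path, "--transcribe"],
                       check=True)

    if __name__ == "__main__":
        with Pool(4) as pool:
            pool.map(transcribe, sorted(glob.glob("season1/*.mkv")))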


Thank you for the info.

I hope the subtitle website I'm searching for will provide multiple formats, and I understand a lot more effort would be required to produce the .vtt with word fragments. Running a diff on the vosk text against the subtitle file text might help iron out ambiguities.

I will at some point try vosk (with parallel!)


Super niche, but this would be great for building a comprehensive clip archive of the genovaverse.

Search by text, generate videos of frequent phrases and other meme-worthy sayings from the Sith Lord.

What do you think about that, DALE?


Are there any additional annotations that would be available - identifying objects in the scene, sentiment or tone of speech, etc.?


Between this and VideoMentions, I'm not sure which I love more!

Great work, can't wait to try them!



WTF is a supercut? ...OK, apparently it means cutting a number of parts containing a given spoken text out of the source video and joining them together again. Still not sure why you would call that a supercut.



Let me enhance the wiki definition:

https://www.youtube.com/watch?v=LhF_56SxrGk


I'm not sure why you're being downvoted. It's not a term I'm familiar with either. Even just a link to the Wikipedia article would have improved the post immensely.



Disapproval registered!


Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut Cut.


This is a useful tool for OSINT.


Is there a tool to generate subtitles locally?


The Donald Trump video made me laugh out loud for real, like we used to do in the '90s.


Has Zuckerberg deliberately had work/make-up done to look like his own avatar might in some sort of 'metaverse' world? I can't be alone in thinking a lot of those clips look more like gameplay footage than photography.


The YouTube video linked in that post (https://www.youtube.com/watch?v=nGHbOckpifw) is probably the most hilarious thing I'll see this week. Thank you for sharing that.

Also, is Zuckerberg for real? Half of those snippets look like an NPC from a game. ¯\_(ツ)_/¯


It's certainly an interesting physical experience in the world. In the future I will watch it with some company.


If you don't have time, just click on the Zuckerberg video on the homepage and experience something :-)



