A hearty +1 to using WinForms (as long as your desktop app targets Windows only, of course). The fact that WinForms has outlasted so many other UI toolkits is a testament to its simplicity and ease of use. I still use it in numerous personal projects, and I don't foresee Microsoft dropping support for it anytime in the near future.
My biggest gripe with these AI film enhancements is that they are adding information that was never there. You're no longer watching the original film. You no longer have a sense of how contemporary film equipment worked, what its limitations were, how the director dealt with those limitations, etc.
I don't think that's universally true of all AI enhancement though. Information that is "missing" in one frame might be pulled in from a nearby frame. As others have pointed out, we are in the infancy of video enhancement and the future is not fundamentally limited.
If that takes away from the artistic nature of the film I understand the complaint, but I look forward to seeing this technology applied where the original reel has been damaged. In those cases we are already missing out on what the director intended.
In part, we need more vocabulary to distinguish different techniques. Everyone just says "AI" right now, which could mean many different things.
Standard terminology would help us discuss what methods are acceptable for what purposes and what goes too far. And it has to be terminology that the public can understand, so they can make informed decisions.
I also think we already have a bunch of old words for the techniques, like "upscaling" or "statistics". It's "AI" everything now but the old words for the old techniques are waiting to be used again.
If there is a movie which was only shot in 1080p and I have a 4k TV, it seems like there are three options. One, watch it in the original 1080p with 3/4 of the screen as black border. Two, stretch the image, making it blurry. Three, upscale the image. If you give me the choice, I’m choosing 3 every time.
Sorry if it sounds crass, but I feel the process of shooting the movie is less important than the story it is trying to tell.
Upscaling algorithms vary from the extremely basic to the ML models we see today that straight up replace or add new details. Some of the more naive algorithms do indeed just look blurry.
Most people don't care. Photographers had a great time pointing out that Samsung literally replaced the moon with AI, but some Samsung S21 Ultra users were busy bragging about how great “their” moon pictures turned out. Let's judge AI enhancements like sound design: noticeably good if done well, unnoticeable if done satisfactorily, and noticeably distracting if done poorly. The article shows a case of noticeably distracting, so they're better off with the original version.
It's a fundamentally different concept of photography though, one that becomes more similar to a painting or collage than to a captured frame of light. Whatever the merits of one over the other for the purposes of storytelling, it's a bit worrisome when the distinction is lost on people altogether.
I get how a film buff might care, and agree the original version should be available, but isn’t there space for people who just want to see the story but experience it with modern levels of image quality? The technical details of the technology of a given era are definitely interesting to some people, but if I were, say, the writer or someone else associated with the creative rather than technical aspects of a film, I might find that those technical limitations make the story less accessible to audiences used to more modern technology and quality.
What does "modern levels of image quality" mean in this context?
The article is about AI upscaling "True Lies", which was shot on 35mm film. 35mm provides a very high level of detail -- about equivalent in resolution to a 4k digital picture. We're not talking about getting an old VHS tape to look decent on your TV here.
The differences in quality between 35mm film and 4k digital are really more qualitative than quantitative: things like dynamic range and film grain. But lighting and dynamic range are just as much directorial choices as script, story, or any other aspect of a film. It's a visual medium, after all.
Is the goal to have all old movies have the same, flatly lit streaming "content" look that's so ubiquitous today?
I think the argument against "isn’t there space for people who just want to see the story but experience it with modern levels of image quality" is that such a space is ahistorical -- it's a space for someone who doesn't want to engage with the fact that things were different in the (not even very distant) past, and (at the risk of sounding a bit pretentious) it breeds an intellectually lazy and small-minded culture.
The problem with that is the content is usually shot with a certain definition in mind. If you don't re-film certain scenes from scratch, they can end up looking weird in higher definition, simply because certain tricks rely on low definition/poor quality, or because you get a mismatch between old VFX and new resolution, for example.
It's a widespread issue with the emulation of old games that have been made for really low resolution/different ratio screens and slow hardware, especially early 3D/2D combinations like Final Fantasy, and those that relied on janky analog video outputs to draw their effects.
For a specific simple example: multiple Star Trek TV series were shot with the assumption that SDTV resolution would hide all the rough edges of props and fake displays. Watch them in (non-remastered) HD and suddenly it's very obvious how much of the set is painted plywood and cardboard.
One somewhat funny example of this is in the first ST:TNG episode "Encounter at Farpoint". In one shot, the captain asks Data a question, and the camera turns to him to show him standing from his seat at the conn and answering. At the bottom of the screen, it's plainly visible (in the new Blu-Ray version) that a patch of extra carpet is under the edge of the seat. It was probably put there to level the seat or something. At the time, this was ignored, because on a standard SDTV screen, the edges are all rounded, so the very edge of the frame isn't normally visible.
Another thing that's plainly obvious in TNG's remastered version is all the black cardboard placed over the display screens in the back of the bridge, to block glare from lights. In SDTV, this wasn't noticeable because the quality was so bad.
Actually I would expect AI upscaling of SDTV in this case to perform better. It would semantically assume the props were real and extrapolate them as such.
For anything that's not just "grab a camera and shoot the movie" the format that it is shot in is absolutely taken into account. I don't think you can separate the story from how the image is captured.
'Film buff' responses are common to every major change in technology and society. People highly invested in the old way have an understandably conservative reaction - wait! slow down! what happens to all these old values?! They look for and find flaws, confirming their fears (a confirmation bias) and supporting their argument to slow down.
They are right that some values will be lost; hopefully much more will be gained. The existence of flaws in beta / first-generation applications doesn't correlate with future success.
Also, they unknowingly mislead by reasoning with what is also an old sales disinformation technique: list the positive values of Option A and compare them to Option B; B, being a different product, inevitably differs from A's design and strengths and loses the comparison. The comparison misleads us because it omits B's concept and the strengths that are superior to A's; with a new technology, those strengths aren't even all known - in this case, we can see B's far superior resolution and cleaner image. We also don't know what creative, artistic uses people will come up with - for example, maybe it can be used to blend two very different kinds of films together.
These things happen with political and social issues too. It's just another way of saying the second step in what every innovator experiences: 'first they laugh at you, then they tell you it violates the orthodoxy, then they say they knew it all along'.
I draw the line at edits that consider semiotic meaning. Edits are acceptable if they apply globally (e.g. color correction to compensate for faded negatives), or if they apply locally based on purely geometric considerations (e.g. sharpening based on edge detection), but not if they try to decide what some aspect of the image signifies (e.g. red eye removal, which requires guessing which pixels are supposed to represent an eye). AI makes no distinction between geometric and semiotic meaning, so AI edits are never acceptable.
Easy counterexample: dumb unsharp masking will ruin close-up scenes that are shot for softness and/or have bokeh. ML upscalers can do this too when applied mindlessly. But you can also train an upscaler on the same type of footage, or even on the parts of the same footage available in higher definition. Even if you don't, matching the upscaler with the intent behind the content is your job.
The separation you're talking about is imaginary, the line doesn't exist. Any tool will affect the original meaning if it doesn't match the execution. Remastering is an art regardless of the tool, and it's always an interpretation of the original work. It's fine to like or dislike this interpretation.
Remastering can screw up intent with something as simple as color grading.
But there is a line here. An editor that's using simple tools knows exactly what they're changing, and if they're using simple frame-global tools then they're not introducing anything that wasn't already there.
If you throw an AI at things, it will try to guess what things in the image are, and make detail adjustments based on that.
So that's three categories of edit, easily distinguished: human making frame-global changes, human deliberately changing/adding details, AI changing/adding details in a way that's basically impossible to fully supervise.
It sounds like they accept category 1 in remastering, even though it's not foolproof, and reject 2 and 3.
No, "AI" is absolutely not uncontrollable magic that does something you don't want sometimes. It's not an issue really, you always have arbitrarily granular control of the end result, with ML tools or not. You can train them properly, you can control the application, you can fix the result, you can do anything with it. It's the usual VFX process, and it's not the only tool at your disposal.
The problem is that remasters don't make a lot of money, so instead of a properly controlled faithful representation (or a good rethinking) it's typically a half-assed job with a couple of filters run over the entire piece. Another issue is that you now have two possibly conflicting intents - one from the original and another from the remaster. ML hasn't changed anything here; it's always been like that.
Sure, my point is that proper remastering is not just applying a couple of ML filters. If you're doing that, you should either do it selectively or fix the result by other means, i.e. the same thing you would do with dumb processing. That is labor-intensive VFX work, feasible for a new movie but not for a remaster.
Yes, back in the mid-late 80s Turner Entertainment colorized a huge number of old films in their vaults to show on cable movie channels. It was almost universally panned. It was seen at first as a way to give mediocre old films with known stars a brief revival, but then Turner started to colorize classic, multi-award-winning films like The Asphalt Jungle and the whole idea was dismissed as a meretricious money-grab.
Any art and/or media production executed well enough to be culturally significant rests on an enormous depth of artistic and technical choices that most audiences have zero awareness of—and yet, if you took them all away, you would have nothing left. Every change takes you further from the original artist's vision, and if all you want to do is Consume Slop then that's fine I guess, but the stewards of these works should aim higher.
Well, there are movies which were technically well executed with poor stories, and great stories with poor execution. And there are movies which did well in both areas.
For example Tenet. Cool story, poor audio mix. (I don’t buy the explanation that Nolan had any reason other than expediency for this.) If we use “AI” to fix the audio after the fact, that’s a win in my book.
I’m not a film buff or a purist though. I watch movies with subtitles which is certainly not what the director had in mind, but that’s ok.
I agree, most people watch the movie for the story that unfolds. Few are looking at things like the framing of the subject, the pull of focus, or subtle lighting differences between scenes; they are interested in the story, not the art of filmmaking. The people offended by this are the ones crying about the art being taken out of it.
The film grain will have no effect if it's not visible due to image/stream compression, such as when the viewer sees the film on a video streaming service. HDR won't show up for most viewers. Details you need more than 1080p to see won't show for many (most?) viewers ... so I'd dispute your "will have an effect" here.
Good storytelling (and probably blunt spectacle) is the only thing common to all viewers that can win them over. For mainstream media everything else is gloss that may have no effect.
Most people don't even have their sound/brightness/contrast well-adjusted. Some free-to-view services regularly air content at the wrong aspect ratio (and I've seen people happily sit watching the wrong ratio, seemingly oblivious to it).
Yes, media nuances can have an effect on the unwitting, but I suspect much of it doesn't even get the opportunity to.
> The film grain will have no effect if it's not visible due to image/stream compression, such as when the viewer sees the film on a video streaming service. HDR won't show up for most viewers. Details you need more than 1080p to see won't show for many (most?) viewers ... so I'd dispute your "will have an effect" here.
You're going too low level, I'm thinking of lighting and colour and intentional blur via adjusting focus.
> Good storytelling (and probably blunt spectacle) is the only thing common to all viewers that can win them over. For mainstream media everything else is gloss that may have no effect.
You really need to reverse spectacle and storytelling in this statement. How else can the box office be dominated by superhero movies that personally I ... just ... can't ... tell ... apart?
> The originals still exist and you’re free to watch those instead.
This is far from certain, unless "you" are willing to engage in piracy. It's often difficult or impossible to legitimately buy (or even rent) the original, unadulterated versions of older films.
Very cool prototype, and a very interesting idea to sort nearby Wikipedia articles by number of pageviews.
Just a note that the Wikipedia app for iOS does in fact have a map with nearby articles, and we will soon bring the feature (back) to the Android version.
Oh yes, sorry. This is a form-over-function detail that I added to the website to mimic the app.
I did not, however, invest the time to make it expandable. I figured that people don't use the scroll bar that much anyway, and for quick navigation there is the table of contents on the left.
You can't (unless there's a trick I don't know) change the width of the scrollbar when hovering over it. However... you can keep the width constant and have the apparent-width of the scroll thumb be determined by transparent borders.
This would give the appearance of your current 2px scrollbar, but it'd be usable, and would visually expand out to show its grabbable area on hover:
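Something like this, for example - just a rough sketch assuming a WebKit/Blink browser (it uses the ::-webkit-scrollbar pseudo-elements), and the 10px/4px widths and the grey colour are placeholder values rather than your site's actual ones:

    /* Keep the scrollbar's hit area a constant 10px wide. */
    ::-webkit-scrollbar {
      width: 10px;
    }

    /* Transparent borders shrink the painted thumb down to ~2px;
       background-clip stops the background being drawn under the border. */
    ::-webkit-scrollbar-thumb {
      background-color: #999;
      border: 4px solid transparent;
      background-clip: content-box;
    }

    /* On hover, narrow the border so the thumb looks wider,
       while the grabbable area never changes. */
    ::-webkit-scrollbar-thumb:hover {
      border-width: 2px;
    }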
(The key to it is the background-clip property, that lets you use the border to control where the background is drawn.)
You could also do exactly this but without the :hover state, and it'd effectively just increase the grabbable area of the thumb without any visual change to your current style. I like changing the visible width as a form of feedback though. :D
There is good, scientific evidence that skin contact with bare soil (walking barefoot in natural surroundings, gardening without gloves) makes humans happier, and the mechanism is this: there exists a species of bacteria found everywhere on Earth (even Antarctica, apparently) that exudes a compound which, on contact with the skin, triggers the production of endorphins, making us feel happier.
I'm skeptical of the electrical-conductivity related claims in the article, but contact with the Earth being good for us is at least explained/explicable.
I approach it not from a perspective of "taste" per se, but from a perspective of purity and truth. All of these techniques (AI upscaling, colorizing, interpolation, etc) are adding information that was never there, which means you're no longer watching the genuine media.
By all means take the original film and rescan it in the highest possible resolution, to capture all the information that was in the film, but to add information that wasn't there to begin with is going too far.
It's like all the modern TVs that have the horrendous "motion smoothing" feature, which is enabled by default, and makes movies look like soap operas. When the TV performs frame interpolation, it's adding information that was never there, and wasn't intended by the director.
100%, this is the fundamental problem with all these techniques. We don't treat our other, more established, historic media/artefacts with the same contempt. If an ancient text fails to mention the colour of the emperor's robes we do not "interpolate" the text with our own ideas of what it should be in our new editions. This would be universally recognised as a textual corruption, yet when photo "colourisers" do the same to colourless photographs they think they do the world a service.

Why is historic media interesting and valuable in the first place? Fundamentally it is because a historic artefact conveys information from that time period. This is why none of these hyper-restored versions interest me; they are far too corrupted by false or spurious information. Time corrupts them enough already, we don't need to add more. This is the modern version of historical embellishment/exaggeration, contorting it into a more palatable or appealing semi-truth instead of telling the uglier actual truth. The motivations need not be nefarious; they could be entirely commercial like it is today.
They are more tools of embellishment than tools of restoration. A good restoration actually "restores" information. If an old manuscript is missing a page or contains an error, we replace it with the same portion from another manuscript. Likewise if there is a scratch on this frame we should use the same frame from another print to patch it over. We should try to develop our tools to do more of that sort of work. A motion picture produced in the 1990s does not need to look like one produced in the 2020s, in the same way a Rembrandt does not need to look like it was painted in Photoshop.
Accessibility would be too accessible. For extra fun, make the scroll bars reversed and invert the axes. After all, it's not like there are disabled people using assistive devices who need simpler, properly designed websites. Also, it's not GDPR compliant and demands acceptance of cookies.
> I stay active on a Slack community of devs and creators in my country, as well as go to meet-ups and events in interesting communities
How does one find these supposed Slack groups and meetups? I live in a major metropolitan area, but the meetups I've been able to find have been underwhelming.
Check out if a university near you has a startup incubator / entrepreneurship program. They often have events, probably socialized most on their Instagram account. From expert talks, to hackathons - go to the next event. While there, tell the organizer what type of community you’re looking for, and ask if there is a group chat.
For example, I’m in a city where there are at least 2 or 3 very active WhatsApp groups - a mix of tech devs and creative entrepreneurs (one of those groups has 800+ members). Every day, 2-3 events get posted: talks, seminars, workshops, code jams, pitch workshops, digital marketing how-tos.
Idea: Co-working spaces often have a formal or informal group chat (often WhatsApp). You could take a tour and ask around.
Business incubators often have a Slack workspace (with channels like #dev, #design, #3d-printing). They often also have a networking site where you can book free office hours with vetted experts / founders / engineers / executives / angel investors. The one I participate in has a Slack group containing 2,800 creative entrepreneurs, founders, investors, etc. At the co-working space, they host events, social mixers, etc. You’ll always find some interesting people there. Find the group chat, get on it. Eventually you’ll get “dragged” into an interesting social group of startup / entrepreneurship / growth-minded people. I jokingly say dragged, but what I mean is: if you go to an interesting talk on CNC routers, you’ll likely meet someone interesting there, they’ll tell you about a cool upcoming event, you book it - and over time, you’re just spending more time per week with interesting people.
I know you just asked about Slack groups and meetups, but I wanted to share some of what I’ve learned within 1 year of moving to a new city.
Joining and starting a meetup are two completely different things. Joining is low effort: you just need to rock up. Starting one now needs venues, advertising (as in posting the event and updates), and other work-like things.