I've been camera scanning 4x5 and I'm happy with the results. Take two offset photos and stitch them in post. Mind you, I scan with pixel shift for higher res.
As someone who has a mirrorless scanning setup for my film, and has pondered getting a dedicated scanner... the price of this is quite steep given how inflexible a tool it is.
A second-hand DSLR setup is going to be roughly the same price or less. I'm also not sure what kind of workflow improvements it actually offers. If you want fancy and experimental, Filmomat has arguably a more interesting but pricier offering.
But naysaying aside, I hope they manage to find a niche that allows them to survive as a company, and keep the analog photography revival alive.
I bought some time on a hasselblad medium format scanner (took fucking ages)
The results are good, as you'd expect. However, can I tell the difference between that and putting the negatives on a decent softbox and taking a picture with a fancy camera? Yes, but not by much.
I think the main issue is film registration, that is getting the film to be flat and "co-planar" to the lens so the whole frame is sharp.
My negatives are slightly warped, so they really need a frame to make sure they are perfectly flat. But for instagram, they are close enough.
However, scanning more than a few pictures is a massive pain in the arse. If I was scanning film regularly, then this is what I'd want, and it's cheaper than the competition.
Assuming it's actually any good, that is - I haven't seen any scans yet.
It'd be nice if they were able to adapt the Hasselblad/Imacon "virtual drum" concept and curve the film underneath the sensor for side-to-side flatness. I wonder if that's feasible with a 2D sensor.
That's a good question. I wonder if the "virtual drum" was there to get over film-holding issues (as in it physically bends the film) or because it's a line scanner.
> That's a good question. I wonder if the "virtual drum" was there to get over film-holding issues (as in it physically bends the film) or because it's a line scanner.
It's not - the issue that still remains is keeping the film flat, and this is especially problematic with smaller formats. With current solutions you can get the resolution but not the flatness, or you sacrifice something to get the flatness (e.g. ANR glass holders). It's the old glass vs glassless carrier debate, applied to a modern workflow.
I repeat myself: focus, DPI/resolution, dynamic range - these are the solved problems. In fact, modern medium format digital cameras are superior on all these factors. Keeping the film flat, however? Only drum scans and the Imacon "Flextight" solution do this well.
Of course, it depends on what you plan to do with the scans and for 99% of people the solution in the link above is more than good enough.
I've written about this previously https://leejo.github.io/tags/scanning/ # I'm going to add the fourth, and hopefully final, part in a couple of months time.
I almost missed the price. Wow, you're right, that's a lot! And the final retail price is 1599 euros. I have a good Plustek that cost me $300 or $400. Automated transport and unantiquated software sound nice, but those features are not worth an extra $600-1,100 to me.
Am I missing something or is this supposed to be in another tier of image quality?
Honestly I feel like anything beyond 5 megapixels per frame is pushing beyond reasonable expectations with 35mm. This is certainly the case with any kind of available-light or high-speed work in the silver-halide process, the area where I figure most people are going to be using this device. Lab work in C41 and E6 is definitely possible at home but must account for a single-digit percentage of the home analogue market.
A 4000 DPI scan of 135 gives you 21 megapixels. So 36MP with a good lens will easily resolve just as much detail. There is not 60-70MP of information in a 4000 DPI scan, period.
For most films, anything beyond 4000 DPI is just going to help resolve the grain particles or dye cloud shapes. You have to be shooting slow fine grained BW with the best lenses to need more.
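The arithmetic behind those figures is easy to check. A quick sketch (assuming the standard 36x24mm 135 frame and 25.4mm per inch):

```scala
// Megapixels in a film scan at a given DPI, for a 36x24mm 135 frame.
def scanMegapixels(dpi: Double, widthMm: Double = 36.0, heightMm: Double = 24.0): Double = {
  val pxWide = widthMm / 25.4 * dpi   // mm -> inches -> pixels
  val pxHigh = heightMm / 25.4 * dpi
  pxWide * pxHigh / 1e6
}

scanMegapixels(4000) // ~21.4 MP, the "21 megapixels" figure above
scanMegapixels(8000) // ~85.7 MP - mostly resolving grain shapes, not image detail
```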
"…beyond 5 megapixels per frame is pushing beyond reasonable expectations with 35mm."
Well, as I mentioned elsewhere, old-fashioned Kodachrome resolves ~100 lines/mm, some newer color emulsions are considerably higher, and of course B&W ones have even higher resolutions.
Given that a 35mm frame is 36x24mm, even Kodachrome achieves 8.64 megapixels. OK, let's allow for an overgenerous Kell factor of, say, 0.8; this figure will drop to ~6.9 megapixels. Given the ready availability of emulsions with higher resolutions, especially the best B&W ones, a figure well in excess of 5 megapixels is realizable in practice.
Of course, that doesn't take into account the image chain as a whole, lenses, displays, compression, etc. which would reduce the effective resolution. That said, these days the typical image chain can easily achieve much higher pixel throughput than 5 megapixels before bandwidth limiting so the effective Kell derating factor could easily be kept quite small.
I think I see what you mean. It’s the difference between having an image showing the shape and texture of each film grain, and an image which looks like what I saw in the camera and which isn’t going to be any sharper. The former has value but the latter was always good enough for me and, surprisingly, rather low in resolution compared to subsequent DSLRs and mirrorless cameras I bought in the 2010s.
Ilford Delta 400 pushed two stops to 1600 ASA in a 1970s Asahi Pentax SP1000 was always going to produce… artistic results, requiring as much imagination as acuity to appreciate the subject. (Read: see past the blur.)
I wondered what the price was. 1599 seems pretty decent. I was expecting about 4k. This is about the price of a venerable Nikon 5000. Some of the setups use film mounts that cost as much as this whole unit.
> A second hand DSLR setup is going to be roughly the same price or less.
And if you get one with Pixel Shift, you can get way higher resolutions than the 22MP they're offering (e.g. my cheapo Olympus gets 40MP JPEG or 64MP RAW from a 16MP sensor.)
You’re for sure exceeding the linear resolving power of 35mm film at 40MP or 64MP.
However, a Bayer-filtered sensor has lower color resolution, since each pixel only sees one color. So the pixel shift really helps quite a bit here since the sensor (and Bayer array) are shifting relative to the film multiple times per exposure.
High-quality film scanners maintain color resolution by using linear sensors without Bayer filtering. But they’re slow and expensive.
All the current Nikon Z bodies (and probably other brands too) have different levels of pixel shift where it'll take 4 or 8 images and basically cancel out that it's a Bayer sensor. The Bayer array is a 4-pixel pattern, so it moves one pixel to the right, then one down, and then one back to capture all 3 channels for each individual pixel. For things like film scanning it works flawlessly; I use it all the time.
Then it’ll do a 16 or 32 shot stack in order to do the same thing but with more resolution.
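As a sanity check on why four one-pixel shifts are enough: on an RGGB mosaic (layout assumed here for illustration), stepping right, down, and back means every photosite position gets exposed through all three filter colors. A toy sketch:

```scala
// RGGB Bayer tile: filter color at each (x mod 2, y mod 2) position.
val bayer = Map((0, 0) -> 'R', (1, 0) -> 'G', (0, 1) -> 'G', (1, 1) -> 'B')

// The 4-shot pattern: start, one pixel right, one down, one back left.
val shifts = Seq((0, 0), (1, 0), (1, 1), (0, 1))

// Filter colors a given pixel position sees across the four shifted exposures.
def colorsSeen(x: Int, y: Int): Set[Char] =
  shifts.map { case (dx, dy) => bayer(((x + dx) % 2, (y + dy) % 2)) }.toSet
```

Every position ends up sampled through R, G, and B (G twice), which is why the 4-shot mode recovers full color resolution without demosaicing interpolation.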
The scan is the least of the problems - good luck getting to that level of detail with mostly vintage lenses, balancing depth of field and diffraction, keeping the film perfectly flat, on a stable enough tripod with no vibration whatsoever; developing perfectly in the dedicated developer. Yes, it's impressive but no, it's not relevant to the average user or hobbyist.
Filmomat looks fun. Many money. Love the hipster flex with the Weber HG-1 in background of the demonstration video. I do own an Intrepid enlarger (sort of experimental?), and I used to live near Ars Imago in Zurich who sell a "lab in a box", similar to Filmomat's Light system. The independent dev scene is pretty great, though none of it is particularly cheap and is rarely open source, which is disappointing.
> I'm also not sure what kind of workflow improvements it actually offers.
The obvious one is auto-feeding and portability, but without using it, who knows. It doesn't offer IR, but even Filmomat's system needs a modified camera. You get that with most flatbed and Plustek-style scanners. I have a V850 Pro which wasn't cheap either, but it'll do a full roll in one go and I can walk away. Even if I shot a roll a day it would be more than fast enough. It has occasional focus issues, and you need to be scrupulous about dusting, but it works well enough. I've never been a huge fan of the setup required for copy-stand scanning and it's tricky getting the negatives perfectly flat in frame. The good carriers are also not cheap; look at Negative Supply for example.
Frankly it also looks great, like the Filmomat. I think some of the appeal is a chunk of modern looking hardware and also the hope that it's maintained? My Epson works well, but I ended up paying for VueScan because the OEM software is temperamental.
I don't know if it's still true in the recent versions of Scala (stopped caring in 2018) but it used to have implicit parameters designed specifically for passing context like this.
A notable example was passing around an implicit ExecutionContext for thread pools, e.g. in Akka :)
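For anyone who hasn't seen the pattern, it looked roughly like this (Scala 2 syntax; Scala 3 reworked it into `given`/`using`):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// The context is declared once as an implicit parameter...
def fetchUser(id: Int)(implicit ec: ExecutionContext): Future[String] =
  Future(s"user-$id")

// ...and the caller brings one implicit value into scope instead of
// threading the ExecutionContext through every call site explicitly.
implicit val ec: ExecutionContext = ExecutionContext.global

val user = Await.result(fetchUser(42), 5.seconds) // "user-42"
```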
I tried that with my MEPs on the Copyright Directive. Nearly all of them replied with parroted talking points from the EU Commission, as opposed to any kind of understanding of the issues I raised.
At the end of the day, EU Parliament representation is diluted and indirect. Unlike the democratic systems of most nations, elected EU parliamentarians cannot originate any new law. Only appointed (unelected) individuals within the EU Commission/Council can do so, behind closed doors if it suits them.
MEPs are on a lucrative gravy train and they generally don't want to rock the boat. If the Commission doesn't get a "yes" from Parliament, it simply makes superficial amendments and retries Parliament until the "yes" is received.
With the Copyright Directive, after a "no" vote in Parliament in 2018, the Commission literally put the same contentious articles (11 and 13) back in again for the second vote - this time under different article numbers (15 and 17), so all the public activism and criticism linked to the original article numbers would be orphaned. MEPs voted "yes" the second time, like the good, obedient MEPs they are.
Anecdata but I also had good experiences reaching out to MEPs, so not all is lost.
At its core, the issue seems to be the lack of accountability between MEPs and the people who voted them in. Few people vote in the EU elections, and even fewer follow up on what happens there.
It's a chicken-and-egg problem, but if you don't want your MEP to be just another "good, obedient MEP", the electorate needs to ask more of them.
So we expect our public to care, and to engage with a Parliament in a foreign country, where elected representatives wield barely any power and cannot originate law?
Prior to Brexit, the UK had less than 10% of a stake in the European Parliament, so our 73 representatives had little effect on the overall system.
I didn't know a single person who could name their MEP.
Direct democracy at the national level is simply more engaging and relatable. It matters that the electorate and their representatives are accountable for the outcomes of their decisions.
Try calling, e-mailing, or anything else with Dutch politicians. No one will talk to you, answer your e-mails, or let you call them. They make themselves unreachable.
The issue with many of these tips is that they require you to spend way more time in Claude Code (or Codex CLI, doesn't matter), feed it more info, generate more outputs --> pay more money to the LLM provider.
I find LLM-based tools helpful and use them quite regularly, but not at $20+ per month, let alone the $100+ per month that Claude Code would require to be used effectively.
what happened to the "$5 is just a cup o' coffee" argument? Are we heading towards the everything-for-$100 land?
On a serious note, there is no clear evidence that any of the LLM-based code assistants will contribute to saving developer time. Depends on the phase of the project you are in and on a multitude of factors.
I'm a skeptical adopter of new tech. But I cut my teeth on LLMs a couple years ago when I was dropped into a project using an older framework I wasn't familiar with. Even back then, LLMs helped me a ton to get familiar with the project and use best practices when I wasn't sure what those were.
And that was just copy & paste into ChatGPT.
I don't know about assistants or project integration. But, in my experience, LLMs are a great tool to have and worth learning how to use well, for you. And I think that's the key part. Some people like heavily integrated IDEs, some people prefer a more minimal approach with VS Code or Vim.
I think LLMs are going to be similar. Some people are going to want full integration and some are just going to want minimal interface, context, and edits. It's going to be up to the dev to figure out what works best for him or her.
While I agree, I find the early phases to be the least productive use of my time, as it's often a lot of boilerplate and decisions that require thought but turn out to matter very little. Paying $100 to bootstrap a new idea to midlife seems absurdly cheap given my hourly.
So sad that people are happy to spend $100 per day on a tool like this, while we're so unlikely (in general) to pay $5 to the author of an article/blog post that possibly saved us the same amount of time.
(I'm not judging a specific person here, this is more of a broad commentary regarding our relationship/sense of responsibility/entitlement/lack of empathy when it comes to supporting other people's work when it helps us)
No, it doesn't. If you are still looking for product market fit, it is just cost.
Two years after GPT-4's release, we can safely say that LLMs don't make finding PMF that much easier, nor improve the general quality/UX of products, as we still see a general enshittification trend.
If this spending was really game-changing, ChatGPT frontend/apps wouldn't be so bad after so long.
Finding product-market fit is a human directional issue, and LLMs absolutely can help speed up iteration time here. I've built two RoR MVPs for small hobby projects, spending ~$75 in Claude Code to make something in a day that would have previously taken me a month plus. Again, absolutely bizarre that people can't see the value here, even as these tools are still working through their kinks.
I switched to Capture One due to how poorly Adobe handles Fujifilm raw files even today. Workflow-wise it is basically the same functions, just in different places. Doesn't take long to get up and running.
Source: I have a relationship with OpenSSF but am not directly involved. I'm involved in a "competing" standard.
As other commenters pointed out, this is "just" a signature. However, in the absence of standardised checks, it's a useful intermediate way of addressing the integrity issue around the ML supply chain today.
Eventually, you want to move to more complete solutions with more elaborate checks, e.g. provenance of the data that went into the model, or attested training. C2PA is trying to cover this.
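To make the "just a signature" part concrete: the mechanism is an ordinary detached signature over the model artifact's bytes, which consumers verify against the publisher's public key before loading. A generic sketch using the JDK's crypto APIs (not OpenSSF's actual tooling or signature format):

```scala
import java.security.{KeyPairGenerator, Signature}

// Stand-in key pair; in practice the private key stays with the publisher
// and only the public key is distributed to consumers.
val kpg = KeyPairGenerator.getInstance("RSA")
kpg.initialize(2048)
val keyPair = kpg.generateKeyPair()

// Publisher side: sign the model bytes.
def sign(model: Array[Byte]): Array[Byte] = {
  val s = Signature.getInstance("SHA256withRSA")
  s.initSign(keyPair.getPrivate)
  s.update(model)
  s.sign()
}

// Consumer side: verify the artifact against the detached signature.
def verify(model: Array[Byte], sig: Array[Byte]): Boolean = {
  val s = Signature.getInstance("SHA256withRSA")
  s.initVerify(keyPair.getPublic)
  s.update(model)
  s.verify(sig)
}

val weights = "model-weights-v1".getBytes("UTF-8") // stand-in for a model file
val sig = sign(weights)
verify(weights, sig)                      // true: artifact is intact
verify("tampered".getBytes("UTF-8"), sig) // false: integrity check fails
```

Note this only establishes integrity and origin; it says nothing about the training data or model behavior, which is where the provenance/attestation schemes above come in.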
Inference-time attestation (which some other commenters are pointing out) -- how can I verify that the response Y actually came from model F on my data X, i.e. Y = F(X)? -- is a related but distinct problem.