lambdasquirrel's comments | Hacker News

And yet it all circles back.

We used Peano arithmetic when doing C++ template metaprogramming anytime a for loop from 0..n was needed. It was all fun and games as long as you didn't make a mistake, because the compiler errors would be gnarly. The Haskell people still do stuff like this, and I wouldn't be surprised if someone were doing it in Scala's type system as well.
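Roughly the kind of thing I mean (a toy sketch, not anything we actually shipped): the loop counter is a Peano numeral encoded as a type, and the "loop" is template recursion that the compiler unrolls.

    #include <cstdio>

    // Peano-style "loop": the counter is a type, not a value.
    struct Zero {};
    template <typename N> struct Succ {};

    // Unrolls the body once per Succ, bottoming out at Zero.
    template <typename N> struct Repeat;

    template <> struct Repeat<Zero> {
        static void run() {}             // base case: zero iterations
    };

    template <typename N> struct Repeat<Succ<N>> {
        static void run() {
            Repeat<N>::run();            // recurse toward Zero
            std::puts("one iteration");  // loop body
        }
    };

    int main() {
        // A "for loop from 0..3", spelled as a type.
        using Three = Succ<Succ<Succ<Zero>>>;
        Repeat<Three>::run();
    }

Get one Succ wrong and the compiler dumps the whole nested type at you, which is the gnarly part.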

Also, the PLT people are using lattices and categories to formalize their work.


Yeah you can't really have a foliage map without a drought map to accompany it. The fall colors are a fickle thing. Last year's was pretty drab in lower NY. The year before it was quite good.


We can hope that they both fail. :o)


> This reads like it was written by a developer 'who doesn't get marketing'.

At first, I didn’t know what to say about the article other than to agree with something about it that I couldn’t put my finger on. But now it makes sense.

Developers really can’t be faulted for hating LinkedIn, specifically because it’s marketing. It’s pure noise, no signal. It’s pure promotion.


Yeah, and it’s an interesting problem because for a lot of casual photos, most people won’t care. But once you do care, there’s suddenly no recourse.

Folks will say it’s just the focal length. But can you crop when your sensor is already that small?


This does not address the detrimental parts of computational photography.


Which I’m personally failing to see consistently in the “evidence” presented in this article.

Most of the photo examples here were somewhere between “I can’t tell a significant difference” and “flip a coin and you might find people who prefer the iPhone result more.”

Even less of a difference when they’re printed out and put in a 5x7” frame.

Keep in mind the cost of a smartphone camera is $0. You already own one. You were going to buy a smartphone anyway for other things. So if we are going to sit and argue about quality, we still have to figure out what dollar value these differences are worth to people.

And the “evidence” is supposedly that people aren’t getting their phone photos printed out. But let’s not forget the fact that you literally couldn’t see your film photos without printing them when we were using film cameras.


> Keep in mind the cost of a smartphone camera is $0.

Many people buy a more expensive smartphone specifically for the better camera module. These are expensive devices! It's good marketing that you perceive that as "free", but in reality I spend way less money on my fancy camera (a new model every five years) than my iPhone-loving friends do on their annual upgrades.


I can see a noticeable loss of detail in the iPhone sample photos. Personally, I prefer cameras that prioritize capturing more detail over simply producing visually pleasing images. Detailed photos offer much more flexibility for post-processing.


The problem with computational photography is that it uses software to make photos "look good" for everyday users. That may be an advantage for those users but it is basically a non-starter for a photographer because it makes it a crapshoot to take photos which predictably and faithfully render the scene.


Lots of apps give you other options for how to process the image data.

I've had a bunch of "high-end" digital SLRs, and they (and the software processing the raw files) do plenty of computational processing as well.

I completely agree that, all else being equal, it's possible to get photos with better technical quality from a big sensor, big lens, and big raw file; but this article is more an example of "if you take sloppy photos with your phone camera, you get sloppy photos".


This made me ask: is there a (perhaps Swift) API to get the raw pixels coming in from the camera, if there is such a thing? I mean, before any processing, etc.


There is. If you use the Lightroom app, for example, you can get access to the raw pixels. But I'm not sure there's a way to get all the images the iPhone's camera app takes. The phone doesn't take one shot to create the final image; it takes hundreds of shots and combines them.


Your brain also uses software to make what you see look good


> but the Canon takes a modicum of skill, which my wife is not interested in

And so, that’s why Fuji and point-and-shoots are popular. Lots of “serious” photography enthusiasts don’t really get this and call Fujis “hype” cameras, but it’s like bashing WordPress because most people don’t want to learn AWS to post cat pics.

> The iPhone is always in my pocket

Rationale for both point-and-shoots as well as Leica (also hated by lots of serious camera people ;)).


This is the opposite of my experience.

I went from a D300s kit with about $10k of lenses to Fuji. I had an X100s, then an X-E2, and now an X-Pro3.

The X-Pro3 especially is light, has excellent physical controls, and very much feels like a vintage Leica. It's what I'd consider an "art camera" -- not what I'd choose if I were shooting weddings regularly, but perfect for street photography, family stuff, and perfectly capable of higher-end commercial work if you're willing to put up with its quirks.

The quirks are the point, though.


They were popular. Are they still? Just observationally, there are two groups left: phone users, and people with very expensive, complex setups. Everyone who would have bought those simple cameras moved on to using phones.


By the numbers, the casual cameras are having a quiet turnaround.

Fuji and Ricoh can hardly keep their X100 and GR cameras stocked. Fuji added extra production capacity in China because demand exceeded their expectations. I brought them up specifically because the serious camera people rag on them for being hype cameras, but I see plenty of everyday people with them. Go to places like the High Line in NY and there are folks with A6700s and various X-mount cameras in addition to the serious full-frame mounts. Leica is doing financially well because of their Q series.

I think five years ago you could say it was just two groups, but by the numbers and by what I see in the streets, the point and shoots have been prematurely declared dead. Fuji and Sony are meanwhile figuring out how to sell APS-C to a more casual crowd, after the other old players effectively left that market.


I'm a semi-retired pro, and acting like Fujifilm is "hype" is really ignorant. They're a smart company with a long history of making great pro-grade cameras and lenses.


You’d be surprised. Point-and-shoot cameras have become extremely popular with young people in the past ~2 years or so because of the nostalgia factor.


I see people using weird things like 2000s digital cameras and Nintendo DS cameras for that old look, but I've never seen someone with one of those entry- to mid-level point-and-shoot cameras you used to see before smartphones. I only see phones, ancient retro cameras, and hobbyists with high-end gear.


A thousandth of an inch would do it? They couldn't give more margin of safety to a critical part like that?

A thousandth of an inch isn't such a theoretical number. It's about 25 microns, and I've shimmed one of my back-focusing photography lenses by less than that (about 10 microns, to be specific). This is something they ought to be able to machine for, but depending on the context, it might not leave much room for error.


> A thousandth of an inch would do it? They couldn't give more margin-of-safety to a critical part like that?

If it's true, that's truly terrible design.


It's likely a misunderstanding and/or mischaracterization of "tolerance stacking."

A safe example is a bike chain. If each link is 1 inch ±0.01", and every single one comes out at +0.01", then ten links will be long by a tenth of an inch. That might pass QC on the bike when pedaled by hand, but it'll fall off when somebody's full body weight and 100 hours of wear are put into it.


That's not how errors add up; it's nonlinear. You have to take the square root of the sum of squares. So in your case, it wouldn't be 10 * 0.01 = 0.1, but sqrt(10 * 0.01^2) = 0.032, which is less than a third of a tenth.
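For the chain example above, here are both stack-ups side by side (a toy calculation; the RSS figure assumes the per-link deviations are independent rather than all biased the same way):

    #include <cmath>
    #include <cstdio>

    int main() {
        const int    n   = 10;    // number of chain links
        const double tol = 0.01;  // per-link tolerance, inches

        double worst_case = n * tol;                  // every link at its limit
        double rss        = std::sqrt(n * tol * tol); // statistical (RSS) stack-up

        std::printf("worst case: %.3f in\n", worst_case); // 0.100
        std::printf("RSS:        %.3f in\n", rss);        // 0.032
    }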


I provided a "worst case" example, not a statistical one.

For those who want an example, calculator, and demo see: https://www.smlease.com/entries/tolerance/tolerance-stackup-...

NB: using disks like the site does provides a clearer example.


Do we know what reducing solar irradiation will do to plant growth, though? We might actually make the root cause worse if it decreases carbon fixation.


Yep, this. Same for any kind of solar shielding. Some are so fixated on controlling a single metric (carbon/temperature) that they may end up inadvertently influencing hundreds of other things.

When simple solutions interact with complex systems, complex problems arise. As it is said, for every problem there is a solution that is simple, easy and wrong.


Eh... you should read the article. It sounds like a pretty big deal.


I did read the article. Apart from it sounding like a terrible place to work, I'm not sure I see what the big deal is.

No one knows how any of the models got made, their training data is kept secret, we don't know what it contains, and so on. I'm also pretty sure a few of the main labs poached each other's employees, who just reimplemented the same training models with some twists.

Most LLMs are also based on initial research papers where most of the discovery and innovation took place.

And in the very end, it's all trained on data that very few people agreed or intended would be used for this purpose, and for which they won't see a dime.

So why not wrap and rewrap models and resell them, and let it all compete for who offers the cheapest plan or per-token cost?

