
This thread seems to have a lot of people that love the iPhone mini (me included - I still use my 12 mini).

But from all reports that you can find with a quick search it seems clear that it did not sell well by Apple standards.

I would love them to bring it back and I’m not sure what it is about the Hacker News crowd that makes this phone over-represented. Maybe the tech crowd also uses laptops more, so we think of phones as our “small device” and use other devices more as appropriate?


> by Apple standards

Yeah. The question I'm trying to answer is not "does it make sense for Apple to make a small phone?", but rather "does it make sense for anyone to make a small phone?" I'm using the 13 Mini's sales data as evidence, because it is the one and only small phone made in the past decade or so.


I understand why you'd reach for that data; there aren't a ton of other options. But I'm not convinced that an arbitrarily chosen brand could achieve those sales figures, especially if it were a new or no-name brand without a proven track record of software updates and hardware build quality.

Maybe I'm just incredibly naive but I have this small hope that we'll see a return to smaller phones that are trifolds for when you need the real estate.


I tend to like smaller phones as well, but even comparing the Pixel 9 Pro vs Pixel 9 Pro XL used markets, it seems really hard to find non-XL versions. I would totally believe that the XL is a far more popular model, unfortunately for the rest of us.


It is interesting seeing the difference in model perception between “normal” people and the Hacker News crowd.

My perception is that a huge percentage of the mass market just like OpenAI because they were the first to market and still have the most name recognition. Even my coworker who works in DevOps says “Gemini sucks, Claude sucks” even though he has never once tried either of them and has never looked at a single benchmark comparison.


This is a great comment.

I’ve noticed a new genre of AI-hype posts that don’t attempt to build anything novel, just talk about how nice and easy building novel things has become with AI.

The obvious contradiction being that if it was really so easy their posts would actually be about the cool things they built instead of just saying what they “can” do.

I wouldn’t classify this article as one of them, since the author does actually create something, but LinkedIn is absolutely full of that genre of post right now.


> their posts would actually be about the cool things they built

Presumably, they are all startups in stealth mode. But in a few months, prepare to be blown away.


> My theory is that Apple specifically wanted an effect that can’t be replicated in webviews

This makes a lot of sense to me. I was also under the impression that all these lighting effects would be rather computationally expensive. This could encourage people to upgrade devices and make it hard to replicate this design on other brands’ less powerful hardware.


That's where they're mistaken. This stuff can definitely be done. It's gonna be clunky and messy but it can be done.


I typically get AppleCare on my phone and then get a new screen and battery right before the window is up. AppleCare is cheaper than the cost of those repairs, plus I have the added peace of mind that if something bad does happen, I’m covered. I don’t renew it as part of the monthly plan though.

I also don’t use a case or screen protector on my phone fwiw


That last point (compliance gaps in fintech) sounds fascinating. Is there a place that I could read more about this?


Compliance gaps / legal analysis is a pretty common theme in my community (meaning: it was mentioned 3-4 times by different teams). Here is what the approach usually looks like:

0. (the most painful step) Carefully parse all relevant documents into a structured representation that can be walked like a graph.

1. Extract relevant regulatory requirements using ontology-based classification and hybrid searches.

2. Break regulatory requirements into actionable analytical steps (turning a requirement into a checklist/mini-pipeline).

3. Dynamically fetch and filter relevant company documents for each analytical step.

4. Analyze documents to generate intermediate compliance conclusions.

5. Iteratively validate and adjust analysis approach as needed.

6. Summarize findings clearly, embedding key references and preserving detailed reasoning separately.

7. Perform gap analysis, prioritizing compliance issues by urgency.
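The pipeline above can be sketched very roughly in Python. This is a toy stand-in, not anyone's actual implementation: naive keyword matching replaces the ontology-based classification and hybrid search of steps 1 and 3, and every name here (Requirement, run_checks, gap_analysis, the sample documents) is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    checks: list = field(default_factory=list)  # step 2: mini-checklist

# Toy stand-in for step 0: pre-parsed company documents.
company_docs = {
    "aml_policy": "Customer identity is verified at onboarding.",
    "retention_policy": "Backups are deleted after one year.",
}

def extract_requirements(regulation_text):
    # Step 1 stand-in: keyword trigger instead of ontology classification.
    reqs = []
    for i, line in enumerate(regulation_text.splitlines()):
        if "must" in line:
            reqs.append(Requirement(rid=f"R{i}", text=line.strip()))
    return reqs

def plan_checks(req):
    # Step 2: turn a requirement into an actionable checklist.
    req.checks = [f"find evidence for: {req.text}"]

def run_checks(req, docs):
    # Steps 3-4 stand-in: naive retrieval, then a coverage conclusion.
    findings = []
    for name, body in docs.items():
        hit = any(w in body.lower()
                  for w in req.text.lower().split() if len(w) > 4)
        findings.append((name, hit))
    return {"requirement": req.rid,
            "covered": any(h for _, h in findings),
            "evidence": findings}

def gap_analysis(results):
    # Step 7: surface requirements with no supporting evidence.
    return [r["requirement"] for r in results if not r["covered"]]

regulation = ("Firms must verify customer identity.\n"
              "Firms must retain transaction records for five years.")
reqs = extract_requirements(regulation)
results = []
for r in reqs:
    plan_checks(r)
    results.append(run_checks(r, company_docs))
gaps = gap_analysis(results)  # the retention requirement has no evidence
```

In a real system each stand-in function would be backed by retrieval and LLM calls, and steps 5-6 (iterative validation and summarization) would wrap this loop.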


I’m looking forward to checking this out when I have time tonight or tomorrow.

I was actually just looking at different open source tools I could run locally that would allow agents to write and execute their own Python code. Seems like this might be able to do so in 2 steps, with a file write, and then a command?
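Not having tried the tool yet, the two-step flow I'm imagining (file write, then command) would look something like this toy stand-in; none of the paths or names here come from the project itself:

```python
import pathlib
import subprocess
import sys
import tempfile

# Hypothetical agent-generated snippet (stand-in for real tool output).
code = "print(sum(range(10)))"

# Step 1: the file-write tool call, simulated as a write into a temp dir.
script = pathlib.Path(tempfile.mkdtemp()) / "agent_task.py"
script.write_text(code)

# Step 2: the command tool call, running the script and capturing stdout
# so the agent can read the result back.
result = subprocess.run(
    [sys.executable, str(script)],
    capture_output=True, text=True, timeout=30,
)
output = result.stdout.strip()
```

A real setup would of course need sandboxing around step 2 before letting an agent execute its own code.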


Interesting to see what feels like a big disconnect between the article and the comments.

In my interpretation the author of the article is doing this almost more out of respect for those around him than himself. As a photographer he was always preoccupied with looking for a good shot rather than enjoying the company he was with.

Even when he talks about the pictures of his child’s birth he looks at them through the lens of a professional photographer - it’s not about the memories attached to the photos, it’s about the composition being ‘generic’ vs the photo saying something interesting.

I feel like this article is really more about work/life balance than taking out your phone to grab a snapshot. That’s just how I read it. Also what a sad ending.


I have mixed feelings. Generally I don’t think that LLM output should be used to create anything that a human is supposed to read, but I do carve out a big exception for people using LLMs for translation/writing in a second language.

At the same time, however, the people who need to use an LLM for this are going to be the worst at identifying the output’s weaknesses; e.g., just as I couldn’t write Spanish text, I also couldn’t evaluate the quality of a Spanish translation that an LLM produced. Taken to an extreme, then, students today could rely on LLMs, trust them without knowing any better, and grow to trust them for everything without knowing anything, never even able to evaluate their quality or performance.

The one area where I do disagree with the author, though, is coding. As much as I like algorithms, code is written to be read by computers, and I see nothing wrong with computers writing it. LLMs have saved me tons of time writing simple functions, so I can speed through a lot of the boring legwork in projects and focus on the interesting stuff.

I think Miyazaki said it best: “I feel… humans have lost confidence.” I believe that LLMs can be a great tool for automating a lot of boring and repetitive work that people do every day, but thinking that they can replace the unique perspectives of people is sad.


I actually feel very strongly that code is very much written for us humans. Sure, it's a set of instructions that is intended to be machine read and executed but so much of _how_ code is written is very much focused on the human element that's been a part of software development. OOP, design patterns, etc. don't exist because there is some great benefit to the machines running the code. We humans benefit as the ones maintaining and extending the functionality of the application.

I'm not making a judgement about the use of LLMs for writing code, just that I do think that code serves the purpose of expressing meaning to machines as well as humans.


>As much as I like algorithms code is written to be read by computers and I see nothing wrong with computers writing it.

unless you're the sole contributor, code is a collaborative effort and will be reviewed by peers to make sure you don't hit any landmines at best, or ruin the codebase at worst. unless you're writing codegen itself, I would very much write code as if a human is going to read it.

>“I feel… humans have lost confidence”

Confidence in their fellow man? Yes. As the author said, a lot of this reliance on AI without proper QA comes down to "nobody cares", or at least that mentality. And apathy is just as contagious in an environment as passion. If we lose that passion and are simply doing a task to get by and clock out, we're doomed as a species.


I agree with this. I do mostly DevOps stuff for work and it’s great at telling me about errors with different applications/build processes. Just today I used it to help me scrape data from some webpages and it worked very well.

But when I try to do more complicated math it falls short. I do have to say that Gemini 2.5 Pro is starting to get better in this area, though.

