
Maybe you have aphantasia as well. I had it without knowing.

Some observations:

Someone told me to close my eyes and think about "an apple at a table".

Then I was told to open my eyes and tell what color the apple was.

The question didn't make sense to me:

I only thought about the concept of "an apple on a table". When my eyes are closed it is black. Absolutely black. Blacker than a Norwegian winter night with cloud cover and no moon. There is nothing.

Until then I thought all this talk about seeing things was just a metaphor for what I had also done.

But when I talk to others they will often immediately say it was green or red. Because they saw it.

Two extra observations:

Sometimes, just before I fall asleep, I can think up images of stuff that doesn't exist: think 3D modeling with simple shapes.

And just after waking up I can sometimes manage to see relatively detailed images of actual physical things.

Both these only last for a few seconds to a few minutes.

Does this help?


I also have this mostly when I'm half asleep and have had some very 4K sharp lucid dreams as well, including seeing leaves on a tree up close and feeling the texture.

Under normal circumstances, my imagination is also colorless and is more about spatial layout and shapes. Like an untextured 3D model.


It's hard to describe. I think there's more nuance here. When you ask "What colour was the apple?" I can "fill in" the colour and imagine a "red" one. But it's more like the details are filled in "on demand" or "lazily" rather than "ahead of time". And like I said, it's not the same thing as actual visual hallucination.

It is helpful to have someone engage, for sure. I have a question for you: if you look at a 3d object that you can only see one side of, can you make inferences about the other side of the object? Can you rotate it in your head? Could you quickly be able to tell whether an object will fit in a particular hole, without actually trying it?


> if you look at a 3d object that you can only see one side of, can you make inferences about the other side of the object? Can you rotate it in your head? Could you quickly be able to tell whether an object will fit in a particular hole, without actually trying it?

Obviously I cannot know for sure what the other side looks like without seeing it, but I can make a reasonable guess and yes, I can mentally turn around objects in my head to see if they fit.

I also enjoy woodworking and repairs and other activities that force me to think 3D, but I believe it would be much easier if I could think in images.


Are you trying to help them believe?

Yes. Or maybe rather: understand. For me it was a lightbulb moment, just like realising exactly how bad my colourblindness was: what is next to impossible for me to see (red markings over woodland on maps) was chosen by someone who thought it stood out.

I'm at least pointing out that I now know personally that there are multiple levels of visualisation, from me just "feeling" what it would mean to rotate a 3d object (it works, I can absolutely determine if it will fit but it is absolutely not visual) up to some close friends of mine that see vivid pictures of faces and can combine them with eyes closed.

For me, who cannot see images except what I physically see, it certainly is interesting to hear people describe remembering people's phone numbers as text that they can see (I remember the feeling of myself saying it, not the sound), or memorising my name by mentally putting an image of me next to their image of their brother, who has the same name as me (!)

It really is funny, because I can draw. For example the famous "draw a bike" thing seems weird to me because I can't see myself making any of the mistakes from any of the drawings. Not because I can see a bike, but because I know it.


I really wish I could occupy your brain for a few minutes to see just how much of this is language. There's an amazing effect in this conversation where I remain convinced that basically everything I've heard could come down to definitional differences, and yet it really could come down to a radically different subjective experience between us, and I have no real way of knowing.

With me I know:

I know if I close my eyes now there is nothing visible.

I also know if I have a good night's sleep and wake up late on Saturday I might be able to see images of things I am working on in the garden or elsewhere.

So I know seeing nothing is my default, and I know that seeing something vividly can be possible.


Being able to draw better than people who can "visualize" better throws doubt on what type of thing "visualizing" really is.


I don't think that makes sense. Most people struggle to draw even with something to copy right in front of them. Seeing something is insufficient to draw well. It's also not necessary in order to draw well.

I'm not saying I draw nice drawings. I am referring to this art/experiment:

https://www.booooooom.com/2016/05/09/bicycles-built-based-on...

I don't draw impossible bikes. Because I know what bikes are. That is what I mean. Not that I can make nice or even photographically correct images of them.


I can draw better than most people, but have nearly zero internal visualization. I learned to draw by direct observation, committing the patterns to memory, and repetition.

As a result, I have excellent (if I do say so myself) drawings from life, some shockingly good portraits in oil, and also I can reproduce a few cartoon characters (which I’ve practiced extensively) almost perfectly. BUT, ask me to draw my mom from memory, and I can’t do it, like at all. I have, really, no idea what my own mom looks like.


Again and again people keep saying this while many of us keep using LLMs to create value.

Countless people in comments say this, but other people fail to see evidence of that in the wild. As has been said in response to this point many times in the past: Where's the open source renaissance that should be happening right now? Where are the actual, in-use dependencies and libraries that are being developed by AI?

The only times I've personally seen LLMs engaged in repos has been handling issues, and they made an astounding mess of things that hurt far more often than it helped for anything more than automatically tagging issues. And I don't see any LLMs allowed off the leash to be making commits. Not in anything with any actual downstream users.


Let's look at every PR on GitHub in public repos (many of which are likely to be under open source licenses) that may have been created with LLM tools, using GitHub Search for various clues:

GitHub Copilot: 247,000 https://github.com/search?q=is%3Apr+author%3Acopilot-swe-age... - is:pr author:copilot-swe-agent[bot]

Claude: 147,000 https://github.com/search?q=is%3Apr+in%3Abody+%28%22Generate... - is:pr in:body ("Generated with Claude Code" OR "Co-Authored-By: Claude" OR "Co-authored-by: Claude")

OpenAI Codex: ~2,000,000 (an over-estimate; there's no obvious author reference here, so this is just title or body containing "codex"): https://github.com/search?q=is%3Apr+%28in%3Abody+OR+in%3Atit... - is:pr (in:body OR in:title) codex

Suggestions for improvements to this methodology are welcome!
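For reproducibility, the three searches can be built programmatically. A minimal sketch (the query strings are quoted from above; the counts themselves come from the GitHub search UI, not from this script):

```python
from urllib.parse import quote

# Search qualifiers as listed above; counts were read off the GitHub UI.
QUERIES = {
    "GitHub Copilot": "is:pr author:copilot-swe-agent[bot]",
    "Claude": 'is:pr in:body ("Generated with Claude Code" OR '
              '"Co-Authored-By: Claude" OR "Co-authored-by: Claude")',
    "OpenAI Codex": "is:pr (in:body OR in:title) codex",
}

def search_url(query: str) -> str:
    """Build the github.com/search URL for a raw query string."""
    return "https://github.com/search?q=" + quote(query, safe="")

for tool, query in QUERIES.items():
    print(f"{tool}: {search_url(query)}")
```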


What's the acceptance rate on such PRs?

Add is:merged to see.

For Copilot I got 151,000 out of 247,000 = 61%

For Claude 124,000 / 147,000 = 84%

For Codex 1.7m / 2m = 85%
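The percentages above are simply merged over total, using the rounded counts from the searches:

```python
# Rounded PR counts as reported above: (merged, total) per tool.
counts = {
    "Copilot": (151_000, 247_000),
    "Claude": (124_000, 147_000),
    "Codex": (1_700_000, 2_000_000),
}

for tool, (merged, total) in counts.items():
    print(f"{tool}: {merged / total:.0%} merged")
```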


... I just found out there's an existing repo and site that's been running these kinds of searches for a while: https://prarena.ai/ and https://github.com/aavetis/PRarena

The main problem with your search methodology is that maybe AI is good at generating a high volume of slop commits.

Slop commits are not unique to AI. Every project I've worked on had that one person with a high commit count whose commits, when you peek at them, are just noise.

I’m not saying you’re wrong btw. Just saying this is a possible hole in the methodology


That's a denominator of total. How many are actually useful?

HN people: lines of code and numbers of PRs are irrelevant to determine the capabilities of a developer.

Also HN people: look at the magic slop machine, it made all these lines of code and PRs, it is irrefutable proof that it's good and AGI


Both of these things can be true at the same time:

1. Counting lines of code is a bad way to measure developer productivity.

2. The number of merged PRs on GitHub overall that were created with LLM assistance is an interesting metric for evaluating how widely these tools are being used.


> Countless people in comments say this, but other people fail to see evidence of that in the wild. As has been said in response to this point many times in the past: Where's the open source renaissance that should be happening right now? Where are the actual, in-use dependencies and libraries that are being developed by AI?

The thing that this comment misses, imo, is that LLMs are not always enabling people who previously couldn't create value to create value. In fact I think they are likely to cause some people who created value previously to create even less value!

However that's not mutually exclusive with enabling others to create more value than they did previously. Is it a net gain for society? Currently I'd bet not, by a large margin. However is it a net gain for some individual users of LLMs? I suspect yes.

LLMs are a powerful tool for the right job, and as time goes on the "right job" keeps expanding to more territory. The problem is it's a tool that takes a keen eye to analyze and train on. It's not easy to use for reliable output. It's currently a multiplier for those willing to use it on the right jobs and with the right training (reviews, suspicion, etc).


> The thing that this comment misses, imo, is that LLMs are not always enabling people who previously couldn't create value to create value. In fact i think they are likely to cause some people who created value previously to create even less value!

Agree.

For some time I’ve compared AI to a nail gun:

It can make an experienced builder much quicker at certain jobs.

But for someone new to the trade, I’m not convinced it makes them faster at all. It might remove some of the drudgery, yes — but it also adds a very real chance of shooting oneself in the foot (or hand).


These are the same arguments people used (use?) against IDEs, and I think also against compilers and such back in the punch card days.

I am not a researcher, but I am a tech lead and I've seen it work again and again: IDEs work. And LLMs work.

They are force multipliers, though: they absolutely work best with people who already know a bit of software engineering.


What would it mean to see it in the wild?

I think that highly productive people who have incorporated LLMs into their workflows are enjoying a productivity multiplier.

I don’t think it’s 2x but it’s greater than 1x, if I had to guess. It’s just one of those things that’s impossible to measure beyond reasonable doubt


Well, I haven't used LLMs much for code (I tried it; it was neat, but ultimately I found it more interesting to do things myself) and I refuse to rely on any cloud-based solutions, be it AI or not, so I've only been using local LLMs. But even so I've found a few neat uses for them.

One of my favorite uses: I have configured my window manager (Window Maker) so that when I press Win+/ it launches xterm with a script that runs a custom C++ utility based on llama.cpp. The utility combines a prompt asking a quantized version of Mistral Small 3.2 to suggest fixes for grammar and spelling mistakes with whatever text I have selected (grabbed via xclip), then filters the program's output through another utility that colorizes it using some simple regexes. Whenever I write text where I care about (more) correct grammar and spelling (e.g. documentation - I do not use it for informal text like this one or in chat) I use it to find mistakes, as English is not my first language (and it tends to find a lot of them). Since the output is shown in a separate window (xterm) instead of replacing the text, I can check whether the correction is fine (and the act of actually typing the correction helps me remember some stuff... in theory at least :-P). [0] shows an example of how it looks.
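The final colorizing stage could be sketched like this in Python (the marker convention and the script shape are my assumptions for illustration, not the author's actual C++/regex setup):

```python
import re
import sys

RED = "\033[31m"    # ANSI escape: red foreground
RESET = "\033[0m"   # ANSI escape: reset attributes

def colorize(text: str) -> str:
    """Highlight suggested corrections wrapped in <<...>> markers in red."""
    return re.sub(r"<<(.*?)>>", lambda m: RED + m.group(1) + RESET, text)

if __name__ == "__main__":
    # Use as a stdin-to-stdout filter at the end of the pipeline.
    sys.stdout.write(colorize(sys.stdin.read()))
```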

I also wrote a simple Tcl/Tk script that calls some of the above with more generalized queries, one of which translates text to English; I'm mainly using it to translate comments on Steam games[1] :-P. It is also helpful whenever I want to try something quickly. For example, I recently thought that common email obfuscation techniques in text (like some AT example DOT com) are pointless nowadays with LLMs, so I tried addresses from a site I found online[2] (pretty much everything that didn't rely on JavaScript was defeated by Mistral Small).

As for programming, I used Devstral Small 1.0 once to make a simple raytracer, though I wrote about half of the code by hand since it was making a bunch of mistakes[3]. Also, recently I needed to scrape some data from a page - normally I'd do it by hand, but I was feeling bored at the time, so I asked Devstral to write a Python script using Beautiful Soup to do it for me, and it worked just fine.

None of the above are things I'd value at billions, though. But at the same time, I wouldn't have any other solution for the grammar and translation stuff (free and under my control, at least).

[0] https://i.imgur.com/f4OrNI5.png

[1] https://i.imgur.com/jPYYKCd.png

[2] https://i.imgur.com/ytYkyQW.png

[3] https://i.imgur.com/FevOm0o.png


The trouble is that peoples' self-evaluation of things that they believe are helping them is generally poor, and there's, at best, weak and conflicting evidence which is _not_ based on polling users.

In particular, "producing stuff" is not necessarily "creating value"; some stuff has _negative_ value.


If the value produced was so great, I think we'd be able to measure it by now, or at least see something. If you remove the hype around AI, the economy is actually on the way down, and productivity wasn't measured to have increased since LLMs became mainstream either.

Lots of vibes and feelings, but zero measurable impact.


Are you Nvidia? If not, then I don’t believe you.

Agree.

But for anyone tempted by Oracle, do remember that the upfront, agreed licence costs are only a fraction of the true price:

You’ll need someone who actually knows Oracle - either already in place or willing to invest a serious amount of time learning it. Without that, you’re almost certainly better off choosing a simpler database that people are comfortable with.

There’s always a non-zero risk of an Oracle rep trying to “upsell” you. And by upsell, I mean threatening legal action unless you cough up for additional bits a new salesperson has suddenly decided you’re using. (A company I worked with sold Oracle licences and had a long, happy relationship with them - until one day they went after us over some development databases. Someone higher up at Oracle smoothed it over, but the whole experience was unnerving enough.)

Incidental and accidental complexity: I've worked with Windows from 3.1 through to Server 2008, with Linux from early Red Hat in 2001 through to the latest production distros, plus a fair share of weird and wonderful applications - everything from 1980s/90s radar acquisition running on 2010-era operating systems, through a wide range of in-house, commercial and open source software, up to modern microservices - and none of it comes close to Oracle's level of pain.

Edit: Installing Delphi 6 with 14 packages came close: it took me 3 days when I had to find every package scattered across disks on shelves and in drawers, on ancient web pages, and posted as abandonware on SourceForge. But I guess I could learn to do that in a day if I had to do it twice a month. Oracle consistently took me 3 days, and that's if I did everything correctly on the first try and didn't have to start from scratch.


Wait, are you saying that oracle lets you use features you don’t have a license for, but then threatens to sue you for using them? I guess I shouldn’t be surprised given oracle’s business model, but I am surprised they let you use the feature in the first place.

That was not what I was thinking of when I wrote it, but that absolutely also was a thing:

I especially remember one particular feature that was really useful and really easy to enable in Enterprise Manager, but that would cost you at least $10,000 at the next license review (probably more if you had it licensed for more cores, etc.).

What I wrote about above wasn't us changing something or using a new feature but some sales guy at their side re-interpreting what our existing agreement meant. (I was not in the discussions, I just happened to work with the guys who dealt with it and it is a long time ago so I cannot be more specific.)


Got it, that is much more in line with my expectations of oracle. I am constantly amazed they have any business left given these kinds of stunts.

And it would make it really painful.

But we couldn't get rid of it because it papered over something important.


FWIW, as someone who has chosen to pay for Kagi for three years now:

- I agree fake news is a real problem

- I pay for Kagi because I get much more precise results[1]

- They have a public feedback forum and I think every time I have pointed out a problem they have come back with an answer and most of the time also a fix

- When Kagi introduced AI summaries in search they made it opt in, and unlike every other AI summary provider I had seen at that point they have always pointed to the sources. The AI might still hallucinate[2] but if it does I am confident that if I pointed it out to them my bug report would be looked into and I would get a good answer and probably even a fix.

[1]: I hear others say they get more precise Google results, and if so, more power to them. I have used Google enthusiastically since 2005, as the only real option from 2012, as a fallback for DDG from somewhere between 2012 and 2022, and basically only when I am on other people's devices or to prove a point since I started using Kagi in 2022

[2]: haven't seen much of that, but that might be because of the kind of questions I ask and the fact that I mostly use ordinary search.


Too late to edit, but I probably started using Google somewhere between February 2001 and July 2003, not in 2005.

Interesting idea, and I think Nikon did this with high-end(?) cameras a number of years ago.

Nikon tried this a number of years ago, and I am not sure they ever managed to stop hardware hackers from extracting their private keys. Here are some examples: https://petapixel.com/2011/04/28/nikon-image-authentication-...

On a second reading it seems that Canon keys were leaked even earlier.


IIRC and AFAIK, the plans for Israel were made by the precursors to the UN well before the Holocaust.

The Holocaust was not the reason for the plan for a Jewish national home in historic Israel; Arab persecution of Jews in the region was.


> JS is an OK and at least usable language, as long as you avoid TS and all that comes with it.

Care to explain why?

My view is this: since you can write plain JS inside TS (just misconfigure tsconfig badly enough), I honestly don’t see how you arrive at that conclusion.

I can just about understand preferring JS on the grounds that it runs without a compile step. But I’ve never seen a convincing explanation of why the language itself is supposedly better.
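For what it's worth, the "plain JS inside TS" point is easy to see: make the compiler maximally permissive and any JS file passes through essentially unchecked. A sketch of such a tsconfig (illustrative only; `allowJs`, `checkJs`, `strict`, and `noImplicitAny` are real compiler options):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": false,
    "strict": false,
    "noImplicitAny": false
  }
}
```

With a config like this, `.js` files are accepted and almost nothing is checked, so "TS" and "JS" become largely a matter of the compile step rather than the language.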


My sense is this: one side is utterly unhinged, the other seems desperate to outdo them.

I’ve left out which side is which, because I think it works both ways.


I see it the exact other way around:

- everyday bugs, just put a breakpoint

- rare cases: add logging

By definition a rare case probably will rarely show up in my dev environment if it shows up at all, so the only way to find them is to add logging and look at the logs next time someone reports that same bug after the logging was added.

Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?


So much this. Also in our embedded environment debugging is hit and miss. Not always possible for software, memory or even hardware reasons.


Then you need better hardware-based debugging tools like an ICE.

