Hacker News | tbabb's comments

Here is some context: Early in the aphantasia discourse, someone asked a group I was in to do a mental exercise: Imagine an apple. Can you tell what color it is? What variety? Can you tell the lighting? Is it against a background? Does it have a texture? Imagine cutting into it. And so on.

For me, not only was the color, variety, lighting, and texture crystal clear, but I noticed that when I mentally "cut into" the apple, I could see where the pigment from the broken skin cells had been smeared by the action of the knife into the fleshy white interior of the apple. This happened "by itself"; I didn't have to try to make it happen. It was at a level of crisp detail that would be difficult to see with the naked eye without holding it very close.

That was the first time I had paid attention to the exact level of detail that appears in my mental imagery, and it hadn't occurred to me before that it might be unusual. Based on what other people describe of their experience, it seems pretty clear to me that there is real variation in mental imagery, and people are not just "describing the same thing differently".


I can _remember_ the properties of an apple - approximate size, weight (my hand does not instantly drop to the floor due to its weight), etc.

I can't _imagine_ an apple in my hand if you define the colour, size or weight (for example, purple, 50 cm diameter and 100 kg).

In my mind I am recalling a _memory_ of holding an apple in my hand - not imagining one according to your specifications.

One example I can give is being tasked with rearranging desks in an office. I can't for the life of me _imagine_ what the desks would look like ahead of physically moving them into place.

I can make an educated guess based on their length/width but certainly not "picture" how they would look arranged without physically moving them.

It's like my brain BSODs when computing the image!

The same applies to people - I can only recall a memory of someone - not imagine them sitting on a bench in front of me. I might remember a memory of the person on _a_ bench but certainly not the one in front of me.


Can I ask you a personal question? How do you imagine sex? I thought that everyone kinda thought about themselves doing it with someone else, a bit like a porn movie that you make in your own mind.

I can't imagine it being at all interesting to just think about it the way you are talking about it, like it would just be a sort of description of what the other person looks like, without the multifaceted sensations. Touch, smell, visuals.

And if you can't imagine it, how do you go about ever doing anything about getting it? It's like saying you want a juicy burger without imagining yourself eating it. Like a paper description of an experience, rather than a simulation of it. It doesn't seem motivating enough that you'd bother washing yourself, getting nice clothes, and going to chat with women.


I have so many questions to ask people with aphantasia related to sex, but it would get uncomfortably personal, so maybe best not to.

The best I can do: do people with aphantasia only get aroused if the stimulus is present? Can they not get horny just imagining things, like I imagine most people can?

Does steamy literature do anything for them? I imagine it doesn't, since if you cannot imagine things then words on a page just have no power.

In my opinion, the fact erotic literature exists is proof aphantasia is not normal. Words cannot be arousing if you cannot imagine things "in your mind's eye".


Good erotic literature does not only describe images, but also desires, emotions and sensations, all of which I think have different channels of imagination/recall.


I didn't mean it describes images, I meant it elicits them. If you cannot imagine what's happening, you cannot get aroused. Words are just words, they must conjure an image.

Aphantasiacs often cannot imagine sensations either (at least, my friend doesn't. He cannot imagine the smell of coffee either).


> In my opinion, the fact erotic literature exists is proof aphantasia is not normal. Words cannot be arousing if you cannot imagine things "in your mind's eye".

The opposite seems to follow: erotic literature is proof you don't need images to be aroused.


Hmm, no? The words must elicit images and sensations, otherwise they wouldn't work as erotica. Words are just words. If you cannot picture what they are describing, you cannot get aroused.


> If you cannot picture what they are describing, you cannot get aroused.

This is your thesis. In the first place, the existence of erotic literature doesn't prove this is true, like you claimed. I would furthermore claim that it calls this assumption into question. If the goal was imagery, the more straightforward approach would be to draw an image. If that wasn't possible, you would instead describe the image you wanted to draw in words in great detail. But this isn't at all what most erotica consists of.


> In the first place, the existence of erotic literature doesn't prove this is true, like you claimed

Everything we are discussing in this comments section must be understood in an informal way. I obviously did not "prove" anything; I don't think anything can be proven about this anyway. Whenever I say "proof", read my statements as "[in my opinion] this is strong evidence that [thing]".

It's a figure of speech: "this cannot be so!", "it must be like this other thing", etc. It's informal conversation.

> If the goal was imagery, the more straightforward approach would be to draw an image.

Maybe straightforward, but as with anything related to the phenomenon of closure (as in Scott McCloud's closure), drawing an image closes doors. If you describe but don't draw an image, the reader is free to conjure their own image. Maybe they visualize a more attractive person than the artist would have drawn, or simply the kind of person they would be more attracted to.

Have you never seen a movie adaptation after reading the book and thought "wait, this wasn't how I imagined this character"?

> If that wasn't possible, you would instead describe the image you wanted to draw in words in great detail. But this isn't at all what most erotica consists of.

That's such a mechanistic description! Words don't work like this. Sometimes describing less is better, because the human brain fills in the gaps. You don't simply list physical attributes in an analytical way; you instead conjure sensory stimuli for the reader.

(If talking about sex and adjacent activities makes anybody nervous, simply replace this with literature about food. In order to make somebody's mouth water you cannot simply list ingredients; you must evoke imagery and taste. Then again, some people -- aphantasiacs -- simply cannot "taste" the food in textual descriptions!).


> Whenever I say "proof", read my statements as "[in my opinion] this is strong evidence that [thing]".

read my statement as "it isn't any evidence at all"


Well, that's easy: your statement is wrong.


For me, visualization by itself is mostly useless; it is more a concept of something arousing happening, plus vague visual flashes of something similar I have seen. It somewhat works, but nowhere near as effectively as real pictures.

What works for me is imagining sensations: they can enhance both real and vague pictures, and I feel them directly in the body, which makes them very effective.


> I can't _imagine_ an apple in my hand if you defined the colour, size or weight (for example, purple, 50cm diameter and 100Kg).

I think most people couldn't imagine holding an apple specced like a washing machine in one hand. :-)


That'd be a tiny washing machine, to be fair. That said, a 50cm diameter apple would weigh maybe half that, unless it's made entirely of water ice.


but are those details fabricated on demand?

I don't have any trouble following your path of increased detail, but if someone says "imagine an apple", I get a vaguely apple-shaped, generally reddish object (I like Cosmic Crisp), which only becomes detailed if I "navigate my mental eye" closer.


I think that is pretty normal while dreaming, daydreaming, or awake if you don't have aphantasia. Someone skilled in neuro-linguistic programming can guide someone into developing greater and greater detail.

Psychedelics and certain meditative practices can enhance this effect. There are also specific practices that allow an imagined object to take on a life of its own.

That's in the private imaginative mindspace. There are other mindspaces. There was one particular dream where I can tell, it was procedurally generated on-demand. When I deliberately took an unusual turn, the entire realm stuttered as whole new areas got procedurally generated. There were other spaces where it was not like that.


When you imagine slicing, is the video your head renders smooth, or a jittery string of pictures strung together?


For me the default is typically an instant view of whatever is described: first an apple, then when I read "sliced" it's suddenly in slices. But if I want to imagine motion I can easily do that also, like a knife cutting down through an apple and the two halves falling to either side, just like a video but with a generic background and other simplifications, like the knife suddenly disappearing when the cut is complete.


They don't mention Twitter, but that's where Willison got it from.


Recruiters use automated tools to scrape LinkedIn profiles and send what amounts to customized spam.


On the one hand, yes.

On the other hand, clock time is entirely a social construct whose whole purpose is to coordinate social and business activity, so it should be specifically designed around social customs in order to serve that purpose.


EDIT: Author has pointed out that the interpolation mode can be changed. Very slick!

It looks like this is interpolating in HCL or HSV space, which tends to produce unexpected results, including intermediate colors with unrelated hues (pink between orange and blue?), or sharp discontinuities if one of the endpoints changes slightly (try mixing orange and blue, and then shifting the blue towards teal until suddenly the intermediate pink pops to green).

This document[1] also illustrates this pretty well.

Interpolating in RGB space has its own issues (more so if gamma is not handled correctly) due to the human visual system's differing sensitivity to different colors: the result is often that two bright colors will have an intermediate color which is darker than either endpoint.

There's a known solution, thankfully: Mix colors in a perceptual color space like Lab or Oklab[2]. The behavior is very predictable and aesthetically pleasing.

[1] https://observablehq.com/@zanarmstrong/comparing-interpolati... [2] https://bottosson.github.io/posts/oklab/
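The gamma pitfall mentioned above is easy to demonstrate in a few lines of Python (a sketch: the sRGB transfer functions are the standard ones, but the `mix_*` helper names are mine, not from any library):

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Apply the sRGB transfer function (c in 0..1)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def mix_naive(a, b, t):
    """Lerp directly on gamma-encoded sRGB values -- the common mistake."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def mix_linear(a, b, t):
    """Lerp in linear-light RGB: decode, interpolate, re-encode."""
    la = [srgb_to_linear(x) for x in a]
    lb = [srgb_to_linear(x) for x in b]
    return tuple(linear_to_srgb((1 - t) * x + t * y) for x, y in zip(la, lb))

# Midpoint of pure red and pure green:
red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(mix_naive(red, green, 0.5))   # (0.5, 0.5, 0.0) -- noticeably dark
print(mix_linear(red, green, 0.5))  # ~(0.735, 0.735, 0.0) -- brighter
```

Linear-light mixing fixes the darkening, but not the perceptual-uniformity issues; for those you still want Lab/Oklab as described above.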


First of all, thanks for the comments and info. I'm actually not doing the color interpolation myself; I'm using chroma.js[1] for the most part.

You can actually change the interpolation mode on the bottom below the color boxes. The default is LCH because I think it looks the best most of the time, but you can use LAB if you prefer that.

[1] https://gka.github.io/chroma.js/


    CIE L*C*h is based on either CIE 1976 L*a*b* or CIE 1976 L*u*v*, it's just a different representation. <--for formatting or the asterisks vanish.
For displays you're incrementally better off using LChuv. But note that these are very rudimentary color appearance models compared to CIECAM02 and newer, where surrounding color and ambient light can be taken into account. For a given color, surrounding it with two different colors will produce a different color appearance for that given color, i.e. a spot measurement of the given color will of course be the same, but the color in context will be different. It's a feature of human vision. Simultaneous contrast, the Bezold effect, and so on.

But without display compensation (mapping the CIE color created for given RGB value, such that a color management system can alter the RGB sent to the display in order to preserve color appearance) you will still have the experience of getting a different color for a given encoding on different displays.

How far down the rabbit hole do you want to go? In any case, these are better than HSL and HSV representations.


tip: In HN comments you can write \* to display an asterisk.


Really cool site! After playing with it for a while: could the CSS gradient for MIX be changed to use the same number of color stops as there are steps, rather than just the start and end, to better match the chosen interpolation mode? I had a great gradient in LAB space, but the CSS version interpolates through ugly greys in RGB.
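The multi-stop idea is straightforward to sketch (Python here for illustration; `css_gradient` is a hypothetical helper, and the stop colors below are made-up placeholders standing in for whatever perceptual interpolation the tool actually produces):

```python
def css_gradient(stops, direction="to right"):
    """Build a CSS linear-gradient() string from a list of (r, g, b)
    0-255 stops, spacing them evenly. With many precomputed stops, the
    browser's built-in RGB interpolation only bridges short spans, so
    the grey dip between distant colors mostly disappears."""
    n = len(stops)
    parts = []
    for i, (r, g, b) in enumerate(stops):
        pct = 100 * i / (n - 1)
        parts.append(f"rgb({r}, {g}, {b}) {pct:.1f}%")
    return f"linear-gradient({direction}, {', '.join(parts)})"

# Placeholder stops sampled from some perceptual ramp:
stops = [(255, 0, 0), (200, 120, 0), (120, 200, 0), (0, 255, 0)]
print(css_gradient(stops))
```

The output drops straight into a `background` declaration; the more stops you sample, the closer the rendered gradient tracks the chosen interpolation mode.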


That's a great idea! I'm gonna take a look at that.


Aha! That was a bonehead miss on my part. Very nice, and very slick interface. :)


Well it is very easy to miss. ;) Thank you!


Thanks so much for this info! I had no idea this was a solved problem.


Pixar used RCS at the time. Problem is, when you run `rm -rf /`, that deletes the RCS directories as well.


Ed Catmull.


You are right, I was wrong. Thank you.


The failures will continue indefinitely, until Tesla decides to use stereopsis for ranging.


Humans don’t really use stereopsis beyond the reach of their arms (which makes sense if you think about it). Beyond that we use semantic cues, which is why we can also understand pictures.

Sadly most research in this area went out of fashion 30+ years ago.


> Humans don’t really use stereopsis beyond the reach of their arms

This is outright false. A person with acute vision can perceive stereopsis out to 1/4 mile. Trivially, 3D movies are projected onto screens which are 10m away.

> which is why we can also understand pictures

We don't drive using pictures.
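The quarter-mile figure is consistent with a back-of-the-envelope disparity calculation (assuming a ~6.5 cm interpupillary distance and a conservative ~30 arcsec stereoacuity threshold; trained observers do better):

```python
import math

IPD_M = 0.065                          # typical interpupillary distance, metres
ARCSEC_PER_RAD = 180 * 3600 / math.pi  # ~206265

def disparity_arcsec(distance_m, ipd_m=IPD_M):
    """Binocular disparity (vergence-angle difference) for a point at
    distance_m, using the small-angle approximation ipd / distance."""
    return (ipd_m / distance_m) * ARCSEC_PER_RAD

def max_stereo_range_m(threshold_arcsec, ipd_m=IPD_M):
    """Distance at which disparity falls to the detection threshold."""
    return ipd_m * ARCSEC_PER_RAD / threshold_arcsec

print(max_stereo_range_m(30))   # ~447 m, roughly a quarter mile
print(disparity_arcsec(10))     # ~1340 arcsec at a 10 m cinema screen
```

So at arm's length disparity is enormous, at a cinema screen it is still well above threshold, and it only drops below threshold somewhere in the hundreds of metres.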


> Humans don’t really use stereopsis beyond the reach of their arms (which makes sense if you think about it).

But of course we do. It's how we throw rocks and hit what we aim at. It's how we catch things. It's how we walk around anywhere that has obstacles. We use it beyond the reach of our arms really frequently.


I can still throw a rock fairly accurately with one eye closed. I can catch pretty much perfectly with one eye closed. I can walk around without hitting anything with one eye closed.


The point was that you don't need stereopsis to do that.

Basically, could you do it with one eye closed? Without stereopsis, tasks involving close-up things are harder, but farther away things are not.


Depends. If I'm hunting with a bow or a throwing stick, I'd bet stereopsis is pretty important. Do I need it? Maybe not, but I'll be more accurate with it.

If I'm running through the woods, trying to escape from a bear, stereopsis is pretty useful, because dodging trees while at a dead run is life or death.

I don't do either of those, but some of my ancestors probably did. But I play sports. I may be able to catch a ball with one eye. Getting myself to where the ball is going? I couldn't do that as well with only one eye.

So: Do I need stereopsis? No. Did the human race? Yes, it was a competitive advantage to have.


> I play sports. I may be able to catch a ball with one eye

I would love to see a sports game where one team can use both eyes, and the other team only has one eye uncovered. I imagine it'd be downright funny.


But doubling front cameras would add like $200 to the car BOM!!1


> Requiring multiple sensors means that all of your sensor systems need to notice a danger

This is the opposite of true. In well-designed sensor fusion algorithms, every new piece of sensor data, however noisy, helps to overconstrain the estimate. Each sensor reading helps to inform the interpretation of the other readings. If your system is designed such that more information worsens your inference, you have designed a terrible system.
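A minimal illustration of the point (plain inverse-variance weighting of independent Gaussian measurements; the sensor names and noise figures are made up): every added sensor, however noisy, shrinks the fused variance.

```python
def fuse(measurements):
    """Fuse independent Gaussian measurements given as (value, variance)
    pairs by inverse-variance weighting. Returns (estimate, variance).
    The fused variance is always <= the best individual variance, so
    extra noisy sensors can only tighten the estimate, never worsen it."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    estimate = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return estimate, 1.0 / total

# Hypothetical range-to-obstacle readings, metres:
camera = (52.0, 25.0)   # noisy monocular depth estimate
radar  = (49.5, 1.0)    # much tighter measurement
est, var = fuse([camera, radar])
print(est, var)         # estimate near the radar value, variance < 1.0
```

Even the camera reading with 25x the radar's variance pulls the fused variance below what the radar achieves alone, which is the sense in which more information should never worsen the inference.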


And that's the biggest doublespeak lie about it.

The web is already open and decentralized by its very design, and that openness and decentralization is a major reason for its incredible success.

Web3 people want that stuff not to work. They want you to be unable to right-click save. You can already openly save, modify, and share images. Web3 people want to lock it down and charge you for memes.

It is the biggest, greediest grift in recent memory. If you are working on this, don't be fooled by all the empty words about "decentralization"; you are working on DRM for giant hedge funds who are trying to take over and own the entire web. Its current openness is their enemy.


Has this "Web3 is DRM" argument been written up in more detail somewhere? The idea already seems silly to me as just crypto hype, but I haven't considered that there might be more sinister intentions behind it.


Yup. Web3 is basically trying to sell people DRM for web culture.


> The web is already open and decentralized by its very design, and that openness and decentralization is a major reason for its incredible success.

> It is the biggest, greediest grift in recent memory.

Strong agree, all around. But I also lament the state of our efforts to better distribute and decentralize the technical potential we've already built. There are a couple of rare projects like https://yunohost.org doing the honest, competent work of making this power accessible, of sharing it. But there's a super toxic mentality that computing isn't worth doing, a belief that real computing is not for users, that users are only interested in highly synthesized Service-as-a-Software-Substitute (SaaSS) horseshit.

I want to say that there are precious few real examples of empowering & enabling users, of really decentralizing the power-base, of making things accessible. If I look a little further, I can see endless fields of techies trying hard to make their projects usable, understandable. But so many projects exist in their own particular tech niches.

Even if we are talking general software systems, there's still an endless fractal maze of Ansible or Chef or Terraform or Salt or Kubernetes, different worlds & paradigms to compute under, with various points of overlap & disjointedness. I cited Yunohost because it's one of the broader efforts to provide generalized mass configuration of systems, to make a lot of things accessible, in a friendly fashion from a central control panel. But it's still just a super cheap hack, a bunch of preconfigured scripts for a bunch of pieces of software, far from real systems mastery.

This expectation that users fear & don't want real computing, that they must have baby-food, is not a sustainable paradigm, in my personal view, and it cripples our ability to make real advances, real innovation: we need real operational paradigms to begin to have a power base upon which decentralization & distributedness can begin.

For too long we've considered p2p & distributed to be app-level concerns, things for the app. In my view, the cloud needs to come home; we need real bases of computing to start from, & a willingness to have systems that both have easy-to-get-started paths but also go deep & invite in real operational working & reworking. Computing is still closed. We need to re-embrace & make computing work. Web3 is not alone in not fighting the good fights.


Meh. There's some logic to that.

People tend to work harder to create interesting things if they get to have a chance to generate wealth from that work.

But the nature of web3 now really doesn't fulfill that vision.


The web is highly centralized. For example, there can be only one facebook.com domain, and you need an account to do anything in that walled garden. With an account they will stalk you to death, because you are the product, not the user.

If it were decentralized then why would I need to go their servers to access my content or the content of my friends. Why don’t I just connect directly to my friend, not through some third party content server, and pull the desired content directly off my friends hard drive? That’s decentralized. Social media is not.

The web gave this up long ago for advertising revenue.


>The web is highly centralized. For example there can be only one facebook.com domain and you need an account to do anything in that walled garden. With an account they will stalk you to death because are the product, not the user.

Facebook is not "the web". You can opt out of that walled garden and go elsewhere on the web if you want.

>If it were decentralized then why would I need to go their servers to access my content or the content of my friends. Why don’t I just connect directly to my friend, not through some third party content server, and pull the desired content directly off my friends hard drive? That’s decentralized. Social media is not.

Because you and your friends choose to host your content on their servers. If it's that important to share something with your friends, spin up your own website for your friend group. Don't wanna do that? Then get a NAS and show them how to log into it to see your photos. There are countless ways for you to share stuff with your friends that doesn't involve using Facebook, social media or walled gardens.

Hell, by virtue of the web being the web, you can "connect directly to [your] friend, not through some third party content server, and pull the desired content directly off of [your] friends hard drive". SSH, baby!

The web remains decentralized, but everyone chooses to remain in walled gardens.


> The web remains decentralized, but everyone chooses to remain in walled gardens.

So despite how you wish to define the technology, there exists maximum centralization.

SSH is not the web. The web is HTTP.


> Why don’t I just connect directly to my friend, not through some third party content server, and pull the desired content directly off my friends hard drive?

That, my friend, is called a "web page", and thanks to the open design of the web, literally nothing stops you from setting one up.


You are conflating a static file from a third-party server with some streaming interconnection.


> Why don’t I just connect directly to my friend, not through some third party content server, and pull the desired content directly off my friends hard drive?

You understand this is what Web 1.0 was right?


How is a third-party web server that accepts anonymous requests the same as a private feed directly to the hard drive of a friend's computer?


You are forgetting the dozen other computers you are routing your request through in order to get to that walled garden, any one of which, if it went down, would be replaced by a route through a different computer, so you could always get to that walled garden.

That's the decentralization of the web. And that decentralization still led to walled gardens, just like it will with Web3 because it's always in a corporations best interest to do so.


You are conflating the internet (the network) with the web (the application).

