Misalignment Museum (niche-museums.com)
141 points by another on April 18, 2023 | 59 comments



Wow, didn't expect my https://www.niche-museums.com/ website to show up on Hacker News!

It's actually running on a templated instance of Datasette - try the "Use my location" button on the homepage to see tiny museums near you that I've been to.


Is there a way to add entries to this collection, or do you only list museums you have personally been to?

I ask because I tried looking for listings of places I know well (e.g. Turin, Italy or Budapest, Hungary) and found that the closest entry was hundreds of miles away.

Turin has, for example, this: http://www.museodellafrutta.it/en/ which is pretty niche, imho. Budapest has a chocolate museum: https://www.csokolade-muzeum.hu/bemutatkozunk/ and so on...


I have the same question. Happy to prepare a PR if that is welcome.

The one I would be recommending (to the niche museums site, and also to everyone who is around) is the Yokohama Coast Guard museum.

It has a really unassuming name; I only went in because I got caught out in the rain without an umbrella in Minatomirai. Dare I say it is the best museum I have ever visited. It is organised around a single curious event: in 2001 a Japanese Coast Guard vessel encountered a suspiciously behaving fishing trawler. They wanted to board it, but the fishing vessel took off at high speed and started shooting at them. During the pursuit the crew were even seen wielding shoulder-mounted missile launchers.

Turns out it was a North Korean spy ship on a mission to raise funds by smuggling drugs to Japan. The incident ended with the North Korean crew scuttling their vessel. The coast guard later raised the sunken ship and built this museum around it.

Website of the museum: https://jcgmuseum.jp/en/

More info on the event: https://en.wikipedia.org/wiki/Battle_of_Amami-%C5%8Cshima


I accept tips of where I should go next, but the site is exclusively museums I've been to myself - it's a really rewarding hobby.


I see you've been to several aviation museums, but if you're ever anywhere near Ohio, may I suggest the Air Force museum? There are so many aircraft there that one can't see anywhere else, like the only remaining XB-70.

Some other personal favourites:

* Molson, Washington - it's a preserved ghost town, somewhat like Bodie, California, but one can go inside nearly all of the structures, and there is an actual museum set up in the three-story former town school.

* Not sure this counts, because The Met isn't a niche museum overall, but their Egyptian wing is the only place I've been to where one can walk through two more or less complete ancient Egyptian structures without going to Egypt.


Brussels/Belgium is full of them.

My favorites are the Clockarium, the Museum of the Art Deco Ceramic Clock (https://www.clockarium.org/ - check out the video, it is magnificent).

And The Sewer Museum: Experience an authentic sewer, stroll along the Senne and discover the little-known but ever so important profession of a sewage worker. Descend deep into the bowels of the city for this unique experience! https://sewermuseum.brussels/


I second the wish to add.

Who wouldn't want to know what happened to East London's Victorian sewage?

Or the mysterious cave of shells whose origin science has never explained [I suspect due to lack of interest, lol].


Simon, you are a beast, is there anything you don't do?

many thanks for all the content and inspiration, your blog is genuinely my favourite read right now!


Link?



I have one suggestion too! :-D The Devil Museum in Kaunas, Lithuania (https://ciurlionis.lt/activity/permanent-exhibitions/velniu-...). It has a few hundred wooden devil statues, most of them made using traditional methods.


How does one traditionally make devils?


I meant in Lithuanian folk art traditions - mostly woodcarvings with various paints, but sometimes using other materials too. I guess forms and colors can also be included in the definition of "traditional" :thinking_emoji:

Anyway, I am not an art expert or anything, I just find the Devil Museum interesting, haha. I managed to find a virtual tour of it too: https://3dtour.1001pikselis.lt/tour/velniu-muziejus


cool site! used it immediately to check if there are interesting places to add to my next trip. "normal" museums bore me.

May I suggest two additions? Kassel, Germany has two weird ones: A museum for death culture (Sepulkralkulturmuseum, https://www.sepulkralmuseum.de/), and a museum of wallpaper (https://www.tapeten.museum-kassel.de/) :)


Awww, unfortunately nothing in my country! I love going to weirdo museums. My favorites so far are the sardine can and the dredging museums.


Well those sound amazing! Where are they?


you might want to add the Crime Museum [https://wien.kriminalmuseum.at/en/news/] and the Museum of Contraception and Abortion [https://www.muvs.org/en/] of Vienna, Austria


Wow, for sure, if my genius was recognized like that I would be surprised too.


There’s also a prison museum in Deer Lodge, Montana, that my partner and I went to to kill a little time before the Vonnie Louise had our room available.


Oh. I thought it would be a museum of slightly misaligned things.

It would be a fun place to visit for OCD-type people.


Yeah, I was expecting something like the exact opposite of that "things fitting perfectly in other things" Tumblr (at least, I think it was a Tumblr).


I can't stop chuckling at the big "Sorry for killing most of humanity." I just picture a little robot-kid looking down and doing that little kick.


Has anyone been there? It's a temporary installation in San Francisco.


Yes. AMA


Is it worth visiting?


I wouldn't travel far, but if you're near SF anyway, then it's a nice stop. It's pretty small, just two rooms, so it's not a big thing. I found it interesting to see the curator Audrey Kim's perspective through the exhibit, and I also enjoyed meeting fellow museum-goers and chatting with them.


This actually brings to mind a conversation I've been having with my buddies recently. Has there ever been a case of harm from AI misalignment?

Not some kind of speculative art exhibit, but real harm?


Author of "The Alignment Problem" here, to say: Of course this question depends on your semantics of "harm," "AI," and "alignment," but by most definitions (and certainly by mine) the answer is overwhelmingly yes, many.

These harms can be diffuse at massive scale, and acute at small scale.

One example of each: (1) https://www.science.org/doi/abs/10.1126/science.aax2342 One of the USA’s largest health insurers builds an ML system for patient triage. It optimizes for a proxy metric of health need (namely, cost) rather than health need itself; consequently it deprioritizes and systematically excludes millions of people from access to health care.

(2) https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg Uber builds its autonomous car's braking system on top of a vision model that optimizes for object-classification accuracy over the categories {"pedestrian", "cyclist", "vehicle", "debris"}; consequently it fails to determine how to classify a woman walking a bicycle across the street, and as a result kills her.

In both cases, optimizing for a naively sensible proxy metric of the thing that was truly desired turned out to be catastrophic.
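As a toy sketch of that failure mode (a hedged illustration with entirely hypothetical data and field names, not the actual system studied in the Science paper above): ranking patients by a cost proxy quietly drops the sickest but historically cheapest people that ranking by true need would have selected.

    # Toy illustration of proxy-metric misalignment; all data is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        name: str
        health_need: float     # what we actually care about (higher = sicker)
        predicted_cost: float  # the proxy being optimized (past spending)

    patients = [
        Patient("A", health_need=0.9, predicted_cost=0.3),  # very sick, low past spending
        Patient("B", health_need=0.4, predicted_cost=0.8),
        Patient("C", health_need=0.7, predicted_cost=0.6),
    ]
    slots = 2  # capacity of the extra-care program

    by_need = sorted(patients, key=lambda p: p.health_need, reverse=True)[:slots]     # intended
    by_cost = sorted(patients, key=lambda p: p.predicted_cost, reverse=True)[:slots]  # deployed

    print([p.name for p in by_need])  # ['A', 'C']
    print([p.name for p in by_cost])  # ['B', 'C'] -- patient A silently falls out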


The problem with that is - until we have "true AGI" that everybody agrees is "true AGI" - you can always dismiss deaths caused by software as "just a bug, not an AI safety problem".

For example when Tesla autopilot kills someone (which has happened).

https://impakter.com/tesla-autopilot-crashes-with-at-least-a...


I guess maybe one could make that argument, but the majority of AI alignment concerns seem more along the lines of Skynet-level scenarios.

The paperclip optimizer could be called a "bug" I guess. But the comparison to Tesla autopilot is interesting. First, is anyone calling that an AI? Second, when an objective baseline comparison to human drivers is possible, shouldn't that determine whether the AI is a net benefit or loss? And then (assuming a benefit) we can say with some degree of confidence that the AI is not malicious?


I think of it more like "Corporation X making whole species extinct because it has better uses for rainforests" not like "Skynet hates people". But the end result is the same.

Humanity doesn't hate polar bears. We just love burning oil more than we love them. Polar bears die as an unintended side effect. We might even be sad about that. We might even keep a few alive in zoos. But we won't change our whole economy to save a few cute bears.

Of course these kinds of threats will start to appear only when it's smart enough. The main problem with AGI is that we probably won't know until it's too late, because of how fast this technology develops once it can improve itself.

Can I ask you another question - what do you think will happen?

1. It won't get smarter than us.
2. It will care about us.
3. We will somehow keep it in check despite the fact that it's smarter than us.

Cause it seems to me that most people still intuitively think (1) because it's "too sci-fi". And even if they became persuaded that (1) is no longer certain, they didn't update the rest of their beliefs with that new information; they still believe AI is safe like they did when (1) was assumed true, because they haven't updated the cache - or maybe they don't even realize there's a dependency somewhere in their train of thought that needs to be updated.

This is what living in a world undergoing a singularity will be like, BTW - you can't think one thought through to the end without realizing the assumptions might have changed since the last time you thought about it. So you go a level down and realize the assumptions there are also changing. And so on.


> is anyone calling that an AI?

Sure. And TSLA unveiled their FSD beta during their AI Day.


You asked for examples of existing harms from AI misalignment, and now seem to define AI misalignment so that, by definition, it excludes all current systems. So tautologically the answer to your question will be "no".

Brian Christian's book "The Alignment Problem" contains many examples of harms from systems which people plausibly see as being based on precursor technology.


Misalignment is not malice. The paperclip optimizer isn't malicious. The risk isn't software going evil on us. The risk is software doing exactly what we made it do.

A software bug is generally what we call the situation where software does exactly what its creator asked it to do, but that thing is not a thing its creator actually intended or wanted. The creator did not correctly express their request, and/or did not properly think through all the effects of carrying it out.
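A tiny, entirely hypothetical sketch of that gap (file names invented for illustration): the code below does exactly what it was told - delete anything with "temp" in the name - which turns out not to be what was meant.

    # Hypothetical example of a faithfully executed, badly specified instruction.
    files = ["report.temp", "cache.temp", "important_template.docx"]

    # The literal request: "delete anything with 'temp' in the name".
    deleted = [f for f in files if "temp" in f]

    print(deleted)
    # ['report.temp', 'cache.temp', 'important_template.docx']
    # 'important_template.docx' matches too - no malice, just a specification bug.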

Think about people giving each other instructions and making rules for each other as we normally do day to day in English. English is not a very precise language for expressing what we actually want to happen, and also humans are not very good at rigorously specifying what they want to happen, relying instead on assumed implicit shared understanding; these assumptions lead to much misery in human/human interactions, never mind human/computer.

Worse, humans are not very good at actually knowing either what they want the world to be like or how to make that happen. With the best of intentions, we make rules and set policies, intending that the world become better for it, and for every instruction we give, rule we set, policy we make, we invariably end up with some unintended consequences. This is the human condition: every day, we try to make the world a little better, but in the end things turn out like they always do.

General human communication relies on shared values, but our values are not actually universally shared, we don't know how to even begin rigorously expressing our values, and much of the time we can't agree on what they are or even properly explain our own values to ourselves - all those fuzzy open questions in philosophy arise from this.

When we interact with a general AI, we are programming a computer system, in English. On top of all the usual problems, the computer system lacks our shared understanding, because not only do we not know how to impart it, we don't even really know what to impart. It is not aligned to human values. It can't be: we can't even achieve alignment with each other, never mind a lump of silicon. So miscommunication is inevitable.

Worse, the recent direction in AI has been to throw away any attempt to actually express what behaviours we want explicitly, and instead just throw the entire contents of the internet at a giant statistical model and hope the correlations it makes are somehow useful to us. The honestly surprising thing is that this even works to any extent at all. But a few minutes' interaction quickly assures us that the resulting systems react quite unpredictably to our input.

We will ask for things, the AI will do exactly what we asked for, and we will find that what we literally asked for is not actually what we want: the system that is the combination of our request with the AI will contain bugs.

The amount of resulting harm is determined by what it is the software is controlling and how much time we have to react to the unintended consequences. The AI doomer claim is that, unless we do better at the alignment problem, as our software gets faster and we give it control over more stuff, the inevitable bugs will cause bad things to happen faster than we can react to prevent harm, and the consequences will be worse than we can tolerate; worse, as we link everything together, we might not even realise we are working indirectly with safety critical systems until they do whatever it is we told them to but did not mean.

The solution should be obvious: don't put software in control of devices that interact with the real world in ways that could cause serious harm without first rigorously proving that no combination of inputs can result in behaviours that make the situation worse instead of better. Traditional engineering sounds expensive, hard and tedious, and it is, and it is not very shiny or sexy, but we can do it, and do do it in situations where serious harm will otherwise result, like aviation or (most of) the automotive industry. Include the fuzzy ill-conditioned statistics software, by all means, but don't wire it directly to the controls - make it an input to a traditionally engineered and well understood system, treated like any other noisy and potentially broken input, with the system as a whole rigorously designed to produce safe outputs when it can and to safely shut down when it cannot.
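A minimal sketch of that idea, assuming hypothetical sensor names and thresholds (this is not any real vehicle stack): the ML classifier's output is just one noisy input to a small deterministic layer that defaults to the safe action whenever that input is ambiguous.

    # Hypothetical safety layer wrapping a fuzzy ML perception output.
    from typing import Optional

    def safety_layer(ml_label: Optional[str], ml_confidence: float,
                     radar_detects_obstacle: bool) -> str:
        """Pick an action; the classifier never drives the brakes directly."""
        if radar_detects_obstacle:                   # independent, well-understood sensor wins
            return "brake"
        if ml_label is None or ml_confidence < 0.9:  # unknown or low confidence => fail safe
            return "brake"
        if ml_label in {"pedestrian", "cyclist", "vehicle"}:
            return "brake"
        return "proceed"

    # A flip-flopping, low-confidence classification still yields the safe output.
    print(safety_layer("debris", 0.55, radar_detects_obstacle=True))  # brake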

Surely the AI doomers are overstating the risk of doom - surely no-one working in safety-critical systems would do things any other way? "Has there ever been a case of harm from AI misalignment?" - this is the real thrust of that question. What sort of idiot would wire an unintended consequence generator directly to anything that might harm or hurt?

Tesla autopilot is interesting precisely as a current ongoing real-world example of the new-style fuzzy black box tech being wired directly to several tons of trundling metal, harm to property and life resulting from unintended behaviours, and instead of putting things on hold pending fault analysis and a more rigorous approach, we just double down on throwing more data at the fuzzy black box and hoping the fact that we can't reproduce the last bug with a few quick tests means it's gone away.


>When we interact with a general AI, we are programming a computer system, in English.

I agree with what you're saying in your post, but I want to further refine the part of your statement 'in English'....

LLMs are far beyond that... We're not just interacting in English, we're interacting in all the languages the model was trained on. The key word here is language, because for the average human this means the primary language they grew up with. But many people who are bilingual realize that the space of language is far richer than what a single-language speaker experiences; some languages contain concepts that don't exist in another language. Now extend that even further. Programming languages are not much different from human spoken language, just more formalized. Mathematics is a language that contains formal language.

And it can extend even further than that... That 802.11 signal in the air is a language, along with all of our other wireless signals. Yes, people have used deep learning to decode wireless.

I brought all this up because, as models become more multimodal, we humans are going to get stuck thinking that what we say/type to the AI is what it is interpreting, when the actual model may be working on a far larger and richer dataset than we are giving it credit for. As you said above, this would cause us to incorrectly interpret the capabilities of the model, likely significantly underestimating its ability in places where humans are incapable without additional tooling.


> Surely the AI doomers are overstating the risk of doom - surely no-one working in safety-critical systems would do things any other way? "Has there ever been a case of harm from AI misalignment?" - this is the real thrust of that question. What sort of idiot would wire an unintended consequence generator directly to anything that might harm or hurt?

You don't have to look far to find people doing exactly that. AutoGPT, ChaosGPT (!), a lot of random people copying its output into a Python prompt.

"It's not safety-critical", you might say, but that's only because the AI isn't smart enough yet. A human-level AI could easily do a lot of damage this way, and I don't think we'll learn our lesson until it's already happened. Here's hoping it happens before we get superhuman AI!


...for people playing Russian Roulette, the temptation to take pulling the trigger and surviving as evidence that it is safe to pull the trigger again appears irresistible.


“'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says” https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-a...


Corporations can be regarded as AI agents with misaligned goals. In a bit over a century this has resulted in damage to the climate and Earth's ecosystem that is threatening the survival of our civilization. And that's just slow, badly optimized paperclippers so far.


It's really not though is it, that's just your programming thinking for you.

Human deaths from weather- and climate-related events are near all-time lows, while overall population is at all-time highs. We have better crop yields (3) and productivity thanks to both technology and, indeed, CO2 fertilization (4), which works through photosynthesis, a process that is optimized at thousands of ppm for the vast majority of plant species - many multiples of where we now stand.

Of course, the most abundant periods for life on earth have been periods of abundant carbon, such as the Carboniferous era, and the Cambrian era (5).

Clearly the ecosystem is far from collapse as a result of mild warming and improved plant productivity. In many regards, it's thriving, and certainly more so than 100-120 years ago, which predated any pretense of conservation at all.

And we know that it's indeed glaciation (global cooling) that so often leads to catastrophe in ecosystems (1)(2).

But that we can program human beings with subtle prods of horrific and exaggerated headlines, along with cult-like expectations of consensus, into believing the opposite to be true from reality, is certainly an indication of paperclip optimization gone awry.

(1) https://www.researchgate.net/publication/259495017_Ecologica... (2) https://www.researchgate.net/publication/360894775_Global_co... (3) https://www.agriculture.com/news/crops/usda-raises-the-us-co... (4) https://www.nasa.gov/feature/goddard/2016/carbon-dioxide-fer... (5) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5942912/


A huge amount of monoculture is not the sign of a healthy environment, but of one that is at the precipice of collapse.

This behavior is seen in animal populations quite often. A series of 'good' events happens, allowing the population to expand beyond the mean carrying capacity. If the good times are excessively long or excessively good you can see a rapid overshoot in total population. When this occurs the population is screwed. Even if 'bad' times do not come, merely reducing resources back to what the mean population can support means that a large die-off will occur, and that die-off will typically kill almost all of the population of said animal because almost all available food has been consumed. You then get a drastic reduction in total population far below the mean carrying capacity. If the environment switches from 'good' times, say high water availability, into extreme extended drought, you can get species extinction.


You're making the mistake of assuming that somehow humanity is better off because our population is larger. There is no intrinsic benefit to a population of 8 billion, 4 billion, or 2 billion. None of those numbers are a population bottleneck in any manner. For comparison, bonobos have a population of something like 50,000.

Having more people doesn't improve the quality of life of those people. It's just more people.


Clearly the ecosystem is far from collapse while we're in the middle of a rapid human-caused mass-extinction. What kind of propaganda are you smoking, bro.


Sorry, but according to NASA satellite data, you are sorely mistaken: https://www.nasa.gov/feature/goddard/2016/carbon-dioxide-fer...


Diversity vs biomass. If AI prefers biomass over diversity we're pretty much fucked ;) If it prefers diversity - at least some of us will remain in zoos ;)


The conditions necessary for biomass expansion are also conducive to biodiversity. Indeed, the vast majority of biological diversity is in the tropical region of earth. Species of life seem to have a very hard time in the arctic regions.

As for AIs, I see no reason why they should prefer any characteristic of ecology at all, if all of their needs can be met from silicon and electricity.

(1) https://www.researchgate.net/publication/259552528_Why_are_t...


> The conditions necessary for biomass expansion are also conducive to biodiversity.

It might be the case in general, but currently we're experiencing biodiversity loss on the global scale. [1] [2]

[1] https://en.wikipedia.org/wiki/Biodiversity_loss [2] https://en.wikipedia.org/wiki/Holocene_extinction


I’m loath to reply to a comment that’s either idiotic, propaganda, or a troll, but one word:

d/dt


The sources are cited and your comment is against HN policy. If you care to engage with someone more educated than yourself on a topic, I'm happy to debate on fair terms.


Your sources are absolutely irrelevant when your argument has a vast hole that is evident to anyone who has even a little bit of actual understanding about the issue. You can cargo-cult as many sources you like, they do nothing to save a fundamentally broken argument.


You're well aware of the game you're playing, your argumentation is disingenuous. You have reframed and obfuscated the issue of climate change to suit your preferred narrative, and misrepresented the science. You have indulged in ad hominem attack by saying someone here is less educated than you, thereby flouting the site's guidelines while at the same time invoking them to hide behind them.

Regarding the papers you cite, you seem to have misrepresented the views of the authors.

You cite author J Brown [1]. Another of his papers [2] says "increased energy demand and climate consequences of burning fossil fuels will continue to accompany a rapidly urbanizing planet posing major challenges for global sustainability."

You cite a NASA report of a study, saying it denies mass extinction, and suggesting that global warming supports increased life by making polar regions habitable. The report and study do not support your position. The paper itself [3] cites [4] which stresses the importance of minimising climate warming via "net negative emissions". The authors also acknowledge that "most models lack a representation of regionally important ecosystems (peatlands, wetlands)" the destruction of which they note as "negative".

Your citing of glaciation is frankly odd, and again the papers don't support your arguments. For example in [5] the authors say "it was the loss of habitat diversity [...] that triggered a drop in speciation rate, and subsequent loss of biodiversity." Further, they contradict your assertions by pointing out that biodiversity can increase in cooler zones: "cool-water niches were available to be filled, warm-tropical niches did not exist". However, to go further down this rabbit hole would be to indulge in your game of misdirection, and I've no intention of doing that.

In short, your posts in this thread appear entirely untrustworthy and you seem to be acting in bad faith. I notice you've posted climate change 'skeptical' content here previously. You suggest it is others who have been "programmed", but seem reluctant to examine the nature of your own susceptibility to misinformation, which frankly comes across as disdainful and superior.

[1] https://www.researchgate.net/profile/James-Brown-37

[2] https://www.researchgate.net/publication/331075555_The_Centr...

[3] https://www.nature.com/articles/nclimate3004 and https://sci.bban.top/pdf/10.1038/nclimate3004.pdf

[4] https://www.nature.com/articles/nclimate1783

[5] https://sci.bban.top/pdf/10.1130/G32679.1.pdf


The whole "world is ending" thing is a bit hysterical in my opinion. Discussion on it is so stifled because people don't want to sound like they're anti-environment, so they don't check the least reality-based voices.



I am familiar with this situation and agree it is a problem. But this is a deterministic, well-known algorithm rather than AI, isn't it? I would also say that this falls under bias rather than misalignment. I think bias is a much, much more realistic problem area than misalignment.


This is exactly what I'm talking about in the other response to your previous comment. The joke goes "AI ceases to be AI when we start to understand how it works".

But it doesn't matter if you call the software that kills us all "conscious AI that was misaligned" or if it was "deterministic code that had a bug" - the underlying problem that we're all dead remains the same. We're just arguing about labels instead of issues.

The AI that kills us all (if this happens) will very likely be a simple, deterministic algorithm that ran on A LOT of data.


You have put your finger on the crucial question: whether AI research has killed everyone in the past. How overcareful would a person have to be to worry about its happening in the future if it's never happened in the past?

</sarcasm>


to kill humanity first it must already exist

in my opinion our efforts at being humane have not yet risen to our ability to imagine humanity

I do not think that AI will help in our efforts to create humanity but will rather enhance the negative aspects of our already harsh nature


You don't think that the use of that word in this context simply refers to humankind?


We put OOM in FOOM.



