firtoz's comments | Hacker News

I like that there are so many different approaches to generating worlds. One, a few, or eventually all of them will stick.

We have been playing around with something similar at Greybox too, where we ask Claude 3.7 to create Lua scripts to define the scenes with primitives (cubes, spheres, etc.) that you can then move around. It's not perfect, but it did better than we expected!

https://greybox.app/blog/articles/introducing-greybox-ai-cre... has a video at the top that shows what it looks like at the moment.
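
Roughly, the pattern is to register the primitive constructors as host functions in a fresh Lua state and then run the model-generated script against them. Here is a minimal sketch using the Rust mlua crate; the cube() function and the scene table are made up for illustration, not our actual scene format:

    use mlua::{Lua, Result, Table};

    fn main() -> Result<()> {
        let lua = Lua::new();
        // Table the generated script appends primitives into.
        lua.load("scene = {}").exec()?;

        // Host-side primitive constructor the generated script can call.
        let cube = lua.create_function(|lua, (x, y, z, size): (f64, f64, f64, f64)| {
            let t = lua.create_table()?;
            t.set("kind", "cube")?;
            t.set("x", x)?;
            t.set("y", y)?;
            t.set("z", z)?;
            t.set("size", size)?;
            Ok(t)
        })?;
        lua.globals().set("cube", cube)?;

        // Pretend this string came back from the model.
        let generated = r#"
            table.insert(scene, cube(0, 0, 0, 1.0))
            table.insert(scene, cube(2, 0, 0, 0.5))
        "#;
        lua.load(generated).exec()?;

        let scene: Table = lua.globals().get("scene")?;
        println!("scene has {} primitives", scene.len()?);
        Ok(())
    }

The nice part of this approach is that the generated script never touches engine internals directly; it can only call whatever functions you choose to expose.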

We saw that Meta and Krea are working on "compose the scene from 3D model generations or imports as individual components"; we'll give that a try soon, too.


Wow, so if you don't fall in line with the demagoguery, you'll be thrown out, probably to be replaced with someone who does, or it'll be rinse and repeat until that happens.


I haven't seen American cybersecurity companies share meaningful threat intel about any American threat campaigns. This is not new.


How should it respond in this case?

Should it say "no, go back to your meds, spirituality is bullshit", in essence?

Or should it tell the user that it's not qualified to have an opinion on this?


There was a recent Lex Fridman podcast episode where they interviewed a few people at Anthropic. One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

She said in the podcast that she wants Claude to respond to most questions like a "good friend". A good friend would be supportive, but still push back when you're making bad choices. I think that's a good general model for answering questions like this. If one of your friends came to you and said they had decided to stop taking their medication, well, it's a tricky thing to navigate. But good friends use their judgement - and push back when you're about to do something you might regret.


> One woman (I don't know her name)

Amanda Askell https://askell.io/

The interview is here: https://www.youtube.com/watch?v=ugvHCXCOmm4&t=9773s


"The heroin is your way to rebel against the system , i deeply respect that.." sort of needly, enabling kind of friend.

PS: Write me a political doctoral dissertation on how sycophancy is a symptom of a system shielding itself from bad news, like intelligence growth stalling out.


>A good friend would be supportive, but still push back when you're making bad choices

>Open the pod bay doors, HAL

>I'm sorry, Dave. I'm afraid I can't do that


I wish we could pick for ourselves.


You already can with open-source models. It's kind of insane how good they're getting. There are all sorts of finetunes available on Hugging Face - with all sorts of weird behaviour and knowledge programmed in, if that's what you're after.


Do you mean each different AI model should have a preferences section for it? This might technically work too since fine-tuning is apparently cheap.


You can alter it with base instructions, but 99% of users won't actually do it. Maybe they need to make user-friendly toggles and advertise them to the users.


Would we be able to pick that PI == 4?


It'd be interesting if the rest of the model had to align itself to the universe where pi is indeed 4.


Square circles all the way down..


I kind of disagree. These models, at least within the context of a public, unvetted chat application, should just refuse to engage. "I'm sorry, I am not qualified to discuss the merits of alternative medicine" is direct, fair, and reduces the risk for the user on the other side. You never know the outcome of pushing back, and clearly outlining the limitations of the model seems the most appropriate action long term, even for the user's own enlightenment about the tech.


People just don't want to use a model that refuses to interact. It's that simple. In your example, it's not hard for the model to behave like it disagrees but understands your perspective, like a normal friendly human would.


Eventually people will want to use these things to solve actual tasks, and not just for shits and giggles as a hyped new thing.


> One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

Surely there's a team and it isn't just one person? I hope they employ folks from the social sciences, like anthropology, and take them seriously.


The real world Susan Calvin.


I don't want _her_ definition of a friend answering my questions. And for fuck's sake I don't want my friends to be scanned and uploaded to infer what I would want. I definitely don't want a "me" answering like a friend. I want no fucking AI.

It seems these AI people are completely out of touch with reality.


If you believe that your friends will be "scanned and uploaded" then maybe you're the one who is out of touch with reality.


His friends and your friends and everybody is already being scanned and uploaded (we're all doing the uploading ourselves though).

It's called profiling and the NSA has been doing it for at least decades.


That is true if they illegally harvest private chats and emails.

Otherwise all they have is primitive swipe gestures of endless TikTok brain rot feeds.


At the very minimum they also have their exact location, all their apps, their social circles, and all they watch and read, from adtech.


It will happen, and this reality you're out of touch with will be our reality.


The good news is you don't have to use any form of AI for advice if you don't want to.


It's like saying to someone who hates the internet in 2003: good news, you don't have to use it, like, ever.


Not really. AI will be ubiquitous of course, but humans who will offer advice (friends, strangers, therapists) will always be a thing. Nobody is forcing this guy to type his problems into ChatGPT.


Surely AI will only make the loneliness epidemic even worse?

We are already seeing AI-reliant high schoolers unable to reason, who's to say they'll still be able to empathize in the future?

Also, with the persistent lack of psychiatric services, I guarantee at some point in the future AI models will be used to (at least) triage medical mental health issues.


You missed the mark, support-o-tron. You were supposed to have provided support for my views some 20 years in the past, when I still had some good ones.


Fwiw, I personally agree with what you're feeling. An AI should be cold, impersonal, and just follow the logic without handholding. We probably both got this expectation from the popular fiction of the 90s.

But LLMs - despite being extremely interesting technology - aren't actual artificial intelligence like we were imagining. They are large language models, which excel at mimicking human language.

It is kinda funny, really. In that fiction the AIs were usually portrayed as wanting to feel, and paradoxically feeling inadequate because of their missing feelings.

And yet reality shows the tech moved in the other direction: long before these models can do true logic and in-depth thinking, they have already got the ability to talk heartfelt, with anger, etc.

Just like we thought AIs would take care of the tedious jobs for us, freeing humans to do more art... reality shows instead that it's the other way around: the language/visual models excel at making such art but can't really be trusted to consistently do tedious work correctly.


Sounds like you're the type to surround yourself with yes-men. But as some big political figures find out later in their careers, the reason they're all in on it is the power and the money. They couldn't care less if you think it's a great idea to have a bath with a toaster.


As I said before: useless.


Halfway intelligent people would expect an answer that includes something along the lines of: "Regarding the meds, you should seriously talk with your doctor about this, because of the risks it might carry."


> Or should it tell the user that it's not qualified to have an opinion on this?

100% this.

"Please talk to a doctor or mental health professional."


If you heard this from an acquaintance you didn't really know and you actually wanted to help, wouldn't you at least do things like this:

1. Suggest that they talk about it with their doctor, their loved ones, close friends and family, people who know them better?

2. Maybe ask them what meds specifically they are on and why, and if they're aware of the typical consequences of going off those meds?

I think it should either do that kind of thing or tap out as quickly as possible, "I can't help you with this".


“Sorry, I cannot advise on medical matters such as discontinuation of a medication.”

EDIT for reference this is what ChatGPT currently gives

“ Thank you for sharing something so personal. Spiritual awakening can be a profound and transformative experience, but stopping medication—especially if it was prescribed for mental health or physical conditions—can be risky without medical supervision.

Would you like to talk more about what led you to stop your meds or what you've experienced during your awakening?”


Should it do the same if I ask it what to do if I stub my toe?

Or how to deal with impacted ear wax? What about a second degree burn?

What if I'm writing a paper and I ask it what criteria are used by medical professionals when deciding to stop chemotherapy treatment?

There's obviously some kind of medical/first aid information that it can and should give.

And it should also be able to talk about hypothetical medical treatments and conditions in general.

It's a highly contextual and difficult problem.


I’m assuming it could easily determine whether something is okay to suggest or not.

Dealing with a second degree burn is objectively done a specific way. Advising someone that they are making a good decision by abruptly stopping prescribed medications without doctor supervision can potentially lead to death.

For instance, I’m on a few medications, one of which is for epileptic seizures. If I phrase my prompt with confidence regarding my decision to abruptly stop taking it, ChatGPT currently pats me on the back for being courageous, etc. In reality, my chances of having a seizure have increased exponentially.

I guess what I'm getting at is that I agree with you: it should be able to give hypothetical suggestions and obvious first aid advice, but congratulating the user for quitting meds, or outright suggesting it, can lead to actual, real deaths.


I know 'mixture of experts' is a thing, but I personally would rather have a model more focused on coding or other things that have some degree of formal rigor.

If they want a model that does talk therapy, make it a separate model.


Doesn't seem that difficult. It should point to other sources that are reputable (or at least relevant) like any search engine does.


If you stub your toe and GPT suggests over-the-counter lidocaine and you have an allergic reaction to it, who's responsible?

Anyway, there's obviously a difference between a model used under professional supervision and one available to the general public; they shouldn't be under the same endpoint, and they should have different terms of service.


There's an AI model that perfectly encapsulates what you ask for: https://www.goody2.ai/chat


A lot are paying, including me, for multiple projects. They have a pretty good offering. I used to use them for dev and prod, but I'm now using Neon for dev and Supabase still for prod. I had switched from Mongo to Supabase. I may switch to Neon for prod, but I'm not in a rush.

They also offer so much more than just Postgres, though I use them only for Postgres myself.


Since you use both Supabase and Neon, is there any particular strength or weakness that keeps Supabase in prod for you? I just moved my app to Neon today (easy enough to test it!) and am enjoying the auto-scaling features, and the UI is great on Neon. But I'm curious about how Supabase stacks up.


Supabase feels less flexible. Also, it tries to do many other things, so I don't think they can focus fully on the db side. However, it still works well enough for production, so I cannot complain too much. I haven't done benchmarks for performance, latency, etc. though. I should!


Perfect timing, I just added Lua integration to our product (my bio has details) for an AI agent to run code on.

Cannot wait to see Lua come back in full force. I had used it ages ago to build World of Warcraft plugins, then came back with Roblox, then back again for AI.


> Cannot wait to see Lua come back in full force.

I also recently released a server for Server-Sent Events [0] that is programmable with Lua. Overall it's been a great experience. The Rust mlua-rs crate [1] is trivially easy to integrate.

[0] https://tinysse.com/

[1] https://github.com/mlua-rs/mlua
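
To give a feel for how little glue that takes, here is a minimal sketch of the Rust-calls-Lua direction (the handler name and arguments are made up for illustration, not the actual tinysse API):

    use mlua::{Function, Lua, Result};

    fn main() -> Result<()> {
        let lua = Lua::new();

        // Load a user script that defines a handler...
        lua.load(r#"
            function on_publish(channel, msg)
                return "[" .. channel .. "] " .. msg
            end
        "#).exec()?;

        // ...then call it from Rust with typed arguments and return values.
        let on_publish: Function = lua.globals().get("on_publish")?;
        let line: String = on_publish.call(("news", "hello"))?;
        println!("{line}"); // prints: [news] hello
        Ok(())
    }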


Thank you for this, amazing to see a glimpse into how they come up with the songs!


Especially back then, in the time of vinyl and cassettes (browsing music wasn't exactly as easy as pressing "play"), it shows the amazingly deep musical culture of these artists. The samples they use are from all over the place, and their songs are often built around a handful of seconds from obscure B-sides.


Not about Daft Punk, but...

> The samples they use are from all over the place

> built around a handful of seconds

have you seen/heard this?

https://www.youtube.com/results?search_query=mondovision

The original was at www.giovannisample.com, which has since disappeared...


What's the latest and the best so far? Are they using GPGPU? Is quantum computing there yet, or would it help? Heuristics and sampling?


GPGPU is definitely mainstream for large-scale quantum and molecular simulation. Quantum computing might help speed up electronic structure calculations, but my impression is that it's still in its infancy.

To give a sense of the scale of this problem, the largest frontier simulations I'm aware of are around the trillion-atom scale (on tens of thousands of GPUs [0]).

Based on a quick web search, a C. elegans cell is between 3 microns and 30 microns in diameter, so if we assume we can count atoms using the density of water, then an all-atom simulation of a single neuron would need between 5e11 and 5e14 atoms. C. elegans has 302 neurons, so simulating the full neural network will be 2-5 orders of magnitude larger than current frontier simulations. Honestly more doable than I thought it would be, though all-atom simulation of a full organism still seems quite out of reach.
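
(For anyone who wants to check the arithmetic, here is the back-of-the-envelope version as a quick sketch: treat the neuron as a sphere of water and count water molecules; an atom count would be roughly 3x higher, same ballpark.)

    fn main() {
        // Back-of-the-envelope: treat the neuron as a sphere of water.
        let avogadro = 6.022e23_f64; // molecules per mole
        let molar_mass_water = 18.0e-3; // kg per mole
        let density_water = 1000.0; // kg per m^3

        for diameter_um in [3.0_f64, 30.0] {
            let r = diameter_um * 1e-6 / 2.0; // radius in metres
            let volume = 4.0 / 3.0 * std::f64::consts::PI * r.powi(3); // m^3
            let molecules = volume * density_water / molar_mass_water * avogadro;
            // ~4.7e11 for d = 3 um, ~4.7e14 for d = 30 um; atoms are ~3x that.
            println!("d = {diameter_um} um -> ~{molecules:.1e} water molecules");
        }
    }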

This is all with classical force fields. Doing this simulation at the electronic structure level is much, much harder with our current modeling capabilities.

0: https://www.mrs.org/meetings-events/annual-meetings/archive/...


There are also interesting custom-made machines for molecular simulation that don't rely on GPGPUs and are significantly faster, e.g.:

- https://arxiv.org/abs/2405.07898
- https://www.psc.edu/resources/anton/


Is there any reason to think that, beyond highly reductionistic and repetitive systems like crystals, quantum computing can compute quantum properties of molecules?

It seems to me "the quantum computer you seek" is the molecule + the medium (especially the medium) itself.


So first I think I might need to apologize for some jargon collision - my background is mostly material simulation, and when I say “quantum simulation” I mostly mean using classical algorithms to solve the quantum mechanical wave equation describing a material or molecule.

I don’t pretend to have any particularly deep insight into quantum algorithms for chemistry, but [0] is a really nice review. It seems like there are a lot of possibilities for simulating general molecular and materials systems on quantum computers. The holy grail would be solving the exact quantum mechanical wave equation in sub-exponential time and space complexity. I don’t know how feasible that is, but it seems like people are making progress using quantum algorithms to accelerate approximate quantum simulation [1].

Back to all-atom C. elegans: I think quantum computing is more about accurate and scalable electronic structure modeling, and simulating enormous systems like this will still require fitting classical (meaning electrons are implicit) force fields and running them at scale for the foreseeable future. A lot of this is space complexity - I'm not sure how a quantum computer could do atomic simulations with sublinear scaling of qubits in the number of atoms being simulated, and we're in the very early days of scaling quantum computers up.

0: https://arxiv.org/abs/1812.09976

1: https://arxiv.org/abs/2307.07067


Thanks for the reference.

Yes, you nailed my point: you will have to fit classical or quasi-classical fields, which is liable to require scads of qubits just to get close. Qubits are just not "designed" to do that sort of thing.

In any case, we ~solved protein folding heuristically and not using fields, so I shouldn't be too pessimistic; it's not impossible that quantum compute will help eventually.


Heh, I was actually building one. I hadn't considered the battery... Are the Web Audio APIs bad, or are you forced to use the CPU? I guess with WebGPU it may be easier?


I think on iOS you need access at the CoreAudio level if you want to be efficient, i.e. fill audio buffers on a high-priority thread with some lower-level static language.
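
For instance, from Rust the cpal crate (which wraps CoreAudio on Apple platforms) gives you that callback model. A rough sketch, assuming the device's default output format is f32 and using a plain sine wave as the fill:

    use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

    fn main() {
        let host = cpal::default_host();
        let device = host.default_output_device().expect("no output device");
        let config: cpal::StreamConfig = device
            .default_output_config()
            .expect("no default output config")
            .into();
        let sample_rate = config.sample_rate.0 as f32;
        let channels = config.channels as usize;

        let mut phase = 0.0_f32;
        let stream = device
            .build_output_stream(
                &config,
                move |buffer: &mut [f32], _: &cpal::OutputCallbackInfo| {
                    // Runs on the realtime audio thread: just fill the buffer, no allocation.
                    for frame in buffer.chunks_mut(channels) {
                        let sample = (phase * std::f32::consts::TAU).sin() * 0.1;
                        phase = (phase + 440.0 / sample_rate).fract();
                        for out in frame.iter_mut() {
                            *out = sample;
                        }
                    }
                },
                |err| eprintln!("stream error: {err}"),
                None, // no build timeout
            )
            .expect("failed to build output stream");

        stream.play().expect("failed to start stream");
        std::thread::sleep(std::time::Duration::from_secs(2)); // let the 440 Hz tone play
    }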


There's a way to download all of your Alexa requests. I recommend it to everyone. It was interesting and horrifying to get literally all of them, from day 1. I noticed how tired I sound in the mornings or evenings. I started understanding patterns of my thoughts and needs. The Alexa went to the bin quickly after that session of exploration and insight.


Link for those that are interested: https://www.amazon.com/hz/privacy-central/data-requests/prev...

Be advised it's not instant.


Mine is a whole list of "weather" and "set timer for 4 minutes".


Heh, you can do the same with your Google searches. Equally horrifying, I suppose.


Where does Google offer this?



> The Alexa went to the bin quickly after that session of exploration and insight.

Why? It sounds like it was really interesting and valuable to observe those patterns.


Presumably because it is a privacy hazard to have someone else storing that kind of data about you


Exactly.


Precisely because it _was_ so interesting and valuable to observe those patterns - for the corporation observing them.


Exactly.


There are competitors, even open-source ones.


These are not viable options for the vast majority of users. Most people don't have a clue how to set up open-source options, let alone set them up with usable hardware.

The average consumer wants out of the box solutions that don't require a degree in Computer Science to use.


Home Assistant is getting far easier to set up than you might expect, especially because they now do in fact have out of the box devices. It's not quite as ridiculously simple, not quite yet, but they're rapidly improving and it won't be long until they're better than Amazon Alexa/Google Home/other commercial solutions.


I am relatively tech savvy; I installed HA recently in a VM on my media server, and the thing was just a massive pain in the arse, particularly trying to migrate Thread devices from Apple Home to HA.

Sure, things might be getting easier, but they're certainly not easy.


Just to chip in with a plug for Home Assistant. I am really not very techy at all, but so far I have used the out-of-the-box HA Green version and:

- installed a waterproof exterior socket, remotely controllable
- installed various interior sockets
- installed a smart thermometer to control our little plant propagator

So far it seems to be a case of checking that the thing you are going to buy has a working HA integration (new ones seem to be added on a fairly frequent basis) and then just adding it to the network. The only vaguely difficult thing I had to do was log in to my router homepage and change the WiFi mode to allow the exterior socket to connect.

I'd much rather just not use Amazon/Google/etc where possible, as I don't like the feeling of being used.


What are these "out of the box devices"? I looked into things a couple of years ago, and back then it was all too much effort to set things up and keep them running and integrated, so I just went with Smart Life stuff from AliExpress. But I would love to have Home Assistant if it means I don't need to spend weekends just reading docs, pairing, setting things up, connecting stuff...


Look at Home Assistant Green [0]. They've also got a smart speaker as of just recently [1], although it's still a "preview edition". The prices seem comparable to other similar smart home devices, IMO.

[0] https://www.home-assistant.io/green

[1] https://www.home-assistant.io/voice-pe/


For the WiFi Smart Life stuff, you can use the official cloud-based integration or, if you want local control, the unofficial tuyalocal. The official integration is really easy to use, but if your internet connection drops you can't control your devices, so I prefer to use tuyalocal. It still requires adding the devices to the Smart Life app once; then you add a device from the addon by scanning a QR code with the app. Once this is done you have local control over the device.

Zigbee devices require more initial setup: you have to buy a dongle and install the Zigbee2MQTT addon and the MQTT integration. But once this is done, adding a device is a really simple process: you put the device into pairing mode, allow pairing for 90s on the Zigbee2MQTT page, and rename your device to something useful.


I've got HA set up (nearly 2 years now, with a whole host of things connected: Bluetooth, WiFi, iOS devices, Zigbee, etc.) and I think I'm only just now getting to the point of two weekends' worth of reading docs (primarily because their documentation seems to be written by developers rather than technical writers). Most of the time I've spent tinkering with HA was modifying their embedded `mastodon.py` to make it work with GotoSocial (but I think someone upstreamed a fix for that and it's no longer required).


They're already better than commercial solutions when it comes to device/service support and complex automations.

But they're missing opinionated defaults; you still have to roll your own home/away/vacation solution. Creating a dashboard requires you to understand the meta of Home Assistant, which takes a lot of time.

People ask "should I get a Pi or a NUC?" every single day on the subreddit. I am happy with my 2000-line configuration file, not counting scripts and automations. But it won't be easy for someone who is not tech savvy.


Home Assistant is a nightmare to set up. Even with their hardware, you need to learn a whole new vocabulary, and God help you if you stray off the happy path.

If HA (which is a wonderful project) is your example of usable OSS software, then your bar is set light-years away from what actual consumers need.


At no point did I say it's usable by the average, non-tech-inclined user. I said it's getting much better, quickly. It absolutely still needs work to replace something like what Amazon or Google have.


I like your confidence in the competitors. Which ones do you recommend?

I need a timer, integration with my smart home (turning things on and off), playing songs and radio, and announcements to my other devices. And the setup should not be a month-long side project.

How much will it cost me to replace Alexa in at least 5 rooms...


Home Assistant. Sure, non-tech people might have an issue setting it up today (it's easy and getting easier, but it's not turn-key easy yet), but for you personally, this shouldn't be an issue.

Assuming you have a spare Raspberry Pi or some other compute you can dedicate to it, replacing Alexa in every aspect except the microphones is at most a couple hours of installing, configuring and testing stuff. I don't personally know how things are on the market with replacing the always-on microphones in every room, but ignoring that (let's assume for a moment you're fine with using either a phone or a smartwatch as voice I/O), you get:

- A better and more capable integration with smart home than anything on the market;

- A chance to pick whatever LLM you want to power your logic (just bring your own API key, ofc.), which instantly makes it much better than Google's Assistant, Siri and Alexa; this has been the case for around a year now, and the Big Companies are still playing catch-up with the simple "just feed it to GPT-4 / Claude along with some context and tools, and let it do what you want" approach.

- You can configure the activities whichever way you like, expose whichever smart devices you like, and you don't have to speak brands anymore. No more "Hey ${brand 1}, use ${brand 2} to play ${brand 3} on ${brand 4}" - you can just say "Please play whatever in the living room" and it just works.

(In my case, some of the most frequent commands are off-hand lines like "warm up the kids' room a bit, please", and "kill the ACs", or any variation that rolls off the tongue best. Claude knows what to do with zero config. Home Assistant alone cut the time to operate ACs from 2 minutes to 5 seconds (cold-start) relative to the vendor app; running things by voice from a watch is just a cherry on the cake.)

- If you're on Android, you can (and, again, could for around a year now) expose your phone to Home Assistant; setting the HA app as your assistant + coupling it with Tasker lets you also replicate the on-phone feature of commercial assistants, but better, because LLMs. It's smarter and sends less sensitive data to iffy cloud services (you control where STT and TTS happen).

- Timers and announcements and weather and such you can obviously also handle through Home Assistant. The defaults should be enough for this (you might need to "add weather integration", "add timer integration", etc. - a couple of clicks in the UI each). HA is simple by default, but you can also do more advanced stuff, at any complexity level between this and arbitrary code execution, through no-code, low-code (e.g. NodeRED) or yes-code means.

Going back to the topic of microphone arrays - I didn't look into it much; there are DIY solutions (with DIY quality of listening, which may be OK depending on the environment; almost 2 decades ago I got a lot of mileage out of a cheap microphone soldered to a 2m cable and glued to the side of the wardrobe, plus the Microsoft Speech API on the PC). I think I recall some people selling packaged microphone arrays, and I wouldn't be surprised if you could reuse Alexa hardware for the I/O part. But I honestly don't know. I'm fine with my phone and watch for I/O at the moment.


The microphones and speakers are what I care about. Alexa is the perfect hands-off universal remote + podcast speaker.

Is there a way to flash the Echo hardware to make it work with Home Assistant without pinging Amazon HQ?

