As someone who pays for ChatGPT and Claude, and uses them EVERY DAY... I'm still not sure how I feel about these consumer apps having access to all my health data. OpenAI doesn't have the best track record on data safety.
Sure, OpenAI's business side has SOC2/ISO27001/HIPAA compliance, but does the consumer side? In the past their certifications have very clearly been "this is only for the business platform". And yes, I know regular consumers don't know what SOC2 is other than a pair of socks that made it out of the dryer... but still. It's a little scary when getting into very personal/private health data.
Gattaca is supposed to be a warning, not a prediction. Then again neither was Idiocracy, yet here we are.
Unfortunately I don't think that's a good solution. Memories are an excellent feature and you see them on.... most similar services now.
Yes, projects have their uses. But as an example: I write Python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it were just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler?" it knows I use Home Assistant, I've done XYZ projects, I prefer Python, I like DIY projects to a certain extent but am willing to buy, in which case go prosumer. Etc. etc. It's more like a real human assistant than a dumb bot.
I have not really seen ChatGPT learn who I “am”, what I “like” etc. With memories enabled it seems to mostly remember random one-off things from one chat that are definitely irrelevant for all future chats. I much prefer writing a system prompt where I can decide what's relevant.
I know what you mean, but the issue the parent comment brought up is real and "bad" chats can contaminate future ones. Before switching off memories, I found I had to censor myself in case I messed up the system memory.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
Maybe if they worked correctly they would be. I've had answers to questions be needlessly influenced by past chats, and I had to tell it to answer the question at hand and not use knowledge from a previous chat that was completely unrelated other than also being a programming question.
This idea that it's so much better for OpenAI to have all this information about me because it can make some suggestions seems ludicrous. How has humanity survived thus far without this? It seems like you just need more connections with real people.
> The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
It's unfortunate the way modern politics has gone. I see this site and am immediately suspicious. What bullshit is in there? What ulterior motive should I be concerned about?
Rather than reading it assuming it was fact-based science. Maybe not the best assumption, because governments never get things 100% right... but at least I'd be able to trust it. Now, specifically because this is RFK's MAHA world, I assume everything on this site is a lie.
After reading through it I don't see anything terrible or stupidly over the top. Yes: more protein and vegetables, good; fewer heavily processed foods.
Hi. Yes, we fully intend to open up access to the build tool here. The build file you see is a new format that we've built to be able to do reproducible builds. It's a new frontend on top of buildkit so you can use it with docker build. The team is currently working hard to provide access to this tooling which will enable you to create, build and modify the images in your environment. We just need a couple more days for this to be available.
You do not need a custom buildkit frontend to do reproducible builds with any modern container build system, including docker.
Vanilla docker/buildkit works just fine: we use it in Stagex with plain Makefiles and Containerfiles, which makes it super easy for anyone to reproduce our images with identical digests and to audit the process. The only non-default thing we do with docker is have it use the containerd backend that comes with docker distributions, since that allows for deterministic digests without pushing to a registry. This lets us have the same digests across all registries.
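For the curious, a rough sketch of what the non-default bits look like (this is a generic illustration, not Stagex's actual config; the image name and file paths are made up). The containerd image store is enabled via a daemon.json feature flag, and BuildKit's SOURCE_DATE_EPOCH support pins timestamps for reproducibility:

```shell
# /etc/docker/daemon.json -- switch docker to the containerd image store,
# which lets you compute a stable image digest locally without pushing
# to a registry. Restart dockerd after changing this.
#
# {
#   "features": { "containerd-snapshotter": true }
# }

# Build with a fixed timestamp so file mtimes and image history don't
# vary between runs (supported by recent BuildKit versions).
docker build \
  --build-arg SOURCE_DATE_EPOCH=0 \
  -t example/image:latest \
  -f Containerfile .

# Compare the locally computed digest against a previously published one.
docker images --digests example/image
```

Reproducibility still depends on the build itself being deterministic (pinned sources, no network fetches of "latest", etc.); the above only removes docker-side sources of nondeterminism.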
Additionally our images are actually container native meaning they are "from scratch" all the way down avoiding any trust in upstream build systems like Debian or Alpine or any of their non deterministic package management schemes or their single-point-of-failure trust in individual maintainers.
We will also be moving to LLVM native builds shortly removing a lot of the complexity with multi-arch images for build systems. Easily cross compile all the things from one image.
Honestly we would not at all be mad if Docker just white labeled these as official images as our goal is just to move the internet away from risky and difficult to audit supply chains as opposed to the "last mile" supply chain integrity that is the norm in all other solutions today.
My friends owned it (I was never allowed to have a NES myself). Not once did ANY of us ever manage to land the plane. We tried MANY times. This blog makes it seem so easy I want to be angry at it :-)
Too much of anything sucks. Too big of a monolith? Sucks. Too many microservices? Sucks. Getting the right balance is HARD.
Plus, it's ALWAYS easier/better to run v2 of something when you completely re-write v1 from scratch. The article could have just as easily been "Why Segment moved from 100 microservices to 5" or "Why Segment rewrote every microservice". The benefits of hindsight and real-world data shouldn't be undersold.
At the end of the day, write something, get it out there. Make decisions, accept some of them will be wrong. Be willing to correct for those mistakes or at least accept they will be a pain for a while.
In short: No matter what you do the first time around... it's wrong.
Probably a lot of overlap in the Venn diagram of people who would like the two things. Mostly the "Early Adopter" circle.
Also, a lot of cars have limitations with comma.ai. Yes, you can install it on all sorts of cars, but there are limitations like: only works above 32 mph, cannot resume from a stop, cannot take tight corners, cannot do stop-light detection, requires additional car upgrades/features, only known to support model year 2021, etc.
Rivian supports everything, it has a customer base who LOVE technology, are willing to try new things, and ... have disposable income for a $1k extra gadget.
I would wager that's because there isn't a lot of existing silicon that fits the bill. What COTS equipment is there that has all the CPU/Tensor horsepower these systems need... AND is reasonably power efficient AND is rated for a vehicle (wild temp extremes like -20F to 150F+, constant vibration, slams and impacts... and will keep working for 15 years).
Yeah, Tesla has some. But they aren't sharing their secret sauce. You can't just throw a desktop computer in a car and expect it to survive for the duration. Ford et al. aren't anywhere close to having "premium silicon".
So your only option right now is to build your own. And hope maybe that you can sell/license your designs to others later and make bucks.
Isn't that risk balanced by a healthy reward of controlling their verticals and possible secret sauce?
And their chips give "1600 sparse INT8 TOPS" vs the Orin's "more than 1,000 INT8 TOPS" -- so comparable enough? And going forward they can tailor it to exactly what they want?
Orin is Nvidia's last generation. Current gen is Thor at 1k TOPS. Rivian's announcement specifies TOPS at the module level. The actual chip is more like 800 and probably doubled. Throw two Thors on a similar board and you're looking at 2000 sparse int8 TOPS.
I've been involved with similar efforts on both sides before. Making your own hardware is not a clear cut win even if you hit timelines and performance. I wish them luck (not least because I might need a new job someday), but this is an incredibly difficult space.
Mostly it costs hundreds of millions to develop a chip; it relies on volume to recover the cost.
NVIDIA also tailors their chips to customers. It's a more scalable platform than their marketing hints at... Not to mention that they also iterate fairly quickly.
So far, anyway, being on a specialised architecture is a disadvantage; it's much easier to use the advances that come from research and competitors. Unless you really think that you are ahead of the competition and can sell some fairly inflexible solution for a while.