Quite an interesting read. Basically it's saying we're in a wartime economy with a race to superintelligence. Whichever superpower gets there first has won the game.
Seeing the latest tariffs and what China has done with rare earth minerals (and also the deal the US made with Ukraine for said minerals), the article might have a point that the superpowers will cripple each other to be the first to superintelligence. And you also need money for it, hence tariffs.
There's only one problem with a race to superintelligence, and that's that nobody has evidence that mere intelligence is coming, much less superintelligence.
(There are a thousand more problems, but none of them matter until that first one is overcome.)
Today's AI does exhibit some intelligence, period. It is absurd to claim that an intelligent-looking entity doesn't have intelligence merely because we might not be able to determine which part of the entity has it. Superintelligence is an entirely different problem though, because there is no clear path from intelligence to so-called superintelligence; everything so far has been pure speculation.
> It is absurd to claim that an intelligent-looking entity doesn't have intelligence
Is it? I am pretty sure biology will solve the good old "are viruses alive?" question sooner than we agree on a definition of intelligence. The "Chinese Room" argument is at least 40 years old.
And so are the tons of counterarguments against the Chinese Room argument.
Practically speaking, the inherentness of intelligence doesn't really matter, because both an intelligent-looking entity and a provably intelligent entity are capable of societal disruption anyway. I partly dislike the Chinese Room argument for this reason; it facilitates useless discussions in most cases.
In that case there was still some intelligence. It turns out that the composite entity of Hans and his trainer was intelligent, and people (including the trainer) unknowingly regarded that as Hans' own intelligence.
Good gods, I can't wait for a second AI winter. Maybe we'll come up with fundamental breakthroughs in a couple of decades and give it another college try?
For the folks who lived through it: were the Expert Systems boosters as insufferable in the 80s as the LLM people are now about the path to machine intelligence?
No, because they mostly got military funding, not private equity.
ARPA would throw relatively large sums of money at you, but demand progress reports and a testable goal. Very little got rolled out based on hype. (Let's not talk about vehicle design.) If your project didn't show signs of working, or not enough signs of working, funding ended.
Anything which met goals and worked, we now think of as "automation" or "signal recognition" or "solvers", not "intelligent systems".
This take is way too generous to the current US administration’s quality of long term planning.
Tariffs aren’t there to pay for a race to superintelligence, they’re a lever that the authoritarian is pulling because it’s one of the most powerful levers the president is allowed to pull. It’s a narcissist’s toy to feel important and immediately impactful (and an obvious vehicle for insider trading).
If the present administration was interested in paying for a superintelligence race they wouldn’t have signed a law that increases the budget deficit.
They also wouldn’t be fucking with the “really smart foreign people to private US university, permanent residence, and corporate US employment” pipeline if they were interested in the superintelligence race.
While I agree about the competence of the high-level decision makers in the administration, the people advising them are incredibly smart and should under no circumstances be underestimated. Peter and his circle are totally crazy, but if you're playing against them you'd better bring your A game.
I would submit the idea that the people advising this administration are not very smart. In what discernible way has this administration biased its selection process to include “smart?”
I don’t underestimate their ability to do damage but calling them smart is generous.
Not even Peter Thiel, he’s one of the most over-hyped people in tech. Access to capital is not intelligence, and a lot of his capital comes from the equivalent of casino wins with PayPal and Facebook.
It says the people running the US right now think that is the game we are playing - it doesn't say it is the one we actually are playing. America is utterly fucked if they are wrong, and only a bit less so if they are right.
Yes, but for simplicity I was looking for a provider that had both mailboxes as well as a transactional mail solution for a SaaS project I am working on.
I just finished building a Night Routine manager for my wife and me that helps us keep alternating who does the toddler night routine. I needed this because we both have evening activities, and figuring out how to block out days for it while keeping the routine fair was hard.
I wanted to build everything from the ground up in Go and have it fully integrated with Google Calendar, where we keep our family calendar.
It sets up a full-day event with the name of the parent in charge of the night routine. To override a routine, either of us can just rename the event with the other parent's name, and the software recalculates the following routines.
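For the curious, the calendar write is roughly this, sketched with the google.golang.org/api/calendar/v3 client; the credentials path, calendar ID, and helper name are placeholders of mine, not the actual project code:

    import (
        "context"
        "time"

        "google.golang.org/api/calendar/v3"
        "google.golang.org/api/option"
    )

    // assignDay creates an all-day event naming the parent on duty.
    func assignDay(ctx context.Context, parent string, day time.Time) error {
        svc, err := calendar.NewService(ctx, option.WithCredentialsFile("creds.json"))
        if err != nil {
            return err
        }
        ev := &calendar.Event{
            Summary: parent, // renaming this to the other parent's name triggers recalculation
            // all-day events use Date rather than DateTime; the end date is exclusive
            Start: &calendar.EventDateTime{Date: day.Format("2006-01-02")},
            End:   &calendar.EventDateTime{Date: day.AddDate(0, 0, 1).Format("2006-01-02")},
        }
        _, err = svc.Events.Insert("family-calendar-id", ev).Do()
        return err
    }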
I also wanted to give Roo Code in VS Code a try; it only took me 2 days (evenings) to code the whole thing with a proper SQLite DB.
My wife and I alternate and handle conflicts by exception. Meaning, if we have a conflict, the other person subs in. I think calendaring things around your home schedule will be hard because of the 'things'. Like, a lot of our evening activities are scheduled events that we can't control (concerts, theater, etc.) or a group (dads' dinner, etc.) and the occasional work stuff. Anyway, I just think it would be hard to plan for home duties around conflicts, so instead we try to sub in for each other and shoot for something that 'feels fair' over time instead of quantitatively fair on a tracker. I'm all for building tools for yourself, so if you think this would be of use then that's awesome; just thought I'd share my 2 cents on the 'problem space' :)
Also, FWIW, I think I'm the one that is on the deficit side of fair, although dads usually get a bad rap in this regard. I end up doing a lot more night-time reps because she does frequent girls' nights, has multiple friend groups she's trying to stay engaged with, and is in a theater group and a mahjong group, etc. However, I balance it with my occasional "take an entire day" to myself. Stuff like this is hard to track, which is why I think it's important to shoot for what 'feels fair' and make sure you talk about it occasionally so nobody suddenly has repressed feelings of inequality.
This is great advice.
In our case, we realized we couldn't trust our memory anymore; it became hard to remember who had done it the most.
We both wanted a system that keeps track of it for us. We want to be sure we can both have activities while not leaving the other parent on the sidelines, and without relying on feelings.
In our case we also have recurring events, like sports in the evening that happen every week at the same time, so this helps us plan around them and stay balanced. We already put everything in the calendar :)
The Apple TV also supports deep links for other services. Here are examples for the biggest streaming services:
Netflix (use the regular URL):
https://www.netflix.com/title/80234304
Disney+ (use regular URL):
https://www.disneyplus.com/movies/coco/db9orsI5O4gC
YouTube (use the regular URL with https:// replaced by youtube://):
Single video: youtube://www.youtube.com/watch?v=ah3ezprtgmc
Playlist: youtube://www.youtube.com/watch?v=FkUn86bH34M&list=PLzvRQMJ9HDiQF_5bEErheiAawrJ-2zQoI&pp=iAQB
The only problem with these services is that they will require you to select a profile before the movie/show starts playing. With Plex, you can enable "auto login", but I haven't tried it for the other services.
With only a little bit of effort you can prevent them from becoming obsolete. The most straightforward way is to just reprogram the NFC chip. Alternatively, just add a redirector, one that maps show names/IDs to the URLs.
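A minimal redirector is just a lookup table behind an HTTP 302. A sketch in Go, with made-up IDs and host; point the tags at http://yourhost:8080/r/<id>:

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    // deepLinks maps the short IDs programmed onto the tags to service
    // URLs. When a service changes its URL scheme, update this map
    // instead of rewriting every tag.
    var deepLinks = map[string]string{
        "coco": "https://www.disneyplus.com/movies/coco/db9orsI5O4gC",
        "dark": "https://www.netflix.com/title/80234304",
    }

    func main() {
        http.HandleFunc("/r/", func(w http.ResponseWriter, r *http.Request) {
            id := strings.TrimPrefix(r.URL.Path, "/r/")
            if target, ok := deepLinks[id]; ok {
                http.Redirect(w, r, target, http.StatusFound)
                return
            }
            http.NotFound(w, r)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }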
I still question why defer doesn't support doing exactly that.
After all, it's as if the Go language provides us with a cleanup function that 99% of the time shouldn't be used unless we manually wrap whatever it's calling to properly handle errors.
> I still question why defer doesn't support doing exactly that.
When would it ever be useful? You'd soon start to hate life if you actually tried using the above function in anything beyond a toy application.
> 99% of the time shouldn't be used
1. 99% of the time it is fine to use without further consideration. Even if there are errors, they don't matter. The example from the parent comment is a perfect case in point. Who cares if Close fails? It doesn't affect you in any way.
2. 0.999% of the time, if you have a function that combines an operation that might fail in a manner you need to deal with along with cleanup, it will be designed to allow being called more than once, allowing you, the caller, to separate the operation and cleanup phases in your code.
3. 0.001% you might have to be careful about its use if a package has an ill-conceived API. If you can, fix the API. The chances of you encountering this is slim, though, especially if you don't randomly import packages written by a high school student writing code for the first time ever.
I think from a Go point of view, the lesson to be drawn from that is "don't defer a function call if you need to check its error value", rather than "defer needs to support checking of function return values".
In the example at hand, it really makes more sense to call Close() as soon as possible after the file is written. It's more of an issue with the underlying OS file API making error checking difficult.
In 99% of cases, the solution to this problem will be to use a WriteFile function that opens, writes and closes the file and does all the error handling for you.
> the lesson to be drawn from that is "don't defer a function call if you need to check its error value"
Isn't the lesson here: If you must have a Close method that might fail in your API, ensure it can safely be called multiple times?
As long as that is true, you can approach it like you would any other API that has resources that might need to be cleaned up.
f, _ := os.Create(...)
defer f.Close()
// perform writes and whatever else
if err := f.Close(); err != nil {
// recover from failure
}
(os.File supports this, as expected)
> the solution to this problem will be to use a WriteFile function
If it were the solution you'd already be using os.WriteFile. It has a time and place, but often it is not suitable. Notably because it requires the entire file contents to be first stored in memory, which can become problematic.
Certainly you could write a custom WriteFile function that is tuned to your specific requirements, but now you're back to needing to be familiar with the intricacies of a lower-level API in order to facilitate that.
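For illustration, a streaming variant might look like this; a sketch of one possible policy, not a stdlib function:

    import (
        "io"
        "os"
    )

    // writeFile streams r into the named file without buffering the
    // whole payload in memory, and still surfaces a failed Close.
    func writeFile(name string, r io.Reader) (err error) {
        f, err := os.Create(name)
        if err != nil {
            return err
        }
        defer func() {
            // Report the Close error, but don't let it mask an
            // earlier write error.
            if cerr := f.Close(); cerr != nil && err == nil {
                err = cerr
            }
        }()
        _, err = io.Copy(f, r)
        return err
    }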
Sure, that's an alternative, although it means there will be some code paths where the error returned by f.Close() becomes the error returned by the entire function and others where it is ignored (though you could easily log it). That might be fine, but you also might want to handle all the cases explicitly and return a combined error in a case where, say, a non-file-related operation fails and then the file also fails to close.
> becomes the error returned by the entire function
If you find the error returned by f.Close to be significant, are you sure returning again it is the right course of action? Most likely you want to do something more meaningful with that state, like retrying the write with an alternate storage device.
Returning the error is giving up, and giving up just because a file didn't close does not make for a very robust system. Not all programs need to be robust, necessarily, but Go is definitely geared towards building systems that are intended to be robust.
You seem confused. The article is about writing a file, where it does matter, but the comment example, which is what we're talking about, only reads a file. If close fails after a read, who gives a shit? What difference is it going to make? All your read operations are already complete. Close isn't going to trigger a time machine that goes back in time and undoes the reads you've performed. It is entirely inconsequential.
You ignore errors on close, and one morning you wake up with your app in CrashLoopBackOff with the final log message "too many open files". How do you start debugging this?
Compare the process to the case where you do log errors, and your log is full of "close /mnt/some-terrible-fuse-filesystem/scratch.txt: input/output error". Still baffling of course, but you have some idea where to go next.
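Logging it costs only a few lines (a sketch, reusing the f from the snippet upthread; where the log line goes is up to you):

    defer func() {
        if err := f.Close(); err != nil {
            // Rarely actionable in the moment, but this is the line
            // that turns a later "too many open files" into a diagnosis.
            log.Printf("close %s: %v", f.Name(), err)
        }
    }()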
To start, you need to figure out why Kubernetes isn't retaining your stack trace/related metadata when the app crashes. That is the most pressing bug, and it is probably best left to the k8s team. You outsourced that aspect of the business for good reason, no doubt.
After they've fixed what they need to fix you need to use the information now being retained to narrow down why your app is crashing at all. Failing to open a file is expected behaviour. It should not be crashing.
Then maybe you can get around to looking at the close issue. But it's the least of your concerns. You've got way bigger problems to tackle first.
A file failing to open is expected, always! accept is no exception here. Your application should not be crashing because of it.
If I recall, Kubernetes performs health checks over HTTP, so presumably your application is using the standard library's HTTP server to provide that? If so, accept is fully abstracted away. So, if that's crashing, that's a bug in Go.
Is that for you to debug, or is it best passed on to the Go team?
There isn't a bug, it's resource exhaustion. You open a bunch of files and they fail to close. You don't log errors on the close, so you have no idea it's happening. Now your app is failing to open new file descriptors to accept HTTP connections. You get a fixed number of fds per app; ulimit -n. If you don't close files you've read, the descriptor is gone.
The bug in this case is in the filesystem that hangs on close. It happens on network filesystems. You can't return the fd to the kernel if your filesystem doesn't let you.
The bug of which we speak is that your app is crashing. Exhausting open file handles is expected behaviour! Expected behaviour should not lead to a crash. Crashing is only for exceptional behaviour.
The filesystem hanging is unlikely to be a bug. The filesystems you'd realistically use in conjunction with Kubernetes are pretty heavily tested. More likely it is supposed to hang under whatever conditions have led to that happening.
And, sure, maybe you'll eventually want to determine why the filesystem has moved into that failure state, but most pressing is that your app is crashing. All that work you put into gracefully handling the failure situation is going to waste.
Kubernetes is really neither here nor there. It's the crashing of the app that is our focus. An app should not be crashing on expected behaviour.
That's clearly a bug, and the bug you need to fix first so that you can have your failsafes start working again. You asked where to start and that's the answer, unquestionably.
The app doesn't crash; it's deadlocked. It can't do any more work because to do future work it needs to accept TCP connections. It can't do that because it has hit a resource limit. It hit the resource limit because it didn't correctly close files. It can't close files because of a bug in the filesystem. You don't know this because you didn't log the errors.
I really don't know how I can make my explanation simpler.
> I really don't know how I can make my explanation simpler.
Not making up some elaborate story that you are now trying to say didn't even happen would be a good start. What you are actually trying to communicate is not complicated at all. It didn't need a story. Not sure what you were thinking when you decided fiction writing was a good idea, but I certainly had fun making fun of you for it! So, at least it was not all for naught.
Only if you can safely assume the OS, file system, or std lib cleans up any open file handles that failed to close; I'm 99% sure this is the case in 99% of cases, but there may be edge cases (very specific filesystems or hardware?) where it does matter? I don't know.
You can't safely assume that, but what are you going to do about it when it does fail? There is nothing you can do. There isn't usually a CloseClose function to use when Close fails. If Close fails, that's it. You're otherwise out of luck.
Certainly, in the case where there is a write failure you'd want to try writing to something else (ideally), or notify someone that an operation didn't happen (as a last resort), or something to that effect in order to recover.
But in this case there is no need to try again and nobody really cares. Everything you needed the resources for is already successfully completed. If there is failure when releasing those resources, so what? There is nothing you can do about it.
TL;DR: People don't trust airlines and airports with their carry-on.
It's all about being sure your carry-on luggage will be on the plane with you and won't have to be checked.
People don't want to pay for that and don't trust the airport to get the luggage to their destination.
Also, if you take multiple flights with different airlines that don't code-share, you'd have to collect your luggage and check it in again, go through security, etc. (depends on the airport).