I have to say, when I see a post by a company like OpenAI about "safety, freedom and privacy", I can't keep a straight face. They might as well title the piece "If you don't mind, we'd like to gaslight you across several paragraphs". No thanks.
I'm sure it's true and all. But I've been hearing the same claim for years now about all the tools uv is intended to replace. And every time I try to run any of them, as someone who's not really a Python coder, but who can shit out scripts in it if needed and sometimes tries to run Python software from GitHub, it's been a complete clusterfuck.
So I guess what I'm wondering is: are you a Python guy, or are you more like me? Because for basically any of these tools, Python people tell me "tool X solved all my problems" and people from my own cohort tell me "it doesn't really solve anything, it's still a mess".
I'm about the highest tier of package manager nerd you'll find out there, but despite all that, I've been struggling to create/run/manage venvs for ages. I'm always afraid that installing a pip package or some piece of Python-based software might muck up my Python versions.
I've been semi-friendly with Poetry already, but mostly because it was the best thing around at the time, and a step in the right direction.
I'm (reluctantly) a Python guy, and uv really is a much different experience for me than all the other tools; otherwise I've had much the same experience as you describe here. Maybe it's because `uv` is built in Rust? ¯\_(ツ)_/¯
But I'd also hesitate to say it "solves all my problems". There are plenty of Python problems outside the core focus of `uv`. For example, I think building a Python package for distribution is still awkward and the docs are not straightforward (figuring out how to include non-Python files in a package was fairly annoying, for instance).
As a mainly Python guy (data engineering, so a new project for every ETL pipeline = a lot of projects), uv solved every problem I previously had with pip, conda, miniconda, pipx, etc.
Yes. Because it will decrease the amount of legitimate encrypted traffic online, which makes it easier to pick the remaining encrypted channels out of the noise. A few listeners at key nodes in the country's communications network to flag encrypted signals for investigation or simple disruption, and you're G2G.
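To make "flag encrypted signals" concrete, here's a minimal sketch of the simplest such classifier, assuming byte entropy as the signal; the length floor and the threshold are made-up tuning values, and it's deliberately crude (compressed traffic scores just as high, which is exactly why low background noise helps the listener):

```rust
/// Shannon entropy of a byte slice, in bits per byte (0.0..=8.0).
fn shannon_entropy(data: &[u8]) -> f64 {
    let mut counts = [0usize; 256];
    for &b in data {
        counts[b as usize] += 1;
    }
    let n = data.len() as f64;
    counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum()
}

/// Flag a payload as "probably encrypted": well-encrypted bytes look
/// uniformly random, i.e. close to 8 bits of entropy per byte. Both
/// numbers here are hypothetical tuning values, not from any real tap.
fn looks_encrypted(payload: &[u8]) -> bool {
    payload.len() >= 1024 && shannon_entropy(payload) > 7.5
}
```

The point being: the fewer legitimate flows trip a detector like this, the cheaper it is to investigate everything that does.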
It's the "If you ban guns, only criminals will have guns" theory, except the other side of that coin is "It's real easy to see who the criminals are if guns are banned: they're the folks carrying guns."
How do you filter encrypted channels from the noise? For example, say the criminals now communicate by having a browser extension write end-to-end encrypted todo items to a shared todo-list app.
Sure, you could make unauthorized, fully encrypted communication illegal. But what would be the punishment for using it? Worse than for smuggling, human trafficking, murder? I seriously doubt it. If you're a criminal risking decades in prison for major crimes, using some illegal software is 100% worth it, if it significantly reduces the risk of getting caught for the real crimes you're committing.
You can't make laws that govern how criminals behave. All chat control will really accomplish is maybe a momentary string of arrests (which is meaningless in the long term; there's always someone to take over), and, longer term, worse privacy and security for everyone except the criminals.
The UK has the concept of contempt of court. Even as it stands, a court can demand that you submit some piece of evidence, say, an encryption key for a document. And if you refuse, they can imprison you until you surrender the key.
Another principle is that when someone destroys evidence, you can presume it was incriminating.
I think you could make the punishment proportional to the presumed crime.
This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails; there's no meaningful insight underneath it all. Their arguments, however complex, all rely on poorly defined terms like "understand", "subjective experience", "what it is like", "qualia", etc. And when you try to understand the arguments with the definitions of these terms left open, you realise they only make sense when the terms include, in their definition, a supposition that the argument is true. It's all just circular reasoning.
“The Feeling of What Happens” by Antonio Damasio, a book by a neuroscientist from some years ago [0], does an excellent job of building a framework for conscious sensation from the parts. As I recall, it constructs a theory of “mind maps” from various nervous system structures, and it left me with the sense that I could afterwards understand them.
As a radical materialist, the problem I have with ordinary materialism is that it boils down to dualism: some types of matter (e.g. the human nervous system) give rise to consciousness, while other types of matter (e.g. human bones) do not.
Ordinary materialism is mind-body/soul-substance dualism with a hat and lipstick.
Human bones most definitely do contribute to feeling, but not through logos. The book expands upon the idea of mind-body duality to merge proprioception and general perception.
I’d bet bats would enjoy marrow too if they could.
So how does a radical materialist explain consciousness? That it, too, is a fundamental material phenomenon? If so, are you stretching the definition of materialism?
I find myself believing idealism or monism to be the most likely fundamental picture.
Well, the hard problem of consciousness gets in the way of that.
I assume that as a materialist you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to our feeling aware), i.e. that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of it explains the 'feeling of it': there is nothing in that third-person material activity that correlates with first-person feeling. The two things are not interchangeable; reductionist physical processes cannot substitute for the feeling you and I have as we experience.
This hard problem is difficult to surmount physically. Either you say it's an illusion (but how can the primary thing we are, what we experience as the self, be an illusion?), or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
Whenever I see the word 'reductionist', I wonder why it's being used to disparage.
A materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. In fact, "material" is a bit dated, since "stuff-type material" is itself an emergent property of quantum fields.
Why is experience not just the behavior of a neural computer with certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? Non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
Because consciousness itself cannot be explained except through experience, i.e. consciousness (first-person experience), not through material phenomena.
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, an experience that has no direct correlation to the physical correlates of its being.
Up to a point I agree, but when someone deploys this vague language in what are presented as strong arguments for big claims, it is they who bear the burden of disambiguating, clarifying and justifying the terms they use.
I don't agree that the inherent nebulousness of the subject extends cover to the likes of Goff, Chalmers (on panpsychism), or Searle and Nagel (on the hard problem). It's a "both can be true" situation, and many practicing philosophers appreciate the nebulousness of the topic while strongly disagreeing with the collective attitudes embodied by those names.
If he were capable of describing subjective experience in words with the exactitude you're asking for, then his central argument would be false. The point is that objective measures, like writing, are external, and cannot describe internal subjective experience. It's one thing to probe the atoms; it's another thing to be the atoms themselves.
Basically, his answer to the question "What is it like to be a bat?" is that it's impossible to know.
> This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails.
I do mostly agree with that, and I think they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case, though, is that the entire argument amounts to, at best, an evocative variation on a familiar idea presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
I don't agree with this kind of linguistic dismissal. It doesn't change the fact that we have sensations of color, sound, etc., and there are animals that can see colors, hear sounds, and detect phenomena we don't. It's also quite possible they experience the same frequencies we see or hear differently, due to their biological differences. This was noted by ancient skeptics when discussing the relativity of perception.
That is what is being discussed using the "what it's like" language.
I like the more specific versions of those terms: the feeling of a toothache and the taste of mint. There's no need to grasp anything; they're feelings. There's no feeling when a metal bar is bent by a press.
As a player, I like to think of sharpness as a measure of the potential consequences of a miscalculation. In a main-line Dragon, the consequence is often getting checkmated in the near future, so it's maximally sharp. In a quiet positional struggle, the consequence might be something as minor as the opponent getting a strong knight, or ending up with a weak pawn.
Whereas complexity is a measure of how far ahead I can reasonably expect to calculate. This is something non-players often misunderstand, which is why they like to ask me how many moves ahead I can see. It depends on the position.
And I agree, these concepts are orthogonal: positions can be sharp, complex, both, or neither. A pawn endgame is typically very sharp; the slightest mistake can lead to the opponent queening and checkmating. But it's relatively low in complexity, because you can calculate far ahead using ideas like counting and geometric patterns (the square of the pawn, the zone of the pawn, distant opposition, etc.; see the sketch below) to abstract over long lines of play. On the opposite side, something like a main-line closed Ruy Lopez is very complex (every piece is still on the board), but not especially sharp (closed position, both kings are safe; it's more of a struggle for slight positional edges).
Something like a King's Indian or Benoni will be both sharp and complex. Whereas an equal rook endgame is neither (it's quite hard to lose a rook endgame; there always seems to be a way to save a draw).
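Since I mentioned the square of the pawn: it reduces a whole family of long king-and-pawn races to a single comparison, which is exactly the kind of abstraction that keeps these endgames low-complexity. A minimal sketch of the textbook rule (white pawn, pure race, ignoring the attacking king and any interposed pieces):

```rust
/// Square-of-the-pawn rule: can the black king catch a white passed pawn
/// in a pure race? Files and ranks run 1..=8 (file a = 1). This ignores
/// the white king and interpositions, so it's the textbook rule only.
fn king_catches_pawn(
    pawn_file: i32,
    pawn_rank: i32,
    king_file: i32,
    king_rank: i32,
    defender_to_move: bool,
) -> bool {
    // A pawn on its starting rank may advance two squares, so treat it
    // as if it already stood on the third rank.
    let moves_to_promote = 8 - pawn_rank.max(3);
    // A king moves one square in any direction, so its distance to the
    // promotion square is the Chebyshev (max-of-axes) distance. Being
    // "inside the square" is the same as this distance being small
    // enough to win the race.
    let king_moves = (king_file - pawn_file).abs().max(8 - king_rank);
    let tempo = if defender_to_move { 0 } else { 1 };
    king_moves <= moves_to_promote - tempo
}
```

One glance at the board answers what would otherwise be a ten-move calculation, which is why such a position stays sharp (one tempo decides the game) without being complex.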
To add to this, Java and other GC languages in some sense have manual memory management too, no matter how much we like to pretend otherwise.
It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Not to mention the dark arts of avoiding long GC pauses etc.
It's possible to do this in Rust too, I suppose. The clearest difference is that in Rust these things are explicit rather than implicit; to do this in Rust you'd have to use 'static, etc. The other distinction is compile-time versus runtime, of course.
> The clearest difference is that in Rust these things are explicit rather than implicit; to do this in Rust you'd have to use 'static, etc.
You could use 'static, or you could move (partial) ownership of an object into itself with Rc/Arc and locking, causing the underlying count to never return to 0. It's still very possible to accidentally hold on to a forest.
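A minimal sketch of that failure mode, with hypothetical types (single-threaded Rc for brevity; Arc plus a lock behaves the same way):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Gorilla {
    // The back-pointer that will close the cycle.
    holding: RefCell<Option<Rc<Gorilla>>>,
}

impl Drop for Gorilla {
    fn drop(&mut self) {
        println!("gorilla freed"); // never printed below
    }
}

fn main() {
    let g = Rc::new(Gorilla { holding: RefCell::new(None) });
    *g.holding.borrow_mut() = Some(Rc::clone(&g)); // strong count: 2
    drop(g); // count drops to 1, never reaches 0: the allocation leaks
}
```

The usual fix is to make the back-edge a std::rc::Weak, which doesn't contribute to the strong count.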
> It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Can you elaborate on this? I'm struggling to picture a situation in which I have a gorilla I'm currently using, but keeping the banana it's holding and the jungle it's in alive is a bad thing.
The joke is you're using the banana but you didn't actually want the gorilla, much less the whole jungle. E.g. you might have an object that represents the single database row you're doing something with, but it's keeping alive a big result set and a connection handle and a transaction. The same thing happening with just an in-memory datastructure (e.g. you computed some big tree structure to compute the result you need) is less bad, but it can still impact your memory usage quite a lot.
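In Rust the same shape appears when the small handle owns a strong reference back to its source; a sketch with hypothetical names:

```rust
use std::rc::Rc;

struct ResultSet {
    rows: Vec<String>, // imagine this, plus a connection handle, is big
}

struct Row {
    set: Rc<ResultSet>, // strong back-reference to the whole "jungle"
    index: usize,
}

impl Row {
    fn value(&self) -> &str {
        &self.set.rows[self.index]
    }
}

// As long as any Row is alive, the entire ResultSet stays allocated.
// Copying out the one value you need (row.value().to_owned()) and
// dropping the Row lets the whole set be freed instead.
```

At least here the Rc in the struct makes the retention visible in the type, which is the explicitness the parent comments are pointing at.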
The reason it's common courtesy is out of respect for the reviewer's or maintainer's time. You need to let 'em know to look for the kind of idiotic mistakes LLMs shit out on a routine basis. It's not a "distraction"; it's extremely relevant information. At the maintainer's discretion, they may not want to waste their time reviewing it at all, and politely or impolitely ask the contributor to do it again, and use their own brain this time. It also informs them on how seriously to take this contributor in the future if the work doesn't hold water, or indeed even if it does, since the next time the contributor runs the LLM lottery the result may be utter bullshit.
Whether it's prose or code, when informed something is entirely or partially AI generated, it completely changes the way I read it. I have to question every part of it now, no matter how intuitive or "no one could get this wrong"ish it might seem. And when I do, I usually find a multitude of minor or major problems. Doesn't matter how "state of the art" the LLM that shat it out was. They're still there. The only thing that ever changed in my experience is that problems become trickier to spot. Because these things are bullshit generators. All they're getting better at is disguising the bullshit.
I'm sure I'll get lots of responses trying to nitpick my comment apart. "You're holding it wrong", bla bla bla. I really don't care anymore. Don't waste your time. I won't engage with any of it.
I used to think it was undeserved that we programmers called ourselves "engineers" and "architects" even before LLMs. At this point, it's completely farcical.
"Gee, why would I volunteer that my work came from a bullshit generator? How is that relevant to anything?" What a world.
But how much time does that 0.3 watt-hour query take to run? They imply that an individual ChatGPT query takes 0.3-3 watt-hours, but most queries come back in seconds, so we need to scale that over a whole hour of processing.
Edit: Scrolling down: "one second of H100-time per query, 1500 watts per H100, and a 70% factor for power utilization gets us 1050 watt-seconds of energy", which is how they get down to 0.3 ≈ 1050/60/60.
OK, so if they run it for a full hour it's 1050*60*60 = 3.8 MW? That can't be right.
Edit Edit: Wait, no, it's just 1050 watt-hours, right (though let's be honest, the 70% power utilization is a bit goofy; the power is still used)? So it's 3x the power to solve the same question?
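For what it's worth, here's the unit check laid out; the "MW" slip above is the classic power-versus-energy mix-up (watts are power, watt-hours and joules are energy):

```latex
% Per query: one H100-second at 70% of 1500 W
1\,\mathrm{s} \times 1500\,\mathrm{W} \times 0.7
    = 1050\,\mathrm{J} = 1050\,\mathrm{W\,s}
    = \tfrac{1050}{3600}\,\mathrm{Wh} \approx 0.29\,\mathrm{Wh}

% Running at that draw for a full hour is energy, not power:
1050\,\mathrm{W} \times 3600\,\mathrm{s} = 3.78\,\mathrm{MJ} = 1.05\,\mathrm{kWh}
```

So one GPU serving back-to-back one-second queries for an hour uses about 1 kWh, spread over roughly 3600 queries.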