
This has been discussed to death but to reiterate here: people have been much too polite about Kramnik's nonsense. Danya and Hikaru are (were) probably the two people in the world whose bullet play is least suspicious. Cheating isn't very powerful in short time controls, and they have streamed thousands of games playing them.

Kramnik's bullshit never made any damn sense at all.


no doubt they're two of the least suspicious, particularly Danya, but you're mistaken in thinking that cheating isn't very powerful in short time controls


In Europe there are legitimate and extremely established services that require you to input your bank login details into something other than your bank's website. It's madness.


There's no legitimate case for that since PSD2 (mandatory since 2020). Are you sure you're not confusing it with something else? PSD2 doesn't share your credentials.

I'm European and have never needed to use nor encountered those services.


PSD2 is just MFA; it doesn't prevent shady companies from still asking for your login credentials, even if you must authorize that login from your official banking app. Klarna is one of many examples - they ask me for my bank credentials on their own website so they can crawl all my financial data.


Plaid and Finicity do this in the USA for some linking of banking to other financial products. Feels SO insecure. Connecting my credit union checking account through Plaid even ironically brought me to a login page which explicitly states I should never give my banking password to any other entity.

If I need to link my accounts and these services are the only choice then I change my banking passwords immediately after.


I thought Plaid used OAuth2. Hmm.


Plaid's whole business model is that it uses OAuth2 on banks that support it and exports the data through APIs; and for the banks that don't, they ask for username/password and scrape it through a "fake" web browser that mimics user behavior on the backend.

(I worked for a Plaid competitor. The long-term goal for all similar companies is of course to use OAuth and APIs, because it breaks less often; but since the banks don't offer that, scraping it is!)


MX?


Plaid asks for your raw bank credentials so that it can scrape up data. That's why I've always refused to use it.


I really hope to never be in the position where I have to use it


I have a Klarna account I opened when their flex account rate was amongst the best you could get, and I don't remember them ever asking for my bank credentials.

I think Bankin' used to, before PSD2, to get a bit more information from some banks. But then again, Bankin' is a financial aggregator whose explicit purpose is crawling your banking data, so it's not too surprising to see them asking for your credentials.


So does Paypal nowadays when you want to open a new account...


Where a bank doesn't offer compliant APIs, screen-scraping integrations are explicitly allowed. Not sure how common that is at this point.


They scrape thousands and thousands of institutions.


Not sure what you mean specifically, but generally the organisations doing screen-scraping¹ would prefer to use compliant APIs as they don't require anything like as much maintenance (bank adds a button to the login flow? Kaboom! Integration is broken...) or resources (e.g. running headless browsers).

Some markets are pretty much exclusively compliant - I don't think there are any Nordic banks that don't have fully PSD2 compliant APIs for example whereas, if I remember rightly, the Spanish banks were all over the place. I'm fairly out of date though, so things may have improved or exceptions for scraping expired.

¹ Note that I'm talking exclusively about banking integrations here, not AI nonsense.


Care to mention what these legitimate and established services are?


Plaid is used by a lot of the major Canadian banks.


Flinks is also an often-used aggregator in Canada.

"Connecting" savings accounts from EQ Bank or Wealthsimple to an account at TD Bank requires providing TD credentials to Flinks.


Sofort used to do this. I don't know if they still do.


Paypal, Klarna


Name and shame: Klarna did this.

Not sure if they still do because i stay well clear of them.


I find this hard to believe and have never seen that ever.


It used to be common 5 years ago before PSD2.


Don't understand the downvotes; I never saw that either, and I shop online very often.


If you used the first-gen "pay later" services, they'd scrape you for "compliance checking" or simply mask it as a transaction, when it was actually just personal information scraping.

Most of the time you did not see it, as it's obfuscated as part of the transaction.

They are also the companies complaining a lot about the "failure" of the PSD standards, since those limit how much they can scrape and how much they can obfuscate it (and there are records).


Are you talking about the option to pay via your bank account directly on a checkout page? If so, that is the bank's own page you are using.

Can you give some examples?


Multiple US hospitals and insurance companies use genuine links like doctor-services-for-u.biz - infuriating.


Are you sure? Never seen any such thing.


It used to be common before PSD2 but I have personally not seen it for some years.


It seems mainly localized to Germany


It's interesting that there's still such a market for this sort of take.

> In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text."

What does this even mean? Let's veto the word "reasoning" here and reflect.

The LLM produces a series of outputs. Each output changes the likelihood of the next output. So it's transitioning in a very large state space.

Assume there exist some states that the activations could be in that would cause the correct output to be generated. Assume also that there is some possible path of text connecting the original input to such a success state.

The reinforcement learning objective reinforces pathways that were successful during training. If there's some intermediate calculation to do or 'inference' that could be drawn, writing out new text that makes that explicit might be a useful step. The reinforcement learning objective is supposed to encourage the model to learn such patterns.

So what does "sophisticated simulators of reasoning-like text" even mean here? The mechanism that the model uses to transition towards the answer is to generate intermediate text. What's the complaint here?
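As a very rough sketch of what that transition process amounts to (a toy loop, assuming a hypothetical `sample_next_token` function and stop marker; this is not how any particular model is implemented):

    def answer_with_intermediate_text(prompt, sample_next_token, max_tokens=512):
        # Toy sketch: "chain of thought" is just more sampled text. Every token,
        # whether it reads like reasoning or like the final answer, comes out of
        # the same loop and only matters by conditioning what gets sampled next.
        context = prompt
        for _ in range(max_tokens):
            token = sample_next_token(context)  # distribution depends on all text so far
            context += token
            if token == "<end>":                # hypothetical stop marker
                break
        return context

The intermediate text isn't a separate "reasoning module"; it's part of the path through the state space that training has made more likely to end in a correct output.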

It makes the same sort of sense to talk about the model "reasoning" as it does to talk about AlphaZero "valuing material" or "fighting for the center". These are shorthands for describing patterns of behaviour, but of course the model doesn't "value" anything in a strictly human way. The chess engine usually doesn't see a full line to victory, but in the games it's played, paths which transition through states with material advantage are often good -- although it depends on other factors.

So of course the chain-of-thought transition process is brittle, and it's brittle in ways that don't match human mistakes. What does it prove that there are counter-examples with irrelevant text interposed that cause the model to produce the wrong output? It shows nothing --- it's a probabilistic process. Of course some different inputs lead to different paths being taken, which may be less successful.


> The mechanism that the model uses to transition towards the answer is to generate intermediate text.

Yes, which makes sense, because if there's a landscape of states that the model is traversing, and there are probabilistically likely pathways between an initial state and the desired output, but there isn't a direct pathway, then training the model to generate intermediate text in order to move across that landscape so it can reach the desired output state is a good idea.

Presumably LLM companies are aware that there is (in general) no relationship between the generated intermediate text and the output, and the point of the article is that by calling it a "chain of thought" rather than "essentially-meaningless intermediate text which increases the number of potential states the model can reach" users are misled into thinking that the model is reasoning, and may then make unwarranted assumptions, such as that the model could in general apply the same reasoning to similar problems, which is in general not true.


Meaningless? Participation in a usefully predictive path is meaning. A different meaning.

And Gemini has a note at the bottom about mistakes, and many people discuss this. Caveat emptor, as usual.


So, you agree with the point that they’re making and you’re mad about it? It’s important to state that the models aren’t doing real reasoning because they are being marketed and sold as if they are.

As for your question: ‘So what does "sophisticated simulators of reasoning-like text" even mean here?’

It means CoT interstitial “reasoning” steps produce text that looks like reasoning, but is just a rough approximation, given that the reasoning often doesn’t line up with the conclusion, or the priors, or reality.


What is "real reasoning"? The mechanism that the models use is well described. They do what they do. What is this article's complaint?


For example - at minimum, the stated reasoning should match what actually happened. This is not even a complete set of criteria for reasoning, but at least a minimal baseline. Currently LLM programs generate BS in the "reasoning" part of the output. For example, ask the LLM program to "reason" about how it produces a sum of two numbers and you will see that it doesn't match at all with what the LLM program did in the background. The "reasoning" it outputs is simply an extract of the reasoning which humans did in the LLM dataset. Even Anthropic officially admits this. If you ask a program how to do maintenance on a gearbox and it replies with a very well articulated and correct (important!) guide to harvesting wheat, then we can't call it reasoning of any kind, despite the wheat farming guide being correct and logical.


As soon as you introduce multiple constraints on what is and isn't reasoning people get confused and disengage.

I like this approach of setting a minimum constraint. But I feel adding more will just make people ignore the point entirely.


The reality is obvious. The only way not to see it when looking at research like this is to not want to see it. The idea that this critique is somehow more confusing than the use of the word "reasoning" itself is farcical.

LLMs are cool and some of the things they can do now are useful, even surprising. But when it comes to AI, business leaders are talking their books and many people are swept up by that breathless talk and their own misleading intuitions, frequently parroted by the media.

The "but human reasoning is also flawed, so I can't possibly understand what you mean!" objection cannot be sustained in good faith short of delusion.


“the mechanism the models use is well described”

Vs

Total AI capex in the past 6 months was greater than US consumer spending

Or

AGI is coming

Or

AI Agents will be able to do most white collar work

——

The paper is addressing parts of the conversation and expectations of AI that are in the HYPE quadrant. There’s money riding on the idea that AI is going to begin to reason reliably. That it will work as a ghost in the machine.


This is why research like this is important and needs to keep being published.

What we have seen the last few years is a conscious marketing effort to rebrand everything ML as AI and to use terms like "Reasoning", "Extended Thinking" and others that, for many non-technical people, give the impression that it is doing far more than it is actually doing.

Many of us here can see this research and be like... well yeah, we already knew this. But there is a very well funded effort to oversell what these systems can actually do, and that is reaching the people that ultimately make the decisions at companies.

So the question is no longer will AI Agents be able to do most white collar work. They can probably fake it well enough to accomplish a few tasks and management will see that. But will the output actually be valuable long term vs short term gains.


I'm happy enough if I'm better off for having used a tool than having not.


Most people weren’t happy when the 2008 crash happened, and bank bailouts were needed, and a global recession ensued.

Most people here are going to use a coding agent, be happy about it (like you), and go on their merry way.

Most people here are not making near trillion dollar bets on the world changing power of AI.

EVERYONE here will be affected by those bets. It’s one thing if those bets pay off if future subscription growth matches targets. It’s an entirely different thing if those bets require “reasoning” to pan out.


The scary thing about ML isn’t that it’s poised to eat a lot of lower-reasoning tasks, it’s that we’re going to find ourselves in a landscape of “that’s just what the AI said to do” kind of excuses for all kinds of bad behavior, and we’re completely unwilling to explore what biases are encoded in the models we’re producing. It’s like how Facebook abdicates responsibility for how users feel because it’s just the product of an algorithm. And if I were a betting person I’d bet all this stuff is going to be used for making rental determinations and for deciding who gets exceptions to overdraft fees well before it’s used for anything else. It’s an enabling technology for all kinds of inhumanity.


"the reasoning often doesn’t line up with the conclusion, or the priors, or reality."

My dude, have you ever interacted with human reasoning?


Are you sure you are not comparing to human unreason?

Most of what humans think of as reason is actually "will to power". The capability to use our faculties in a way that produces logical conclusions seems like an evolutionary accident, an off-label use of the brain's machinery for complex social interaction. Most people never learn to catch themselves doing the former when they intended to engage in the latter; some don't know the difference. Fortunately, the latter provides a means of self-correction, and the research here hopes to elucidate whether an LLM-based reasoning system has the same property.

In other words, given consistent application of reason I would expect a human to eventually draw logically correct conclusions, decline to answer, rephrase the question, etc. But with an LLM, should I expect a non-deterministic infinite walk through plausible nonsense? I expect reasoning to converge.


It's not clear what LLMs are good at, and there's great interest in finding out. This is made harder by the frenetic pace of development (GPT 2 came out in 2019). Not surprising at all that there's research into how LLMs fail and why.

Even for someone who kinda understands how the models are trained, it's surprising to me that they struggle when the symbols change. One thing computers are traditionally very good at is symbolic logic. Graph bijection. Stuff like that. So it's worrisome when they fail at it. Even in this research model which is much, much smaller than current or even older models.


> It's interesting that there's still such a market for this sort of take.

What do you think the explanation might be for there being "such a market"?


Not sure why everyone is downvoting you as I think you raise a good point - these anthropomorphic words like "reasoning" are useful as shorthands for describing patterns of behaviour, and are generally not meant to be direct comparisons to human cognition. But it goes both ways. You can still criticise the model on the grounds that what we call "reasoning" in the context of LLMs doesn't match the patterns we associate with human "reasoning" very well (such as ability to generalise to novel situations), which is what I think the authors are doing.


""Sam Altman says the perfect AI is “a very tiny model with superhuman reasoning".""

It is being marketed as directly related to human reasoning.


Sure, two things can be true. Personally I completely ignore anything Sam Altman (or other AI company CEOs/marketing teams for that matter) says about LLMs.


If you read the comments of AI articles on Ars Technica, you will find that they seem to have become the tech bastion of anti-AI. I'm not sure how it happened, but it seems they found or fell into a strong anti-AI niche, and now feed it.

You cannot even see the comments of people who pointed out the flaws in the study, since they are so heavily downvoted.


Like a backup generator for inputs. Makes sense.


Yes, this is how I have set up systems to bootstrap.

For example, a service discovery system periodically serializes its peers to disk. Then, if the whole thing falls down, we have static IP addresses for a node, and the service discovery system can use the last known IPs of its peers to bring itself back up.
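A minimal sketch of that pattern (the file path, format, and function names are all hypothetical; a real system would also want atomic writes and staleness checks):

    import json
    from pathlib import Path

    # Hypothetical location for the last-known-peers snapshot.
    PEER_CACHE = Path("/var/lib/discovery/last_known_peers.json")

    def save_peers(peers: list[str]) -> None:
        # Called periodically while the cluster is healthy.
        PEER_CACHE.write_text(json.dumps(peers))

    def bootstrap_peers(static_seeds: list[str]) -> list[str]:
        # On cold start, prefer the last snapshot; otherwise fall back to
        # the statically configured seed addresses.
        if PEER_CACHE.exists():
            return json.loads(PEER_CACHE.read_text())
        return static_seeds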


LLM agents are very hard to talk about because they're not any one thing. Your action-space in what you say and what approach you take varies enormously and we have very little body of common knowledge about what other people are doing and how they're doing it. Then the agent changes underneath you or you tweak your prompt and it's different again.

In my last few sessions I saw the efficacy of Claude Code plummet on the problem I was working on. I have no idea whether it was just the particular task, a modelling change, or changes I made to the prompt. But suddenly it was glazing every message ("you're absolutely right"), confidently telling me up is down (saying things like "tests now pass" when they completely didn't), it even cheerfully suggested "rm db.sqlite", which would have set me back a fair bit if I said yes.

The fact that the LLM agent can churn out a lot of stuff quickly greatly increases 'skill expression' though. The sharper your insight about the task, the more you can direct it to do something specific.

For instance, most debugging is basically a binary search across the set of processes being conducted. However, the tricky thing is that the optimal search procedure is going to be weighted by the probability of the problem occurring at the different steps, and the expense of conducting different probes.

A common trap when debugging is to take an overly greedy approach. Due to the availability heuristic, our hypotheses about the problem are often too specific. And the more specific the hypothesis, the easier it is to think of a probe that would eliminate it. If you keep doing this you're basically playing Guess Who by asking "Is it Paul?", "Is it Anne?", etc., instead of "Is the person a man?", "Does the person have facial hair?", etc.
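As a toy sketch of that weighting (the suspects, probes, probabilities, and cost trade-off are all made up for illustration), the idea is to prefer probes that split the remaining failure probability close to 50/50, discounted by how expensive they are to run:

    # Each probe implicates some subset of the suspect steps and has a cost.
    # Score probes by how evenly they bisect the remaining probability mass,
    # penalising expensive ones.
    def best_probe(suspects: dict[str, float], probes: dict[str, tuple[set[str], float]]) -> str:
        def score(name: str) -> float:
            implicated, cost = probes[name]
            mass = sum(p for step, p in suspects.items() if step in implicated)
            return abs(mass - 0.5) + 0.1 * cost  # arbitrary trade-off weight
        return min(probes, key=score)

    suspects = {"client": 0.1, "api": 0.2, "queue": 0.3, "db": 0.4}  # hypothetical priors
    probes = {
        "is it Paul?": ({"client"}, 1.0),                              # overly specific
        "did the message reach the queue?": ({"client", "api"}, 2.0),  # bisects the mass
    }
    print(best_probe(suspects, probes))  # -> "did the message reach the queue?"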

I find LLM agents extremely helpful at forming efficient probes of parts of the stack I'm less fluent in. If I need to know whether the service is able to contact the database, asking the LLM agent to write out the necessary cloud commands is much faster than getting that from the docs. It's also much faster at writing specific tests than I would be. This means I can much more neutrally think about how to bisect the space, which makes debugging time more uniform, which in itself is a significant net win.

I also find LLM agents to be good at the 'eat your vegetables' stuff -- the things I know I should do but would economise on to save time. Populate the tests with more cases, write more tests in general, write more docs as I go, add more output to the scripts, etc.


Homelessness is a somewhat broad category though. There's lots of people couch-surfing between friends and their car. They're also in a very different position from people who are sleeping rough.


I only experienced traditional rough-sleeping homelessness once, when my "house" (my van) was towed and I had to sleep on a hostile-architecture bus stop bench that had ridges between each "seat" area. Otherwise, I was technically "homeless"/vanliving in SV from about 2010-2019.


Van is still a home, isn’t it?


The word home can apply to a van. I also know people who are considered unhoused/homeless whose home is a van.


A van is a home if you have your stuff in it, live in it, and enjoy moving around; this does not make you homeless, sorry. It is a home on wheels and can be very comfortable should you decide so.


I already said home can apply to a van. We have no argument there.


Yes, sorry, my bad; I just don't get why the comment got downvoted. No harm was intended.


No harm caused to me.

I commented instead of downvoting. However, to speculate, you implied a person isn't homeless if they have a van. You were responding to a comment containing:

> I was technically "homeless"/vanliving

Wherein they were relating their experience and recognized that they were vanliving (living in their van as their home) and even quoted their use of homeless, calling themselves "technically" so.

Even someone living in a tent or sleeping on the ground, if they keep returning to a site could say that site was their home. Some say the world is their home or that the region they stay in is. They would still be very clearly considered homeless despite having a "home".

As I understand it, it is a gray area whether vanliving is legal. You are allowed to park a vehicle but the owner doesn't have unlimited right to leave the van in one spot and live there. Even living in a van on your own land can be against code. People sleeping rough generally have no recognized right to sleep where they do. They are frequently moved and more often harassed. The situation is similar for those living in a van.

Anyway, that aside and trying to speculate about your downvote(s?), the context was that you started a language specificity discussion with someone who appeared to be unsure about the right words and hesitant to call themselves homeless. There were plenty of places on this thread to have that discussion but this doesn't seem an appropriate spot to me. I don't know whether that poster even knows you responded but if they do, I could see your response causing some difficult thoughts and/or emotions.

I doubt you intended harm but it can be helpful to consider the context to minimize the risk of harm or even just better understand the diverse manners in which your comments could be received. Hope you enjoy commenting here and learn from it and the comments of others.


Thank you for taking time to write this, appreciated - and it is a beautiful writing, I would say.

Indeed, I intended no harm, but having spent time on desolate beaches, retreats and similar, I could definitely disambiguate between living in a tent, in the forest, even doing some coding from there, and being homeless. My comments joined the... seemingly overall uproar against the author's choice to call his experience homelessness.

But you make a valid point reg. how many ways a comment can be interpreted.

My point was precisely that, and with my other comment - that homelessness is not a state you typically get to by choice. It is a social status, more or less. Unless the choice is to become a sannyasi or a traveling Buddhist monk and renounce the material world, which is not the OP's story, really. Given my previous experience living in a tent in a forest for... months, well, I can definitely say it is not the same as being homeless. I have also met refugees (mostly Levantines in Europe) who are much more homeless even when being crammed together in "homes" dozens at a time. They have an interesting perspective on what is home, and having the world for your home is not always a good thing to say.


Thank you for your kind words and discussion.

I completely agree that living in a tent can be lovely and some of my life's favorite moments include tent living (e.g. in the temperate rain forest of Olympic National Park) and moving every day.

I even mostly agree with the overall uproar. It feels like bending the term pretty hard for the author to claim homelessness. The further point that there may be moral hazard in that use seems reasonable.

I like your point about homelessness more or less being a social status. I think it adds insight and I have enjoyed that this whole discussion (ours and other bits in the context) has really stimulated me to more deeply examine what homeless means.

Your point that even having a standard shelter to live in (a house seemed implied, but that isn't important) doesn't wipe away homelessness is excellently instructive. I certainly know people I consider homeless who have assigned housing. Considering the council housing system in the U.K., on the other hand, seems like it might start to step to the other side of that line, shifting the discussion to other dimensions of a person's needs and "enfranchisement".

Let me reiterate my gratitude for sharing your thoughts and even more for getting to a discussion that feels more like peace and curiosity.


Carville (DNC strategist) is advocating a "play dead" strategy. Let Trump implode so that he owns the inevitable failure. His base will desperately want to blame the left for not letting the policies work as intended. The less the Democrats do, the harder that is. I think a lot of Democrat politicians are going this way, and it's why Schumer rolled over on the budget.

Part of the logic here is that Trump is indeed different from other authoritarians. He's even less competent. He's blowing all his political capital on imploding the economy. He also can't understand the legal battles, so when Stephen Miller tells him they won the Supreme Court case 9-0, he believes him. This seems to have been a big wake-up call to Gorsuch, Coney Barrett and Kavanaugh. The administration has shown its hand much too quickly, before it fully consolidated its power.

What the Democrats should be doing already is campaigning more. Run ads that are literally just Trump quotes. Show people Trump calling January the "Trump economy" before inauguration, then calling April the "Biden economy" now that he's crashed it. If Trump polls low enough, more senators will jump ship, and impeachment could be possible.


The dumb thing about this is that the Republicans are going to blame Democrats whether or not they do anything. Playing dead is a really dumb idea because it looks exactly like the rest of the Democrats' do-nothing strategies.

Someone needs to stop listening to Carville. Every time I see/hear him I am reminded of everything wrong with the DNC. They don't even pretend like they want to fight for people's rights. That's not gonna win them any elections. I would argue the reason they lost the election was how little they actually promised they would do beyond "maintain the status quo".


Not just campaigning. Resourcing. By now (by 8 years ago, to be !@#$ing honest) there should be a very clean, crisp website that's a searchable list of topics/talking points, with immediately available videos, audios, screenshots of Trump contradicting himself, along with links to easily digestible facts.

This alone will never convince anyone of anything who isn't already convinced. But as an absolute minimum, it should be effortless for anyone to demonstrate his lack of ideology every single time he speaks about how he's always/never supported something.

Maybe we could even educate "journalists" and the media on its existence so they can do more than "agree to disagree" whenever they talk about things.


> Carville (DNC strategist) is advocating a "play dead" strategy.

Our tax money hard at work. What a fucking joke.


Do American parties get tax money to spend on strategists?


There's no playing. It's real. Flatlined a long time ago.

They need to rise from the dead.

And it needed to be done way before Trump got elected the first time.

What were people thinking then, and why haven't they gotten off their butts yet?


This is so easy to counter though.

1. Just make stuff up; MAGA and stupid people will believe it. For example, there are so many AI-generated political videos on Youtube that resemble Facebook boomer posts, with completely fake stories about some conservative figure getting the best of a liberal '...and then everybody clapped'. Even when it says in the description that the story is fictional, there are often hundreds of approving MAGA comments. (example: https://www.youtube.com/watch?v=nM_ylQmJIHo )

2. Say the Democrats subverted something. Example, blaming judges or deriding DOJ lawyers who admitted the government deported someone by mistake as 'Democrat plants'.

3. Castigate the Dems for not actively supporting the President and imply they created a morale crisis.

I don't think many GOP senators are going to jump ship because they are afraid of MAGA people on a personal level. They are more afraid of being branded as traitors and putting their family in danger than they are of losing re-election.


This might be a crazy idea, but they could also try advancing policies their constituents care about.


Yes but...

> Run ads that are literally just Trump quotes.

TIL cliché "Democrats buy ads, Republicans buy stations."

I mosdef 100% utterly agree Dems should campaign more.

It's just that... Per the entire duration of the Biden Admin and post mortems of 2024 election, voters barely hear any Dem messaging. So I question the ROI on buying ads.

Like most others fretting from the sidelines (I'm still recovering from activist burnout, sorry), I have no idea how Dems, and "The Left" more broadly, can connect with voters.

AOC & Bernie's nationwide tour is doing a good job. A good start.

Insert something here about embracing social media(s).

Insert something here about owning our own media ecosystem.

Insert something here about loudly and proudly pivoting away from neoliberalism into full throated support for our working class(es).

Blah, blah, blah building and nurturing a movement.

Etc.

Please share any tips, ideas you have. TIA.


It wasn't even so much the blunders as the strategic decisions I think. Like, a blunder isn't in itself "baffling".


The examples in the article recommend doing the right things, but I think the initial framing of the LBYL vs EAFP discussion could be better.

The article says basically "LBYL is bad", but this isn't a good description of what the author does in the later examples. Following "EAFP" without exception is also really bad.

The simple policy is that you should use "LBYL" when you're dealing with local state that can't change out from under you, especially when it's just properties of your local variables, but you need to use "EAFP" for anything that deals with remote state. Function calls can go either way, depending.

Blanket advice against LBYL leads people to use try/except blocks to express basic conditionals. If you want to ask something about your local variables, just ask it in a conditional --- don't make the reader infer "ah, these are the situations which will raise the error, and therefore we'll go into this block if the values are this way". If you want to do something when a key is missing from a dictionary, don't write:

    try:
        value = table[key]
    except KeyError:
        return default_value
Just write:

    if key not in table:
        return default_value
    value = table[key]
(If you really need to avoid two lookups for efficiency, you would do something like value = table.get(key, MISSING_VALUE) and then check 'if value is MISSING_VALUE'. But this optimisation will seldom be necessary.)
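For what it's worth, that sentinel version would look roughly like this (the sentinel name is arbitrary; `table`, `key` and `default_value` are the same placeholders as above):

    MISSING_VALUE = object()  # unique sentinel, so None, 0, "" etc. remain valid values

    value = table.get(key, MISSING_VALUE)
    if value is MISSING_VALUE:
        return default_value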

The example the author gives about interacting with the file system is a good example of where indeed you really should use EAFP. The file system is remote state that your function does not own. Similarly if you're doing database operations, calling a remote API...lots of things.

There's various middle ground when you're calling functions. Often you don't want to worry about whether that function is going to be interacting with remote state, and you want to treat it totally as a black box, so you just use EAFP. But if the function documents really clear preconditions on your variables, you can just go ahead and check them; it's better to do that.


What's really wrong with try/except here other than it's not to your personal taste?

Brett Cannon, one of the Python core devs, wrote a blog post using exactly this dict KeyError example in 2016 [1]. It concludes:

"The key takeaway you should have for the post is that EAFP is a legitimate coding style in Python and you should feel free to use it when it makes sense."

[1] https://devblogs.microsoft.com/python/idiomatic-python-eafp-...


I would say that it's almost good, if it weren't for the "Error" in KeyError.

If it was something like except KeyDoesNotExist: or KeyNotFound:, it would make more sense to me, because it seems hacky to consider it an error when it's normal default-to-some-value behaviour.


I still think using the exception mechanism for things that could just be conditionals is bad. I elaborated on this to a sibling comment.


Yes there's been various recommendations of this over the years and I think it's really bad.

Using try/except for conditional logic gives the developer a spurious choice between two different syntaxes to express the same thing. The reader is then asked to read try/except blocks as meaning two different things: either ordinary expected branching in the function, or handling exceptions.

I think it's definitely better if we just use conditionals to express conditionals, and try/except for errors, like every other language does. Here's some examples of where this duplication of syntax causes problems.

* Exceptions are often not designed to match the interface well enough to make this convenient. For instance, 'x in y' works for both mapping types and lists, but only mapping types will raise a `KeyError`. If your function is expected to take any iterable, the correct catching code will be `except (KeyError, IndexError)`. There's all sorts of these opportunities to be wrong. When people write exceptions, they want to make them specific, and they're not necessarily thinking about them as an interface to conveniently check preconditions.

* Exceptions are not a type-checked part of the interface. If you catch `(KeyError, IndexError)` for a variable that's just a dictionary, no type checker (or even linter?) is going to tell you that the `IndexError` is impossible, and you only need to catch `KeyError`. Similarly, if you catch the wrong error, or your class raises an error that doesn't inherit from the class that your calling code expects it to, you won't get any type errors or other linting. It's totally on you to maintain this.

* Exceptions are often poorly documented, and change more frequently than other parts of the interface. A third-party library won't necessarily consider it a breaking change to raise an error on a new condition with an existing error type, but if you're conditioning on that error in a try/except, this could be a breaking change for you.

* The base exceptions are very general, and catching them in code that should be a conditional runs a significant risk of catching an error by mistake. Consider code like this:

    try:
        value = my_dict[some_function()]
    except KeyError:
        ...
This code is very seriously incorrect: you have no way of knowing whether 'some_function()' contains a bug that raises a KeyError. It's often very annoying to debug this sort of thing.

Because you must never ever ever call a function inside your conditional try block, you're using a syntax that doesn't compose properly with the rest of the language. So you can either rewrite it to something like this:

    value = some_function()
    try:
        return my_dict[value]
    except KeyError:
        ...
Or you can use the conditional version (`if my_dict[some_function()]`) just for these sorts of use-cases. But now you have both versions in your codebase, and you have to think about why this one is correct here and not the other.

The fundamental thing here is that 'try/except' is a "come from": whether you enter the 'except' block depends on which situations the function (or, gulp, functions) you're calling raise that error. The decision isn't local to the code you're looking at. In contrast, if you write a conditional, you have some local value and you're going to branch based on its truthiness or some property of it. We should only be using the 'try/except' mechanism when we _need_ its vagueness --- when we need to say "I don't know or can't check exactly what could lead to this". If we have a choice to tighten the control flow of the program we should.

And what do you buy for all this spurious decision making and the very high risk of very serious bugs anyway? Why should Python do this differently from every other language? I don't see any benefits in that article linked, and I've never seen any in other discussions of this topic.


But if you do the "in" syntax with some_function() then you would need to assign the value before anyway or have to call the function twice?

    if some_function() not in table:
        return default_value
    value = table[some_function()]


You could do something like `value = table.get(some_function(), MISSING_VALUE)` and then have the conditional. But let's say for the sake of argument, yeah you need to assign the value up-front.

Let's say you're looking at some code like this:

    if value in table:
        ...
If you need to change this so that it's `some_function(value)`, you're not going to miss the fact that you have a decision to make here: you can either assign a new variable, or you can call the function twice, or you can use the '.get()' approach.

If you instead have:

    try:
        return table[value]
    except KeyError:
        ...
You now have to consciously avoid writing the incorrect code `try: return table[some_function(value)]`. It's very easy to change the code in the 'try' block so that you introduce an unintended way to end up in the 'except' block.


> Just write:

    if key not in table:
        return default_value
    value = table[key]
I think this could/should be shortened to

    value = table.get(key, default_value)
    ...
    return value
No?

In case you disagree, I'd be happy to hear your thoughts!


Maybe "trust" in a purely technical sense, but we live in a society. If a vendor has a part of their contract that would put them under penalty of fraud, customers don't usually worry whether they can independently verify it's true.

For instance, if you write in your agreements that data goes into this bit of open-source code and leaves it and goes nowhere else, and you list out some processes you take to ensure this is true, and the whole thing is an outright lie, that's really bad for you in ways that are clear and enforceable. It's also hard to hide from your own employees, and people come and go. You're unlikely to make that claim unless it's true.


Yes - people here talk like vendors are inscrutable black boxes, which suggests to me they have never been through a serious vendor selection process (on either side). This is not a problem businesses are unaware of, and solutions and enforcement mechanisms exist.

SOC2 may be kind of a tedious drudge to go through, and provide limited actual security, but a third-party auditor’s SOC2 attestation for a vendor goes some way to assuring you that they have some level of access controls and processes to protect your data. It’s not nothing. And the lack of a SOC2 attestation for a vendor is a red flag for sure.

Back in my proprietary SaaS days we used to regularly undergo third party security audits, code escrow processes, and numerous standards compliance hoops, to create sufficient assurance to clients to sign a contract.

