I wrote some of my thoughts on the BCS here. tldr: I don’t think BCS outperforms humans.

https://twitter.com/_eigenfoo/status/1528427650187943941


"Web scraping is legal" seems to be an overbroad interpretation - the Ninth Circuit is merely reaffirming a preliminary injunction in light of a recent and separate ruling by the Supreme Court. The opinion [1] only weighs the merits hiQ's request for injunctive relief, and doesn't say anything about the meatier topic of web scraping legality.

I think the real action will be the ruling of the district court.

[1] https://cdn.ca9.uscourts.gov/datastore/opinions/2022/04/18/1...


Upon more digging [1], the district court trial is currently scheduled for Feb 27 2023. It'll take a while!

[1] https://storage.courtlistener.com/recap/gov.uscourts.cand.31...


First Thought: That headline seems wrong. People are probably gonna get burned just reading that.

Second Thought: I should really read the opinion before I make any comments.

Clicked through, skimmed around, looked for a link to the slip opinion ... couldn't find it.

Thank you, sir.


What do you use the history count for?


That tells you the index to use with !N to repeat that command.

!! repeats the last command.

!N repeats the command at that index in the history.
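
A quick illustration (the history numbers here are made up):

    $ history | tail -3
      501  make test
      502  vim notes.txt
      503  history | tail -3
    $ !501    # re-runs "make test"
    $ !!      # re-runs whatever just ran

Both are forms of bash's history expansion; see the HISTORY EXPANSION section of the bash man page for the rest (!string, !$, and friends).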


Hello, PyMC developer here. We're excited to give Theano a second life, and hope that this work will lend some staying power to the PyMC project in the probabilistic programming world.

As always, we're happy to accept contributions! If you're looking to get involved, now is a great time. Please don't hesitate to speak up or reach out, either on the Theano-PyMC GitHub repo (https://github.com/pymc-devs/Theano-PyMC) or some other way (my website's in my bio).


Good to hear Theano has come back to life -- it was my first deep learning library :). These days I do PyTorch, and while I really appreciate its debuggability and flexibility, in retrospect I can definitely see some of the advantages of Theano's declarative approach.

In that regard, I am curious about why TensorFlow didn't work out. I understand TensorFlow version 1 implements a declarative mode that I guess is in many ways similar to Theano's. I'm assuming v2 still supports that mode, on top of the new eager mode -- is that the case? If so, was there some aspect of its implementation that made it unsuitable for PyMC?


I'm only one person involved, but my primary reason for choosing Theano over TensorFlow has to do with the ability to manipulate and reason symbolically about models/graphs.

In order to improve the performance and usability of PyMC, I believe we need to automate things at the graph level, and Theano is by far the most suitable for this--between the two, at least.

You can find some work along these lines in the Symbolic PyMC project (https://github.com/pymc-devs/symbolic-pymc); it contains symbolic work done in both Theano and TensorFlow.
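
To make "reason symbolically about graphs" concrete, here's a minimal sketch in Theano (a toy expression of my own, not code from Symbolic PyMC):

    import theano
    import theano.tensor as tt

    # Nothing is computed here: we're building a symbolic graph.
    x = tt.dscalar("x")
    y = (x + x) / 2  # algebraically just x

    # The expression is an ordinary data structure we can walk and inspect.
    theano.printing.debugprint(y)

    # Compiling runs Theano's graph rewrites, so the compiled function's
    # graph is a simplified version of what we wrote above.
    f = theano.function([x], y)
    theano.printing.debugprint(f)

Being able to get at and rewrite that graph programmatically is what I mean by automating things at the graph level.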


Really excited by this as well. I couldn’t really wrap my head around the proposed API changes for PyMC4. I find PyMC3 to be a pleasure to work with and understand!

Will the JAX backend and integration with external JAX modules mean that we’ll see improvements to PyMC3’s variational inference module? That would really increase the versatility of PyMC3 for probabilistic modelling in Python.




Remarkable bird, the Norwegian Blue, idn'it, aye? Beautiful plumage!



One major strength of SQL is its readability - it reads so much like English that a non-technical stakeholder could conceivably understand queries. Do you not find that this is a valuable thing that's lost with relational-algebra-esque syntax?


Not once in my 20 years of development has that readability actually mattered, though. SQL very quickly becomes too complex for people who don’t intimately understand the language to make any sense of it. Show a layman an INNER JOIN and see if they can make any sense of what’s happening... they’ll just do what they do in real life: ask an engineer.


I'm in full agreement. Trivial SQL is somewhat readable, but anything non-trivial is not, and the clause ordering of SQL queries -- there is essentially no relationship between the syntactic order and the semantic order of evaluation -- makes them hard to understand.

QUEL is better there, but still not great unless its execution semantics differ from SQL's (i.e., does it semantically filter before or after selection?)
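
To illustrate the ordering point with a sketch (table and column names invented for the example):

    -- Written order: SELECT, FROM, JOIN, WHERE, GROUP BY, HAVING
    SELECT c.name, COUNT(*) AS num_orders
    FROM customers AS c
    INNER JOIN orders AS o ON o.customer_id = c.id
    WHERE o.status = 'shipped'
    GROUP BY c.name
    HAVING COUNT(*) > 5;

    -- Logical evaluation order: FROM/JOIN -> WHERE -> GROUP BY
    -- -> HAVING -> SELECT. The SELECT clause you read first is
    -- evaluated nearly last.

So in SQL at least, the answer is: it filters before it selects, even though you write it the other way around.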


How do you compensate for the fact that you don't know the codebase that well? I feel like a lot of the value of interviews is that you see how a candidate thinks through a problem that you have also thought deeply about, which makes for more fruitful discussions.


Well, that's not how I tend to manage day-to-day. I don't give engineers I manage a problem I've already figured out and solved, and then expect them to do it the way I would.

Rather, the engineers on the team are usually building something new, and figuring out how to accomplish it is their responsibility. This way of interviewing mimics that a lot more closely.

If someone can't explain to me a project they're working on, then they probably won't be able to do the same while we're working together. And that's just as important a skill as writing code.

If I'm expecting interviewees (who are nervous!) to learn and understand my codebase/problem, why can't I do the same for theirs?


Love this attitude, and it shows. A very common thing I notice with interviewers is that they do one interview over and over again until it's super polished, but paradoxically, they don't compensate for that, so in effect the bar gets higher and higher for later candidates than for earlier ones. Flipping the script is not only a great way to balance things here, but it also says a lot about your own confidence. I might have to lift your approach!


Yup, exactly! The interviewer gets annoyed, since it's "so obvious" how to solve the problem... much like anything becomes obvious if you've done it a few dozen times.

Or they tune out. Or they dismiss any variation as "wrong" rather than giving it a chance.


I also love this attitude, but I disagree about the negative effect you describe from reusing problems.

What happens in practice is that the interviewer compares candidates' answers to each other, not to his/her own way of solving it. It doesn't matter whether the solution is "obvious" to the interviewer. If I hired someone who only did OK on this question and they ended up being a great engineer (and that has happened multiple times), that lowers the bar for the test (or tells me it's not a very good test, etc.)


You're missing the part where the interviewer changes with each interview (and with the rest of their life in general). They phrase the problem differently, or get anxious as the interviewee approaches different parts of the task. So the interviewer thinks they're giving the same test and can fairly compare the outcomes, but the GP's point is that that's just not true.

Whether it works out to be a harder or easier interview over time because of these changes probably depends on the individual interviewer.


Sorry, but the problem pointed out is both way more common and way more of an issue. Interviewers talk about "I just want to see how someone thinks," but implicit biases and the overuse of problems lead to unconscious expectation-setting and rigidity in the interviewers. In fact, I explicitly test for this by giving valid answers using, e.g., techniques that I don't think the interviewer is familiar with, precisely to see how open-minded the interviewers actually are. Their response is amazingly revealing about the interviewer and the organization behind them.


> What happens in practice is the interviewer compares candidates answers to each other, not to his/her own way of solving it.

I think that's _ideally_ what happens. But as sibling comments mention, is that what actually happens in practice?

Interview fatigue is very real. Many interviewers are not positively incentivized to put in the substantial effort required to conduct a quality interview; it's simply thrown on top of their usual workload. Come promotion time, there's no reward for doing it well, but there is punishment for noticeably "screwing it up". Often, that means interviewers put in the bare minimum effort with interviewing (as with other tasks) to avoid getting punished.

I've seen competent leaders avoid this in the past by emphasizing that interviewing is the most highly leveraged thing you can do for your organization and your own career, because you are step-function increasing the effectiveness and capability of your own team if you do it right. But for that to happen, there needs to be a real culture of working as a team and not just as a disembodied group of ICs: a culture which rewards increasing the effectiveness of the team (instead of merely focusing on one's own tasks) and doesn't just pay lip service to it. It's tricky. But I think the subthread GP's approach is a clever and creative kind of lateral thinking that shows how to do just that.


So practically - do you or some of your team get added to their project so you can track what they're doing?

do you (or others on the team) ever make suggestions on how the code should be written? I think this would be an interesting way to pre-teach the hire how you like things structured.

on edit - I guess since you ask them questions that implies making suggestions as well, but just wondering how it works.


It's all done live! The first time we hear about what they're working on is during the interview, and we never keep any of the code. It's great to see how they can articulate what they're working on, combined with actually working on it.

We sometimes might make suggestions, but rarely for the sake of code quality. We'll once in a while ask for something unique just to get them out of a rut, and see if they can improvise. Otherwise, we're here to learn and listen, not teach :)


Lots of why did you, why didn't you, what about this, how did you deal with that (and back to why). :-)

I signal that we care about a lot of things, like testing, by asking about them. That gives the candidate room to ask those sorts of questions back about how we do things.


Ask them to explain it to you. The quality of the understanding you gain, together with their willingness and ability to engage with your curiosity, is one measure of the candidate. A coarse-grained qualitative measure perhaps, but certainly one aligned to dev team productivity.


This is a great question. I guess OP's approach only works if they are hiring someone much more junior than themselves, so that experience trumps code/domain familiarity.


Not at all! I’m pretty experienced, but most people I interview or hire are much better engineers than me!

Imagine you’re taking a class on a topic you don’t know. You can tell pretty quickly from how the instructor talks and explains something if they know what they’re talking about. It’s very similar! I’ve certainly interviewed people much more skilled than me in languages I’ve never used, and still got a pretty great sense of their expertise.


Can somebody please clarify - what exactly is this an outage of, and how serious is it?


Here is a fantastic, though somewhat outdated, overview [1]. Section 5 is most relevant to your question. The network topology today is a little different: think of Level3 as an NSP, which is now called a "Tier 1 network" [2]. The diagram should show links among the Tier 1 networks ("peering"), but does not.

[1] https://web.stanford.edu/class/msande91si/www-spr04/readings...

[2] https://en.wikipedia.org/wiki/Tier_1_network


tl;dr: One of the large Internet backbone providers (formerly known as Level3, now usually known as CenturyLink) that many ISPs rely on is down. Expect issues connecting to portions of the Internet.

Usually the Internet is a bit more resilient to these kinds of things, but there are complicating factors with this outage making it worse.

Expect it to be mostly resolved today. These things have been happening a bit more frequently lately, but historically average a couple of times a year.


Is this affecting all geographic regions?


US, Europe, and Asia, as far as I'm aware (per the NANOG mailing list).


Yes. They're pretty explicit about this:

> So long as our Founders [...] meet a minimum ownership threshold on the applicable record date for a vote of the stockholders[...], these Founders will effectively control all matters submitted to a vote of the stockholders for the foreseeable future. This could delay, defer, or prevent a change of control, merger, consolidation, or sale of all or substantially all of our assets that our other stockholders support. Conversely, this concentrated control could allow our Founders to consummate a transaction that our other stockholders do not support.


Thanks! I know many companies get away with it, but I still find it odd for shareholders to not have any control. They'll own it but have no say in how the company is run.


Most investors are passive anyway. If you don't like how the company is run, you can push down the price by selling.


Have you considered linking notes from one notebook to another? E.g. if you named each notebook and numbered pages, you could just point to "page X in notebook Y", and wouldn't need to transcribe. Would this be something helpful to you?


I run through about two A4 192-page Moleskine notebooks in a year. Every entry is dated, and back references are essentially in the form of “On $date, I wrote ...”, so the lookup is sub-linear time.

