Hacker News | captaincaveman's comments

Thinking you can figure someone out in 5 minutes makes you a no-hire for me ;)


In this made-up scenario, I would be the one doing the hiring, replacing you. Regardless of the position you're hiring for, you would hire me under your process, because I can talk your ear off for hours and can outsource the take-home. My point is that I shouldn't need such unrelated skills.


Note that I use the take-home to ask for modifications on the spot during the technical interview. The first hour is soft talk, to see if there is any communication issue; then you go home and do the take-home. If you don't like us, you don't have to do the take-home.

If someone is reluctant to make changes to their own code, it's a no-go for me.

The main point of the take-home I give is that the candidate has their own code to modify during the second round, rather than wasting time and stressing people with something they didn't write and don't know.


> you would hire me under your process because I can talk your ear off for hours and can outsource the take home

Which is why all of the biggest tech companies do on-site technical tests.


For the biggest tech companies you spend time practicing leetcode. I don't know if that's any better.


I think "figuring someone out" and "figuring out if someone can do a job that needs to be filled" are very different topics.


Yeah, and that's fine; there is literally no need to keep them consistent between teams.


Agree, a main benefit of using these would be performance gains.


I think LangChain basically tried to do a land grab, inserting itself between developers and LLMs. But it didn't add significant value, and it seemed to dress that up with abstractions that didn't really make sense. It was that abstraction-gobbledygook smell that made me cautious.


Looks like they've parlayed it into some kind of business https://www.langchain.com/


They've been growth hacking the whole time, pretty much, optimizing for virality. E.g. integrating with every AI thing under the sun so they could publish an SEO-friendly "use GPT-3 with someVecDb and LangChain" page for every permutation you can think of. Easy for them to write, since LangChain's abstractions are just unnecessary wrappers. They've also had meetups since very early on.

The design seems to make LangChain hard to remove, since you're no longer doing functional composition like you'd do in normal Python - you're combining Chains. You can't insert your own log statements in between their calls, so you have to onboard to LangSmith for observability (their SaaS play). Now they have a DSL with their own binary operators :[

VC-backed, if you couldn’t guess already
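To make the functional-composition point concrete, here is a minimal sketch (all names and behavior are hypothetical stand-ins, not real LangChain or LLM APIs): with plain Python functions you can insert your own logging between pipeline steps, which is exactly what the comment says chained abstractions make hard.

```python
# Hypothetical stand-ins for a vector-DB lookup and an LLM call.
def retrieve(query):
    return f"docs for: {query}"

def generate(context):
    return f"answer from [{context}]"

def pipeline(query):
    docs = retrieve(query)
    print(f"retrieved: {docs}")  # your own log line, no observability SaaS needed
    return generate(docs)

print(pipeline("what is a cat?"))
```

Each step is an ordinary function, so swapping one out or adding instrumentation is a one-line change rather than an onboarding exercise.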


I'd say that was more like a single instance, one interaction with a thing.


But in that single interaction, you might have seen the cat from all kinds of different angles, in various poses, doing various things, some of which are particularly not-dog-like.

I vaguely remember hearing that there are even ways to expand training data like that for neural networks, i.e. by presenting the same source image slightly rotated, partially obscured, etc.
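That technique is usually called data augmentation. A rough sketch of the idea (my own toy example, not any particular library's API): generate several variants of one image via flips, rotations, and a blanked-out patch.

```python
import numpy as np

def augment(img, rng):
    """Produce simple variants of one image: the original, a mirror,
    a 90-degree rotation, and a copy with a random patch blanked out."""
    variants = [img, np.fliplr(img)]          # original + horizontal mirror
    variants.append(np.rot90(img))            # rotated copy
    occluded = img.copy()
    h, w = img.shape[:2]
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    occluded[y:y + h // 4, x:x + w // 4] = 0  # partially obscure a region
    variants.append(occluded)
    return variants

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))  # dummy 32x32 RGB image
out = augment(img, rng)
print(len(out))  # 4 training examples from one source image
```

Real augmentation pipelines add many more transforms (small-angle rotations, crops, color jitter), but the principle is the same: one interaction with a cat yields many training views.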


One interaction that captures a multidimensional, multisensory set of perceptions. In an ML training set, say for visual recognition, this would consist of at least hundreds of images from many angles, in different poses and varied lighting.


I don't think it's analogous. I don't think we see a cat and have our brain adjust our synaptic weights (or whatever brains do) frame by frame. The whole premise of natural brains learning from static images or disjointed modalities is a very clunky, reductionist, engineered approach we have taken.


> I don't think we see a cat and our brain have it frame by frame adjust our synaptic weights (or whatever brains do)

I think that "whatever we do" is doing a lot of heavy lifting here. Some of those "whatevers" will be isomorphic to a frame-level analysis that pulls out structural commonalities, or close enough that it's not a clunky reductionist analogy.


When we see what we think is a cat, what we have categorised as a cat, I don't think we are looking at it from each angle and going cat, cat, cat. I think there is an aspect of something like the 'free-energy principle' that is required to trigger a re-assessment. So while visually we may receive 20 fps of cat images, most of it is discarded unless there is some novelty that challenges expectation.
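A toy sketch of that gating idea (entirely my own illustration, not a real cognitive model): only update an internal estimate when the prediction error on a "frame" exceeds a novelty threshold, so expected input is effectively discarded.

```python
def novelty_gated_update(estimate, frames, threshold=0.5, lr=0.1):
    """Update `estimate` only on frames whose prediction error
    ("surprise") exceeds `threshold`; count how many updates fired."""
    updates = 0
    for frame in frames:
        error = abs(frame - estimate)        # prediction error
        if error > threshold:                # only novel frames trigger learning
            estimate += lr * (frame - estimate)
            updates += 1
    return estimate, updates

frames = [1.0] * 20 + [5.0]  # 20 expected "cat frames", then one surprise
est, n = novelty_gated_update(1.0, frames)
print(n)  # only the single surprising frame caused an update
```

Twenty unsurprising frames produce zero weight changes; the one novel frame triggers a re-assessment, which is the shape of the argument above.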


Mostly, someone has a PhD and convinced people to give them money to 'change the world', and then needs someone who has actually built things beyond a script in a Python notebook.


Gosh. So much this! The difference between the "average" PhD graduate in data science and the "average" software engineer with genuine experience delivering production software that people use at scale is quite something. I have nothing against data scientists, but in the same way that I wouldn't get a software engineer to build a complex model (above a certain level of complexity), neither would I get a data scientist to build a production app (above a certain level of scale). Both of these things are specialist activities that require a lot of experience, wisdom, and nuance to get right. Being good at one does not (necessarily) mean you will be good at the other.


Pompousness and self-importance. We all get interviewed at some point; don't power trip. It should be a humble two-way conversation.


I think there is potential, but at the moment it's

1. RAG your data
2. Magic
3. Agent business logic
4. $$$

where step 2 is very unclear.

Also, "agent architecture" - what is it? A basic FSM, which in essence is a bunch of business logic/rules with LLM API calls. How do you make that reliable for transactions?
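A minimal sketch of that framing (everything here is a hypothetical stand-in, including the stubbed LLM call): an "agent" as a small finite-state machine whose branching decisions come from an LLM while the terminal states are plain business rules.

```python
def call_llm(prompt):
    # Stand-in for a real LLM API call; real output would be nondeterministic,
    # which is exactly the reliability problem for transactions.
    return "APPROVE" if "refund" in prompt else "ESCALATE"

def run_agent(request):
    state = "classify"
    while True:
        if state == "classify":
            decision = call_llm(request)   # LLM picks the branch
            state = "approve" if decision == "APPROVE" else "escalate"
        elif state == "approve":
            return "refund issued"         # deterministic business rule
        elif state == "escalate":
            return "sent to human agent"   # deterministic business rule

print(run_agent("customer asks for refund"))
```

Strip away the branding and this is the whole architecture: a state machine where one transition function happens to be a model call.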

I've yet to see a decent example of a business process being replaced that isn't a question-answer scenario, i.e. a call-centre-type role.


The problem is that people want replacement when the agent may only be able to do augmentation: https://www.lycee.ai/blog/ai-agents-automation-eng


Except (and really not wanting to offend Indians), what we did is import a large number (as a percentage of tech jobs) of skilled Indians, which drove down the market rate for those jobs. Either way, it had the 'benefit' of driving down costs.


Good luck with that!

