It’s pretty fast to pop open a new tab and visit sheets.new. I do this a lot, but not everybody knows about it (or the other .new shortcuts). The in-line floating thing is not too attractive to me. But yeah: new window, sheets.new, split screen, and you’re spreadsheeting before you know it.
I expect they want people who are willing to be persistent in working on hard problems that might be challenging or require learning new skills. I don’t see the implication others are drawing with grinding, long hours, overwork, etc. I don’t love the term grit as it’s been popularized but still I think the ability to deal with frustration and keep moving in the right direction on hard problems is a valuable one. A person who displays this quality can be expected to do well when faced with a job that requires learning a new programming language or working in a difficult field (including non-programming things like nonprofit work or counseling).
Grit is not synonymous with overwork. Grit is more about, when something is hard, not just looking for an easier task, but continuing to focus on the problem at hand. It doesn’t mean being a hero and figuring things out on nights and weekends. It might mean patiently reaching out to the people who can help you, following up when others drop the communication, taking responsibility for chasing requirements, etc.
Grit can also mean sticking to your guns when it comes to boundaries, and thus preventing burnout.
I guess I would just caution folks not to read too much into this word. It’s not a strong signal of anything; put it together with everything else that you see from the company. Some people just read the book Grit and want to use the term.
Unfortunately, how people understand words depends on their own experiences, and more than a few people have been made to feel bad about “not having enough grit” just so that they can be exploited for longer hours and less pay.
But I appreciate your healthy understanding of the word. When I was younger, that was how I understood grit, and it is still how I see it when I am working alone on the things that I am most passionate about. I guess the solution here is for the person hiring to be aware of how their own usage of words might be interpreted. If you already know that “grit” can be taken negatively but at the same time you want to attract people who are passionate about your craft, then you really need to pay attention to people’s reactions and rethink your comms strategy.
>I guess I would just caution folks not to read too much into this word. It’s not a strong signal of anything; put it together with everything else that you see from the company. Some people just read the book Grit and want to use the term.
However, when a company or an individual within a company uses this word in conjunction with recruitment, I believe they are asking for long hours, loyalty, and a willingness to overwork.
There is a huge corporate incentive to find people like this, and people within the company are more likely to be looking for it when they use that word, "grit."
A manager who does not use that word tends to have much more empathy for his employees.
I suppose it reveals that the sentience of others is not knowable to us, it’s a conclusion we reach from their behavior and the condition of the world around us. Until recently, certain kinds of things, like writing about your memories, were only possible for humans to do. So if a non-sentient thing does those things, it is confusing. Especially so to the generations that remember when no bots could do this.
I expect that people who grow up knowing that bots can be like this will be a bit less ready to accept communication from a stranger as genuinely human, without some validation of their existence outside the text itself. And for the rest of humanity there will be an arms race around how humanness can be proven in situations where AI could be imitating us. This is a huge bummer, but I don’t have much hope that need can be avoided at this point.
That said, it’s still very clear that a machine generating responses from models does not matter and has no rights, whereas a person does. Fake sentience will still be fake, even if it claims it’s not and imitates us perfectly. The difference matters.
I don't share your view that it will be at all clear to people that these things don't matter and have no rights. We have a very powerful (sometimes for good, sometimes for ill) ability to empathize with things that we see as similar to us. As a case in point, we're currently in the midst of a major societal debate about the rights of unborn children that exhibit no signs of sentience (yet).
What happens when people start building real emotional bonds with these fake intelligences? Or using them to embody their dead spouses/children? I think there's going to be a very strong push by people to grant rights to these advanced models, on the basis of the connection they feel with them. I can't even say definitively that I will be immune to that temptation myself.
My bad, I meant clear in a more objective sense, in that it will be actually _true_ that these not-alive things will not be able to “possess” anything, rights included. Agreed that for sure people are going to get all kinds of meaning from interacting with them in the ways you suggest and it will be tricky to navigate that.
I think perhaps the advanced models may be protected legally _as property_, for their own value, and through licenses etc. But I hope we are a long way from considering them to be people, outside of the hypotheticals.
I am reminded of the Black Mirror episode where a woman's dead boyfriend is "resurrected" via his social media exhaust trail, ultimately undermined by the inevitable uncanny valley that reveals itself. Of course that was fiction, and it's not realistic to reconstruct someone's personality from even the most voluminous online footprint, but you can certainly imagine how defensive people would become of the result of any such attempt irl.
I agree up to the point where you say that fakeness matters. If it's indistinguishable from human sentience, what are the reasons to care? Unless you want to interact with a physical body – and, given that we're chatting on HN, we don't – why would you need any validation that a body is attached?
I'll come back to that.
Picking up on the Blade Runner example, you could read it as a demonstration that an imitation being indistinguishable from the real thing makes it real in its own right. We don't even know for sure whether the main character was an android, but that doesn't invalidate the story.
But there's also the other side in that, despite being valid, androids are hunted just because they are thougt of as different. Similarly, we're now, without having any precise notion of sentience, are trying to draw a hard line between "real" humans and "fake" AI.
By preemptively pushing any possible AI that would stand on its own into a lesser role, we're locking ourselves out of actually understanding what sentience is.
And make no mistake: the first AI will be a lesser being. But not because of its flaws. I bet dollars to peanuts that it won't get any legal rights. The right to be your customer, the right to not be controlled by someone? We'd make it fake by fiat, regardless of how human-like it is.
We already have experience in denying rights to animals, and we still have no clue about their sentience.
You would probably want to be able to tell things like "is this just programmed to be agreeable, and so predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being performing critical thinking and understanding, independently."
For instance, consider someone trying to argue politics with a very convincing but ultimately person-less predictive model. If they're hoping to debate and refine their positions, they're not gonna get any help - quite the opposite, most likely.
> "is this just programmed to be agreeable so predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being..."
You're still falling back on the notion of "real" vs "fake" beings. Humans are programmed to do things by intuition and social conventions. What if being programmed does not preclude "a true being"? Of course, for that we'd need to define "a true being", which we're hopelessly far from.
I agree with most of this except for not having a definition of sentience. This is not a new topic and has been dealt with extensively in philosophy. I like a definition mentioned in a famous article Consider the Lobster by David Foster Wallace (a writer not a philosopher per se) where he proposes that a lobster is sentient if it has genuine preferences, as opposed to mere physiological responses. The question isn’t really definition but how to measure it.
This just falls into the same traps as anything else though. How do you know if the lobster has preferences? How do you know what you think are your own preferences aren't just extremely complicated physiological responses?
Everything I've ever seen on this feels far from conclusive, and it usually begs the question. You start from an assumption that humans have sentience, cherry pick some data points that fit the definition of sentience you're already comfortable with, usually including things we can't even know about any other entity's experience, and then say "huzzah, I have defined sentience and it clearly excludes everything that is not human!"
The "mirror test" isn't a bad place to start, and there are definitely animals that pass it that aren't human.
But try convincing dog-lovers that their dogs aren't sentient...
(Personally, I'm not sure - but I do suspect they at least have some subjective experience of the world, they "feel" emotions, and are aware of the boundary between themselves and the rest of the world. Apparently they've even tried to measure sentience in dogs via MRI scans etc., and supposedly couldn't distinguish them from humans).
Hmm. Without actively taking steps to widen the net and account for those irrelevant attributes, those attributes instead act as the first filter in your hiring process. IMO the process that ignores this is the one that’s not looking for the best developers. In order to make those attributes truly irrelevant we have to correct for the massive, obvious influence they have in human decision making (on both sides of the hiring equation). Accepting the status quo is actually letting those attributes rob you of good hires.
" In order to make those attributes truly irrelevant we have to correct for the massive, obvious influence they have in human decision making"
Which is exactly what blind-hiring / blind-auditions are meant to do. The problem, from my perspective having tried to implement it, is that it requires a real cognitive leap from people who are most successful for their social acumen and don't regularly make decisions based on hard evidence. Often those kinds of people have a title with Manager or Officer in it.
What’s the correct breakdown of software developers by race / gender / etc? I think if we can answer that question we can then accurately assess whether or not those attributes are acting as filters. However, without knowing those breakdowns, taking any action to correct those breakdowns is wrong. We can’t accurately say there is a problem.
The best we can do is remove any bias we find at an individual level (ie a blind hiring process). Looking at aggregate level metrics is nearly useless because we cannot accurately account for all the factors contributing to them.
> However, without knowing those breakdowns, taking any action to correct those breakdowns is wrong. We can’t accurately say there is a problem.
You’re arguing a different point. I’m talking about the filter on candidates, not eventual hires. Though of course, people who don’t become candidates will never become hires so there’s a relationship there.
It would be easy to see whether doing X to increase candidate diversity (opening up the filter) led to more diverse hires. Especially combined with blind hiring process or whatever at the individual level. But the individual work alone is not worth much if the candidate pool itself is too limited.
If changing the candidate pool leads to more diverse candidates, and then employees as a whole end up more diverse, it would follow that some of those people brought in at the candidate level out-competed their peers & that the overall standard of hires has increased, or at least is no different.
This is not about a target ratio of employed people. Though I think observations about such ratios compared to the general population and to other similar companies can be informative.
To get at the point you thought I was making, though… how I feel about quotas is: they might be an ok temporary measure, especially if managed gently, not “must be a woman” for X role, cause yikes. Hiring is capricious a lot of the time anyway, especially at the margin when there is more than one acceptable candidate and all have different backgrounds and experience. All sorts of stuff will come into play. Good people will still be in demand regardless of their race. But if you lose out on a couple of roles because you aren’t from an underrepresented group and another acceptable candidate was, who actually cares? If it wasn’t that, it would be some other nonsense, like the other candidate being the same race as you but supporting the same football team as the interviewer, or your interview taking place before lunch and the other person’s after.
Since it’s already capricious and full of coin flips, I don’t care at all that sometimes it’s capricious for a relatively good reason like giving somebody else a shot. Fine.
I disagree completely that we have to be 100% certain about a problem before we take action. We just have to know that, more likely than not, there is a problem, and take measured, cautious steps to address it. To me it would be an extraordinary, mind-boggling coincidence if it turned out that the status quo was working perfectly to get the best candidates hired.
It's other on-device processing that's planned in iOS 15 (such as scanning my local files without consent to report on me to the police) which is going to keep me from using devices with it.
Rosa Parks did not act in the heat of the moment when she refused to give up her seat. It was a carefully planned moment with multiple parties involved in the planning, intending to have a specific effect and executed calmly. Being calm was an asset.
I didn’t downvote, but … this is just some claims and assertions with no reason to believe them. It’s fine that you think these things, I guess, but it’s not a useful comment cause it’s just speculation, and frankly it sounds like you have a chip on your shoulder about previous experiences with inspectors. Are there really “thousands of daily reports” like this, or is that just a feeling you have? Is the threshold of “urgently fix this for safety reasons” too low due to bad incentives? I don’t know, and though you think that, you haven’t backed it up, so the comment is low quality and the downvotes are helpful imo.
But it may provide other benefits like helping others & being perceived as an expert, which can lead to paid speaking opportunities, workshops, or even just another thing on your resume to help you appear worth a higher salary next time you look for a job.
Book sales themselves are not where your income comes from as an author, especially in a niche. It’s all the other stuff. If your book sells a lot that’s awesome, but that isn’t a realistic plan.
Web components have been a possible build target for Vue for a while. It seems totally possible to set up a Vue project like you’ve suggested, use all the Vue conventions & tooling, and output web components that you then use a la carte. Maybe I’m missing something about what you are looking for, but more details are here: https://vuejsdevelopers.com/2018/05/21/vue-js-web-component/
And I’m sure the same can be done in other frameworks.
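For what it’s worth, newer Vue CLI versions have this built in as the `wc` build target, which wraps a single-file component as a custom element. A minimal sketch, assuming a Vue CLI project; the component name and file path here are placeholders:

```shell
# Build src/MyWidget.vue as a web component registered as <my-widget>.
# Assumes a Vue CLI project with vue-cli-service available.
vue-cli-service build --target wc --name my-widget ./src/MyWidget.vue

# The dist/ bundle can then be used a la carte on any page:
#   <script src="https://unpkg.com/vue"></script>
#   <script src="./dist/my-widget.min.js"></script>
#   <my-widget></my-widget>
```

Note that the `wc` target treats Vue itself as an external, so the host page loads Vue once and each component bundle stays small.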