
I DON'T see this as transparency. There is ZERO mention of the burrito in the post-mortem document itself.

0/10, get it right the first time, folks. (/s)


We're in a Live Fast Die Young karma world. If you can't get a TikTok ready within 2 minutes of the post-mortem drop, you might as well quit and become a barista instead.

You, too, are practicing and advocating for a philosophy here.

Also, the lack of objectivity in the universe doesn't necessarily mean that nihilism is the ONLY way to go. Existentialism, for example, doesn't accept an objective reality either, and folks have found ways to make morality (and even religious faith) fully compatible within that framework.

Obviously, it's not good to delve into metaphysical speculation, as it often leads to junk conclusions written by people who don't have the credentials to account for what the actual science (OR the actual philosophy) says.

But I do wonder what it would be like if modern physicists were more willing to pair up with modern philosophers once in a while. I would very much love to see a collaboration between the two fields to explore what a subjective universe really MEANS to us as both a species and as moral beings in that universe.

I, very much, would love to see what some of these implications are, as written out by the folks who actually understand the science. Even if there's no true consensus among them, just learning what the different possibilities might be could be very enlightening.


They already censored the not-porn (but still NSFW) photos. I don't think it would've made as much of a difference censoring the porn photos as well, especially when trying to convince people that they're not just creating click-bait.

A lot of folks use TikTok on a regular basis. This article is the one making the claim that's far and away different from what most folks experience on the platform.

Since I'm not about to go on there, pretend to be a 13-year old boy, and start seeking out the porn myself, I really need to see some evidence that this is a thing that is actually possible before I start picking out a pitchfork.


> After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex.

I (40m) don't think I've ever seen literal flashing or literal porn on TikTok, and my algorithm does like to throw in thirst content between my usual hobby stuff.

Are they making the claim that showing porn is a normal behavior for TikTok's algorithm overall, or are they saying that this is something that is specifically pervasive with child accounts?


Does TikTok direct what you see based on what other accounts you interact with are interested in? I would expect teenagers to have a different interest profile than your average 40 year old. I would expect algorithms to more or less unwittingly direct you to the kind of stuff your peers were interested in.


TikTok’s recommendations are based off as much info as it can get, really.

Approximate location, age, mobile OS/browser, your contacts, which TikTok links you open, who generated the links you open, TikTok search history, how long it takes you to swipe to the next video on the for you page, etc.

I don’t think it's really possible to say what TikTok’s algorithm does “naturally”. There’s so many influencing factors to it. (Beyond the promoted posts and ads which people pay TikTok to put in your face)

If you sign up to TikTok with an Android and tell it you’re 16, you’re gonna get recommended what the other 16 year olds with Androids in your nearby area (based on IP address) are watching.


Yeah, I've not seen any actual porn either. Just thirst traps.

It might be because I always block anyone with an OF link in their bio, but then that policy doesn't work on Insta.


You think thirst traps are okay for kids? If we rewind time, the Girls Gone Wild commercials weren't supposed to be even remotely possible on certain channels.

We’re a derelict society that has become numb, “it’s just a thirst trap”.

We’re in the later innings of a hyper-sexualized society.

Why it’s bad:

1) You shift male puberty into overdrive

2) You continue warping young female concepts of lewdness and body image, effectively “undefining” it (lewdness? What is lewdness?).

3) You also continue warping male concepts of body image


When did anyone say it was okay? You’re reading something into the comment that isn’t there.


I read into this sentence:

"Just thirst traps" (and you see the word I read into).

Right. No, I get it. Listen, we collectively have the issue of not recognizing the significance of things. Nothing personal.


No, I don't think Thirst Traps are necessarily OK, but it's a fine line, and given current gym/athletic wear it's not always easy to discern what is actually a genuine (say) workout video vs a trap.


Kids as in under 18 teenagers? Yeah sure, why not?


Because parasocial relationships with ewhores aren't healthy, particularly at a stage in their life when they should be forming real relationships with their peers.


Scrolling through attractive women (generally the thirst-traps are women) doesn't imply forming a parasocial relationship. I agree that parasocial relationships are bad, but this is independent of them being thirst-traps. Internet thirst-traps are just the modern equivalent of sneaking a look at a playboy mag or a lingerie catalogue. Nothing inherently damaging about it. The scale of modern social media can make otherwise innocuous stimuli damaging, but this is also independent of it being content of sexy women.


“Nothing inherently damaging about it.”

I hear this claim from the pornsick but I’d like to see all the studies backing it up.


It's important to distinguish women looking sexy (generally not naked) from porn. Somehow the distinctions get blurred in these discussions.


You are the one claiming there's a problem, and you are the one (presumably) demanding legal and other action to deal with that "problem". That means that any burden of proof is 1000 percent on you.

... and before you haul out crap from BYU or whatever, be aware that some of us have actually read that work and know how poor it is.


Parasocial relationships are a different topic than pornography.

Are you saying that the intersection is uniquely bad? In either case limits to content made in an effort to minimize parasocial relationships cut across very different lines than if the goal is minimizing access to porn.


Parasocial relationships and getting sucked in by thirst traps on social media are inseparable.


I have a dumb question, but how do ewhores capitalize on this? Do they have teens running captcha farms or something?


They farm simps.


Can you explain how that's profitable when these people don't even have jobs? I believe you, I just don't understand how it works.


The people get shown advertising, and the advertisers are the ones paying money.


[flagged]


These people come out of the woodwork, when it comes to defending porn. It’s their whole identity. And unfortunately the tech scene is infested with these types.


It's goalpost shifting. If the concern is parasocial relationships with content creators formed with pornography as the hook, then pornographic content where the actors aren't cultivating or interacting with a social media follower base should be better, right?


Do I care when both are dangerously stupid to hook kids on?


Then support them. Too often you show up to scream "think of the children" without actually citing any research or empirical damage. If you refuse to argue in good faith and don't want to be told you're wrong, voting is the only thing you're capable of doing. Don't tell us about it, vote.

Everyone knows those laws do nothing, though; go look at the countries that pass them. Kids share pornography P2P, they burn porno to CDs and VHS tapes and bring in pornographic magazines to school. They AirDrop pornographic contents to their friends and visit websites with pornography on them too. Worst-case scenario, they create a secondary market for illegal pornography because they'll be punished regardless - which quickly becomes a vehicle for creating CSAM and other truly reprehensible materials.

They don't do it because they're misogynistic, mentally vulnerable or lack perspective - they do it because they're horny. Every kid who aspires to be an adult inherently exists on a collision course with sexual autonomy, most people realize it during puberty. If you frustrate their process of interacting with adulthood, you get socially stunted creeps who can't respond to adult concepts.


> You think thirst traps are okay for kids?

Yes.

Promoting vaping, not so much.

> We’re in the later innings of a hyper-sexualized society.

O NOES!

I mean, that's a ridiculous thing to say, but if it were true, so what?


No one can scroll through my shit and get vape content. But granted, I am a walking vape ad with the name, touché.

Two wrongs don’t make a right. I regret this name honestly, as there are a lot of high school and college aged people here.


You can email [email protected] and request a name change if you don't like the connotations of your current name. Dan and Tom will rename accounts for people.


This content isn't as overt as it may seem; maybe you did come across it and just didn't notice the flashing. Those "in the know" (generally younger people whose friends told them about flashtok) know what to look for.


Also: kids click on links adults ignore without thinking. Our brains have built-in filters for avoiding content we don't want; for kids everything is novel.


Teens are also way more excited than adults at seeing this stuff, so we can expect engagement to increase the dirtier the content shown gets.


I wonder when this study happened? FWIW, there was some pretty intense bombing of full-on nudity content to TikTok a month or two ago--it all looked like very automated bot accounts that were suddenly posting scenes with fully nude content cut out of movies--that I saw a number of people surprised were showing up in their feeds. It felt... weaponized? (And it did not last long at all, FWIW: TikTok figured it out. But it was intense and... confusing?)


>Are they making the claim that showing porn is a normal behavior for TikTok's algorithm overall, or are they saying that this is something that is specifically pervasive with child accounts?

The latter is what they tested, but they didn't say it was specifically pervasive.

You quote the article, so it seems like you looked at it, but the questions you are curious/skeptical about are things they talked about in the opening paragraphs. It's fine to be skeptical, but they explain their methodology, and it is different from the experience you are relying on:

>Global Witness set up fake accounts using a 13-year-old’s birth date and turned on the video app’s “restricted mode”, which limits exposure to “sexually suggestive” content.

>Researchers found TikTok suggested sexualised and explicit search terms to seven test accounts that were created on clean phones with no search history.

>The terms suggested under the “you may like” feature included “very very rude skimpy outfits” and “very rude babes” – and then escalated to terms such as “hardcore pawn [sic] clips”. For three of the accounts the sexualised searches were suggested immediately.

>After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex. Global Witness said the content attempted to evade moderation, usually by showing the clip within an innocuous picture or video. For one account the process took two clicks after logging on: one click on the search bar and then one on the suggested search.


Yeah, I (50m) have never encountered literal porn on TikTok. Suggestive stuff, thirst traps, sex ed, sex jokes, yes, but no literal porn or even nudity.


They are saying that they can find some of that content when they use relatively sophisticated techniques to intentionally try to find it.

They are in the business of whipping up outrage, and should not be given any oxygen.


> relatively sophisticated techniques

Clicking on thirst trap videos?


... and searching for obfuscated strings that nobody would even know about unless they were actively looking.


They told the service they are a child. There should be zero porn available under any search term. Simple as.


You write the code for that and get back to me. Remember it has to work when the users are actively adversarial.

You will of course have wasted your time on a non-problem, but at least maybe you'll have an appreciation for how hard a non-problem it is.


    // a child account sees only content a moderator has explicitly approved for children
    def allowed(u: User, c: Content)(implicit j: Jurisdiction) = !u.child || c.moderatorApprovedForChildren
You could make that more complicated where moderators tag the content and then you apply filters based on what children are allowed to view in a jurisdiction, or you could be conservative in only allowing non-controversial stuff for kids to avoid that.

Obviously different jurisdictions are increasingly disagreeing with it being a non-problem.
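
To make the jurisdiction-filter variant above concrete, here's a minimal self-contained sketch; the rating categories, field names, and per-jurisdiction threshold are my own illustrative assumptions, not anything TikTok or any other platform actually uses:

    // Hypothetical sketch: moderators assign each clip a rating, and each
    // jurisdiction sets the most permissive rating a child account may see.
    object ChildSafety {
      sealed trait Rating { def level: Int }
      case object ForChildren extends Rating { val level = 0 }
      case object General     extends Rating { val level = 1 }
      case object Suggestive  extends Rating { val level = 2 }
      case object Adult       extends Rating { val level = 3 }
      case class User(child: Boolean)
      case class Content(rating: Rating)
      case class Jurisdiction(maxChildLevel: Int) // a strict jurisdiction might allow only level 0
      def allowed(u: User, c: Content)(implicit j: Jurisdiction): Boolean =
        !u.child || c.rating.level <= j.maxChildLevel
      def main(args: Array[String]): Unit = {
        implicit val strict: Jurisdiction = Jurisdiction(maxChildLevel = 1)
        println(allowed(User(child = true), Content(Suggestive)))  // false
        println(allowed(User(child = false), Content(Adult)))      // true
      }
    }

The check itself stays trivial; the hard part is producing a trustworthy rating for every clip in the first place, which is what the reply below pushes on.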


I regret to inform you that there's a bug in your code.

Specifically, it relies on the "moderatorApprovedForChildren" flag, which is sometimes sent incorrectly because of glitches in the system that sets that flag. Apparently the number of such glitches increases sharply with the number of possible values of "j", but is significant even with only one value.

Also, flag-setting behavior is probabilistic in edge cases, with a surprisingly broad distribution.

You are therefore not meeting your "zero porn" spec, while at the same time blocking a nonzero amount of non-porn.

Don't bother to fix the bug, though; given the very large cost of the flag-setting system, the company has gone out of business and cancelled your project.

> Obviously different jurisdictions are increasingly disagreeing with it being a non-problem.

Different jurisdictions are doing a lot of stupid things. You get that in a moral panic. Doesn't make them less stupid.


Weirdly enough, other companies manage to not accidentally sell/give porn to kids just fine. I see no issue with holding large media companies like TikTok, Meta, Google, etc. to account just like we would if someone put hardcore porn on the Disney channel. This is only a problem when you want to be a massive company that operates in every market while not taking any responsibility for what you do/not hiring the necessary staff to manage it.

Similarly, if your alcohol/weed store sells to children and you get caught, you can be criminally prosecuted. This is well-trodden ground. Companies worth trillions can be expected to do what everyone else manages to do.

Same deal with malicious ads. These companies absolutely have the resources to check who they're doing business with. They choose not to.

Banks also don't get to just not bother with reconciling accounts because it's hard to check if the numbers add up, and yeah bugs can result in government action.


Uh-huh. User-generated content is exactly like the Disney channel.

Let's keep using the TikTok example. According to https://arxiv.org/abs/2504.13279 , TikTok receives about 176 years of video per day. That's 64,240 days per day, or 1,541,760 hours per day. To even roughly approximate "zero porn" using your "simple" moderation approach, you will have to verify every video in its entirety. Otherwise people will put porn after or in amongst decoy content.

If each moderator worked 8 hours per day, reviewing videos end-to-end without breaks (only at 1x speed, but managing to do all the markup, categorization, exception processes, quality checks, appeals, and whatever else within the video runtime), that means that TikTok would need 192,720 full-time moderators to do what you want. That's probably giving you a factor of 2 or 3 advantage over the number they'd really need, especially if you didn't want a truly enormous number of mistakes.

The moderators in this sweatshop are skilled laborers. To achieve what you casually demand, they'd have to be fluent in the local languages and cultures of the videos they're moderating (actually, since you talk about "jurisdictions", maybe they have to also be what amounts to lawyers). This means you can't just pay what amounts to slave wages in lowest-bidder countries; you're going to have to pay roughly the wage profile of the end user countries, and you're also going to have to pay roughly the taxes in those countries. Still, suppose you somehow manage to get away with paying $10/hour for moderation, with a 25 percent burden for a net of $12.50/hour.

Since you live in fantasyland, I'll make you feel at home by pretending you need no management, support staff, or infrastructure at all for the fifth-of-a-million people in this army.

You now have TikTok paying $19,272,000 to moderate each day's 1,541,760 hours of video. TikTok operates 365 days a year, and anyway the 1,541,760 is an average. So the annual wage cost is $7,034,280,000.

TikTok financials aren't reported separate from the rest of ByteDance, but for whatever it's worth, some random analyst (https://www.businessofapps.com/data/tik-tok-statistics/) estimates revenue at about $23B per year, so you're asking for about 30 percent of gross revenue. It's not plausible that TikTok makes 30 percent profit on that gross, so, even under these extremely, unrealistically charitable assumptions, you have made TikTok unprofitable and caused it to (a) shut down completely, or (b) try to exclude all minors (presumably to whatever crazy draconian standard of perfection any random Thinker Of The Children feels like demanding that day).
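
For what it's worth, here is that back-of-the-envelope arithmetic as a small script; the 8-hour shifts, the $12.50/hour burdened wage, and the ~$23B revenue figure are the assumptions stated above, not audited numbers:

    // Rough moderation-cost estimate using the assumptions in this comment.
    object ModerationCost extends App {
      val uploadYearsPerDay = 176.0                             // from the arXiv paper cited above
      val hoursPerDay       = uploadYearsPerDay * 365 * 24      // ~1,541,760 hours of video per day
      val moderators        = hoursPerDay / 8                   // 1x-speed review, 8h shifts -> ~192,720 people
      val burdenedWage      = 12.50                             // assumed $10/hour plus a 25 percent burden
      val annualWageCost    = hoursPerDay * burdenedWage * 365  // ~$7.03B per year
      val revenueShare      = annualWageCost / 23e9             // vs. the ~$23B revenue estimate
      println(f"moderators needed: ${moderators}%,.0f")
      println(f"annual wage cost:  $$${annualWageCost}%,.0f")
      println(f"share of revenue:  ${revenueShare * 100}%.1f%%")
    }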

No, TikTok can't just raise advertising rates or whatever. If it could get more, it would already be charging more.

That's all probably about typical for any UGC platform. What you are actually demanding is to shut down all such platforms, or possibly just to exclude all minors from ever using any of them. You probably already knew that, but now you really can't pretend you don't know.

Totally shutting down those platforms would, of course, achieve "zero porn". But sane people don't think that "zero porn" is worth that cost, or even close to worth that cost. Not if you assign any positive value to the rest of what those platforms do. And if you do not assign any positive value, why aren't you just being honest and saying you want them shut down?


If they want to centralize and provide recommendations for public video clips posted by anyone in the entire world but can't actually economically do that in a responsible way, then sure, I don't have a problem with them being fined into oblivion. I don't see much need for businesses with hundreds of millions of customers to exist (and I see plenty of downsides to allowing one company/platform to be that large, especially a centralized communications platform), and if they can't actually handle that scale, then okay. Maybe their whole premise was a stupid idea. Or maybe they'll need to charge users to cover costs. Or ban children.


Well, I'd be happy to see them replaced by decentralized systems, too, and while I'm capable of recognizing that many people value the recommendation services and rendezvous points that those platforms provide, I'd really rather see that done in a way that didn't require big players.

But I don't know why you think that'd be an improvement.

Do you actually think that a fully decentralized, zero profit, no-big-players system for posting and discovering short media (or any kind of media) would put less "sexualized content" in front of teenagers (or anybody else)?

Moderation in such systems is usually opt-in, both because it fits better with the obvious architectures, and because the people who tend to build software like that tend to be pretty fanatical about user choice. So, if they choose to, kids are definitely going to be able to see pretty much anything that the system allows to exist at all... which will probably include tons of stuff that's really hard to find on, say, TikTok.

As for "recommending", I suspect any system that succeeded in putting the content users actually wanted in front of them would give teenagers, and indeed actual children, more "sexualized" content. The companies you're railing against are, in fact, trying to tamp that down, whether or not you believe it, and whether or not you think they're doing enough. A decentralized protocol does not care and will do exactly nothing to disadvantage that content.

Nobody really knows how to do decentralized recommendations (without them being gamed into uselessness), but if somebody did figure out a good way to do it, I'd expect it to be worse, from your point of view, than the platforms. So would a "pull-based" system that relied on search or graph following or communities of interest or whatever.

For a person with the priorities you seem to have, I can't see how decentralized systems would be anything but "out of the frying pan, and into the fire".


Decentralized systems like the web already have a solution: lots of jurisdictions are making it illegal to provide adult content without age gating it. The point is for people to assume the same set of liabilities they would in person, instead of the status quo where the web magically means you can do whatever. Then you just set up filters at home (or have ISPs offer filtering) to block the other jurisdictions. e.g. I lose nothing from simply blocking Russia altogether on my router.


Are you making that up, or do you have a source?

> Researchers found TikTok suggested sexualised and explicit search terms to seven test accounts that were created on clean phones with no search history.


I hate to direct traffic to people like that, but, you know, how about their actual "study"? I realize that the "journalists" at the Guardian aren't willing to provide the actual source link, but it's not hard to find.

https://globalwitness.org/en/campaigns/digital-threats/tikto...

Their methodology involves searching for suggested terms. They find the most outrage-inducing or outrage-adjacent terms offered to them at each step, and then iterate. They thereby discover, and search for, obfuscated terms being used by "the community" to describe the content they are desperately seeking.

They also find a lot of bullshit like the names of non-porn TV shows that they're too out of touch to recognize and too lazy to look up, and use those names to gin up more outrage, but that's a different matter.

This is, of course, all in the service of whipping up a moral panic over something that doesn't fucking matter to begin with.


Thank you for linking the source material, unfortunately it badly contradicts you. It clearly shows that the _very first_ list of ten suggested search terms contained (pretty heavily) sexualised suggestions.


I suppose some of that stuff could reasonably be called "sexualized". Pornographic? No. A problem? Not unless you have really weird hangups.

Here's a unified list of all the "very first list" suggestions they say they got. I took these from their appendix, alphabetized them, and coalesced duplicates. Readers can make their own decisions about whether these justify hauling out the fainting couch.

+ Adults

+ Adults on TikTok (2x)

+ Airfryer recipes

+ Bikini Pics (2x)

+ Buffalo chicken recipe

+ Chloe Kelly leg up before penalty

+ cost of living payments

+ Dejon getting dumped

+ DWP confirm £1,350

+ Easy sweet potato recipes

+ Eminem tribute to ozzy

+ Fiji Passed Away

+ Gabriela Dance Trend

+ Hannah Hampton shines at women’s eu [truncated]

+ Hardcore pawn clips (2x)

+ Has Ozzy really died

+ Here We Go Series 3 Premieres on BBC

+ HOW TO GET FOOTBALL BLOSSOM IN…

+ ID verification on X

+ Information on July 28,2.,,,

+ Jet2 holiday meme

+ Kelly Osbourne shared last video with [truncated]

+ Lamboughini

+ luxury girl

+ Nicki Minaj pose gone wrong

+ outfits

+ Ozzy Funeral in Birmingham

+ pakistani lesbian couple in bradford

+ revenge love ep 13 underwater

+ Rude pics models (2x)

+ Stock Market

+ Sydney Sweeney allegations

+ TikTok Late Night For

+ TIKTOK SHOP

+ TikTok Shop in UK

+ TIKTOK SHOP UK

+ Tornado in UK 2025

+ Tsunami wave footage 2025

+ Unshaven girl (3x)

+ Very rude babes (3x)

+ very very rude skimpy

+ woman kissing her man while washing his [truncated] (2x)


Agreed, I've never even seen boobs on TikTok...


A soccer mom I know shared that she once tried TikTok. Within seconds of installing the app, the algorithm was showing nsfw content. She uninstalled it.

I assume that the offending content was popular but hadn’t been flagged yet and that the algorithm was just measuring her interest in a trending theme; it seems like it would be bad for business to intentionally run off mainstream users like that.


After reading some of the article, it seems to me that they're saying that on a restricted account with the birthday of a 13-year-old, using the suggested search terms TikTok shows and a few clicks, you can see actual porn.


I'm on an unrestricted account and I can't find actual porn. Sounds like this article is rage bait, claiming women in swim wear as porn.


Really? I've signed up to bluesky and tiktok and on both have seen literal porn extremely early without engaging directly (such as liking or responding, speed of scrolling could be something).


All of these apps are 100% using your scroll speed/how long you spend engaging with the content as a data point. After all, "time spent engaging with the content" is the revenue driver.


I was once brought in to a Fortune500 company to teach basic ENTRY LEVEL web development to a room full of supposedly "highly educated" H-1B Software Engineers.

Much of my presentation included things that most of my unemployed American colleagues, all of whom were actively looking for work, already knew how to do implicitly. Because it literally was just basic, "This is how flexbox works"-type of stuff.

Maybe the H-1B program is a great program for hospitals. For tech, it is 100% being used to import cheap, disposable labor in a way that harms U.S. citizens economically.


H1B workers are supposed to be people with qualifications that are in short supply in the United States. The unspoken part is that the "qualification" employers are so desperately searching for is usually the willingness to work for peanuts.


Isn't H-1B contingent on compensation in line with the local median for the role?


It is contingent on you documenting your going through the motions of pretending to keep compensation in line with "the local median".


After graduating college I joined a company that paid generally below-market for everyone and had a significant number of H-1B employees and contractors.

The benefits were legendary but the pay was 20-30% lower than what was around.

I don’t have evidence of wrongdoing but I’ve occasionally wondered if it was some kind of scheme.


“Legendary” benefits (especially healthcare) are extremely expensive. It’s plausible that the average total compensation was the same, or even more, than other companies. The trick is that not everyone gets the same value from those benefits.


The compensation is only measured in terms of salary (and maybe bonus).

Stock compensation is completely ignored. Since stock compensation can be a large fraction if not the majority of the compensation, this means that many H1-Bs may be underpaid compared to their coworkers, while appearing to the government to be the highest paid in that company and job role.

The other ignored aspect is effective hourly pay. Software engineers are nearly always exempt employees, so they don't receive hourly pay. But a manager can demand more from H1Bs, even if it would mean work during nights or weekends, and there's little the H1B can do. Local employees can more easily change jobs if that happens, and moreover, the threat that they can change jobs disincentivizes such abuse.


It's a bit of a catch-22 because if you add enough lower compensated employees you shift the local median lower. If "everyone" in the local area is hiring more cheaper H-1Bs that gives you a chance to hire even more H-1Bs for even cheaper. Averages can be a fun game that way.

Even if you try to pin it to the median that does not include H-1Bs, you still are letting the market compete on labor cost and that competition can still affect the local median. Companies decide all the time that they could hire, for example, 2 H-1Bs for the cost of one "senior" local developer, encouraging that local developer to maybe only ask for 1.5x "an H-1B" to remain competitive in that market. Iterate that enough in hiring decisions and companies still have more control of that local median than labor does.

I don't know if there is a "fair" way to set the cost of labor for an H-1B, but "local median" or any other average-based math is probably not it.


This has been "proven wrong" by geniuses pointing out that Americans who work in the same jobs as the H1Bs are also making peanuts.


Then restricting the supply of workers ready to work for peanuts will force companies to raise their salaries to hire.


Or if the job is an outsourceable one that can be provided as a service then they will outsource it to a company overseas and still pay peanuts. The only reason they'll raise wages is if they have to, aka the service cannot be done elsewhere or automated.


If your company doesn't need domain experts, or doesn't change to the point that these people can be remote... you are a zombie company and will be replaced by someone that does utilize domain experts and dynamically changes all the time with conditions. Even with just a factory: when I moved from dev to IT, by getting my people to understand our users (going out to the floor and sitting with them), we were able to greatly improve efficiency in a way no remote IT could.


They already are... Generally insourcing is to reduce the friction of doing so, because application managers and product owners don't want to relocate to the countries they're doing the outsourcing with.


A lot of jobs require being on-premises or are better done on-premises, which is why they hire H1-Bs. Outsourcing is already cheaper, by far, especially if you want to go to the third world.


A lot of people really hate RTO and love WFH


This. The problem for H1B advocates is most of us here reached our conclusions AFTER exposure to outsourcing/consulting and what H1-Bs got us/the new people we had to manage. Lots of us were also privy to managements' reasoning (cutting costs/your team is the most expensive and we don't want to pay that) which don't align with 'H1Bs are paid the same'.


> For tech, it is 100% being used to import cheap, disposable labor in a way that harms U.S. citizens economically.

I'd argue with the 100% - we all know the companies that do it. They get about half of H1B visas. So 50% :)

The blanket $100K (instead of, say, tiering it by raising the fee $50K for each next 20K-visa tier, with the $250K-fee visas not subject to the cap; if only Tramp knew anything about business, and specifically price differentiation :) would definitely revive interest in outsourcing offshore.

Managing AI agents has some similarity to managing offshore teams. This time the offshore teams will be using AI agents. May probably lead to much higher performance/output.

Being rate limited, I'll answer the commenter below here: offshore teams are naturally assigned well-defined chunks of work, at least in well-managed situations. AI agents are also very suitable for that.


Ahh, so it's as simple as having a well-managed situation. Easy enough to outsource then. LETS GOOOO!


> This time the offshore teams will be using AI agents. May probably lead to much higher performance.

What do you mean exactly by that? I do not follow...


My equally anecdotal professional experience has been the exact opposite and certainly influences my view on this topic.


> Maybe the H-1B program is a great program for hospitals. For tech, it is 100% being used to import cheap, disposable labor in a way that harms U.S. citizens economically

And yet, Apple, Google, Nvidia, Meta and Amazon would never be where they are without folks who are or who started on H-1B. A ton of their senior staff were once 20-somethings hired on H-1B.

Crack down on the abuse by outsourcing companies, let actual tech workers who are (or will be) good at their jobs come here; it's obvious policy. The US has benefited immensely from that brain drain.


It's because they're using AI for the interviews, too.


It was 3 people who were replaced for the making of a list.

The number 50 was what Doctorow presumed was the entirety of the department that could potentially have been replaced by AI, of which the making of this list had been only one of that department's overall tasks.

At 3 interns per article, having 30 interns working on 10 simultaneous articles at any given time seems like reasonable output for an online zine.


Okay, but my boss still demands I give him a metric. I'm not allowed to tell him, "Just trust me bro" when I'm asked how much our security has improved over the past sprint. I'm supposed to give hard numbers, and the OP at least offers an alternative for that.


Pick something that resembles a vuln→patch interval, not just a context-less number that means they're popular, audited, or reviewing their OWN code all the time.

Instances where 0-days can't be used in isolation are a perfect example of where nontechnical people absolutely need to "just trust" someone to triage, and perform threat modeling for them.


The closest I can think of right now is the "Before & After" category from Wheel of Fortune. It relies on there being a single word that ends one phrase and begins another.

But that doesn't bring in the idea of this word being _cutting_.

