atleta's comments | Hacker News

For me the correct URL seems to be: https://www.facebook.com/adpreferences/ad_settings/?section=... instead of what you have in the gist. (But the script throws errors on the awaits.)


Neither the URL in the gist nor this one loads for me. The one in the gist seems to freeze trying to load some modal, on fully updated Chrome.


Can you post screenshots of your network tab so I can calculate the intervention actors?


Well I don't even get a chance to run the script.


I've updated the script, try again! :D


I don't know how well it works in practice. The other day I bought something from a local webshop (bike parts) on my laptop. The next day I'm seeing an ad from the same webshop on my Facebook feed on my mobile. Yes, it could be coincidental though I do see a lot of bike-related ads and practically never this company. (Even though I am a returning, if not very frequent customer of theirs.)


Some companies upload their customer data to FB because Zuck promised them that they could send ads to their existing customers.

I made the mistake of having the same email address for Facebook as for registering to random online shops...


I use a separate email address basically everywhere so this can't be the reason. (I don't even use my main email address for facebook.)


I'm not sure that was a problem. Let's not forget that smartphones, as we know them, weren't really invented yet back then. So there wasn't really a common form factor and feature package that every customer was looking for.

Sure, there were Symbian phones that could functionally do almost anything smartphones can do now (and do more than the first iPhone), but those weren't for everyone, and those didn't use touch screens, so there were multiple form factors. Like the full keyboard communicators (9210, 9300/9500), the Blackberry clone (E61, I think), the slide keyboard (7650?) and then all the non-Symbian phones (S40 OS, IIRC). And, of course, cameras were new and shitty, so not every phone had one.

Now this could have caused a problem in itself, and what the article says about the organization could also cause problems, but (I keep saying this when this topic comes up) the real problem was that the Nokia management was too complacent/cowardly and didn't dare to switch away from Symbian. Especially since they had bought out Ericsson and Sony (again, IIRC), their former partners in the Symbian consortium, in ~2004/5.

There were experiments with a linux based phone OS around that time. They created the Nokia 770 "internet tablet" [1], which was this PDA-like touch screen device with a landscape screen layout, a pen, and a removable front cover. Obviously it was an experiment (and later followed by the 810, then the 900, the latter being a phone). However, no one in management was brave enough to give a linux phone a go, let alone commit to a strategy of switching over to linux. Symbian phones were selling great, Nokia was the market leader, and you can't really do better than that...

I remember, at one point, one team in the Helsinki office of NRC (Nokia Research Center) came up with the idea of creating a "unified architecture" (called the "Grand Unified Architecture") where they would create a uniform platform around the 3 operating systems: a linux based one, Symbian S60 and the (non-Symbian) S40. The genius idea was that they'd create a HAL (hardware abstraction layer), then above that would sit one of the 3 OSs, and above those a uniform API that could be used by all app developers. This would have been a great strategy to side-step an actual decision, but other than that it didn't make any sense, really. (Maybe you could argue back then that the S40 hardware was not capable of running linux, but there was no excuse for trying to keep both Symbian and linux while hiding them below a uniform API.)

So the switch to linux never happened, and Symbian was a pain in the ass to develop for. Just concatenating two strings took several lines of code in their C++-based API, which didn't even look like actual C++. And this made developing in-house software slow and made 3rd party software pretty scarce.

Nokia also had an aversion to touch screens. One of the reasons must have been that back then only resistive touch screens were available (I think the original iPhone was the first phone with a capacitive one, i.e. one that was an actual touch screen and not a press/push screen). The other reason must have been Symbian (and the S60 skin), which was really not designed for touch screens and was hard to develop for.

So Nokia just continued to enjoy being the market leader, with the management not taking the risk of trying to switch direction. And then the iPhone came, and then Android came (which, after seeing an iPhone demo, very quickly changed direction, because at first they thought they were competing with Blackberry, so their UI was similar to that, and maybe Symbian).

[1] https://en.wikipedia.org/wiki/Nokia_770_Internet_Tablet


The main issue they had was Symbian. Period. And not willing to let it go. It was f*&^d up before Elop. Years before him.


If they had concentrated on Series 60 they could easily have fixed Symbian over time. It was highly capable. For example, it ran a full Webkit (yes, really) browser before Apple shipped their own. That was genuinely useful. It even had code signing for apps in v3 (the version that shipped with the E61).


The problem was still Symbian under S60, if you like. Yes, it had code signing (which seemed like an unnecessary restriction at the time it was introduced) and a decent browser, and email that synced in the background (unlike in iPhone 1!) and background tasks in general, etc.

But developing for Symbian was convoluted, very painful and slow. And it slowed down Nokia itself, not to mention the 3rd party/external app developers. There was no reasonable way to fix Symbian, as these issues stemmed from the very foundations. One of them was memory management; another was probably cooperative multitasking and callbacks. But the memory management thing was all over the code (think string handling, so everywhere) and it made using existing software hard too. Linux would have been the way to go, one way or another. Sure, they would have had to rebuild most things for that platform, but e.g. webkit would have been a no-brainer and they could have used a lot of existing open source software.


GP talked about who causes the pollution, not where to punch (who to blame). Everybody has a little say in policies, at least in democracies. When you say that most of America is not wealthy enough to play this game, you basically admit this. And this is what is happening: people are not keen on making policies happen if those mean lowering their standard of living. But the thing is that, unfortunately, it is that very standard of living (i.e. consumption) that causes the problem.

You can punch up as much as you want; things are not going to change without people lowering their standards. And once we accept that, we can easily force politicians to do the right thing. The tragedy of the situation is that everybody is complicit and most people will not accept that they themselves are. Sure, everybody but them.

And I'm not saying this to blame anyone. Blaming doesn't make sense. Identifying the causes and what needs to change does.


The thing I don't understand about this strategy is that it shows, in itself, that there really is no money to be made here. I mean, it's a pretty obvious giveaway that:

1. they don't have the resources to build their own technology and probably never will

2. even if they did have, the best they could do is come up with something very similar to OpenAI's GPT, i.e. a (somewhat) generic AI model. This means that OpenAI can also easily compete with them.

All these companies are doing (if anything) is testing the market for OpenAI (or Google, or MS) for free.


The flaw in your assumption is that perfect tech or tech powerhouses win. I mean, sure when they do, they win big; but the endgame for b2b SaaS is mostly M&A, powered by sales, which is mostly down to c-suite relationships and perception of being one among the market leaders ("nobody ever got fired for buying IBM").

If you can move fast, deliver, expand, and raise money, there's a good chance the AI wrapper lands a nice exit and/or morphs into a tech behemoth. Those outcomes (among others), even if mutually exclusive, are equally possible.


So, if I understand you correctly, the business strategy for an AI wrapper company would be that they acquire customers quickly from a specific niche, build a name, while having very little custom technology and then get acquired by some of the larger players who do have the actual AI tech in-house. And, for them, it would be worth it for the brand/market/existing client base.

Assuming that the advances made in AI in the meantime don't eradicate the whole thing. I mean, say some company builds a personal assistant for managers to supplant secretaries, they become the go-to name and then Google buys them in 2-3-5 years. Unless Google's AI becomes so good in the meantime that you can just instruct it in 1-2 sentences to do this for you.


> get acquired by some of the larger players who do have the actual AI tech in-house. And, for them, it would be worth it for the brand/market/existing client base.

The key is, if the incumbents truly feel they can't breach whatever moat, M&A is the safer bet over agonizing over what-ifs (I am thinking "git wrapper" startups that saw plenty of competition from BigTech; remember Microsoft CodePlex, Google Code, AWS CodeCommit?). Given Meta's push and other prolific upstarts (OpenAI, Mistral), I don't believe access to SoTA AI itself (in the short term) will be a hindrance for product-based utility AI businesses (aka wrappers).


Well, it seems that it initially started with GPT4, but his costs were getting high, so he had to do something and had to do it quickly. Technically he could have written a few hundred responses himself, using the prompts from the users, while the site was still running on GPT4, but that could have been slow (expensive)/boring, etc.


I think the joke hints at the recent events when Sam Altman was fired (for a few days) and MS announced that they would take on the whole team, who had said they would quit in response to Sam being fired.


The word "more" is missing from the first half of the sentence: "more proportional".


> Generally speaking, "the value of a graph is proportional to the square of the number of edges"

No, what Metcalfe's law assumes is that the value of the graph is proportional to the number of edges (not their square). From that assumption and the fact that the graph is fully connected, it follows that the value is proportional to the square of the number of nodes. (Because you can have (n-1)*n/2 edges with n nodes in a fully connected graph.)

And hence the Reiser quote above is similar, but it emphasizes something else: it states what Metcalfe's law (I think) uses as a premise (or implicit claim), namely that the value is in the connections. Because it's not necessarily a fully connected graph.

Edit: originally I've given (n-1)*2/2 as the number of edges instead of (n-1)*n/2.
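To make the arithmetic concrete, here is a trivial Python sketch (not from the thread) of the edge count in a complete graph and its roughly quadratic growth:

```python
def max_edges(n):
    """Number of edges in a fully connected (complete) graph on n nodes."""
    return n * (n - 1) // 2

# The edge count grows roughly with the square of the node count,
# which is where the n^2 in Metcalfe's law comes from.
for n in (2, 3, 4, 10):
    print(n, "nodes ->", max_edges(n), "edges")
```

Doubling the nodes roughly quadruples the edges, matching the "square of the number of nodes" phrasing.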


Proportional to the number of _edges_. Edges are proportional to the square of the number of nodes, so the value of the network overall is proportional to the square of the nodes.

Think of it this way, for every new user added to the network:

* the new user is enriched proportional to the number of existing users

* every existing user is enriched by the 1 new user

This double-counting is what gives it the quadratic growth.


Yep. That's what I was saying too. The first line of my comment quotes the GP and I was correcting that.


So, the wikipedia first line is wrong, you're saying? "Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system"

Later in the article it seems like the original statement is more consistent with what you said, but everything I've ever learned in network theory and practice shows that it scales as a power of the number of edges, and the logarithm of the number of nodes.


The number of connected users is different from the number of user connections :)

The first is the number of nodes, the second edges.


Uhh, are you sure? I believe "connected users" refers to edges. Otherwise it would be stated as "users connected to the network".

It could explain my misunderstanding, and it also seems consistent with the explanation later in the article, but it's also completely the opposite of what we observe on the internet; for example, the value of the web is definitely not in its number of pages, but in the value and quality of the connections between the pages.


Two users in the network: A and B; one connection: AB. Three users in the network: A, B, and C; three connections: AB, AC, BC. Four users in the network: A, B, C, and D; six connections AB, AC, AD, BC, BD, CD.

Metcalfe's law says value increases as 1-3-6-... instead of 2-3-4.

In graph terms, users are nodes, connections are edges, and in a fully-connected graph edges are in order of the square of nodes.
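The enumeration above can be reproduced in a few lines of Python; `itertools.combinations` generates exactly these user pairs:

```python
from itertools import combinations

users = ["A", "B", "C", "D"]
for k in range(2, len(users) + 1):
    # All unordered pairs among the first k users.
    pairs = ["".join(p) for p in combinations(users[:k], 2)]
    print(k, "users ->", len(pairs), "connections:", pairs)
```

The connection counts come out as 1, 3, 6, i.e. the 1-3-6-... growth described above.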


Yes, I see I had completely misread and misunderstood the original law.

But ethernet networks aren't fully connected (they tend to have lots of local connections that are then connected to each other through routing).


I think the difference between logical and physical connections is what drives the confusion here. If two nodes can reach each other somehow then for Metcalfe's law they are connected, even if there is no direct connection between them.


Yes, I realized that shortly after reading the replies. Thanks for stating it explicitly. Once again, my brain's inability to parse English caused a multi-decade misunderstanding.

Realistically, the only metric I can think of that makes sense here isn't proportional to |V| or |E| but to the betweenness centrality of the graph and the average distance between nodes.
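One of those two metrics, the average distance between nodes, is easy to sketch with a plain BFS (the graph and names here are made up for illustration; betweenness is omitted for brevity):

```python
from collections import deque

def avg_distance(adj):
    """Average shortest-path length over all connected node pairs,
    computed by a BFS from each node of an unweighted graph."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        for dst, d in dist.items():
            if dst != src:
                total += d
                pairs += 1
    return total / pairs

# A 4-node path graph A-B-C-D: sparser than a complete graph,
# so its average distance is higher than the complete graph's 1.0.
path = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(avg_distance(path))
```

Unlike |V| or |E|, this metric drops as a network gets better connected, which matches the intuition in the comment above.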


You actually have a very valid point: given that there is such a thing as a maximum TTL, at some point that 'logically connected' network will become more and more sparse, depending on how 'wide' the network really is. I wonder if there are already parts of the v4 net that are so far removed from each other that this is an issue.

