I don't think that's true, what jokes are you talking about specifically?
I always used Ryanair a while back when I didn't have much money and needed to get somewhere. I dislike them now because of the removal of hand luggage from the default fare. Additionally, I find they're not even that much cheaper than the alternatives nowadays, which makes it harder to put up with their smaller seats and generally worse service.
I had it reappear 3 times when I was trying it out, moving between pages. The first time was in the sandbox, before I'd created an account. I'm not sure of the exact replication steps for the second and third times it showed up.
A complicating factor is that there are at least 2 different tours for different sections of the interface (e.g. high-level dashboards vs plan interface).
Do you think it would be better to infer from one tour dismissal that a user never wants to see another tour for anything else?
I've had some less tech savvy users say they appreciated them. Demo videos might be a good alternative though... until other folks roll up and ask for a UI tour to show them where the demo video collection is :P
Right, but then you just connect your phone to the wifi? The 3DS card payment methods I've had are the following (it ultimately depends on the bank):
- The bank sends you a tiny card reader: you enter your PIN and it gives you a one-time code. If you want to make payments that require 3DS (online only, of course) you have to have this card reader on you, but it doesn't actually require an app or internet connectivity.
- You have an app on your phone; you drag a code from a notification onto another area of the app, which does something (somehow, no idea of the purpose) and verifies the transaction. The certificate is stored on the device only.
- You open an app and it'll notify you of the purchase amount, location and merchant, and you just tap allow.
- You receive a code in the mail that is renewed once a year, which is then combined with an SMS message (or app notification). The payment flow asks you for some characters from both codes.
You do not need cell service for any of the ones I have used; wifi is enough.
Further Edit: Just to clarify though, all of these are ONLY for online purchases. For purchases in shops you just use your PIN if authorisation is required.
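For anyone curious what the "open the app and tap allow" variant roughly looks like, here's a minimal sketch. Everything in it (PendingChallenge, createChallenge, approveChallenge) is an invented name; real EMV 3-D Secure flows are far more involved. It's only meant to show that the approval can travel over any internet connection, wifi included, with no SMS involved.

```typescript
// Toy sketch of an "open the app and tap allow" style approval, not any bank's
// actual implementation. Names and shapes here are made up for illustration.

interface PendingChallenge {
  id: string;
  amountMinor: number; // e.g. cents
  currency: string;
  merchant: string;
  expiresAt: number;   // epoch ms
}

// The issuer's server would hold challenges like this; the phone app picks
// them up via push notification or polling over whatever connection it has.
const pending = new Map<string, PendingChallenge>();

function createChallenge(
  c: Omit<PendingChallenge, "id" | "expiresAt">
): PendingChallenge {
  const challenge: PendingChallenge = {
    ...c,
    id: crypto.randomUUID(),
    expiresAt: Date.now() + 5 * 60_000, // 5 minute approval window
  };
  pending.set(challenge.id, challenge);
  return challenge;
}

// Called when the user taps "allow" in the app.
function approveChallenge(id: string): boolean {
  const c = pending.get(id);
  if (!c || Date.now() > c.expiresAt) return false; // unknown or expired
  pending.delete(c.id);
  return true; // issuer reports the 3DS check as passed to the merchant
}

// The merchant's payment page would wait for this to resolve.
const c = createChallenge({ amountMinor: 1999, currency: "EUR", merchant: "Example Shop" });
console.log(approveChallenge(c.id)); // true if "allowed" within the window
```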
Reddit used to show how much users donated vs the server cost (and running costs). It was always maxed out. Reddit grew and simply got greedy. Typical enterprise company woes, where a much smaller company could cover most if not all of what your typical user actually wants without losing money.
Reddit didn't need to be a video and image host, as you allude to. They didn't need to bloat their stack with 'hot' technology. They didn't even need to make an app; the third parties were basically doing that for free.
They want to be an ad and data platform because they know that this will in the long run net them more money. To get there though they will have to alienate their entire user base.
I also had difficulty getting it to understand me. There are a couple of solutions I can think of that might make this more usable:
1) Speech-to-text into an input field, and allow the user to modify it before submitting.
2) I presume this uses an LLM to generate the responses; submit the new text and give it the entire convo as context, but first ask it to "correct" the text to whatever would make sense in context based on similar-sounding words (rough sketch below).
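A rough sketch of what option 2 could look like, limited to the prompt assembly. The system/user/assistant message shape is just the common chat format; the actual model call depends on whichever LLM client the site uses, so it isn't shown here.

```typescript
// Hand the model the conversation so far plus the raw speech-to-text output
// and ask it to repair likely mis-hearings before the text is submitted.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildCorrectionPrompt(
  conversation: ChatMessage[],
  rawTranscript: string
): ChatMessage[] {
  return [
    {
      role: "system",
      content:
        "You are cleaning up speech-to-text output for a language-practice chat. " +
        "Given the conversation and a raw transcript, return the most plausible " +
        "intended sentence, fixing words that were likely misheard (similar sounds, " +
        "wrong script or language). Return only the corrected sentence.",
    },
    ...conversation,
    { role: "user", content: `Raw transcript to correct: "${rawTranscript}"` },
  ];
}

// The corrected sentence could then be dropped into an editable input field
// (option 1) before being submitted as the user's actual reply.
```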
Edit: Hah oh it's not too great right now at all. Tried it again and it ended up writing Cyrillic as my response despite me speaking Spanish.
I was actually wondering if you could use this as another form of authentication (ignoring that WebAuthn and other such standards exist). For example, dynamically create a font that, when rendering a specific string, outputs some form of data (e.g. a JWT encoded in the font glyphs) that can be drawn to a canvas and read back by the page.
It could be some form of incredibly sticky authentication: unless the user removes the font, it will never go away. Nefarious, and I'm not sure there would ever be a legitimate use case, but it sounds doable.
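A minimal sketch of the read-back half of that idea, assuming a hypothetical locally installed font called "AuthTokenFont" whose glyphs each encode one bit as ink/no-ink at a fixed position. Generating and installing such a font is the hard part and isn't shown.

```typescript
// Browser-side read-back: render a marker string with the (maybe-installed)
// custom font, then read pixels off a canvas to recover the encoded bits.

function readTokenBits(markerText: string, bitCount: number): boolean[] | null {
  const canvas = document.createElement("canvas");
  canvas.width = bitCount * 8;
  canvas.height = 16;
  const ctx = canvas.getContext("2d");
  if (!ctx) return null;

  // Falls back to monospace if the sticky font was never installed or was removed.
  // In practice you'd render the same string with the fallback alone and diff the
  // two renders, otherwise fallback glyphs would read as ink too.
  ctx.font = '12px "AuthTokenFont", monospace';
  ctx.fillStyle = "#000";
  ctx.fillText(markerText, 0, 12);

  // Treat each 8px column as one bit: any non-transparent pixel means "1".
  const bits: boolean[] = [];
  for (let i = 0; i < bitCount; i++) {
    const { data } = ctx.getImageData(i * 8, 0, 8, 16);
    let inked = false;
    for (let p = 3; p < data.length; p += 4) {
      if (data[p] > 0) { inked = true; break; }
    }
    bits.push(inked);
  }
  return bits;
}

// Usage: readTokenBits("AUTH-MARKER", 64) would give 64 bits to reassemble into
// whatever token the server baked into the font.
```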
Fire them if that happens, then? You can't preemptively fire someone for something they haven't done, nor even attempted or planned to do.
Telling others your salary is allowed and should be encouraged, it is not a security concern at all nor does it imply you are going to release any actual confidential information.
So this person has at the very least demonstrated that they will broadcast to the public details of their financial relationship with their employer. Not just to their friends, or their co-workers, but to their company's competitors and to their manager's co-workers.
As far as I am aware though (and it depends on their contract/rules etc of course) that is completely fine and allowed?
Salaries are only secret if the person receiving them wants them to be. People should be able to say how much they earn without retribution. There is literally nothing wrong with it, and it doesn't mean that they are going to broadcast actual company secrets.
If it's a big deal for the employer then they should have put it in their contract/NDA and made that clear prior to employing them (but as I understand it, that is illegal in a lot of places, for good reason).
i work at a FAANG on a secret project. if i tweet about tech i’ve been using or my salary, my company would not do anything to me. i’ve even written about supporting remote work, and features we’ve shipped that i did not create.
this is such utter bs. no trust— typical shitty capitalist trash
Which is crazy. But just because you can fire someone for no reason doesn't mean you can fire them for illegal reasons, which appears to be the case here.
True, it definitely depends on where you live, although I also meant that in a broader 'moral' sense. As in, giving the reason that they /may/ release 'secretive' information, without any indication that it would actually occur, is ridiculous to me.
I feel like a lot of the comments here are written after only taking the test and many are not reading the rest of the article.
The authors of the website are stating that they believe the study is wrong. The below/above 60 answer is showing you it’s incorrect half of the time along with data backing up the claim.
But their data doesn't make sense to me personally...
Only 5% of their dataset is above the age of 60, making their claim that they are getting 50% of their guesses wrong seem like they are calculating it wrong. Surely their cut-off should be at the 95th percentile of the data?
They shouldn't be guessing 'under 60' the same proportion of times as 'over 60', because their population is mostly under 60.
Again though, they are arguing that there is no correlation between randomness and age. This was just a demonstration that when they use randomness to predict age, the results are wrong 50% of the time, which is precisely in accordance with their hypothesis.
Yeah, but their guess shouldn't be wrong 50% of the time, as again that means they can't have picked the 95th percentile result! Because it's 50:50, I'll assume that they are assigning people scoring higher than average the "under 60" category - which is obviously incorrect. Otherwise how do they pick the cut-off?
To explain with another example - let's say that I have a dataset of 100 people's scores at golf (no handicaps) and I know that 5% of them are pro players and the others are 'advanced amateurs'. Because of this I might take the top 5 scores and guess that they are pros, and assign the others the guess of 'advanced amateur'.
Now let's say that there was actually no correlation between people's scores at golf and their 'pro' status - what accuracy would I expect in the above experiment? The answer is actually closer to 90% 'accurate guesses' than 50%! (Although obviously - that's 90% accurate based on random chance).
Now if someone told me they got 50% of the guesses wrong at this task, that implies that they guessed that the top 50% of those golfers were pro rather than picking the top 5% of scores, and I would question the methodology.
This % is similar to the dataset on the webpage - I downloaded it, filtered out exclusions, and c.4% of the valid responses are 60 or over.
If I inherently pick a small population (i.e. over-60s are c.4% in this dataset) and I am guessing wrong 50% of the time, it means that my cut-off is incorrectly calibrated. Their score cut-off should, at worst, be picking the wrong 4% and missing another 4%.
Am I going crazy? It seems logical to me, but to be open, maths isn't my strong point. I just know that if I designed the guessing rule, I would be getting more than 50% (my algorithm would be 'if the user's average score across the three tests is less than -1.5, assign over 60', and that would get c.95% accurate guesses, although it would still not prove anything and I agree with the authors' overall premise!).
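To sanity-check my intuition, here's a quick simulation of the golf example (purely made-up random data, not the site's dataset) comparing a top-5% cut-off against a top-50% one when scores are pure noise:

```typescript
// Scores carry NO information about who is a pro, 5% of players are pros,
// and we compare two guessing rules. All numbers are illustrative.

function simulate(players = 100_000, proRate = 0.05): void {
  const isPro: boolean[] = [];
  const score: number[] = [];
  for (let i = 0; i < players; i++) {
    isPro.push(Math.random() < proRate); // true status, independent of score
    score.push(Math.random());           // score is pure noise
  }

  const accuracyWithCutoff = (quantile: number): number => {
    const sorted = [...score].sort((a, b) => a - b);
    const cutoff = sorted[Math.floor(quantile * players)];
    let correct = 0;
    for (let i = 0; i < players; i++) {
      const guessPro = score[i] >= cutoff; // top (1 - quantile) share guessed "pro"
      if (guessPro === isPro[i]) correct++;
    }
    return correct / players;
  };

  console.log("top 5% guessed pro :", accuracyWithCutoff(0.95).toFixed(3)); // ~0.905
  console.log("top 50% guessed pro:", accuracyWithCutoff(0.5).toFixed(3));  // ~0.500
}

simulate();
```

The top-5% rule lands around 90% 'accuracy' purely by chance, while the 50:50 split lands at 50% - which is why a 50% error rate says more about the cut-off than about the hypothesis.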
In your golf example, making that guess requires additional knowledge of what "pro" means and its frequency among golfers. The data doesn't know that, just like the randomness data doesn't know that most humans are younger than 65 years old. If you really want to figure out how predictive the data is, you shouldn't include considerations like that in your model. I get what you're saying, but ultimately I don't think their goal was to make the most accurate prediction; they wanted to make one that illustrated their point by basing their guess off the data alone.
The calculation involves knowing the age of the sample population though (if you don’t know the ages of your sample, how do you work out what the cut off is at 60 years?).
If I don't know how many golfers are pro, I simply cannot estimate whether it is 100 golfers that are pro or 0 (unless there's a real gap in the scores). Making an assumption that 50 are pro is no more valid than 0 or 100.
If you take the average score of 100 people and say that you estimate anyone scoring below the average is above 60, you are going to be wrong regardless of whether your hypothesis is valid or not.
Putting that up and saying “see, it’s wrong 50% of the time!” doesn’t make sense when your calculation is incorrect.
In order to calculate the cut-off correctly they either need to take the 95th percentile result, or pick a sample where 50% of people are over-60 and 50% are under 60 and take an average of that.
Using a dataset where 95% of people are under 60 and then picking the mean clearly isn’t going to work.
I'd have read it if it weren't white text on a pink background. I'm not going to the trouble of pulling it up in a browser and undoing what they presumably did on purpose. And then to complain that people don't read the whole thing?
Funnily enough I got this captcha 2 days ago and tried searching around to see if anyone else had thought about how ridiculous it is - I couldn't find any discussions elsewhere at the time, though. Glad someone has brought it up.
There has to be a better way to determine whether someone is a robot or not, surely.
The problem with the latter example is people who are colour blind, although I guess it's not much worse than asking people to sum up dice.
I'm guessing, though, that they're looking more at the % correct and the actual behaviour of making the selection rather than just whether you're capable of adding up numbers.