I don’t understand how? Wouldn’t the signal be highly directional? Surely it wouldn’t be easily detectable unless the viewer’s POV intersects the path of the beam?
The entire skin gambling scene is a big reason why I don’t touch the game anymore. It seeps into everything around CS: you can’t watch a professional match without seeing a sponsorship from a skin casino, and you can’t watch a YouTube video without one either.
This is also why I don’t like the way some gamers treat Valve as the only ethical company in the industry. CS skin gambling is what you get if you take the lootbox mechanic and pave it over the game’s entire ecosystem.
It’s the kids who win big you need to worry about (anecdotal, not me, but a big win concurrent with a bad time during formative years can have a lasting impact on people who then ultimately become addicts).
Guys, the author presents an overall reasonable argument and I think it's more useful to engage with it in good faith than going "so it's all my fault just because I'm a man?" - no one's implying that.
At its simplest, the point is that much of programming language design is done with a masculine perspective that values technical excellence and very little feminine perspective that focuses more on social impact. Most, including myself, have a knee-jerk reaction to dismiss this argument, since at first glance it appears to trade off something known to be useful for something that's usually little more than a buzzword, but upon further reflection the argument is sensible.
The theme of forsaking technological perfectionism in favor of reaching whatever end goal you have set is widely circulated on this forum and generally agreed with. Those of us who work as software engineers know that the impact of your work is always valued more than the implementation or technical details. It's thus reasonable that when building programming languages, the needs and experience of the users should be considered. Not override everything else, but be factored into the equation.
I know if I were to write a programming language I'd probably focus on pushing the boundaries of what's technologically possible, because I find it fun and interesting. But I would have to agree that even if I did succeed in doing so, the actual impact of my work would probably be lower than that of Hedy - the author's language. Hedy is not novel technologically, but the fact that it makes it meaningfully easier to learn programming for significant numbers of people is real, undeniable impact.
Lastly, I want to note that the author's argument for the underrepresentation of women in PL cannot be reduced to "those nasty men are keeping us out". Humans are tribal, and any group of humans is bound to form complex social structures. Those are going to affect different people in different ways; the linked paper investigates the effect of those structures specifically on women because the topic is close to the author. Whether you care about low numbers of women in PL design or not, the dynamics that have led to that being the case are worth investigating and are quite interesting on their own.
> At its simplest, the point is that much of programming language design is done with a masculine perspective that values technical excellence and very little feminine perspective that focuses more on social impact.
I guess my criticism of this is that it reduces both men and women to what amounts to little more than stereotypes, which leaves me rather uneasy.
I also find it somewhat of a distraction from the actual issues. For example, one of the topics is that programming languages only accept Arabic numerals (0-9) and often only support English keywords. It's not hard to see how this might exclude people, sure.
A counter-argument to this might be that having a single lingua franca enables a global community of people from very diverse backgrounds to communicate and work together. Just today I accepted two patches from someone from China. Thirty years ago even talking to someone from China would be a novelty, let alone casually cooperating. That's kind of amazing, no? If we were both stuck in our exclusive worlds of "English" and "Chinese", with our own languages and counting systems and whatnot, then that would have been a lot harder.
All things considered, English probably isn't the best, fairest, or most equitable choice. But it is what it is, and it's by far the most practical choice today.
You can of course disagree with all of that, and that's fine. But reducing it to "technical excellence" vs "social impact" or "male perspective" vs. "female perspective" just seems reductive and a distraction.
It is absolutely a valid point that user experience, helpful error messages, and ease of use are extremely important and have historically been neglected in programming language design, as contrasted with technical excellence.
It is absolutely an invalid point to claim that this is due to gender, that men are doomed by biology to care about technical excellence while women are doomed by biology to care about UX. We are not living in 1825!
It is IMO very backwards and counterproductive to just gender-box mathematical/quantitative research vs. people-focused/qualitative research. Women are very capable of doing quantitative/mathematical research, and men are quite capable of doing qualitative/UX/anthropology research. Why must we be so narrow-minded, _especially_ when we're talking about how we want to see the future?
The author may be totally right in suggesting that PL academic research needs more diversity, and that there might be a lot of status-quo bias, elitism and groupthink at play. The over-reliance on mathematical "purity/elegance" (like the monad meme someone mentioned below) instead of usability when it comes to new languages is something even I have encountered as an end user of programming languages.
However, claiming that there's some inherent gender based tendency to engage in one kind of research over another defeats their own purpose IMO. If they said that PL academic research could learn some tricks from fields X, Y and Z on how to engage in more end-user research, it would have made their point so much more convincing.
It's just reductionist nonsense. The idea that valuing technical excellence is a masculine trait is absurd. The idea that valuing social impact is a feminine trait is also nonsense. Yes, there could be a noticeable difference in aggregate, but the idea that this creates causal outcomes in something like programming languages... it makes no sense.
People use different programming languages for different reasons. Python is easy to read; it's used more by people who come from non-CS backgrounds. Lower-level languages are used by people with lower-level needs.
It's like saying that "trucks are masculine" when, sure, I'll grant you that, but the point of a truck is to haul a bunch of shit, often to a work site, and there are plenty of women who need to do that. It's like saying that a "Prius is feminine" because it's built around a social cause (climate change)... I mean, sure, I guess. I still think literally millions of men drive Priuses because they actually care about social causes and want to save money on gas.
The idea that the aggregate outcomes of all masculine- and feminine-coded things are driven by an arbitrary culture, rather than by the actual distribution of needed functions (men are more likely to work in hard labor than women, hence more likely to use a truck for work), just seems like the tail wagging the dog.
All of this, and yes, I consider myself very committed to the values of equality that feminism espouses.
If programming languages have a masculine or feminine perspective, then I would like to know which cultural lens this is being painted with, and for what purpose. Is there any more to it than dissecting Charles V's quote, "I speak Spanish to God, Italian to women, French to men, and German to my horse", and asking whether speaking French would increase the number of women in a specific field?
If we are trying to define programming languages tailored toward children as feminine, and "real" languages used in the trade as masculine, are we not devaluing the hard work of everyone involved? It seems to carry a very high risk of causing the opposite of a positive impact. Languages that are meaningfully easier to learn are a good thing and, like reading and math, seem like a good idea to teach children at an early age. The essay seems hopeful that this would decrease gender segregation in the workforce, though it doesn't bring much to support that.
I'll add that if people want a historical perspective on the dynamics, CS Professor Ellen Spertus long ago wrote the paper "Why are There so Few Female Computer Scientists?" It helped me see a lot of the things I might have otherwise been inclined to dismiss: https://dspace.mit.edu/handle/1721.1/7040
I would rather dismiss her point on the basis that, from my perspective, this may only be true for a small niche of academics who focus specifically on programming language formalisms.
When I studied programming languages at university, the coursework really was focused on formal approaches, so it is true there. But that is how this field of study defines itself, and that should be considered its right.
Once you look outside of this narrow field, you can easily find a lot of projects and endeavors that cover exactly what she is requesting in that article.
* The Rust compiler focuses a lot on more understandable error messages (a topic specifically covered) and even offers recommendations that make picking up the language easier.
* C++11 standardization also focused a lot on usability and how to improve hard to read error messages.
* Scratch is explicitly designed to look for alternative approaches to programming.
* Programming in natural languages other than English has been around for a long time.
In school we were taught a German version of Logo. I don't buy her argument that her language research was dismissed purely because it wasn't hard enough. We already have everything we need to understand how we could build a programming language on top of another natural language: replace a few lexer definitions, then re-define the whole stdlib in the other language. There is simply nothing novel about this. I really hope her research covers a lot more than just this.
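To make that concrete, here is a minimal toy sketch of my own (not from the article): it maps an invented German keyword table onto Python and runs the result. A real implementation would change the lexer's keyword table and translate the stdlib rather than do textual substitution, but the point stands that there is nothing research-grade about it.

```python
import re

# Invented German keyword table for illustration; any natural language works the same way.
KEYWORDS = {
    "wenn": "if",
    "sonst": "else",
    "solange": "while",
    "drucke": "print",
}

def translate(source: str) -> str:
    """Swap localized keywords for their Python equivalents.
    Toy version: a real lexer change would not touch string literals,
    unlike this word-boundary regex."""
    pattern = re.compile(r"\b(" + "|".join(KEYWORDS) + r")\b")
    return pattern.sub(lambda m: KEYWORDS[m.group(1)], source)

localized_program = """
zaehler = 0
solange zaehler < 3:
    wenn zaehler == 1:
        drucke("eins")
    sonst:
        drucke(zaehler)
    zaehler = zaehler + 1
"""

exec(translate(localized_program))  # prints 0, "eins", 2
```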
She also does a very bad bait-and-switch when she suddenly changes the meaning of the word "hard" in the middle of the article. Initially she clearly uses "hard" to mean difficult; later she suddenly switches to "hard" in the sense of the "hard" sciences, i.e. sciences based on formalisms and empirical research instead of discussions and opinions.
I agree with her that a lot of research is missing from the non-technical hard sciences (I would consider large parts of psychology a hard science, although it lives at the border of the two worlds). There is some research on the psychology of programming, but the area is definitely under-researched. Usability studies of programming languages are also not well established.
In a lot of cases, however, I don't think this is actually something we can do research on. I have a strong background in psychology, and I don't think we could actually study the impact of different paradigms. If you pick participants who already know programming, they will be highly socialized with the dominant paradigms. If you pick novices, you have to control what they learn over years until they become fluent in the studied paradigm; that isn't feasible and raises severe ethical concerns. Or you don't control it and run short-term studies, in which case the results just won't carry any meaning.
Overall, for me the article raises some really valid concerns about programming language research and CS in general, but I think she took a really bad turn in describing these as gender-based issues. The reasons I would point to lie in completely different areas and are only very remotely related to gender.
The author is not advocating for friendlier messages in the googlesque sense of dumbing them down or introducing more positive wording, but in the sense of making them more readable and useful.
Of course it does not matter for "a straightforward computer error message", in cases where the error is a simple type mismatch or a missed semicolon, but if those were the majority of the problems we encounter as programmers, our work would be trivial.
It's not difficult to imagine a situation where structuring a compiler in such a way that it keeps more state and perhaps has to perform more analysis is worthwhile, since a more useful error message saves the user time in understanding and fixing a problem.
An example that comes to mind is when, in Rust, I tried to create a dynamically dispatched trait in which one of the functions took an argument generic over a different, statically dispatched trait. Since the concrete type behind a trait object isn't known at compile time, the compiler has no way to generate code for every possible instantiation of that generic function, so the trait can't be made into a trait object and compilation fails.
The error was presented to me in a clear way that pointed out the problematic relationship between the dispatch types of the two traits, allowing me to understand and fix the problem quickly. If the error message had been far simpler, such as "can't dynamically dispatch trait", I would have figured it out too, but it would simply have taken more valuable time. Most importantly, having to track down the issue from a minimal error message would not have been an honorable test of my intelligence and emotional maturity; it would simply have been inefficient.
Could anyone more knowledgeable on the topic explain to what extent common wireless connectivity standards are open and feasible to implement for, say, a medium sized company? Apple has been working on a 5G modem for what feels like a billion years, but other standards seem to be more democratized.
The availability of hardware seems semi moot, since afaik there's basically no way to get spectrum short of big national auctions.
But now that T-Mobile is reneging on that promise and isn't going to meet the minimum deployment size they committed to, they have been saying the FCC should find a way to sell, by area, some of that spectrum sitting dormant across such a wide swath of America (personally I think it makes their bid invalid and they should forfeit it for such egregious, dirty lying).
https://www.lightreading.com/5g/t-mobile-relinquishes-mmwave...
I think some of the analog TV spectrum has some precedent for being sold per-area rather than nationwide, but I'm not sure how that's been going.
In terms of hardware, there's some fascinating stuff. Facebook's SuperCell project showed awesome scale-out possibilities for large towers. Their Terragraph effort has been spun out and seems to have some solid customers using its hardware. Meta also spun off their EvenStar 5G system, which has a strong presence at Open Compute now.
https://www.opencompute.org/projects/evenstar-open-radio-uni...
But it's hard to tell how acquirable such a thing really is. There are plenty of existing nodes out there too, but with no open market and no usable spectrum, it feels like a conundrum: these are extremely high-volume, amazingly integrated, advanced wireless systems that you'd think would be visibly prolific.
> The availability of hardware seems semi moot, since afaik there's basically no way to get spectrum short of big national auctions.
You can run 5G in the unlicensed spectrum. AWS can rent you hardware for it: https://aws.amazon.com/private5g/ - it's $5k a month per site. I know a plant that switched to that because they couldn't get WiFi to work reliably for them.
But even if you want to run within the licensed spectrum, local licenses for a couple of bands are cheap. I was involved in setting up a private network in the licensed spectrum around 10 years ago (based on https://aviatnetworks.com/ ), and a local site spectrum license was something ridiculously small (in the range of a hundred dollars).
From my limited understanding, the issue for Apple et al. isn’t making a 5G chip; it’s making the chip small, cheap, and power-efficient enough while still having “decent” reception. I’d imagine existing patents held by Qualcomm certainly make it a bit more challenging in terms of available (design) options.
> The LTE/NR eNodeB/gNodeB software is commercialized by Amarisoft.
> A UE simulator is now available. It simulates hundreds of terminals sharing the same antenna. It uses the same hardware configuration as the LTE eNodeB.
> An embedded NB-IoT modem based on Amarisoft UE software.
What do you mean by implementing? Make your own radio chips, designed from the ground up? Or merely producing a networking device using chips from suppliers like Intel, TI, Broadcom, Qualcomm etc? Or the software side only?
Stuff for GSM/CDMA has been around for years; OpenBTS is the primary example. This is the first I've heard of anything more modern/complicated being implemented. From my understanding, a lot of the hard engineering work is in the RF frontend and making it small/low-power enough to fit in a phone, for example. OpenBTS got around this by using existing SDRs for its RF frontend.
WiFi, Bluetooth and Zigbee have a bunch of public specifications and enough public knowledge to make it feasible. AFAIK, the specifications for 4G/5G are publicly available but extremely complex, plus you'd need licensing agreements, pay royalties, etc. So unless this imaginary company of yours has specialized expertise in all that, it seems unlikely to be feasible.
The big problem is patents and copyright. No common wireless standards are open. No wireless standards are feasible to implement. Seriously. It's that bad. Certainly a modern 4G/5G standard is complex from a hardware standpoint to implement - the way you usually do these is using a very powerful embedded DSP, which is also not open (Qualcomm Hexagon is the most reverse-engineered of these if you want to understand what's going on). But the thing that's holding Apple up is purely legal IMO.
> No common wireless standards are open. No wireless standards are feasible to implement.
What is the definition of "Open" here?
The current submission is entirely about open source 4G/5G. Fabrice Bellard, on top of the crazy amount of other stuff he has done, also made LTE/NR base station software [1]. WiFi and Bluetooth are also "Open".
>But the thing that's holding Apple up is purely legal IMO
People constantly mistake having an open standard (patents aside) for having a usable product on the market. There is no reason why you can't have a software modem, like Icera, which was acquired by Nvidia in the early '10s. And there is no modem monopoly by Qualcomm, which is a common misconception across all the threads on HN and the wider internet. MediaTek, Samsung, Huawei, Spreadtrum and a few others have been shipping 4G/5G modems on the market for years.
The only reason Apple hasn't released a modem, six years after it acquired the modem assets from Intel, is that building a decent modem with performance per watt comparable to what's on the market is hard. Insanely hard. You have telecoms in each of the top 50 markets, each with a slightly different hardware/software/spectrum combination and scenario, along with different climates and terrain. It took MediaTek and Samsung years, with lots of testing and real-world usage in lower-end phones, to gain valuable insight. They're still not as good as Qualcomm, but at least it has gotten to the point where no one complains as much.
Patent-unencumbered, in a way that a "small or midsized" company could make a commercially viable implementation, as the parent post asked. Open source proves my point: the issue is not implementation. (Note: I'm not claiming implementation isn't hard; it is. I certainly know from personal experience that it is, and I would never claim to be able to personally build an energy-efficient 4G or 5G modem. But I don't think raw engineering horsepower is what's holding Apple/Intel/Nvidia back here.)
> MediaTek, Samsung, Huawei, Spreadtrum and a few others have been shipping 4G/5G modems on the market for years.
The CCP effectively told Qualcomm to get lost in 2015, and Taiwan settled an antitrust case involving Qualcomm and MediaTek in 2018, so MediaTek, Huawei, and Unisoc/Spreadtrum are not good examples here. I believe the South Korean government also intervened on behalf of Samsung. Actually, the list of modem vendors you name pretty much matches the list of countries whose governments prosecuted, fined, and settled with Qualcomm for antitrust.
If I remember correctly, all the documentation needed to implement a 5G radio approaches 10,000 pages. It’s not only insanely long and complicated but there’s a nasty path dependency with most of 4G which is why Intel and now Apple have such a hard time getting their radios to the finish line. Poaching a few Qualcomm or Broadcom employees with better salaries is one thing but without the cumulative expertise contained within the companies, it’s almost impossible to bootstrap a new radio.
> Apple has been working on a 5G modem for what feels like a billion years, but other standards seem to be more democratized.
The main problem is the sheer age of mobile phone networks. A phone has to support everything from modern 5G down to 2G to be usable across the world; that's almost as much legacy garbage for baseband/modem FW/HW to drag along as Intel has with the x86 architecture.
And if that isn't complex enough, phones have to be able to deal with the quirks of all kinds of misbehaving devices - RF is a shared medium after all, and there are devices not complying with the standard, standards containing ambiguous or undefined behavior, and completely unrelated third-party services blasting wholly incompatible signals around (e.g. DVB-T operates on frequencies in some countries that are used for phone service in other countries, often at much higher TX power than phone tower sites). If a modem can't handle that or, worse, disrupts other legitimate RF users, certification won't be possible.
But that experience in dealing with roughly 35 years of history is just one part of the secret sauce - it mainly makes the cost of entry for FOSS projects really huge (which is why all of the projects I'm aware of support only 4G and later, since that generation is the first to throw away all the legacy garbage).
The other part of why there are so few vendors is patents, and there is a toooooon of patent holders for 5G [1], with the top holders being either Chinese or known for being excessively litigious (Qualcomm). And even assuming you manage to work out deals with all of the patent holders (because, to my knowledge at least, there is no "one-stop shop" comparable to, say, MPEG), you still have to come up with a design that fulfills your requirements for raw performance, coexists peacefully with almost all other users of the RF spectrum, and is power efficient at the same time. That is the main challenge for Apple IMHO - they have a lot of experience doing that with "classic" SoCs, but almost none with RF hardware; virtually all of that has come from external vendors.
So, why are we mad about this? The techniques used maintain perfect privacy throughout the process. It's a neat feature with no downsides for the user.
Not everyone wants the software/OSes we run to automatically send data elsewhere. I bought the damn device, I own it, yet somehow it (or rather the company) decides that some of the things it comes across can be sent back to the company?
No thank you, I prefer consensual computing.
> with no downsides for the user
No downsides for you, with your requirements/use cases. If the user has a requirement of "Doesn't send anything to anyone without consent", then this is obviously a downside.
>Not everyone wants the software/OSes we run to automatically send data elsewhere.
I personally find it offensive when a mega-corp makes the assumption that my connection to the Internet is available for them to build for-profit services without giving me sufficient agency.
The can't-disable-WiFi-safely dark pattern is bad enough. But turning me into a data harvester for their million-dollar services, without even thinking about giving me a cut?
No thanks.
Alas, these anti-patterns have become the norm by way of ignorance, and it's not getting better.
Because it's a) a useful function for the app that b) can't practically be done entirely on-device, and c) they believe they're not sending anything private off-device.
It must be awfully convenient to believe that the only possible reason for it to be on is the same reason you made up entirely based on nothing. Personally, I don't find that logic convincing.
a) Compensating for missing location metadata is a valid feature for a photo library. Peer competitor Google Photos also implements location estimation from landmarks.
b) The sizes of contemporary general vision models and the size of the vector database for matching potentially millions of landmarks suggest that this is not suitable for running on-device (a rough estimate follows below).
c) Apple's entire strategy is to do cloud computation without private data, so it stands to reason that they believe they're not using private data.
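To put a rough number on (b), here is a back-of-the-envelope estimate; the landmark count and embedding size below are my own assumptions, not Apple's published figures:

```python
# Illustrative numbers only; these are assumptions, not Apple's actual parameters.
landmarks = 5_000_000        # assumed size of a global landmark index
dims = 768                   # assumed embedding dimensionality
bytes_per_float = 4          # float32, before any quantization

index_gb = landmarks * dims * bytes_per_float / 1e9
print(f"{index_gb:.1f} GB")  # ~15.4 GB for the vector index alone, before the vision model
```

Quantization and pruning would shrink that a lot, but shipping and updating an index of that order to every phone is a very different proposition from a small on-device model.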
Apple turning a feature on by default can only possibly be a conspiracy to surreptitiously obtain training data from people’s photos? It couldn’t just be that they think most people will want the feature?
It’s one thing to argue that Apple is doing this for nefarious reasons, but to suggest that this is somehow the only conceivable option is a bit nuts.
Right, so I assume that they can’t build up a database of images to use for training future models. But I was hoping someone who understands homomorphic encryption and machine learning better than me could confirm this.
I'm not mad about this because I use Google Photos, which has been doing the same thing for the last two years without people on the internet telling me to be mad about it.
Not sure what you mean. Google Photos is the default on every smartphone I've ever owned and this setting has been on by default as long as it has existed. You could just as easily say "Using Apple Photos was your own choice" and get shouted down.
The point is that outrage isn't automatic. Not everyone is going to be equally mad about a check box.
People aren't complaining that Apple Photos is installed by default. They're complaining that it's sending data up to Apple by default. You have to explicitly opt in to Google Photos backing up your photos to the cloud. That setting is not on by default.
Same. But honestly with all the "I pay extra for Apple because privacy" posts around here, I kind of expect better from them. Whereas everybody pretty much knows that if you dance with Google, they're going to be looking down your top...
Personally, the whole "send a vector embedding of part of a picture wrapped in encryption and proxies" approach seems like it probably is better, but maybe Google is doing all of that too.
Because Apple did a great job implementing a useful feature in a privacy-preserving way, and I don't want to toggle on 100 opt-in features when I set up a new iPhone.
This should be a choice between "recommended experience" and "advanced experience" when you set your phone up. If one selects the latter they get all the prompts. It should then be possible to toggle between experiences at any point.
"We" don't automatically, naively assume that a brand new feature, which has undergone no external validation, that uploads data/metadata from your personal, private photos library without your foreknowledge or consent, is "perfect".
That's... actually not a bad-looking car. I mean, if I had infinite money to spend on a supercar I can certainly think of a dozen others I'd rather go for, but given that most people were expecting an eyesore, I think the new Jaguar looks pretty good while also, admittedly, being very distinctive. Looking forward to seeing it on the streets.
I agree - looks sleek, powerful, muscular, elegant, refined and I like the slight boxiness rather than everything rounded like an E Type. But then I like the Cybertruck too.
Some comments try to justify this - they’re wrong.
Even if it were just 1% of users, outright ignoring their issues is not acceptable. And far more than 1% travel abroad or do other "suspicious" activity (such as buying things at a place you’ve never purchased from before).
And there are services that handle this correctly. Starling Bank (UK) is a fave of mine. Confirm in an app, enter full password in some cases, but that’s it. I've had to make some sketchy-looking transactions and, no matter what, they never block your account or make you jump through additional hoops.
> Confirm in an app, enter full password in some cases, but that’s it
That's only on the bank's side. There's a major problem where the merchant later cancels the transaction on their side despite successful 3D-Secure.
Either 3DS doesn't actually offload liability (so even accepting a fully 3DS-verified transaction is a risk), or merchants aren't up to date on what they are and aren't liable for.
Unless I'm reading it wrong, your second source does very much imply some people can tell the difference quite reliably. As expected, regular people can scarcely tell the difference, but musicians are better at it and sound engineers are in fact quite accurate.
This matches my own experience well: most of my friends do not care about various levels of compression, nor what headphones they use - that's fine, I'm glad they're enjoying art in their own way - but I, and some others, do in fact stand to benefit from less compressed audio.
I've personally done blind tests on myself using a python script that randomly plays compressed and uncompressed snippets of the same track and mp3@320 was not transparent to me (though opus@256 was).
Can I tell the difference when casually listening? I don't know, but when the cost of lossless is having my music collection take 60 GB instead of 20 GB on my 512+ GB device, I have no reason not to go for lossless.
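For anyone who wants to try this themselves, the script doesn't need to be fancy. Here is a minimal sketch of a blind A/B guessing test (not a proper ABX setup); it assumes ffplay from FFmpeg is installed, and the file names are placeholders for a lossless master and a lossy encode of the same track:

```python
import random
import subprocess

# Placeholder paths: the same track in a lossless and a lossy encoding.
LOSSLESS = "track.flac"
LOSSY = "track_320.mp3"
TRIALS = 20
START, LENGTH = 60, 10    # play a 10-second snippet starting at 60 s

def play(path: str) -> None:
    """Play a short snippet with ffplay, no video window, no console spam."""
    subprocess.run(
        ["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet",
         "-ss", str(START), "-t", str(LENGTH), path],
        check=True,
    )

correct = 0
for i in range(1, TRIALS + 1):
    answer = random.choice([LOSSLESS, LOSSY])
    play(answer)
    guess = input(f"[{i}/{TRIALS}] was that (1) lossless or (2) lossy? ").strip()
    correct += (LOSSLESS if guess == "1" else LOSSY) == answer

print(f"{correct}/{TRIALS} correct")
# Around TRIALS/2 correct is chance level; consistently above that suggests
# the encoder is audible to you on this material.
```

A proper test would also level-match the files and use an ABX protocol, but even this crude version is enough to show whether a given bitrate is transparent to you.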
The thing about being able (or not) to point out differences in audio quality is that it all boils down to pattern recognition. If you know anything about pattern recognition, you understand that you can't have pattern recognition without prior training, i.e. exposure to tagged samples of such patterns.
If you gave a high-quality audio experience to a person who has only ever listened to low-quality radio rips on magnetic tape through 80s general-store headphones, you might be surprised how few people would describe one as "better" without a prior description of the work and technology required to produce each experience.
And one would be even more surprised by how many people choose the cassette tapes because of nostalgia and a long time satisfying experience.
Isn't perception itself a matter of mere pattern recognition? Hearing? The whole point is that you can hear the difference. Whether or not it sounds "worse" is certainly debatable, but that is a value judgment. And the burden of proof is definitely on the "it doesn't matter" side to prove that a lower-fidelity version is "better" than one truer to the original master.
I've done the same blind test with decent but not amazing headphones (HD590) and I could tell the difference all the time as long as the music was slightly complex.
If loudness was artificially boosted, I had a harder time but could still often tell. I think the sound engineering of the music played a big role and a lot of modern music isn't mixed with complexity in mind.