I find this response (and the class of responses like it) really frustrating, because it uses a (likely feigned) misunderstanding of the scope of the question to attempt to sidestep the real question. My question for the CTO would be, roughly:
You've now answered "Do your lawyers think you can get away with this?". But the questions you're not answering directly, which I think underlie the 'concerns' you say you appreciate us sharing, are things like...
- Does the Bitwarden team see no ethical problems with making proprietary a project which many supported and contributed to explicitly because it was open source?
- Given that password management is inherently a high-trust enterprise, how does your organization intend to navigate the rupture of trust, and the subsequent forks and waves of departure, caused by an open-to-proprietary rugpull?
- Is there something that the community could do together which would help your company navigate through the dire situation you must be in to be considering something like this, without resorting to proprietarization?
I know it's his job as CTO right now to feign concern, particularly in forums where he can't simply close the conversation, but the current approach is basically confirming the worst fears ("They believe they can legally do it, and see no problem with their actions"), and that seems like exactly the wrong vibe for a company whose bottom line depends on users trusting the code and the people updating it.
This is wildly impressive. Thank you for doing and sharing this, and the 1337 on either end is just icing on the cake. It's also a beautiful example of just how broken MD5 is. I wish I'd had this to show my students 3 weeks ago when teaching about collision attacks.
I’m disappointed to hear that, just because it’d be very useful for this to be a thing. But I love the science behind it, and this is the way it should be.
Yes, the strange thing about this drive is that it appeared to work with no solid theory behind it. Yes, there was some theory, but my layman's understanding was that it was flawed.
It would have been so cool for something like that to have worked. We don't get many discoveries like that.
> appeared to work with no solid theory behind it.
That, my friend, is the very definition of Snake Oil.
> We don't get many discoveries like that
On the contrary, we get far more than our fair share of charlatans claiming physically impossible results. See also: Theranos, Energous, Fontus and a million other scam companies.
It's not snake oil if you have a weird unexplained effect, and:
A) you're not trying to sell anything to anyone
B) you fully open source the design, specifications, schematics and your notes, and invite any interested third party to try to duplicate or debunk the effect.
C) you intentionally refrain from making hyperbolic claims about how revolutionary your thing is, before it's been duplicated. Maybe some clueless third parties engaged in hyperbole but not the original creators of the concept.
Nobody was selling emdrives (to my knowledge). It was more of a "Hey, this is a weird effect" and an effort to explain it. That's less snake oil and more science. Most of the time there is some explanation that doesn't break your framework. Sometimes there's a breakthrough as you figure out the new thing. Sometimes you can't figure it out.
Shawyer absolutely was soliciting investment. Anyone with a basic physics education could see his proposed mechanism was utter nonsense, so it's hard to be charitable towards his rather hyperbolic marketing.
I disagree. This is the definition of progress in fundamental science. The difference between science and snake oil is what you do after your first signal. Do you try to find every possible way it could be wrong, and most importantly, do you let other people try? Then it is science; the faster-than-light neutrinos come to mind as another example. Or do you start selling and keep other people from testing it? Well, then it's snake oil.
It's really unhealthy to call anything that doesn't fit into the known laws of physics or science "snake oil".
If the EmDrive team was trying to sell NFTs and sucker people into investing in their radical invention it would be snake oil. But instead they approached this very responsibly, saying "we don't understand why this works. help us figure it out."
That's science done right, and a lot of new science has moved forward by questioning and changing the known rules. Whether or not it happens here.
You're apparently missing the origins of this whole thing. Shawyer has been soliciting investment for it since 2001, making very hyperbolic claims.
This was indeed snake oil from the very start. A couple of people at NASA EagleWorks got permission to use some of their time and facilities to investigate it as a personal project. They made an extremely lazy attempt to control for errors, published their result without review, and the hype exploded.
The whole saga is rather frustrating tbh. We knew what the definitive result was going to be 4 years ago. Getting some improved measurement technology is cool and all, but I don't think we had to go about it this way.
I disagree. Something new that challenges areas of physics where our understanding is vague, like neutrino mass or dark matter? Sure, feel free to explore new theories.
Something that challenges something as solidly established as conservation of momentum: Snake oil.
Newtonian motion was pretty well established for hundreds of years until general relativity transformed our understanding of gravity.
Things seem "established" until evidence comes along indicating they may not be. It doesn't always result in a reevaluation, but there are times it does.
This feels like one of those times where those in the establishment are yelling at others that their studies are a waste of time and that they shouldn't bother... and if that advice were followed, we'd be missing many of the breakthroughs in knowledge we have today. Of course there will be many times they're unsuccessful. That's why accepting a negative result is a lauded part of the scientific process. Without the bravery required to challenge convention, we would never make progress.
On Energous, the science is sound in theory. In practice I don't believe the company will be able to execute; but transmitting power over RF is something we've understood for a long time (since Tesla first demonstrated the technology).
I'm an investor in a competitor, Reach Labs, which has systems in production powering swarms of low-power devices wirelessly. I've seen the technology work in person.
According to science, a bumblebee shouldn't be able to fly. /s
A bicycle isn't some unknown mystery of the universe; the very article you linked lays it out pretty clearly. A bicycle's stability is a function of its mass distribution, geometry, and the gyroscopic effect. Their exact relation depends on the bicycle, and since there is no one standard SI unit of a bicycle at the Bureau International des Poids et Mesures, there is no single apparent equation. The "nearly broke mathematics" in the title is just pure clickbait.
That's because leaning is such a bad term to describe what happens! Even when I learned to ride bikes, it felt so strange that "you have to lean left/right", because I understood somehow that I'm a much more massive object than the bike and the combined center of mass is around my butt at best. So leaning doesn't change much in the mass/geometry/momentum configuration. You may shift the bike under you with your arms, but this is limited by their length.
What really happens is that you always fall either to the left or to the right. If you fall in the desired turn direction, you turn a little. If not, you turn even more than is required for a normal turn, so that the bike moves under you to the point where your mass is now on the other side of it, and you fall in the other, initially desired direction. Then you quickly turn where you need to. With practice all this movement reduces to centimeters and very smooth curves at every joint.
You move and (importantly) rotate the bike under yourself. It has nothing to do with leaning, because it's the road and front tire that make the difference in balance, not your flanks. Leaning helps you not fall off the seat while you cycle, but not in turns.
It's clearly a trainer's delusion to me (that thing where your trainer explains things that don't actually work or exist, but you translate or ignore those terms out of respect for a good man).
Yet in fact, you and the bicycle are physically leaning throughout any turn; i.e., you are at an angle off vertical, with your center of mass distinctly not directly above the line between the points where the tires contact the ground.
Normally the plane of the bike, normal to the axles, cuts right through your center of mass.
Starting a turn by leaning is not usual for experienced bikers, but certainly works. For a beginner, staying upright while going more or less straight is what they need to work out first, but that invariably involves some spontaneous turns, so both are practiced.
What a century of children have been told is to steer with the handlebars, which is a reliable recipe for spills.
Given sufficient evidence, there's no need to have a solid theory of operation for a given claim. That said, if it contradicts known laws of physics that evidence had better be damn good.
Are SSRIs snake oil to you because there's no systemic understanding of how SSRIs work? But you could say the same for most drugs. Most of the pharma industry is just targeting a specific protein without regard for how it fits into the system as a whole.
If there is anything thoroughly Snake Oil in modern currency, it's Tokamak Fusion.
"Give us enough billions, we'll have one working in 2050. Or 2060, or someday. It won't produce any power, oh no, of course not. But give us ten times more money after 2050 (or 2060) and by 2100, or 2140, or anyway someday maybe, we will have a prototype power station for you.
"Sure, it will cost a hundred times as much for each kW-hr as whatever you will be using by then—and the plant will destroy itself after only two years—but it will be fusion power. And that will be so cool!"
Tokamak fusion is mainly a jobs program for hot-neutron physicists, to maintain a population ready to draw on for weapons work, but is also a massive boondoggle providing a steady flow of corruption money to well-connected pockets. (Hot-neutron physicists are not getting the $billions.) Every cent spent on Tokamak fusion is stolen from actually practical work.
So don't talk to me about snake-oil until after you kill Tokamak.
Teaching about language in a university setting, we often talk about various forms of evidence for language-like communication among animals.
As I tell students, I would bet that within our lifetimes, as research continues and our instrumentation becomes better at detecting non-human communication means, we'll finally be able to detect and decode 'language' in a non-human animal which is sophisticated and rich enough as to be impossible to handwave away.
But what I keep to myself is that this kind of discovery, and the cross-species conversations it would prompt, has the potential to change the course of the dialogue on animal rights and what it means to be 'human'. But I suspect it will wind up being largely buried outside of certain academic and spiritual communities, mostly because I don't think parts of our society could handle learning the bovine words for 'slaughterhouse' or 'mourning'.
Christina Hunger can ask her dog questions, and it goes and presses a button that represents the idea it wants to reply with. That button plays an English word through a speaker for Christina. She's a speech pathologist on Instagram and has a website for it. The dog learns word association in the same way babies do.
Those videos are on Instagram, not buried. I'm fairly sure any breakthroughs will make it to a Planet Earth documentary and we'll feel sad about it for a while then go buy beef from the shops anyway.
that's neat. i talk to my dog all the time. she (a rescue) came to me already knowing "do you wanna go out?" but had a really hard time teaching me when she had anxiety diarrhea and needed to go out. i really needed a button for that (she's mostly over it now though).
So, a different-but-related question: Putting aside the desired addictive nature, are there any gamers out there who actually prefer microtransaction-driven loot box mechanics to more conventional item drops?
Or is this just a UX element that most people dislike or don't care about, but some people get badly addicted to?
There's so much argument about gambling, etc, but I'm yet to hear a compelling argument for why these should be allowed to exist, from a game design perspective, short of "we can make a lot of money from gamblers".
Because that's the default state, in our society at least. If you want something banned/made illegal/etc you need a compelling argument for why it shouldn't be allowed to exist.
I kind of like how Path of Exile[0] handles that entire issue:
* PoE is a Free-to-Play game. On paper, it's totally possible to play without giving them one cent.
* PoE offers 2 pay-only features: account modifiers (such as stash tabs to hold items that drop while playing, and your personal premium hideout if you want one); and character skins + vanity portraits + in-game pets that don't help you + ....
Only the skins are in lootboxes, and you may also get the one skin you want at a fixed price (a $3 lootbox can hold any items that are $3+ combined). Not sure about the odds though - I haven't seen them published anywhere.
As a gamer, I enjoy playing a game where I'm not a cash cow: my items in-game have the same drop rate as that of my neighbor; and I don't mind paying for nice-to-have "optional" features (the game is quite fun BTW).
It can be used to make a decent game free. I generally don't buy any microtransactions and I'll play a free game or at least try it out. If whales want to subsidize a game, the devs are happy, and the whales don't feel bad about it, I don't really care about changing the situation. I don't like the attitude of blaming the addictive nature, even if they specifically work on making a skinner box, because I think there should be some self control.
I think people who aren't looking for something specific (i.e. they already own all the "basic" stuff they really want) and just enjoy getting more stuff probably like random loot boxes? Because if you want to get everything, it's usually cheaper to buy those.
Personally, I don't understand the desire to want every single skin or item in a game even if you never use it, especially when you have to pay like $5000-$20000 to get that, but some people seem to have that urge. Whether that's unhealthy spending or just a hobby depends on the person, I guess.
This just underscores one of the main things I desire from social media these days: Ephemerality. I wish that social media services, Snapchat aside, started treating content as transient. Services like TweetDelete help, but I can't think of anything I'd say on a Microblogging service that I want to exist in 9 years.
Yes, yes, "everything on the internet is permanent". But every one of my social media streams could benefit from an "Automatically delete after one month" setting.
We grow, we learn. Why not let our social media represent who we are, not who we were?
For what it's worth, this only works if the source of hearing loss is conductive (e.g. the eardrum or the ossicles). If your cochlea or any sub-element of it (e.g. the inner hair cells) is damaged, bone conduction will be no different. In fact, the comparison between acoustic and bone-conduction hearing tests is a key element of audiological testing.
True. The classic tuning fork test: hit the fork, hold it up in front of your ear. Can you hear it? Then gently press the end of the (still vibrating) fork against your skull. Can you hear it now? If no/yes, the loss is conductive. If no/no, it's sensorineural.
I'd love to see a norm develop where the 'authoritative link' to an article is expected to be the most open. So, if there's a closed journal and an Arxiv pre-print, Arxiv gets the link, with the journal's publication status considered 'about the article', but not the thing itself.
I think it moves us towards a clearer understanding of academic journal publication as peer review's 'stamp of approval', rather than the explanatory event per se. And this will make it easier to move towards long-term, sustainable practices for publication and science.
Journals used to have several important roles: curation of articles, maintaining a reputation of quality (peer review, etc), and the actual physical publication and distribution of the papers. Cheap personal computers capable of "desktop publishing" and the internet made publication and distribution really cheap and easy. Those tasks no longer require a lot of expensive specialized skills and expensive typesetting/printing tools. This means journals need to stop treating those tasks as if they were still a scarce resource, and rework their business model around the tasks people still value highly: high quality curation and a reliable and trustworthy reputation.
The actual hosting of the PDFs (and TeX, and hopefully even the raw data) is something that universities or whomever the researchers are working for could host cheaply and easily. When I was attending UC Davis in the late 90s, the university hosted a huge archive that not only included their own publications, it also mirrored the publications of the other UCs and many important public archives like kernel.org.
Compared to huge archives of Linux distros and pre-GIT source code histories, hosting a bunch of PDF/TeX is effectively free. Reliable curation that saves a lot of people from wasting their own time and effort trying to find useful/interesting papers is extremely valuable.
First we need a "verified" badge in biorxiv/arxiv to verify that the current preprint version is an exact copy of the one published in the journal. Then the DOI could be made to redirect to that copy instead.
That is not currently how the DOI infrastructure works at all.
Individual entities register DOIs, and decide where they redirect (and can change the resolution at any time). In these cases, the publisher (such as Elsevier) is the one who has registered the DOI, and they get to decide where it redirects/resolves. They also paid for the DOIs.
There are actually a (small-ish) number of DOI registrars. The largest, and most likely by far to be used for scholarly articles, is CrossRef.
Neither CrossRef nor the DOI foundation have the authority to change where a DOI resolves to, against the wishes of the DOI registrant. (It would be like a DNS registrar or the IANA deciding news.ycombinator.com should resolve somewhere other than Y Combinator wants it to -- indeed DOI works pretty analogously to DNS, probably intentionally by inspiration).
What you propose would require major changes to the social and business setup of DOI. Probably to the business/sustainability model too, because a registrant would probably be less excited to pay for a DOI they don't actually get to control the resolution of. (CrossRef and the International DOI Foundation are both non-profits. They still need to pay for their operations, and the DOI infrastructure. That is currently funded by charging registrants for DOIs). It would also require some kind of "regulatory regime" to determine who has the authority on what basis to determine where a DOI resolves (and those 'regulators' would probably increase expenses, which you need a new plan for funding), compared to the current situation where whatever entity registered a DOI decides where it resolves to (similar to DNS).
You need neither.
Simply hash both articles and reference them by hash.
Then you will automatically get the right paper, no matter the source (it could even be from a bittorrent magnet link).
DOIs are a horrible invention; they are prone to man-in-the-middle attacks and dead links. Please don't use them.
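As a rough sketch of what referencing by hash could look like (Python; the file name here is hypothetical):

    import hashlib

    def content_ref(path: str) -> str:
        # Hash the exact bytes of the paper; any change to the file,
        # even regenerating the PDF from source, yields a different digest.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        # A URN-style identifier that any mirror can serve and any reader can verify.
        return f"urn:sha256:{digest}"

    print(content_ref("paper-v2.pdf"))

Anyone holding a copy can recompute the digest and confirm they have exactly the bytes the citation points at, regardless of where the copy came from.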
A slight impediment to that is that ArXiv discards PDFs that have not been accessed in a while, and rebuilds them from TeX source if later accessed. The result may not have the same hash - I sometimes even see ArXiv PDFs with today's date in them despite being published a long time ago, because the author used the \today macro. So you would need reproducible builds for the hashes to be valid, or for ArXiv to no longer have the storage concerns that led them to this practice. Or you could hash the TeX, I suppose.
Yeah, you should hash the TeX. It's a pity really that PDF has become the dominant publication format; it's just so bad and non-machine-readable. It's absurd to me that scientific publications haven't switched over to HTML; I mean, that format was invented for scientific publication...
References to third-party websites can break. HTML is a living spec, so browsers can decide to break things that work today (as happened with marquee, for example).
Even if you disallow JS entirely, and stick with just HTML/CSS, it has enough warts to not look and behave consistently over time.
A link could easily be a URN that identifies the target by its hash, all the protocols for that are already in place e.g. magnet links.
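For illustration, such a hash-based link might look roughly like this (Python sketch; the info hash and file name below are made up):

    # A magnet-style link names the target by its hash rather than by its location.
    info_hash = "c12fe1c06bba254a9dc9f519b335aa7c1367a88a"  # hex-encoded SHA-1, fabricated for illustration
    link = f"magnet:?xt=urn:btih:{info_hash}&dn=paper-v2.pdf"
    print(link)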
PDF doesn't have JS and hyperlinking, so I guess all you'd need would be HTML, even ignoring CSS, which could be tailored to the reader, e.g. LaTeX style, troff style, etc.
Vanilla HTML with images embedded as data URIs should be pretty darn portable for the foreseeable future.
Here's the kicker: it's a text-based format, and a dead simple one. Even if we should lose all browsers in the big browser war of 2033, it's super easy to reverse engineer. PDF, not so much.
A subset of HTML + CSS (+ ECMAScript?) could replace PDF for this purpose. However, is there a standard subset, with familiar, understandable tools for working with it? In general, using the 'save as' function in a web browser won't produce a document that looks the same 10 years later. Rewriting the source document using a tool like wget can achieve this, but it doesn't always work (e.g. what if the content was pulled in asynchronously?), and you need a computer expert to create and explain how the archived format relates to the live content. 'Save as PDF,' despite its technical inferiority, is easy and widely understood.
HTML/CSS is extremely backwards compatible; modern browsers don't render old pages differently.
How does PDF solve the link rot problem? PDF is good for print; it's consistent. But it fails when the display size is anything other than a big screen, especially on e-ink displays, which don't tend to be your standard A4.
PDFs don't solve link rot. But in HTML, it's conventional to rely on links for stylesheets and sometimes even content (images, asynchronous DOM elements), so link rot is a bigger problem.
Yeah, for publishing you don't want content in links, but you can solve that with data URIs that embed images and other data directly into the link [1].
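A minimal sketch of how that embedding works (Python; the file name is made up):

    import base64

    # Read an image and inline it into the HTML as a data URI,
    # so the document carries no external reference that can rot.
    with open("figure1.png", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    img_tag = f'<img src="data:image/png;base64,{encoded}" alt="Figure 1">'
    print(img_tag)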
IMO TeX isn't much more machine-readable, depending on what you want to do. Reformatting or lossy conversion to plaintext? Sure. Determining semantics? Good luck.
The journal version and the arxiv version will never hash to the same value because they are not bit-identical. But you want to link to the peer reviewed version, or one which is semantically identical to the peer reviewed version. So somebody needs to check that the arxiv version is semantically identical to the journal version.
You should hash the TeX, not the PDF. Alternatively you could have both documents PGP signed by the author with a hash of the original tex, if you want to make sure you get the right "semantically the same but different" version.
But tbh that seems to be a slippery slope that I wouldn't want to go down: where do you draw the line for your semantic differences? Imagine you quote something which gets edited out; suddenly it looks like you're quoting nonsense while it's the original reference's fault.
There is no TeX source for the journal version. The point is that you don't want to trust the author to verify that the peer-reviewed+accepted version is the same as the arxiv version, and that it will not be changed. That's why people generally cite the journal version. Because it's immutable.
Journal versions are simply not immutable, because they are referenced by name, not by content. I regularly see a good percentage of dead or wrong DOIs, and I've hunted my fair share of papers that were supposedly released in a journal but only ever existed as preprints.
ArXiv already accepts LaTeX and compiles it for you; we should expect the same from journals and ask them to publish the hash of the document they received.
Journal versions are referenced by journal name, volume, year, and page number, indexing a hard copy you can find in a library. Seems pretty immutable to me.
The journals I published in all accepted LaTeX. But they convert it to use their layout software. The last correction steps are typically done only in this version, and the author has to backport them into their TeX code.
Why should the journal have any interest in making the arxiv version more attractive?
Even if we ignore reprints, editorial series that rearrange papers (and make a paper citable more than one way), and proceedings (which often don't properly distinguish between papers, but use author + proceeding), science simply doesn't operate on journal-published papers most of the time. The paper mills run so hot that you regularly cite preprints that get exchanged between authors directly.
It happens regularly that the proof is supposedly in the "full paper", only the "full paper" was never published.
Essentially the same reason we need peer review in the first place. Many authors have strong but wrong opinions. But even without malice: some don't care that the arxiv version is slightly different from the paper.
I don't see why anyone would put different content in the two papers, since it's so trivial to be ridiculed for that. I don't think arxiv has the resources to review whether the preprints are the same as the final version, and it seems like overkill to do so.
Also, in many cases there is a final round of modifications done by the publisher that you are not free to distribute. For journal papers I was told that sometimes you cannot even publish the corrected version after rebuttal.
It's not the same file - just the same final text proof. It will be different from the final formatting in the journal.
I don't think authors have an incentive to abuse the system. Just upload the final proof of your manuscript to arxiv, click "final version", and this lets people know that this is the same article as in the journal.
DOIs are ubiquitous, and they would serve the purpose of redirecting to the free PDFs rather than the journal site. This can be applied to existing articles retroactively. Plus, many bibliography styles include the DOI, which makes the reference easier to use.
Semantic Scholar[0] tends to do this, but their search functionality leaves something to be desired. I tend to use them to discover DOI addresses and find related media if I already know the paper's title (e.g. following up on a bibliography without links, as is the norm in many publications).