Hacker News | saintx's comments

According to a speech given in 2016 by Dr. Michael Bracken, an epidemiologist from Yale University, as much as 87.5% of biomedical research is wasted or inefficient.

To his point, "Waste is more than just a waste of money and resources. It can actually be harmful to people's health."

> He backed his staggering statistic with these additional stats: 50 out of every 100 medical studies fail to produce published findings, and half of those that do publish have serious design flaws. And those that aren’t flawed and manage to publish are often needlessly redundant.
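
(The quote doesn't spell out the arithmetic behind 87.5%, but the figure is consistent with each of those three filters halving the useful pool. That reading is mine, not Bracken's stated derivation:)

    # Hedged reconstruction of the 87.5% figure, assuming each stage
    # independently halves the pool of useful studies:
    published = 0.5          # half of studies produce published findings
    sound = published * 0.5  # half of those are free of serious design flaws
    novel = sound * 0.5      # half of those aren't needlessly redundant
    print(f"useful: {novel:.1%}, wasted: {1 - novel:.1%}")
    # -> useful: 12.5%, wasted: 87.5%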

What we need is NOT more funding; we desperately need to improve the research and funding processes to make them more relevant, more efficient, and more reliable.

1. Publicly funded studies should yield open research data that is freely available, so that studies can be repeated and experimental methodologies can be scrutinized and improved.

2. We need to prioritize randomized clinical intervention trials over weak and questionable epidemiological surveys that often only muddy the waters and hinder our ability to draw sound conclusions.

3. Consequentialism should drive research funding. We need better and more formalized ways to identify gaps in our current knowledge, and to estimate the potential impact of research before funding it. We shouldn't keep allocating the current proportions of funding to subjects that are already very well understood or unlikely to drive policy and decision making. For example, more studies showing that exercise is good for you aren't likely to have a large impact going forward.

Here's a link to more in-depth coverage of Dr. Bracken's speech:

https://justthenews.com/politics-policy/coronavirus/while-ni...


The thing about science is that it's quite unpredictable. While I'm sure we can increase the efficiency of the system, I'm not sure where the efficiency ceiling lies: even with great processes, it's simply not possible to know beforehand which approaches will be successful (if you knew, it wouldn't require research).

I mean, look at the proportion of software projects that fail, which I'd estimate to be at least 50%. And software engineering operates with far fewer unknowns than research does.

Physics research is similar: much of it does not yield world-changing technology, or anything useful at all. I wouldn't call it useless, though. Unsuccessful projects can still provide inspiration for new research avenues, and even when the research fails, the researcher (hopefully) gets better at doing research in the process, so the chances of producing something good the next time around increase.

I think the most promising avenue of increasing research productivity is to make it possible for more people to do quality research. Talent is everywhere but opportunity is not, so let's create more opportunity.


Absence of a result is still a result; it's just not publishable. There are very few journals that will take a paper that boils down to "we tried some things to solve this problem, and they didn't work, and not even in an interesting way".

Sometimes scientific progress goes "boink". Consequentialism is dangerous. Researchers, like everyone, need to be able to fail.


If the attempt is novel, wouldn't it be worthwhile to share your failed results to help prevent others from going down the same dead-end path you just did?


Yes. But your (financial, career) incentives are against doing that.


Here's a link to the actual source, rather than that garbage news article: https://nihrecord.nih.gov/2016/07/01/much-biomedical-researc...


It could be that there is too much money in science: too many under-qualified people jamming up the system, requiring more administration and overhead to make them effective, and flooding the information channels with bad studies, reducing the signal-to-noise ratio.


If things aren't highly reproducible, similar studies aren't needlessly redundant.


People make a lot of noise about the missing function keys. But the keys that I miss the most on my 2018 Macbook Pro are the "E" and "SHIFT" keys.


I donn't miss any keeys, somee eeveenn work twice as eefficieennt.


Wait til the CMD key loses responsiveness. Hate having to CMD+V multiple times to paste something. Maybe I should just reprogram the option key or go back to my old Mac where all the keys work flawlessly.


This is the most annoying part of their keyboards.

Didn't copy or didn't paste. Or both.


How do you like pasting a TAB into your text editor/terminal every time you want to switch windows?


I have lost responsiveness in both my CMD key and the E key. Apple agreed to replace the front panel of my MBP for free, so at least there's that.


I also plan on getting it fixed, but from what I understand, Apple is putting the same keyboard in so the issue will just reappear after a few months.


Just switch to vim keybindings


Agreeed


Dudeee, samee!!!


This freaks me out soo much


For others who have the annoying keyboard issue being hinted at here (double key-press) and who, like me, can't be without their machine for weeks on end while they ship it off to get a new keyboard, I've been able to fix the issue by using Unshaky: https://github.com/aahung/Unshaky
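
For the curious, the core of the fix is just per-key debouncing. A toy Python sketch of the idea (not Unshaky's actual Swift implementation; the 40 ms threshold is my own placeholder):

    import time

    THRESHOLD_MS = 40  # placeholder; Unshaky exposes a configurable threshold
    _last_down: dict[str, float] = {}

    def should_dismiss(key: str) -> bool:
        """Treat a key-down as a bounce if the same key fired very recently."""
        now = time.monotonic()
        prev = _last_down.get(key)
        _last_down[key] = now
        return prev is not None and (now - prev) * 1000 < THRESHOLD_MS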


The repair program in the US is currently on a <48 hour door-to-door turnaround. Still unworkable for some people, but you can get your laptop repaired in two days of PTO.


For m, it's "" ad "" kys.


I once saw a heat map of the MacBook Pro keyboard after a while in use. Heat spiked around the WEASD location. Maybe it's where the CPU is located, but the extra heat could be why those keys always seem to be the ones that go (mine were S and D, and sometimes E). But you never know... this other MacBook Pro is from mid-2014, and its keys work fine.


Makes sense if you think about it. Which user cohort needs the lowest possible latency for keyboard input? Gamers. And which keys are the most used in games? WASD. So it’s logical that these keys would be the closest to the CPU to minimize signaling latency!


I don’t think “heat map” means what you think it means.


Or lots of people playing FPSs on their MBPs?


Seems unlikely given how bad Macs still are for gaming (and how expensive they are compared to other laptops which are better equipped for it.)


I think the OP meant literal heat, as generated by the CPU or other internals.


Yes, but zelos suggests that the reason these keys are problematic might not be related to heat but due to them being used more.


I did mean actual heat, but the commenter's observation about WASD also probably contributes to why those keys in particular go bad. I don't know whether the heat source underneath them or the WASD gaming use is more of a factor though.


Is this a broken keyboard joke?


ys


It's probably referring to the fact that the keyboards with butterfly keys fail a lot, with dust rendering some keys unresponsive.


And I just learned that the E key sits on top of the CPU on some models, and that the heat could be the core of the problem.


The CPU core?


T and N went first for me

It's like a global Monte Carlo letter-frequency measurement experiment
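
(The joke holds up: the keys people in this thread report dying — E, T, N, S — track English letter frequency. A quick sanity check on a stand-in sample; use a real corpus for a better estimate:)

    from collections import Counter

    sample = (
        "it was the best of times, it was the worst of times, "
        "it was the age of wisdom, it was the age of foolishness"
    )
    freq = Counter(c for c in sample if c.isalpha())
    print(freq.most_common(5))  # t, s, e lead here: the usual suspects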


Yeah. My "F" key broke in a way that isn't fixable without replacing the whole thing (one of the ultra tiny pins that keeps the keycap in place snapped off). At least you get two shift keys.



The arrow keys are the biggest pain point for me.


"If I were misusing a word."

Just trying to help ;-)


Can you elaborate? I thought 'was' is for 1st person (I) and 3rd person (he/she), and 'were' is for 2nd person (you, your, etc.).

Update: So I just googled around and learned that "were" is used for the subjunctive mood i.e. wishes/thoughts. Good to know!


Your update is correct. Verbs have moods, which is so cool to me. An article/blog about subjunctive verbs: http://www.quickanddirtytips.com/education/grammar/subjuncti...


What we call "Computer Science" is the discrete analogue of differential equations. It has absolutely nothing to do with software engineering. CS programs don't teach software engineering, which can be thought of as "how to design, build, grow, and maintain an effective software system in a team environment." Since most universities prioritize research faculty over teaching faculty, software engineering skills are generally undeveloped in fresh college graduates, and tend to be passed along to junior engineers by mentors and senior engineers on software teams.

Shy of universities hiring prominent open source software developers to serve as part-time teaching faculty, I don't know how we could push the transfer of these skills into the university setting and expose students to this knowledge at an earlier age.


Although it's a pleasant read, it doesn't take into account opportunity cost. Everything I do every day requires me not to do something else. There are valuable jobs that aren't urgent that never get done until I automate my way out of my current role. Russell always did treat economics as a zero-sum game, when in reality it's quite expansive.

There are also jobs that need doing today and are physically possible to do, but nobody is doing them because we haven't invented them or realized they are possible yet. The Romans could have employed scientists to photograph the surface of Pluto. Physics hasn't fundamentally changed since then; only our understanding of what is possible has changed. A thousand years from now, people will marvel at all the jobs we in the early 21st century could have been doing to advance the quality of life for people around the world (and indeed all life on Earth), had we only known those things were possible.


I love the way they used color and plain English to describe that function. It would be awesome if a general app for this sort of color-coded simple translation were available to help kids learn about mathematics. It'd have to be more inclusive of people with dichromacy and anomalous trichromacy, but there could be settings in the app to compensate for that.
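
A sketch of what that compensation setting might look like, using the Okabe-Ito palette (a commonly cited colorblind-safe set). The role names and settings structure here are hypothetical, just to illustrate the idea:

    # Okabe-Ito colors stay distinguishable under the common forms of
    # dichromacy and anomalous trichromacy.
    OKABE_ITO = {
        "orange": "#E69F00", "sky_blue": "#56B4E9", "green": "#009E73",
        "vermillion": "#D55E00", "blue": "#0072B2", "black": "#000000",
    }

    # Map semantic roles in an expression to colors; a settings screen
    # could swap this mapping (or the whole palette) per user.
    ROLE_COLORS = {
        "variable": OKABE_ITO["orange"],
        "operator": OKABE_ITO["blue"],
        "constant": OKABE_ITO["green"],
        "function": OKABE_ITO["vermillion"],
    }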


When a publication like The Atlantic Monthly, one of the most venerable and critically acclaimed bastions of the English language, literature, and culture, refers to them as "Legos" (http://www.theatlantic.com/entertainment/archive/2014/02/-em...), it means you can finally crawl down off your absurd high horse and give it a rest. Thank you for your service, sir. Here's your lapel pin. There's a doctor down the hall if you need to talk about your feelings.


A good roadmap for a new distributed web should be broken down by OSI layer: showing which protocols and technologies need to be replaced and which layers of the OSI model they span, and identifying single points of failure lower in the stack that must be accommodated. Too few people understand how brittle the web is, given its reliance on the "magical" underpinnings of the Internet continuing to "just work".

For example, let's say we want privacy, anonymity, and high availability for something fundamental like name lookups. It's not enough to simply replace DNS with namecoin (L7) if there's a critical vulnerability in openssl on Linux that could force a fork in the network, possibly leading to existing blocks getting orphaned (L6); if every single session that goes through AT&T gets captured, and the corresponding netflow stored in perpetuity for later analysis and deanonymization (L5); if this application's traffic could be used for reflection amplification attacks (L4) due to host address spoofing (L3). One might try to get around those issues by direct transmission of traffic between network endpoints (asynchronous peer-to-peer ad hoc wireless networks via smartphones or home radio beacons, for example), but then you not only need to deal with MAC address spoofing and VLAN circumvention (L2), but also with radio signal interference from all the noisy radios turned up to max broadcast volume, shouting over one another, trying to be heard (L1), and accomplishing little more than forcing TCP retransmissions higher up in the stack.
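
To restate that layer-by-layer argument as data (my paraphrase of the examples above, not an exhaustive threat model):

    # Each OSI layer the hypothetical distributed name-lookup app depends
    # on, paired with the failure mode named above.
    THREATS_BY_LAYER = {
        "L7 application":  "namecoin replaces DNS, but inherits everything below",
        "L6 presentation": "critical openssl bug forks the network, orphaning blocks",
        "L5 session":      "carrier-scale netflow capture enables deanonymization",
        "L4 transport":    "traffic abused for reflection/amplification attacks",
        "L3 network":      "host address spoofing makes those attacks possible",
        "L2 data link":    "MAC spoofing and VLAN circumvention on ad hoc links",
        "L1 physical":     "radio interference from saturated max-power broadcasts",
    }

    for layer, threat in THREATS_BY_LAYER.items():
        print(f"{layer:16} {threat}")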

And really what's the point, when you can't even trust that the physical radios in your phone or modem aren't themselves vulnerable to their fundamentally insecure baseband processor and its proprietary OS? Turns out, what you were relying on to be "just a radio" has its own CPU and operating system with their own vulnerabilities.

Solving this from the top down with a "killer app" is impossible without addressing each layer of the protocol stack. Each layer in the network ecosystem is under constant attack. Every component is itself vulnerable to weaknesses in all the layers above and below it. Vulnerabilities in the top layers can be used to saturate and overwhelm the bottom layers (like when WordPress sites are used to commit HTTP reflection and amplification attacks), and vulnerabilities in the lower layers can be used to subvert, expose, and undermine the workings of the layers above them. The stuff in the middle (switches) is under constant threat of misuse from weaknesses both above AND below.

It might be tempting for an app developer to read this blog post and think, "Oh wow, what a novel idea! Why is nobody doing this?" But in reality, legions of security and network researchers, as well as system, network, and software engineers around the world, toil daily to uncover and address the core vulnerabilities that hinder these sorts of efforts.


The economic motivations that drive FOSS development haven't really been done justice in the peer-reviewed literature on the subject. We used to say "Money, Glory, and Fun" were the reasons, and to some extent that was and remains true. However, I think there's always been quite a lot more to it. After the collapse of the dot-com bubble, there were a LOT of developers, particularly junior level developers, who couldn't find the kind of work they wanted and hoped to find after college, and committing to Open Source projects was a great way to get a kind of apprenticeship with some of the best development teams in the world. Participating in those efforts exposed new developers to new ideas, new ways of building software with distributed development teams, and more often than not exposure to world-class software. The monetary value of that sort of software engineering apprenticeship is hard to gauge, as you typically can't get its equivalent in a corporate internship, and many university CS programs rightly prioritize computer science over software engineering skills.


Circa 2009, while attending graduate school at the University of Minnesota, I was a student of Dr. Nick Hopper, whose CS research team was intensely focused on ways to deanonymize Tor traffic using an impressive variety of techniques. One that stood out was using statistical analysis of netflow to correlate browsing patterns. Considering that last-mile bandwidth providers also gather netflow and often provide flow data to three-letter agencies, being able to map flows from known exit nodes to last-mile service providers isn't rocket surgery. After an early exposure to some of their research, I never placed any trust in Tor. I still have a quote from Dr. Hopper on my laptop login screen, to serve as a reminder: "The problem with privacy on the Internet is that people believe it exists."
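
The core of the flow-correlation idea is simple enough to sketch. This is a toy illustration of the general technique (my own simplification, not the group's actual method):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated per-second byte counts for one client's browsing session...
    session = rng.poisson(lam=50, size=300).astype(float)

    # ...as observed (with noise and unrelated traffic) at a Tor exit and
    # at the client's last-mile ISP link.
    at_exit = session + rng.normal(0, 5, size=300)
    at_isp = session + rng.normal(0, 5, size=300) + rng.poisson(20, size=300)

    # An unrelated flow, for comparison.
    unrelated = rng.poisson(lam=50, size=300).astype(float)

    def flow_similarity(a, b):
        """Pearson correlation of two traffic-volume time series."""
        return np.corrcoef(a, b)[0, 1]

    print(flow_similarity(at_exit, at_isp))    # high: same underlying session
    print(flow_similarity(at_exit, unrelated)) # near zero: different sessions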


Tor specifically doesn't resist pervasive flow monitoring, and its inability to resist that attack is discussed thoroughly in the documentation, at least twice (in both the general-usage and threat-model sections).

The attack you say stands out is literally just implementing the most obvious thing that Tor explicitly doesn't defend against. It's just a demonstration that Tor's threat model is accurate, and not a weakness in Tor that people were unaware of.

That you had a knee-jerk reaction to stop using Tor for what it's good at, once you realized it can't do everything, is exactly the all-or-nothing, it-must-be-perfect mentality that is the enemy of the good.

It's people like you, with your all-or-nothing extremism, who undermine reasonable discussions about the partial steps we could take.

