Turning brain signals into useful information (economist.com)
169 points by aficionado on Jan 11, 2018 | 65 comments


I work at CTRL-labs, a startup focused on electromyography (EMG) based control devices. This article has a bit more technical detail about what we do: https://www.wired.com/story/brain-machine-interface-isnt-sci...

EEG, reading signals from the brain, is pretty hard. But EMG, capturing muscle contractions in the arm, produces much cleaner data. This can then be fed into a variety of machine learning algorithms that map high-fidelity time series data to discrete signals or continuous gestures for which we have appropriate training data.
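
To make that concrete, here is a minimal sketch of such a pipeline in Python. The windowing, the features (RMS and zero crossings), and the random-forest classifier are illustrative choices on toy data, not necessarily what we use in production:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(emg, win=200, step=50):
        # Slide a window over raw EMG (n_samples, n_channels) and compute
        # simple per-channel features: RMS amplitude and zero-crossing count.
        feats = []
        for start in range(0, len(emg) - win + 1, step):
            w = emg[start:start + win]
            rms = np.sqrt((w ** 2).mean(axis=0))
            signs = np.signbit(w).astype(np.int8)
            zc = (np.diff(signs, axis=0) != 0).sum(axis=0)
            feats.append(np.concatenate([rms, zc]))
        return np.array(feats)

    # Toy data: "rest" is low-amplitude noise, "fist" is high-amplitude.
    rng = np.random.default_rng(0)
    rest = rng.normal(0, 0.1, size=(4000, 2))
    fist = rng.normal(0, 1.0, size=(4000, 2))

    X = np.vstack([window_features(rest), window_features(fist)])
    y = np.repeat([0, 1], len(X) // 2)  # one gesture label per window
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.score(X, y))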

Want to find out more? Come visit our offices in NYC!


Is the training data unique for a person, or can you reuse the models between people?


Are you saying people can just show up to your office for a...tour?


In exchange for adding your EMG sensor data to their machine learning dataset while you walk around? ;-)


Worth it.


I appreciated and upvoted your post with the link in Wired. However, am I getting something wrong in the article?

"The innovation lies in picking up EMG more precisely—including getting signals from individual neurons—than the previously existing technology..."

EMG by definition captures muscle activity. How can you claim it picks up the signals of neurons, let alone individual ones?!

(only by proxy)


Can you provide any technical detail about what is unique or novel about what your company does? Neither the Wired article nor your company webpage has any useful information on what might differentiate you from the countless EMG devices and hobbyist setups out there.


Can we actually visit your office/research labs in NYC? One of my bosses loves this stuff and is always looking down the road. We would likely use it for advertising at our company or understanding how consumers interact with our products.


> Want to find out more? Come visit our offices in NYC!

I'd love to! But I can't figure out where you're located.


I don't think I've ever used my brain signals in a useful way, so this is a step in the right direction.


It's not just you. That title implies that brain signals per se aren't “useful information” to begin with. I wonder how any of us are able to read that headline at all.


"Useful information" means information that someone is willing to pay for, I think...


I suspect you are right, given the publishing venue. But I'd still find it a rather narrow definition of usefulness.


If you find this topic interesting, Neuralink is hiring. We're looking for a lot of different backgrounds across applied physics, biomedical engineering, software, and hardware. Though Neuralink sounds cool (I know many people are skeptical), the reality is even cooler.

Especially if you're great at firmware, robotics control software, or computer vision, get in touch! Either through the links on the website, or through the email in my HN profile.


Neuralink is certainly on my radar as a company to watch, but I thought it was more in an invite-only, Ph.D.-heavy theory stage than a hire-and-build-boards stage. The idea of using an acoustic radio for transmission in a mostly-water medium is very clever, even if I worry about the safety margins of the power density involved in transmitting that much data.


While a lot of people on the team have PhDs, I definitely would not describe Neuralink as a "PhD-heavy theory stage"! The overwhelming emphasis is on building things that work and testing them against reality as quickly as possible. Anyone capable of doing that, regardless of credentials, is welcome here. To get a better sense of what I mean, check out the interview advice thread here: https://www.reddit.com/r/Neuralink/comments/70gehm/neuralink...


The Neuralink page copyright needs an update. ;)


hey, you already acquihired one of my former professors...


Remote OK?


Depends, but probably not. Basically everything we do involves hardware one way or another, so it's difficult to do remotely :(


Are you interested in taking in a prodigal son?

If you are willing to take on a generalist with unlikely skills please email [email protected].


Friendly advice: this is as likely to land you an interview as is scoring a date with a model by leaving a comment on her/his Instagram stating that you are very hot, with no evidence or even a profile picture.


Wow that model tip sounds cool. Thanks for the idea bro!


So you're saying there's a chance?


Nope, he's being sarcastic.


I'm not sure how this would work in practice. Thoughts are incredibly noisy. Any mechanism that could filter out the noise could basically decipher intent. I'd argue intent deciphering is the actual problem these devices are trying to solve (e.g. I wish I didn't have to type; I wish the computer just knew what I wanted to type, not that the computer simply typed out what I thought). Solutions like "oh, just keep thinking of the same thing over and over again" are highly error prone and will definitely be slower than typing. Say you wanted to type "[the quick brown brown quick the quick brown quick the brown]": a strategy of repeatedly thinking of the phrase to be typed will be error prone, regardless of any ML techniques you use, simply because it cannot be known in advance what you wanted to type unless you knew the intent.

Perhaps it'll pick it up as "the quick brown", or "quick brown the", and so forth.

---

Another problem can be illustrated below:

Say you had your brain device on now. You're ready to reply to this.

Horse poop.

Oh, I guess you read the above and now have "horse poop" typed. Well, you can just remove that ---

"add comment"

"submit"

Too late.


I don't disagree with your analysis, but I think you're making the assumption that brain signals would carry only the bare words.

So instead of "that is stupid", "add comment", "you're wrong on the internet", "submit", I think we could have more information about the context of the words:

    (:commentary "that is stupid")
    (:request-interaction "add comment") ; from which the AI figures out it is a button on the screen
    (:request-input "You're wrong on the internet")
    (:request-interaction "submit")
In essence, maybe it is possible to detect more than just the words and understand the context, purely from the signals too.


It’s like XML for your mind!


*lisp


People really need to think about plumbing intent: the societal implications of this are grave.

“Fake news” (the original term, not the co-opted one) is an example of a society-affecting convenience that we have no answer for.

Messing with intent is another such area.


Your horse poop example has an equivalent for voice interfaces. A naive implementation might get confused by its own output and interpret it as input. But that problem can be solved by predicting the coupling between the two channels, and subtracting that prediction from the input to get a cleaner signal.

The same procedure would be more difficult to implement for thought-based interfaces, though, because you need to predict the brain's reactions to filter the signal. Maybe you could instead use a non-verbal thought to activate the command interface, so that it doesn't get triggered accidentally.
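
For the curious, the prediction-and-subtraction step is essentially adaptive echo cancellation. A toy LMS sketch with made-up signals (real acoustic echo cancellers are far more elaborate):

    import numpy as np

    def lms_echo_cancel(mic, speaker, taps=32, mu=0.01):
        # Adapt an FIR filter that predicts how the speaker output leaks
        # into the mic, then subtract that prediction from the mic signal.
        w = np.zeros(taps)
        out = np.zeros(len(mic))
        for n in range(taps - 1, len(mic)):
            x = speaker[n - taps + 1:n + 1][::-1]  # most recent speaker samples
            echo_est = w @ x                       # predicted coupling
            e = mic[n] - echo_est                  # hopefully just the user's voice
            w += mu * e * x                        # LMS weight update
            out[n] = e
        return out

    rng = np.random.default_rng(1)
    speaker = rng.normal(size=10000)      # the device's own output
    voice = 0.3 * rng.normal(size=10000)  # the user's actual speech
    mic = voice + np.convolve(speaker, [0.6, 0.3, 0.1])[:10000]

    cleaned = lms_echo_cancel(mic, speaker)
    print(np.mean((mic - voice) ** 2), np.mean((cleaned - voice) ** 2))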


That's a weak analogy because the voice input sounds like the voice output in speech recognition.

When I see "horse poop", the thing I'm imagining and hearing in my head is not the individual letters as they appear on the screen at all. I'm hearing it in an internal monologue, and I'm imagining an image and a smell.

In other words, the output generates wildly different input so 'subtracting it out' won't really work.


So maybe your natural thought process wouldn't work for said use case, but like keyboards and other user interfaces, you could surely learn to control it fairly simply. Maybe it's as simple as training yourself to imagine the words spelled out (if that was all it took).

We've all seen people who are new to keyboards, mice and even GUIs attempt to navigate a computer. It's clunky and slow, and there's a lot of noise in the movements and errant clicks made.

I think it was here on HN I once saw a great write-up about leaving the gap in UX: that there is a threshold where the user simply has to learn something in order to use the device effectively.

I can't say in so few words how the control interface for thought would look in the end, but I'm reasonably confident it could be brought down to a level that is more than attainable for the average user.


But it might have patterns which can be recognized. Don't really see the problem.


You just stated the problem. It 'might', or it 'might not', be recognizable.


That's not a problem; that's a potential opportunity, one that isn't as absolute as the claim I was responding to.


I think it might be easier to type on a virtual keyboard with your mind than it is to dictate with thought. When you move your hand and fingers there is no conscious effort or thought; will is translated into action. Our body is an interface to the physical world; we currently make the jump from physical to digital. Advanced technology, I imagine, would simply eliminate the jump and make digital interfaces feel like physical ones.


I think all you've illustrated is how not to design for brains :)

Why would you need to even think in terms of inputs or buttons? Why even use words when you could post or consume thoughts directly?


Horse poop.

Presumably a device like that would use some heuristics to guess the correct words, like your phone's autocorrect.
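
Something like a noisy-channel decoder, in other words. A toy sketch (made-up vocabulary; real autocorrect would also weigh in a language model):

    import difflib

    VOCAB = ["horse", "house", "poop", "pool", "loop"]  # hypothetical decoder vocabulary

    def autocorrect(token):
        # Snap a noisily decoded token to the closest known word.
        match = difflib.get_close_matches(token, VOCAB, n=1, cutoff=0.0)
        return match[0] if match else token

    print(autocorrect("hoarse"), autocorrect("pewp"))  # horse poop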


sets up short range FM radio transmitter on a station your neighbour uses

OK GOOGLE BUY 500 GIANT PINK DILDOS


Yes, this works.


I predict that BMIs are going to suffer from the same problem as AI: the applications that work in the short term get greatly overestimated because they are confused with the long term, where you create a singularity. If you had a BMI that could read/write the entire brain at neuron-level resolution, you could create computer back-ups of people, and if hardware were fast enough you could create superhuman intelligence. If you just have cochlear implants and prosthetics, the best case is a world where nobody is impaired, which is good, but still very far from a singularity. The Neuralink version is that if you can do telepathy, that might be valuable in some situations, but it will probably just be like faster email until the computers become smarter than us.


I once met an ex-apple engineer who created a hat that would read your thoughts and play the song you were thinking about. It only worked for certain people and had a limited playlist to choose from but it was really cool watching your "brainwaves" on a screen and then thinking "Daft Punk Get Lucky" and having it play on the speakers.


Hey, do you have any more info about this? I've been wanting to do something similar.


Uhh, how is that possible?


For a good review of the brain learning to use BMIs, see:

https://www.sciencedirect.com/science/article/pii/S095943881...


Are we witnessing the very first steps on the long road to complete erosion of the privacy of our thoughts, the last remaining facet of privacy?


Yup. I hope you're prepared to live in the woods my friend.


What if you could detect the vocalization somehow, instead of relying on a very noisy data source (brain signals), which I see becoming a roadblock? Subvocalization would be like being able to chat without typing; you would still be interacting with a UI that will make sure you don't give out your bank card number, etc.

Maybe you could even hold up your phone and have it beam some sort of ultrasound or laser to detect tiny movements in the larynx (I have no idea what I'm talking about), but it seems like there's a patent in the works that physically attaches sensors...

https://en.wikipedia.org/wiki/Subvocal_recognition

https://www.youtube.com/watch?v=xyN4ViZ21N0


Back when I had to write a lot for my courses, I was wondering the same about the usefulness of EEGs [1]. At times all I wanted was to lie on my bed, point a projector to the ceiling, and write.

Alas, the tech/understanding of neuroscience is just not here yet, but maybe it will be in a few decades?

There's a lot more in the way of interesting discussion in the link if you would like to read more.

[1]: https://psychology.stackexchange.com/questions/9594/are-ther...


EEGs (at least non-surgical ones) provide only the highest-level information about what's going on inside the brain. It's a bit like looking at a city while flying 10,000 feet above it: you can figure out when it's rush hour, but you're not going to be able to tell what the most popular restaurant is.

The most sophisticated EEG systems actually train your brain to use the EEG, not the other way around.


This is one of the first broad-audience articles I've seen that actually acknowledges neuronal firing rates as an important practical consideration.

Usually it's simplified in explanations as "binary", on or off. This isn't wrong for any instant in time (and is sometimes good enough for conceptual models), but in reality the firing rate varies as a function of the stimulus. Analog, if you like...
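
A toy illustration of that rate coding (Poisson spiking with a made-up linear tuning curve):

    import numpy as np

    def poisson_spikes(rate_hz, duration_s=1.0, dt=0.001, seed=0):
        # At each 1 ms step the neuron fires with probability rate * dt:
        # binary at any given instant, analog in its average rate.
        rng = np.random.default_rng(seed)
        return rng.random(int(duration_s / dt)) < rate_hz * dt

    for stim in [0.1, 0.5, 1.0]:
        rate_hz = 100 * stim  # firing rate grows with stimulus strength
        print(f"stimulus {stim}: ~{poisson_spikes(rate_hz).sum()} spikes/s")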


I hope that before the end of my career I get to write code with just my thoughts. Add a method here, a loop there, refactor this block out, etc.


It wouldn't be faster: coding is not input constrained, but thinking constrained. So just being able to connect your thoughts to the input method wouldn't change much; the computer really needs to augment those thoughts instead. The trick would be using this tech to create a much tighter feedback loop with the computer, but just having it isn't sufficient.


Sure, but perhaps the reduced mental load from having your thoughts instantaneously actualize would free up capacity for a higher quantity and quality of ideas. The somewhat faster feedback loop should also improve reward-seeking behavior.


I disagree. Many times coding is input constrained; common refactoring patterns are one obvious example. You enter a pattern of tight input-evaluate-respond-repeat loops where you evaluate the refactoring change at every location where the text would also change. Very few problems worth solving can be solved without a better editor than pen and paper, and thought-controlled, AST-aware editors are the logical conclusion of that.

Hard problems would not be better aided by better input. In my experience, very few money making products deal with hard problems, just boring problems.


> It wouldn't be faster: coding is not input constrained, but thinking constrained.

Well, it depends on how bad your RSI is. For some people it is input constrained.


Sure, but then this becomes niche to RSI sufferers and other people who can't type easily, like voice input is.


I’d much rather type out a program than write it by hand with a pen and paper though.

“Thinking it” might be the next logical step.


My main problem with coding is not the input, it's the visual feedback. I hope that before the end of my career I get to write code which is not just a long series of ASCII text files.


The big question, for me at least, is: are these signals uniform, or do they at least have some similarities for specific concepts or actions from person to person?

If every brain has its own language, the effort is an order of magnitude higher.


Some aspects of the signals are individual-dependent, some are consistent across individuals (for some definition of individual-dependent and consistent). I know that's vague, but it's accurate.


Very interesting.

Are any of these consistent aspects associated with the type of action or thought?


Now if we could turn useful information into brain signals...


Ya most people's brain signals contain nothing of value ;P



