EEG--reading signals from the brain--is pretty hard. But EMG--capturing muscle contractions in the arm--produces much cleaner data. This can then be fed into a variety of machine learning algorithms that map high-fidelity time-series data to discrete signals or continuous gestures for which we have appropriate training data.
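To make that concrete, here's a toy sketch of that kind of mapping: windowed RMS features fed to a generic off-the-shelf classifier. The data, labels, and model choice here are all made up for illustration, not anyone's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(emg, win=200, hop=100):
    # Slice (samples, channels) EMG into overlapping windows and compute per-channel RMS.
    feats = []
    for start in range(0, len(emg) - win + 1, hop):
        w = emg[start:start + win]
        feats.append(np.sqrt((w ** 2).mean(axis=0)))
    return np.array(feats)

# Made-up data: 60 s of 8-channel "EMG" at 1 kHz, plus a fake gesture label per window.
rng = np.random.default_rng(0)
emg = rng.normal(size=(60_000, 8))
X = window_features(emg)
y = rng.integers(0, 4, size=len(X))  # 4 hypothetical gesture classes

clf = RandomForestClassifier(n_estimators=100).fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))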
Can you provide any technical detail about what is unique or novel about what your company does? Neither the Wired article nor your company webpage has any useful information on what might differentiate you from the countless EMG devices and hobbyist setups out there.
Can we actually visit your office/research labs in NYC? One of my bosses loves this stuff and is always looking down the road. We would likely use it for advertising at our company or understanding how consumers interact with our products.
It's not just you. That title implies that brain signals per se aren't “useful information” to begin with. I wonder how any of us are able to read that headline at all.
If you find this topic interesting, Neuralink is hiring. We’re looking for a lot of different backgrounds across applied physics, biomedical engineering, software and hardware. Though Neuralink sounds cool - I know many people are skeptical - the reality is even cooler.
Especially if you’re great at firmware, robotics control software, or computer vision, get in touch! Either through the links on the website, or there’s an email in my HN profile.
Neuralink is certainly on my radar as a company to watch, but I thought it was more in an invite-only, Ph.D.-heavy theory stage than a hire-and-build-boards stage. The idea to use an acoustic radio for transmission in a mostly-water medium is very clever, even if I worry about the safety margins of the power density involved in transmitting that much data.
While a lot of people on the team have PhDs, I definitely would not describe Neuralink as "PhD heavy theory stage"! The overwhelming emphasis is on building things that work and testing them against reality as quickly as possible. Anyone capable of doing that regardless of credentials is welcome here. To get a better sense of what I mean, check out the interview advice thread here: https://www.reddit.com/r/Neuralink/comments/70gehm/neuralink...
Friendly advice: this is as likely to land you an interview as is scoring a date with a model by leaving a comment on her/his Instagram stating that you are very hot, with no evidence or even a profile picture.
I'm not sure how this would work in practice. Thoughts are incredibly noisy, and any mechanism that could filter out the noise can basically decipher intent. I'd argue intent deciphering is the actual problem these devices are trying to solve (e.g. I wish I didn't have to type. I wish the computer just knew what I wanted to type, not that I wish the computer simply typed out whatever I thought). Solutions like "oh, just keep thinking of the same thing over and over again" are highly error-prone and will definitely be slower than typing.
Say you wanted to type "[the quick brown brown quick the quick brown quick the brown]". A strategy of repeatedly thinking of the phrase to be typed will be error-prone, regardless of any ML techniques you use, simply because what you wanted to type cannot be known in advance unless the intent itself is known.
Perhaps it'll pick it up as "the quick brown", or "quick brown the", and so forth.
---
Another problem can be illustrated below:
Say you had your brain device on now. You're ready to reply to this.
Horse poop.
Oh, I guess you read the above and now have "horse poop" typed. Well, you can just remove that ---
I don't disagree with your analysis, but I think you're making the assumption that brain signals would only give us the raw words.
So instead of "that is stupid", "add comment", "you're wrong on the internet", "submit", I think we could be able to have more information about the context of the words:
(:commentary "that is stupid")
(:request-interaction "add comment) ; from which the AI figures out it is a button on the screen
(:request-input "You're wrong on the internet")
(:request-interaction "submit)
In essence, maybe it is possible to detect beyond just words and understand the context, just from the signals too.
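To make that concrete, here's a hypothetical sketch of how a UI layer might route that kind of tagged output. All the tag names and handlers are made up; they just mirror the example above.

def type_into_focused_field(text):
    print(f"[type] {text}")          # stand-in for typing into the comment box

def click_matching_control(label):
    print(f"[click] {label}")        # stand-in for finding and clicking a matching button

def handle_intent(tag, payload):
    if tag == "commentary":
        pass                         # inner monologue, not an action
    elif tag == "request-input":
        type_into_focused_field(payload)
    elif tag == "request-interaction":
        click_matching_control(payload)
    else:
        raise ValueError(f"unknown intent tag: {tag}")

decoded = [
    ("commentary", "that is stupid"),
    ("request-interaction", "add comment"),
    ("request-input", "You're wrong on the internet"),
    ("request-interaction", "submit"),
]
for tag, payload in decoded:
    handle_intent(tag, payload)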
Your horse poop example has an equivalent for voice interfaces. A naive implementation might get confused by its own output and interpret it as input. But that problem can be solved by predicting the coupling between the two channels, and subtracting that prediction from the input to get a cleaner signal.
The same procedure would be more difficult to implement for thought-based interfaces, though, because you need to predict the brain's reactions to filter the signal. Maybe you could instead use a non-verbal thought to activate the command interface, so that it doesn't get triggered accidentally.
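For the voice case, that coupling prediction is essentially echo cancellation. Here's a minimal sketch using an LMS adaptive filter; every signal here is synthetic and the parameters are arbitrary, so treat it as a concept demo rather than anything tied to a real device.

import numpy as np

rng = np.random.default_rng(1)
n, taps, mu = 20_000, 32, 0.01

far = rng.normal(size=n)                        # what the device is playing out loud
echo_path = rng.normal(size=taps) * 0.1         # unknown coupling from speaker to mic
near = 0.5 * rng.normal(size=n)                 # what the user is actually saying
mic = np.convolve(far, echo_path)[:n] + near    # the mic hears both

w = np.zeros(taps)                              # adaptive estimate of the echo path
clean = np.zeros(n)
for i in range(taps, n):
    x = far[i - taps + 1:i + 1][::-1]           # most recent far-end samples
    clean[i] = mic[i] - w @ x                   # subtract predicted echo
    w += mu * clean[i] * x                      # LMS update

print("echo power before:", np.mean((mic[taps:] - near[taps:]) ** 2))
print("echo power after: ", np.mean((clean[taps:] - near[taps:]) ** 2))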
That's a weak analogy because the voice input sounds like the voice output in speech recognition.
When I see "horse poop" the thing I'm imagining and hearing in my head is not the individual letters as they appear on the screen at all. I'm hearing it in an internal monologue and I'm imagining and image and a smell.
In other words, the output generates wildly different input so 'subtracting it out' won't really work.
So maybe your natural thought process wouldn't work for said use case, but like keyboards and other user interfaces, you could surely learn to control it fairly simply. Maybe it's as simple as training yourself to imagine the words spelled out (if that was all it took).
We've all seen people who are new to keyboards, mice, and even GUIs attempt to navigate a computer. It's clunky and slow, with a lot of noise in the movements and plenty of mis-clicks.
I think it was here on HN I saw a great write-up once about leaving a gap in UX: that there is a threshold where the user simply has to learn something in order to use the device effectively.
I can't say in so few words how the control interface for thought would end up looking, but I'm reasonably confident it could be brought down to a level the average user could attain.
I think it might be easier to type on a virtual keyboard with your mind than it is to dictate with thought. When you move your hand and fingers there is no conscious effort or thought; will is translated into action. Our body is an interface to the physical world, and we currently make the jump from physical to digital. Advanced technology, I imagine, would simply eliminate the jump and make digital interfaces feel like physical ones.
I predict that BMIs are going to suffer from the same problem as AI, where the applications that are working in the short-term get very overestimated because they are confused with the long-term where you create a singularity. If you had a BMI that could read/write the entire brain on neuron-level resolution, you could create computer back-ups of people, and if hardware were fast enough you could create superhuman intelligence. If you just have cochlear implants and prosthetics, the best case is a world where nobody is impaired, which is good, but still very far from a singularity. The Neuralink version is that if you can do telepathy, that might be valuable in some situations, but it will probably just be like faster email until the computers become smarter than us.
I once met an ex-apple engineer who created a hat that would read your thoughts and play the song you were thinking about. It only worked for certain people and had a limited playlist to choose from but it was really cool watching your "brainwaves" on a screen and then thinking "Daft Punk Get Lucky" and having it play on the speakers.
What if you could detect subvocalization somehow instead of relying on a very noisy data source (brain signals), which I see becoming a roadblock... subvocalization would be like being able to chat without typing... you would still be interacting with a UI that would make sure you don't give out your bank card number, etc.
Maybe you could even hold up your phone and it would beam some sort of ultrasound or laser to detect tiny movements in the larynx (I have no idea what I'm talking about), but it seems like there's a patent in the works that involves physically attaching sensors...
Back when I had to write a lot for my courses, I was wondering the same about the usefulness of EEGs [1]. At times all I wanted was to lie on my bed, point a projector to the ceiling, and write.
Alas, the tech and our understanding of neuroscience are just not there yet, but maybe they will be in a few decades?
There's a lot more in the way of interesting discussion in the link if you would like to read more.
EEGs (at least non surgical ones) provide only the highest level information about what's going on inside of brains. It's a bit like looking at a city as you're flying 10,000 feet above it. You can figure out when it's rush hour, but you're not going to be able to tell what the most popular restaurant is.
The most sophisticated EEG systems actually train your brain to use the EEG, not the other way around.
This is one of the first broad-audience articles I've seen that actually acknowledges neuronal firing rates as an important practical consideration.
Usually it's simplified in explanations as "binary", on or off. This isn't wrong for any instant in time (and is sometimes good enough for conceptual models), but in reality the firing rate varies as a function of the stimulus. Analog, if you like...
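To illustrate the rate-coding idea, here's a toy simulation of an idealized Poisson neuron whose firing rate tracks a made-up stimulus intensity. It isn't a claim about real neurons, just the concept: each bin is binary, but the information is in how often spikes occur.

import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                            # 1 ms time bins
duration = 1.0                        # seconds per stimulus level
n_bins = int(duration / dt)

for stimulus in (0.1, 0.5, 1.0):      # arbitrary stimulus intensities
    rate = 10 + 80 * stimulus         # Hz, a made-up linear rate function
    spikes = rng.random(n_bins) < rate * dt   # each bin: spike (1) or no spike (0)
    print(f"stimulus={stimulus:.1f} -> {spikes.sum()} spikes in 1 s (expected ~{rate:.0f})")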
It wouldn't be faster: coding is not input-constrained but thinking-constrained. So just being able to connect your thoughts to the input method wouldn't change much; the computer really needs to augment those thoughts instead. The trick would be using this tech to create a much tighter feedback loop with the computer, but just having it isn't sufficient.
Sure, but perhaps the reduced mental load from having your thoughts instantaneously actualize would free up capacity for a higher quantity and quality of ideas. The somewhat faster feedback loop should also improve reward seeking behavior.
I disagree. Many times coding is input-constrained: common refactoring patterns are one obvious example. You enter a tight input-evaluate-respond-repeat loop where you evaluate the refactoring change at every location where the text changes. Very few problems worth solving can be solved with nothing better than pen and paper as an editor, and thought-controlled, AST-aware editors are the logical conclusion of that.
Hard problems would not be better aided by better input. In my experience, very few money making products deal with hard problems, just boring problems.
My main problem with coding is not the input, it's the visual feedback. I hope that before the end of my career I get to write code which is not just a long series of ASCII text files.
The big question for me, at least: are these signals uniform, or do they at least have some similarities for specific concepts or actions from person to person?
If every brain has its own language, the effort is an order of magnitude higher.
Some aspects of the signals are individual-dependent, some are consistent across individuals (for some definition of individual-dependent and consistent). I know that's vague, but it's accurate.
Want to find out more? Come visit our offices in NYC!