
Hearing in noise is both what most people want from hearing aids and what they are least equipped to provide.

The traditional solution is an FM system, where you give the person speaking a microphone linked to your hearing aids. There are dedicated ones like Phonak Roger. You could probably also use your phone as a microphone if it's Bluetooth-connected to your headphones or hearing aids.


That sounds awkward.

The tech for isolating a speaker at conversational distances exists. You use half a dozen microphone transducers at minimum (crappy microphone transducers are cheap and quality ones are expensive, so just use a bunch of them), and through a combination of phase and intensity you decode relative location, amplifying the expected phase while suppressing everything that isn't phased like that. Sound is slow, and readily susceptible to real-time triangulation. The math/processing is much easier if the parallaxes are fixed (e.g. the microphones are arranged in a line array on the top band of a rigid pair of smart glasses), but with a little latency it's not prohibitive for a deformable array to solve for its own relative position as well.
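The simplest version of this for the fixed-geometry case is delay-and-sum beamforming. A minimal sketch (array size, spacing, sample rate, and function names below are illustrative assumptions, not from any particular product):

  #include <math.h>
  #include <stddef.h>

  #define NUM_MICS    6
  #define SAMPLE_RATE 48000.0f   /* Hz */
  #define MIC_PITCH   0.01f      /* meters between adjacent mics */
  #define C_SOUND     343.0f     /* m/s */

  /* Delay (in samples) at which a plane wave arriving from `angle`
     radians off broadside reaches mic m, relative to mic 0. */
  static float steer_delay(int m, float angle) {
    return (m * MIC_PITCH * sinf(angle)) / C_SOUND * SAMPLE_RATE;
  }

  /* Delay-and-sum: undo each mic's arrival delay and average. Signals
     phased like the target direction add coherently; everything else
     tends to cancel. Delays are rounded to whole samples here; a real
     implementation would interpolate fractional delays. */
  float beam_sample(const float *mic[NUM_MICS], size_t len, size_t t,
                    float angle) {
    float acc = 0.0f;
    for (int m = 0; m < NUM_MICS; m++) {
      long i = (long)t - lroundf(steer_delay(m, angle));
      if (i >= 0 && (size_t)i < len)
        acc += mic[m][i];
    }
    return acc / NUM_MICS;
  }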


Addressing comments on hearing aid technology:

Often people who lose their hearing want to be able to hear in social situations such as restaurants and family gatherings. In this context, the signal and noise have similar properties and are coming from the same direction. Directionality helps but can only do so much. Noise reduction can make hearing aids more comfortable to wear but doesn't necessarily improve comprehension in challenging situations. Progress here is fantastic -- at the same time it helps to have realistic expectations.

Putting the mic on the person speaking sidesteps the problem -- it's like the rest of the room isn't there.


Least equipped to provide? They've been working on machine learning algos for exactly this purpose for twenty years.

That’s more a solution for far more extreme cases, including actual hearing loss. This would be far more involved than me lining up my ears with their mouth ;)

I read it as:

Dan is a moderator on a forum and his goal is to maintain a level of civil discourse rather than an aggressive style of communication. It's a very specific definition of "violence" for a specific context and perhaps there's room for clearer terminology.


If talking about UI, the flip side is not to harm the user's data. So despite containing errors it needs to be representable, even if it can't be passed further along to back-end systems.

For parsing specifically, there's literature on error recovery to try to make progress past the error.
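A minimal sketch of the classic panic-mode approach (token names and the lexer/grammar hooks are hypothetical): on a bad statement, record the error and skip to the next ';' so the rest of the input still gets parsed.

  #include <stdbool.h>
  #include <stdio.h>

  typedef enum { TOK_SEMI, TOK_EOF, TOK_OTHER } token_kind_t;

  typedef struct {
    token_kind_t tok;
    /* ... lexer state ... */
  } parser_t;

  extern token_kind_t next_token(parser_t *p);    /* hypothetical lexer */
  extern bool try_parse_statement(parser_t *p);   /* hypothetical grammar rule */

  static void advance(parser_t *p) { p->tok = next_token(p); }

  /* Panic-mode recovery: skip ahead to a synchronization point (';')
     so later statements are still parsed and reported, instead of
     aborting on the first error. */
  void parse_statement_recovering(parser_t *p) {
    if (try_parse_statement(p))
      return;
    fprintf(stderr, "parse error; resynchronizing at next ';'\n");
    while (p->tok != TOK_SEMI && p->tok != TOK_EOF)
      advance(p);
    if (p->tok == TOK_SEMI)
      advance(p);   /* consume the ';' and carry on */
  }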


Two that come to mind:

1) Embedded systems typically do not allow data to grow without bound. If they were going to keep debugging data, they'd have to limit it to the last N instances or so. In this case N=0. It seems like the goal here was to send troubleshooting data, not keep it around.

2) Persisting the data may expose the driver to additional risks. Beyond the immediate risks, someone could grab the module from the junkyard and extract the data. I can appreciate devices that take steps to prevent sensitive data from falling into the hands of third parties.


It would be trivial to set it up to only delete old instances when free space goes below a threshold.

If the data can expose the driver to additional risks, then the driver can be exposed by someone stealing the vehicle and harvesting that data. Again, that can be trivially protected against using encryption, which would also protect in the case where communication is disrupted and the tar isn't uploaded/deleted.
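A minimal sketch of that threshold-based cleanup (the directory, threshold, and the assumption that archive names are timestamped so they sort chronologically are all hypothetical; uses POSIX statvfs/dirent):

  #include <dirent.h>
  #include <limits.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/statvfs.h>
  #include <unistd.h>

  #define DEBUG_DIR      "/data/debug"            /* hypothetical */
  #define MIN_FREE_BYTES (16ull * 1024 * 1024)

  static unsigned long long free_bytes(void) {
    struct statvfs vfs;
    if (statvfs(DEBUG_DIR, &vfs) != 0) return 0;
    return (unsigned long long)vfs.f_bavail * vfs.f_frsize;
  }

  /* Delete the oldest archive (oldest = lexicographically smallest
     name, given timestamped names) until the threshold is met. */
  void prune_debug_archives(void) {
    while (free_bytes() < MIN_FREE_BYTES) {
      DIR *d = opendir(DEBUG_DIR);
      if (!d) return;
      char oldest[NAME_MAX + 1] = "";
      struct dirent *e;
      while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;        /* skip . and .. */
        if (oldest[0] == '\0' || strcmp(e->d_name, oldest) < 0)
          snprintf(oldest, sizeof oldest, "%s", e->d_name);
      }
      closedir(d);
      if (oldest[0] == '\0') return;              /* nothing left to delete */
      char path[PATH_MAX];
      snprintf(path, sizeof path, "%s/%s", DEBUG_DIR, oldest);
      if (unlink(path) != 0) return;
    }
  }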


My favorite trick in C is a light-weight Protothreads implemented in-place without dependencies. Looks something like this for a hypothetical blinky coroutine:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  typedef struct blinky_state {
    size_t pc;        /* resume point: the __LINE__ of the last YIELD, 0 at start */
    uint64_t timer;
    ... variables that need to live across YIELDs ...
  } blinky_state_t;

  blinky_state_t blinky_state;

  /* One YIELD per source line; relies on the enclosing switch(s->pc). */
  #define YIELD() s->pc = __LINE__; return; case __LINE__:;
  void blinky(void) {
    blinky_state_t *s = &blinky_state;
    uint64_t now = get_ticks();

    switch(s->pc) {
      case 0:         /* pc is zero-initialized, so the first call enters here */
      while(true) {
        turn_on_LED();
        s->timer = now;
        while( now - s->timer < 1000 ) { YIELD(); }

        turn_off_LED();
        s->timer = now;
        while( now - s->timer < 1000 ) { YIELD(); }
      }
    }
  }
  #undef YIELD
Can, of course, abstract the delay code into its own coroutine.

Your company is probably using hardware containing code I've written like this.

What's especially nice, and what I miss in other languages with async/await, is the ability to mix declarative and procedural code. Code you write before the switch(s->pc) statement gets run on every call to the function. You can put code there that you want to be declarative, like updating "now" in the code above; if I have streaming code it's a great place to copy data.


A cleaner, faster way to implement this sort of thing is to use the "labels as values" extension if using GCC or Clang []. It avoids the switch statement and associated comparisons. Particularly useful if you're yielding inside nested loops (which IMHO is one of the most useful applications of coroutines) or switch statements.

[] https://gcc.gnu.org/onlinedocs/gcc/Labels-as-Values.html
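A minimal sketch of the blinky coroutine from upthread redone with computed goto (same hypothetical get_ticks()/LED helpers; one YIELD per source line, since the label name comes from __LINE__):

  #include <stdint.h>

  typedef struct {
    void *pc;          /* resume address; NULL means start from the top */
    uint64_t timer;
  } blinky_state_t;

  static blinky_state_t blinky_state;

  #define CAT2(a, b) a##b
  #define CAT(a, b) CAT2(a, b)
  #define YIELD() do { s->pc = &&CAT(resume_, __LINE__); return; \
                       CAT(resume_, __LINE__):; } while (0)

  void blinky(void) {
    blinky_state_t *s = &blinky_state;
    uint64_t now = get_ticks();

    if (s->pc) goto *s->pc;   /* computed goto straight to the yield point */

    while (1) {
      turn_on_LED();
      s->timer = now;
      while (now - s->timer < 1000) { YIELD(); }

      turn_off_LED();
      s->timer = now;
      while (now - s->timer < 1000) { YIELD(); }
    }
  }
  #undef YIELD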


In `proto_activities` this blinking would look like this:

  pa_activity (Blinker, pa_ctx_tm(), uint32_t onMs, uint32_t offMs) {
    pa_repeat {
      turn_on_LED();
      pa_delay_ms (onMs);
  
      turn_off_LED();
      pa_delay_ms (offMs);
    }
  } pa_end
Here the activity definition automatically creates the structure to hold the pc, the timer, and any other variables that need to outlast a single tick.


I have used this approach myself, with an almost identical define for YIELD.

If there is just one instance of a co-routine, which is often the case for embedded software, one could also make use of static variables inside the function. This also makes the code slightly faster.
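For the single-instance case that might look like this (a sketch, reusing the hypothetical helpers from upthread; the statics remove the pointer indirection, which is where the slight speedup comes from):

  #define YIELD() pc = __LINE__; return; case __LINE__:;
  void blinky(void) {
    static size_t pc;          /* zero-initialized, so it starts at case 0 */
    static uint64_t timer;     /* no struct, no pointer indirection */
    uint64_t now = get_ticks();

    switch (pc) {
      case 0:
      while (1) {
        turn_on_LED();
        timer = now;
        while (now - timer < 1000) { YIELD(); }

        turn_off_LED();
        timer = now;
        while (now - timer < 1000) { YIELD(); }
      }
    }
  }
  #undef YIELD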

You need some logic if, for example, two co-routines need to access a shared peripheral such as I2C. Then you might also need to implement a queue. Last year I worked a bit on a tiny cooperative polling OS, including a transpiler. I did not finish the project because it was considered too advanced for the project I wanted to use it for. Instead, old-fashioned state machines documented with flow charts were required, the argument being that everyone can read those. I feel that the implementation of state machines is error-prone, because it is basically implementing goto statements where the state acts like the label. My experience is that nasty bugs are easily introduced if you forget a break statement at the right place.


Yes, 100%. State transitions are "goto" by another name. State machines have their place but tend to be write-only (hard to read and modify), so are ideally small and few. Worked at a place that drank Miro Samek's "Practical Statecharts in C/C++" kool-aid... it caused lots of problems. So instead I use this pattern everywhere I can linearize control flow. And if I need a state machine with this pattern, I can just use goto.

Agreed re: making the state a static variable inside the function. Great for simple coroutines. I made it a pointer in the example for two reasons:

- Demonstrates access to the state variables with very little visual noise... "s->"

- For sub-coroutines that can be called from multiple places such as "delay" you make the state variable the first argument. The caller's state contains the sub-coroutine's state and the caller passes it to the sub-coroutine. The top level coroutine's state ends up becoming "the stack" allocated at compile-time.
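A minimal sketch of that shape, with a reusable delay sub-coroutine (names are hypothetical; same one-YIELD-per-line trick as above):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  typedef struct {
    size_t pc;
    uint64_t start;
  } delay_state_t;

  #define YIELD() s->pc = __LINE__; return false; case __LINE__:;
  /* Call once per tick; returns true once `ticks` have elapsed. */
  bool delay(delay_state_t *s, uint64_t now, uint64_t ticks) {
    switch (s->pc) {
      case 0:
        s->start = now;
        while (now - s->start < ticks) { YIELD(); }
    }
    s->pc = 0;   /* reset so the caller can reuse this instance */
    return true;
  }
  #undef YIELD

  typedef struct {
    size_t pc;
    delay_state_t delay;   /* sub-coroutine state lives inside the caller's,
                              so the whole "stack" is sized at compile time */
  } blinky_state_t;

  /* Inside blinky()'s switch, a timed wait then becomes:
       while (!delay(&s->delay, now, 1000)) { YIELD(); }  */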


Worked at a place that drank the hierarchical state machine kool-aid. Yeah.

https://en.wikipedia.org/wiki/UML_state_machine#Hierarchical...


Yeah. Protothreads (with PT_TIMER extensions) is one of the classic libraries, and it was also used in my own early embedded days. I was totally fascinated back then by the way it turns ergonomic function-like macros into state machines.


Humans are also created/derived from other works, trained, and used as a tool by humans.

It's interesting how polarizing the comparison of human and machine learning can be.


I would assume EasyTier devs use it to connect their devices within China so the great firewall isn't involved. Attempts to cross the firewall with EasyTier are detectable without things like Tor's pluggable censorship evasion transports.


Agreed: caution is advisable.

At the same time shaping perception is key. Authorities may push a narrative that affords them greater control. If that's not the world you want, the first step is not to legitimize it.


The effective number of bits (ENOB) is only ~8.7. Originally designed to have an ENOB of 9+ bits, but silicon bugs lowered it.

There is also some noise from the Pi Pico's power supply, which can actually be a good thing (it acts as dither) if you're willing to average or filter over a large number of samples.

More details here: https://pico-adc.markomo.me/
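A minimal sketch of that averaging trade-off, assuming the Pico SDK's adc_read() (12-bit samples): averaging 4^n samples and right-shifting by n gains roughly n bits of resolution, provided the noise behaves like dither.

  #include "hardware/adc.h"
  #include <stdint.h>

  /* Oversample-and-decimate: result is scaled to 12 + extra_bits bits.
     Keep extra_bits modest (<= ~4) so the loop stays fast and the
     accumulator can't overflow. */
  uint32_t adc_read_oversampled(unsigned extra_bits) {
    uint32_t n = 1u << (2 * extra_bits);   /* 4^extra_bits samples */
    uint32_t acc = 0;
    for (uint32_t i = 0; i < n; i++)
      acc += adc_read();
    return acc >> extra_bits;
  }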


It should be noted that this only applies to the RP2040; the ADC silicon bugs were fixed in the RP2350.


That's what I'm using ATM. That ENOB number is ridiculous!


Done for effect: it felt to the OP as if it were the present, so the writing conveys that, while elsewhere making it clear the arrest was not in the present.

