All of the receivers I'm aware of decode the signal in hardware and do the linear algebra to find their location in software. I'm speaking mostly based on the cheap u-blox chips and the partially open-source NavSpark chips I've dealt with.

The problem is that the signal-processing part of GPS is quite computationally demanding. I think it only became possible to do the full real-time decoding on a laptop around ten years ago. At startup, you need to find the GPS signals, which means searching for all 32 possible satellite code patterns across the range of possible Doppler shifts. During testing, this search was what took most of the startup (cold start) time. You need roughly 4-6 satellites to get a position, so the search has to run in parallel. Once you've found a satellite, it takes another 30 seconds to receive its position, because GPS signals are very slow at 50 bits/sec.
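
As a rough illustration of why acquisition is the expensive part, here is a minimal sketch of the search in Python/NumPy. The inputs are hypothetical: `samples` is one millisecond of complex baseband samples and `code` is one satellite's C/A code replica, resampled so both arrays are the same length. FFT-based circular correlation tests every code phase at once for each trial Doppler bin:

    import numpy as np

    FS = 2_046_000  # assumed sample rate (Hz), 2 samples per 1.023 MHz chip
    DOPPLER_BINS = np.arange(-5000, 5001, 500)  # +/-5 kHz sweep, 500 Hz steps

    def acquire(samples, code):
        """Search code phase x Doppler for one satellite."""
        n = len(samples)
        t = np.arange(n) / FS
        code_fft = np.conj(np.fft.fft(code))
        best = (0.0, None, None)  # (peak power, Doppler, code phase)
        for dopp in DOPPLER_BINS:
            # Wipe off the carrier at this trial Doppler frequency.
            wiped = samples * np.exp(-2j * np.pi * dopp * t)
            # Circular correlation against the replica: all code phases at once.
            power = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) ** 2
            if power.max() > best[0]:
                best = (power.max(), dopp, int(power.argmax()))
        return best

    # A cold start repeats this for all 32 PRNs, which is why the
    # search dominated startup time in testing.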

By comparison, actually solving for location is a simple linear-algebra problem with 4 unknowns (lat, long, alt, time; though in a more convenient coordinate system), and you only do it a few times per second. The hardware handles the higher-rate signal-phase estimation. For example, the NavSpark is a single-core SPARC microcontroller running at 100 MHz with 200 kB of RAM. That's enough to compute 50 solutions per second, though they reduced it to 10 Hz to leave room for a user program.
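
For a concrete picture of that solve, here is a minimal sketch with hypothetical inputs: `sat_pos` is an (N, 3) array of satellite ECEF positions and `pseudoranges` holds the N measured ranges in meters. It linearizes the pseudorange equations around the current guess and iterates least squares over the 4 unknowns:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_pos, pseudoranges, iters=5):
        x = np.zeros(4)  # receiver ECEF x, y, z plus clock bias, all in meters
        for _ in range(iters):
            diff = sat_pos - x[:3]              # vectors to each satellite
            ranges = np.linalg.norm(diff, axis=1)
            predicted = ranges + x[3]           # geometric range + clock term
            residual = pseudoranges - predicted
            # Jacobian: negated unit line-of-sight vectors, plus 1 for the clock.
            H = np.hstack([-diff / ranges[:, None], np.ones((len(diff), 1))])
            dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
            x += dx
        return x[:3], x[3] / C  # ECEF position (m) and clock bias (s)

With 4 or more satellites this converges in a handful of iterations, which is why it fits comfortably in a 100 MHz microcontroller's time budget.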

A ton of work goes into caching strategies to narrow down that initial search space. Modern chips will let you load in exactly which satellites to expect overhead (e.g. based on position and orbit info from the cell network). There is a whole separate caching strategy for offline devices, based on an approximate "almanac" broadcast in the GPS signal itself. With all of that known before the receiver turns on, you can get a solution in a couple of seconds.
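
A sketch of how that assistance data prunes the search, assuming you already have an approximate receiver position and per-satellite ECEF positions predicted from the almanac (all hypothetical inputs): keep only satellites above a minimum elevation and run acquisition on just those PRNs:

    import numpy as np

    def visible_prns(rx_ecef, sat_ecef_by_prn, min_elev_deg=5.0):
        up = rx_ecef / np.linalg.norm(rx_ecef)  # radial "up" approximation
        visible = []
        for prn, sat in sat_ecef_by_prn.items():
            los = sat - rx_ecef
            elev = np.degrees(np.arcsin(np.dot(los / np.linalg.norm(los), up)))
            if elev > min_elev_deg:
                visible.append(prn)
        return visible  # acquire only these instead of all 32 PRNs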


