Japan, Mexico City, and other seismically active places have such systems in place and have had them for years. It's amazing that California, the epicenter of tech, is just getting on board with this.
As someone who used to work for a university's geosciences department as a Unix sysadmin/programmer and who worked on two different seismic monitoring networks (one spread across the state for earthquake monitoring and one parked on top of a coal mine), allow me to explain why earthquake monitoring in the United States is largely stuck in the stone age: There's no f---ing money.
Back in the '80s, there were a lot of regional seismic networks around the country, especially east of the Mississippi. But as time marched on and budgets got slashed, regional seismic networks disappeared one by one. Today, only the largest regional networks survive — generally the ones that are mostly funded by the states they're in and/or have increased their share of USGS/ANSS funding by taking over monitoring for areas of the country that used to be covered by the now-defunct networks.
The regional seismic networks that are left spend pretty much all of their dollars on equipment and operations. Installing/upgrading/running permanent seismograph stations is expensive — a basic solar-powered one with a shallow fiberglass vault, a three-component short-period sensor, and a three-channel digitizer will run you ~$12,000 just in equipment and materials. The sky's the limit if you go fancier than that (broadband sensors, strong-motion sensors, atmospheric sensors, borehole sensors, six-channel digitizer, elaborate vaults, VSAT, etc.). Then there's the recurring communications cost, the cost of regular site visits, the cost of regular battery replacements, replacing solar panels/equipment boxes that morons shoot at for laughs, etc.
What I'm getting at is that in the monetary battles of "keep seismograph stations working" vs. "hire programmer to write useful software", the stations will win every time. Even this LA Times article about Berkeley's early warning system notes, "A lack of funds, however, has slowed the system's progress."
If the epicenter of tech wants to do something wonderful for earthquake seismology, figure out how to make dirt cheap 1- or 3-channel seismic digitizers (low-pass filter + low-noise amp + 20-bit ADC @ 100–200 accurately timestamped samples/s, ≤1 watt average power draw @ 12 VDC, speaks TCP/IP over 802.11g/n) and dirt cheap 1- or 3-component short-period sensors (1 or 2 Hz corner frequency, decent sensitivity). Then figure out how to get thousands of these dirt cheap digitizers and sensors into backyards all over the country, contributing data in real time to IRIS and/or the closest regional seismic network. If the cost of acquiring quality seismic data goes down, that frees up money to actually do something with the data.
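On the software side, the firmware for a box like this wouldn't need to be complicated. Here's a minimal Python sketch of the idea, purely illustrative: read_adc(), the host/port, and the packet format are all assumptions of mine, not any real digitizer's interface, and real firmware would be paced by a hardware timer and a GPS/NTP-disciplined clock rather than sleep().

```python
# Hypothetical sketch of the main loop on a dirt cheap digitizer.
# read_adc() stands in for the LPF + low-noise amp + 20-bit ADC front
# end; the packet format is made up for illustration.
import socket
import struct
import time

SAMPLE_RATE = 100      # samples/s per channel (spec above says 100-200)
CHANNELS = 3           # three-component sensor
PACKET_SECONDS = 1     # ship one packet per second of data

def read_adc(channel):
    """Stand-in for a 20-bit ADC read: signed counts, -2**19 .. 2**19 - 1."""
    return 0

def run(host="127.0.0.1", port=16000):
    sock = socket.create_connection((host, port))
    n = SAMPLE_RATE * PACKET_SECONDS
    while True:
        t0 = time.time()  # packet start time; must be accurately disciplined
        samples = []
        for i in range(n):
            samples.extend(read_adc(ch) for ch in range(CHANNELS))
            # crude pacing; real hardware samples on a timer interrupt
            time.sleep(max(0.0, t0 + (i + 1) / SAMPLE_RATE - time.time()))
        # packet = start time + rate + channel count + interleaved counts
        header = struct.pack("!dHH", t0, SAMPLE_RATE, CHANNELS)
        body = struct.pack("!%di" % len(samples), *samples)
        sock.sendall(header + body)
```

The hard part isn't this loop; it's doing it at ≤1 watt with timestamps you can trust.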
When I was still at the university job, my job-related pipe dream was to blanket the state with these non-existent dirt cheap stations. Even one or two per county in my state would've increased our station count by a factor of >15, greatly improved the quality of our earthquake locations, and allowed us to determine focal mechanisms (the orientation of the fault and direction of the slip) even for small earthquakes.
Just in case folks are wondering why funding for seismometer networks has fallen off a cliff, it's not what you might think: it's the end of the Cold War.
From the '50s to the '80s, there was a significant amount of defense funding (mostly DARPA) for seismic monitoring to detect, locate, and analyze nuclear detonations, in addition to civilian research purposes. After the Cold War ended, the majority of that military funding went away.
Also, as a side note, for those of you looking at the hardware description and thinking "all of that exists": notice the sample rate and power requirements. (Don't forget data transmission and/or storage, either.) It's already possible to get dirt cheap sensors; for some purposes (e.g. surface waves), off-the-shelf accelerometers are good enough, and even traditional oil-industry geophones are fairly cheap. (Broadband stations are a completely different story, obviously.) The problem is the sample rate and power draw.
Is there any place I can find more detailed requirements for earthquake seismology stations? I feel like modern off-the-shelf parts could be used to build the signal conditioning and digitization stages for well under $20/channel, and cheap wifi-capable ARM dev boards have become available in the past few years. 10-millisecond timestamps are possible with NTP and easy with a GPS or WWVB receiver. How hard are the power limits? Do you actually need 20 bits of resolution over the full scale, or is that just a proxy for getting enough small-signal sensitivity?
Basically, nothing is really a hard limit. Seismologists push the data to its limits, so depending on the purpose you _really_ do need a broadband station. For other purposes, though, you could definitely get away with off-the-shelf accelerometers. We'll take whatever we can get. The more precision the better, but the more sensors you have, the less the precision matters.
As for whether you really need 20 bits of resolution across the full frequency spectrum: you certainly don't for every application. For some applications you need it over most of the spectrum; for others (e.g. strong ground motion) you don't at all. (For a quick overview, see http://www.passcal.nmt.edu/content/instrumentation/sensors) In most cases, the high-frequency component isn't the problem on the sensor side; maintaining accuracy in the low-frequency component is. High-frequency sampling generates more data and consumes more power, though, so there's a different set of challenges there.
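To put the bit counts in perspective: each bit of ADC resolution buys you roughly 6 dB of dynamic range, which is what lets one instrument see both faint distant arrivals and strong local shaking without clipping. A quick back-of-envelope (my numbers, ideal-ADC assumption, ignoring real-world noise floors):

```python
# Ideal dynamic range of an N-bit ADC is 20*log10(2**N).
import math

for bits in (16, 20, 24):
    print("%d bits -> %.1f dB" % (bits, 20 * math.log10(2 ** bits)))
# 16 bits -> 96.3 dB
# 20 bits -> 120.4 dB
# 24 bits -> 144.5 dB
```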
The power limits depend on where the station is installed. Typically, stations are solar-powered, with lead-acid batteries (or maybe something else these days) to store energy. Solar panels are expensive and fragile. The less power a station needs, the easier it is to deploy reliably.
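To make the power constraint concrete, here's a rough budget using the ~1 watt average draw mentioned earlier; the battery size and usable fraction are illustrative assumptions, not measurements from a real deployment:

```python
# Rough solar/battery budget for a remote station. All numbers are
# illustrative assumptions.
AVG_DRAW_W = 1.0   # the <=1 W average draw from the spec above
BATTERY_AH = 100   # a common deep-cycle lead-acid size
BATTERY_V = 12.0
USABLE = 0.5       # avoid discharging lead-acid much below ~50%

load_wh_per_day = AVG_DRAW_W * 24
usable_wh = BATTERY_AH * BATTERY_V * USABLE
print("autonomy with no sun: %.0f days" % (usable_wh / load_wh_per_day))
# -> autonomy with no sun: 25 days
```

Double the draw and you halve the autonomy, or you buy bigger panels and batteries, which is exactly the cost spiral you're trying to avoid.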
Overall, if the data is available, someone will push it to its limits. You have to make compromises, and which ones to make depends on exactly what you're trying to do. A ton of cheap, easily deployable instruments changes things. There's a lot of talk about things like this, but it has been hard to do in practice so far.
Take anything I say with a grain of salt, though. I'm an exploration seismologist who crosses paths with earthquake seismologists. I don't really know what I'm talking about.
At any rate, this is all largely a non-answer. I think a lot of this is more within reach than I realize.
How cheap is dirt cheap? $1000? $100? $10? $1? $0.1?
Are there established algorithms to determine what seismic data is 'interesting' as opposed to streaming it all in real time (and keeping the radio on) constantly?
Why 802.11n instead of cell phone networks - don't you need to be away from traffic vibrations, and hence roads and homes?
_How cheap is dirt cheap? $1000? $100? $10? $1? $0.1?_
I think a sub-$500 per-station cost would be wonderful, but this is all just a pipe dream...
_Are there established algorithms to determine what seismic data is 'interesting' as opposed to streaming it all in real time (and keeping the radio on) constantly?_
Almost all digitizers I've seen support the same STA/LTA (short-term average ÷ long-term average) triggering mechanism, where data is declared interesting if the energy over a short time window divided by the energy over a long time window exceeds some configurable threshold. If you only send triggered data, it's a great way to trigger repeatedly on local noise and miss all or part of the events you actually want to record.
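For the curious, the mechanism is simple enough to sketch in a few lines of Python. The window lengths and threshold below are made-up examples, not recommended settings; real digitizers implement this in firmware:

```python
# Minimal STA/LTA sketch: flag data when short-term average energy
# divided by long-term average energy exceeds a threshold.
import numpy as np

def sta_lta(samples, sta_len=50, lta_len=1000):
    """Trailing-window STA/LTA ratio per sample (zero for the first lta_len)."""
    energy = np.asarray(samples, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(energy))
    for i in range(lta_len, len(energy)):
        sta = (csum[i + 1] - csum[i + 1 - sta_len]) / sta_len
        lta = (csum[i + 1] - csum[i + 1 - lta_len]) / lta_len
        ratio[i] = sta / (lta + 1e-12)
    return ratio

# Demo: 60 s of noise at 100 sps with a burst of "signal" in the middle.
rng = np.random.default_rng(0)
trace = rng.normal(size=6000)
trace[3000:3200] += rng.normal(scale=10.0, size=200)
ratio = sta_lta(trace)
print("first trigger at sample", int(np.argmax(ratio > 4.0)))  # ~3000
```

The failure mode described above is exactly this trigger firing on cultural noise that produces the same ratio excursion as a real P-wave arrival.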
Sending continuous data from stations to a central processing site is greatly preferred, especially since the data rate is so low. Three channels of 20-bit, 100 samples/s data from a low- to moderate-noise site that's losslessly compressed by the digitizer fits comfortably in 9600 bits/s.
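The arithmetic behind that claim, for anyone checking:

```python
# Sanity check on the bandwidth figure above.
channels, bits, sps = 3, 20, 100
raw_bps = channels * bits * sps
print(raw_bps)  # 6000 bits/s raw, before compression
```

Lossless compression (e.g. the Steim encodings used in miniSEED) typically shrinks quiet-site data well below the raw rate, so 9600 bits/s leaves headroom for framing and protocol overhead.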
_Why 802.11n instead of cell phone networks - don't you need to be away from traffic vibrations, and hence roads and homes?_
Siting seismograph stations is a tradeoff. Too far from civilization and you have no way to get data back home except via (expensive, power-hungry) VSAT or high-power radios. Too close to civilization and you're subjected to civilization's noise (but you can use civilization's communications infrastructure to send your data home, sometimes for free).
The higher a site's noise level, the higher your event detection threshold gets. In other words, the noise drowns out the signal from weak and/or distant earthquakes. You can make up for this somewhat by deploying a denser network that pushes stations closer to where the earthquakes are happening...