Hacker News | cycomanic's comments

The issue with optical neuromorphic computing is that the field has been doing the easy part, i.e. the matrix multiplication. We have known for decades that imaging/interference networks can do matrix operations in a massively parallel fashion. The problem is the nonlinear activation function between the layers. People have largely been ignoring this, or have simply converted back to the electrical domain (at which point you are limited again by the cost/bandwidth of the electronics).
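
As a minimal sketch of what is at stake (plain numpy, purely illustrative, nothing optics-specific): each layer is a matrix-vector product followed by a nonlinearity; the optics handles the former, and without the latter all the layers collapse into a single linear map.

    import numpy as np

    # Two-layer toy network: the matrix-vector products are what an
    # interference/imaging network can compute in a massively parallel fashion.
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(64, 128)), rng.normal(size=(10, 64))
    x = rng.normal(size=128)

    h = np.maximum(W1 @ x, 0)   # W1 @ x is "easy" optically; the ReLU is the hard part
    y = W2 @ h

    # Without the nonlinearity the network is just one matrix:
    y_linear = (W2 @ W1) @ x    # a single linear layer, no matter how many layers you stack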


Seems hard to imagine there’s not some non-linear optical property they could take advantage of


The problem is intensity/power. As discussed previously, photon-photon interactions are weak, so you need very high intensities to get a reasonable nonlinear response. The issue is that optical matrix operations work by spreading the light out over many parallel paths, i.e. reducing the intensity in each path. There might be some clever ways to overcome this, but so far everyone has avoided the problem. They say they did "optical deep learning"; what they really did was an optical matrix multiplication, but saying that would not have resulted in a Nature publication.
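
A rough back-of-the-envelope of the intensity dilution (all numbers assumed, purely for scale): a Kerr-type nonlinear phase shift grows as phi_NL = gamma * P * L, i.e. linearly with the power in a path, so fanning the light out over N paths cuts the per-path nonlinear response by a factor of N.

    # Kerr nonlinear phase in a waveguide: phi_NL = gamma * P * L
    gamma = 100.0      # 1/(W*m); assumed nonlinear parameter of an integrated waveguide
    L = 0.01           # 1 cm of on-chip propagation (assumed)
    P_total = 10e-3    # 10 mW of total optical power (assumed)

    for n_paths in (1, 64, 1024):
        P_per_path = P_total / n_paths       # the fan-out dilutes the power
        phi_nl = gamma * P_per_path * L      # nonlinear phase per path, in radians
        print(f"{n_paths:5d} paths -> phi_NL ~ {phi_nl:.1e} rad")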


There is, and people have trained purely optical neural networks:

https://arxiv.org/abs/2208.01623

The real issue is backpropagating through those nonlinear optics. You need a second nonlinear optical component that matches the derivative of the first nonlinear optical component. In the paper above they approximate the derivative by slightly perturbing the parameters, but that means the training time scales linearly with the number of parameters in each layer.

Note: the authors claim it takes O(sqrt N) time, but they're forgetting that the learning rate mu = o(1/sqrt N) if you want to converge to a minimum:

    Loss(theta + dtheta) = Loss(theta) + dtheta . grad Loss(theta) + O(|dtheta|^2)
                         = Loss(theta) + mu * sqrt(N) * C   (assuming Lipschitz continuity)
    ==>     min(Loss)    = mu * sqrt(N) * C/2
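
As a sketch of why that parameter-wise perturbation scales the way it does (plain numpy, not the paper's actual procedure): a zeroth-order/finite-difference gradient estimate needs one extra forward pass per parameter, so each training step costs O(N) hardware evaluations before the learning-rate argument above even enters.

    import numpy as np

    def finite_difference_grad(loss_fn, theta, eps=1e-4):
        """Zeroth-order gradient estimate: one extra forward pass per parameter."""
        grad = np.zeros_like(theta)
        base = loss_fn(theta)
        for i in range(theta.size):          # O(N) forward passes per training step
            bumped = theta.copy()
            bumped[i] += eps
            grad[i] = (loss_fn(bumped) - base) / eps
        return grad

    # Toy quadratic loss standing in for a forward pass through the optical hardware.
    theta = np.ones(256)
    loss = lambda t: float(np.sum(t ** 2))
    grad = finite_difference_grad(loss, theta)   # 256 extra "hardware" evaluations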


No, this is not an engineering issue; it's a problem of fundamental physics: photons don't interact easily. That doesn't mean there are no specialised applications where optical processing can make sense. For example, a matrix multiplication is really just a more complex lens, so it has become very popular to build ML accelerators based on this.


They are doing 10 Gb/s over each fibre. To get to 10 Gb/s you have already undergone a parallel-to-serial conversion in electronics (the clock rates of your ASICs/FPGAs are much lower), and increasing the serial rate is in fact the bottleneck. Where the optimum serial rate lies depends strongly on the cost of each transceiver; e.g. long-haul optical links operate at up to 1 Tb/s serial rates, while datacenter interconnects are 10-25G serial AFAIK.
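
For a feel of the serialisation involved (assumed, typical-ish numbers): a 10 Gb/s lane fed from a core running at a few hundred MHz implies a wide parallel bus being multiplexed up by the SerDes.

    line_rate = 10e9       # 10 Gb/s per lane/fibre
    core_clock = 500e6     # assumed ASIC/FPGA core clock (Hz)

    bus_width = line_rate / core_clock    # bits that must be consumed per core cycle
    print(f"~{bus_width:.0f}:1 serialisation")   # -> roughly a 20:1 SerDes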


What the previous poster is implying is that electrons interact much more strongly than photons. Hence electrons are very good for processing (e.g. building a transistor), while photons are very good for information transfer. This is also a reason why much of the traditional "optical computer" research was fundamentally flawed; just from first principles one could estimate that the power requirements would be prohibitive.


> This is also a reason why much of the traditional "optical computer" research was fundamentally flawed

presumably also because photons at wavelengths we can work with are BIG


Phase variations will not introduce any issues here; they are most certainly talking about intensity modulation. You can't really (easily) do coherent modulation with incoherent light sources like LEDs.

SNR is obviously an issue for any communication system, however fiber attenuation is orders of magnitude lower than coax.

The bigger issue in this case would be mode dispersion, considering that they are going through "imaging" fibres: different spatial components of the light walk off relative to each other, causing temporal spreading of the pulses until they overlap and you can't distinguish 1's and 0's.


Mode dispersion is frequency dependent phase changes.


That's chromatic dispersion; mode dispersion is spatial, path-dependent phase change. Vibration is actually somewhat more relevant, because if it weren't for vibration we could theoretically undo mode dispersion (we would need phase information though).

That said, all of that is irrelevant to what the previous poster raised, i.e. vibration-induced phase variation as an impairment. That's just not an issue; vibrations are way too slow to impair optical comms signals.


How do the gravitational-wave observatories' optical paths solve the vibration issues? Couldn't TSMC do something similar?


That article is really light on details and mixes up a lot of things. It compares microLEDs to traditional WDM fibre transmission systems with edge-emitting DFB lasers and ECLs, but in datacentre interconnects there are plenty of optical links already, and they use VCSELs (vertical-cavity surface-emitting lasers), which are much cheaper to manufacture. People have also been putting these into arrays and coupling them to multi-core fibre. The difficulty here is almost always packaging, i.e. coupling the laser. I'm not sure why microLEDs would be better.

Also, transmitting 10 Gb/s with an LED seems challenging. The spectral bandwidth of an incoherent LED is large, so are they doing significant DSP (which costs money and energy and introduces latency), or are they restricting themselves to very short (tens of metres) links?


In datacenters people use optics for longer distances (10 m to 2 km). Within a rack it is almost always copper. The reason is that for short distances lasers are too expensive, unreliable, and consume too much power. We think microLED-based links might replace copper at short distances (sub 10 m). MicroLEDs coupled into relatively thick fiber cores (50 µm) are much easier to package than standard single-mode laser-based optics.

On the distance: exactly right. The real bottleneck now in AI clusters is the interconnect within a rack, or sub 10 m. So that is the market we are addressing.

On your second point: exactly! Normally people think LEDs are slow and suck; that is where the real innovation is. At Avicena, we've figured out how to make LEDs blink on and off at 10 Gb/s. This is really surprising and amazing! So with simple on-off modulation, there is no DSP or excess energy use. The article says TSMC is developing arrays of detectors, based on their camera process, that also receive signals at 10 Gb/s. Turns out this is pretty easy for a camera with a small number of pixels (~1000). We use blue light, which is easily absorbed in silicon. BTW, feel free to reach out to Avicena; happy to answer questions.


It’s not correct to say that lasers are unreliable. Last year more than 20M transceivers shipped. Your statement is not at all supported by real field failure data.

The reliability of microLEDs, and specifically GaN-based microLEDs, is however an open question.

In the absence of any dislocation failure mechanisms, it will depend on the current density and thermal dissipation. And just like any other material, it will have to survive in a non-hermetic environment and in the presence of corrosive gases (an issue in data centers).

To get the 10G, it’s probably kind of like a VCSEL without the grating and so current density is probably high. How well you’re able to heat sink it is going to determine how reliable it will be.

Overall I like the idea. It looks like the beachfront could work. I'd spend more time talking about how the electrical connection works and what kind of interface to a chip would be needed.

I’d also be careful before throwing shade on laser reliability because it could backfire on you (for all reasons above).


Haha, given that he is the founder of Kaiam and Santur I think he has the credentials to throw shade.


You made an account just to respond?

Generally it's a good idea to sell what's good about your product and not say things that are untrue about other people's products.

Also I wouldn’t brag about Kaiam.


So has Gb/s a new meaning now? Giga blinks per second?


So, toslink?


I guess they are doing directly modulated IM/DD for each link, so the DSP burden is not related to the coherence of the diodes? Also, indeed, very short reach in the article.


The problem with both LEDs and imaging fibres is that modal dispersion is massive and completely destroys your signal after only a few metres of propagation. So unless you do MMSE equalisation (which I assume would be cost-prohibitive), you really can only go a few metres. IM/DD doesn't really make a difference here.
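
A rough estimate of the scale (textbook step-index formula, fibre parameters assumed): the delay spread between the fastest and slowest guided paths grows linearly with length, and at 10 Gb/s it eats a 100 ps symbol within a few metres.

    # Step-index modal delay spread: dt ~ (n1 * L / c) * Delta, with Delta ~ NA^2 / (2 * n1^2)
    c = 3e8
    n1 = 1.5      # assumed core refractive index
    NA = 0.2      # assumed effective numerical aperture after launch

    delta = NA**2 / (2 * n1**2)
    spread_per_m = n1 / c * delta          # seconds of pulse spread per metre
    symbol = 1 / 10e9                      # 100 ps symbol at 10 Gb/s

    print(f"{spread_per_m * 1e12:.0f} ps/m of modal spread")
    print(f"~{symbol / spread_per_m:.1f} m until the spread equals one symbol")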


I think this is intended for short distances (e.g. a few cm). CPU-to-GPU and network-card-to-network-card will still be lasers; the question is whether you can do core-to-core or CPU-to-RAM with optics.


But why are they talking about multicore fibres then? I would have expected ribbons. You might be right though.


> I would have expected ribbons.

The cable is just a 2D parallel optical bus. With a bundle like this, you can wrap it with a nice, thick PVC (or whatever) jacket and employ a small, square connector that matches the physical scheme of the 2D planar microLED array.

It's a brute-force, simple-minded approach enabled by high-speed, low-cost microLED arrays. Pretty cool, I think.

The ribbon concept could be applicable to PCBs though.


You might be right that they are talking about fibre bundles, but that's something different from a multicore fibre (and much larger as well, which could pose significant problems, especially if we are talking cm links). What isn't addressed is that LEDs are quite spatially incoherent and their beam divergence is strong, so the fibres they must use are pretty large; coupling via just a connector might not be easy, especially if we want to avoid crosstalk.

What I'm getting at is that I don't see any advantage over VCSEL arrays. I'm not convinced that the price point is that different.


> You might be right and they are talking about fibre bundles

The caption of the image of the cable and connector reads: "CMOS ASIC with microLEDs sending data with blue light into a fiberbundle." So yes, fibre bundles.

> I don't see any advantage over vcsel arrays

They claim the following advantages:

    1. Low energy use
    2. Low "computational overhead"
    3. Scalability

All of these at least pass the smell test. LEDs are indeed quite efficient relative to lasers. They cite about an order of magnitude "pJ/bit" advantage for the system over laser-based optics, and I presume they're privy to VCSELs. When you're trying to wheedle nuclear reactor restarts to run your enormous AI clusters, saving power is nice. The system has a parallel "conductor" design that likely employs high-speed parallel CMOS latches, so the "computational overhead" claim could make sense: all you're doing is latching bits to/from PCB traces or IC pins, so all the SerDes and multiplexing cost is gone. They claim that it can easily be scaled to more pixels/lines. Sure, I guess: low power makes that easier.
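
To put the pJ/bit claim in perspective (illustrative numbers only, not Avicena's actual figures): at terabit-class aggregate bandwidths, an order of magnitude in pJ/bit is the difference of watts per link, which adds up fast across a rack.

    agg_bw = 1e12                       # 1 Tb/s aggregate per link (assumed)
    for pj_per_bit in (10.0, 1.0):      # "laser-class" vs claimed LED-class efficiency (assumed)
        watts = pj_per_bit * 1e-12 * agg_bw
        print(f"{pj_per_bit:>4} pJ/bit at 1 Tb/s -> {watts:.0f} W per link")
    # Across hundreds of links per rack, that difference is on the order of kilowatts.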

There you are. All pretty simple.

I think there is a use case for this outside data centers. We're at the point where copper transmission lines are a real problem for consumers. Fiber can solve the signal integrity problem for such use cases; however, despite several famous runs at it (Thunderbolt, FireWire), the cost has always precluded widespread adoption outside niche, professional, or high-end applications. Maybe LED-based optics can make fiber cost-competitive with copper for such applications: one imagines a very small, very low-power microLED-based transceiver costing only slightly more than a USB connector on each end of such a cable, with maybe 4-8 parallel fibers. Just spit-balling here.


Aren't they also claiming this is more reliable? I'm told laser reliability is a hurdle for CPO.

And given the talk about this as a CPO alternative, I was assuming this was for backplane and connections of a few metres, not components on the same PCB.


> Aren't they also claiming this is more reliable?

Indeed they do. I overlooked that.

I know little about microLED arrays and their reliability, so I won't guess about how credible this is: LED reliability has a lot of factors. The cables involved will probably be less reliable than conventional laser fiber optics due to the much larger number of fibers that have to be precision assembled. Likely to be more fragile as well.

On-site fabricating or repairing such cables likely isn't feasible.


I understand that CPO reliability concerns are specifically with the laser drivers. It's very expensive to replace your whole chip when one fails. Even if the cables are a concern (I've no idea), having more reliable drivers would still be preferable to less reliable cables, given how much cheaper/easier replacing cables would be (up to a point, of course).


> I understand that CPO reliability concerns are specifically with the laser drivers.

Yes. I've replaced my share of dead transceivers, and I suspect the laser drivers were the failure mode of most of them.

That doesn't fill in the blank for me though: how reliable are high speed, dense microLEDs?


And are they going to work out any better than Linear Drive Optics, the more obvious alternative?


LDO is just integration. It certainly has value: integration almost always does. So it's clearly the obvious optimization of conventional serial optical communication.

This new TSMC work with parallel incoherent optics is altogether distinct. No DSP. No SerDes. Apples and oranges.


Ok, but I'm just after solutions to problems I have talking to other chips. I don't mind what's novel and what's optimisation. Whatever is adopted, in either case it's a step-change from the past 20 years of essentially just copper and regular serdes in this space.

And I'm not sure how much of this is actually TSMC's work, the title is misleading.

Edit: actually, they are working on the detector side.


We use borosilicate fibers of the kind used for illumination applications. You might have seen a bundle in a microscope light, for example. They are incredibly robust compared to single-mode fibers. Note the very tight bend in the picture: that's a 3 mm bend radius. Imagine doing that with a single-mode fiber!


> And they are incredibly robust

See my other comment about non-datacenter applications. There is a serious opportunity here for fixing signal integrity problems with contemporary high bandwidth peripherals. Copper USB et al. are no good and in desperate need of a better medium.


The fiber cables we use are basically 2D arrays of 50 µm thick fibers that match the LED and detector arrays. We've made connectors and demonstrated very low crosstalk between the fibers. The advantage over VCSELs is much lower power consumption overall and much lower cost (LEDs are dirt cheap and extremely high yield); because we use blue light, the detector arrays are much easier and can be based on modified camera technology; and, most importantly, much better reliability. VCSELs are notorious for bad reliability.


This might be the breakthrough we have also been working on [1] for over 20 years. It would be even better if Avicena didn't drive the LED array and detector array with high-power 10 Gb/s SerDes. Better still if you align a blue LED array with lenses to a detector array on a second chip: free-space optics [2].

I would love to join you at Avicena and work on your breakthrough instead of just acquiring the IP from you in a few years.

[1] https://youtu.be/wDhnjEQyuDk?t=1569

[2] see schematic on page 373 of https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=780...


Just wanted to clarify that I don't necessarily doubt that you have a use case (BTW, partnering with someone like Intel for optical Thunderbolt or similar, as someone else mentioned, would be very interesting as well); you definitely have people who know what they are doing. I do think the IEEE article gives the wrong impression, however, as it mainly compares against the wrong thing. Somebody here (maybe you?) was also saying that 50 µm fibres are much easier to couple into than SMF, which is correct, but also not that relevant, because VCSEL links typically use OM fibre with 50-60 µm core diameters as well.


The article is about chip interconnects. Think like replacing PCIe, NVLink, or HBM/DDR RAM buses, with optical communication.


The article mentions lengths of up to 10 m, so this technology is restricted to links inside a cabinet or between closely located cabinets.

The claimed advantage is a very high aggregate throughput and much less energy per bit than with either copper links or traditional laser-based optical links.

For greater distances, lasers cannot be replaced by anything else.


Short links; it's in the article.


Ah, I missed the 10 m reference there. I'm not sure it makes more sense though. Typical intra-datacenter connections are tens to hundreds of metres and use VCSELs, so why introduce microLEDs just for the very short links instead of just parallelising the VCSEL connections (which is being done already)? If they could actually replace the VCSELs I would sort of see the point.


There's been a constant drum-beat that even intra-rack connectivity is trying to make its way to optical as fast as it can, because copper is more and more complex and expensive to scale to higher speeds. If we get a relatively affordable short-range optical system that doesn't require heavy computational work, that sounds like a godsend: a way to increase bits per joule while reducing expensive cabling cost.

Sure, yes, optical today might mean expensive longer-range optics! But using that framing to assess new technologies and what help they could be may be folly.


> Maybe we need competing governments, and whichever government is more efficient gets to rule. Seriously: Add a second FAA at some test airports, see if they can do better, with the understanding that if they can't, they get shut down.

And you would be willing to be personally responsible if people die in this experiment?

It's funny how people here always complain that any money government spends is wasted, but if you look at big companies they are "wasting" money as well. Just look at the number of projects that google killed. It's simply a function of large (and small) organizations that they don't get it right all the time, it's difficult to predict the future.


>It's funny how people here always complain that any money government spends is wasted, but if you look at big companies they are "wasting" money as well. Just look at the number of projects that google killed. It's simply a function of large (and small) organizations that they don't get it right all the time, it's difficult to predict the future.

I worked at a Fortune 50 financial services company back in the 1990s and they designed, built and deployed a brand, spanking new customer service platform.

They spent USD$200,000,000.00 building the system. It was ready for production when someone realized they'd spend USD$50,000,000+ per annum supporting the platform. The product roll out was scrapped, and tens of millions of dollars of equipment sat in warehouses.

When trying to redeploy said equipment across the enterprise, group heads would purchase new equipment rather than using the equipment sitting in warehouses as it was "cheaper" on their budget lines to spend real money rather than put the depreciation of already purchased equipment on those budget lines.

So yeah, big companies can be incredibly wasteful. Often more wasteful than government (e.g., US Medicare has ~3% overhead -- show me any private insurer that can top that) as well.

It's all about the incentives. People (and consequently, organizations) respond to the incentives inherent in a situation.

Sometimes those incentives promote efficient outcomes and sometimes not so much. The trick is to maximize the former and minimize the latter. Something easier said than done.


I don't understand this sentence:

> Instead, it elected to send a “mirror feed” of telemetry from the STARS servers at N90, traveling over 130 miles of commercial copper telecom lines, with fiber optics to follow by 2030.

This does not make any sense. If they really were transmitting data over a 130-mile copper line (which I doubt even still exists, especially not a commercial one), we would be talking rates in the low Mbit/s. I suspect the situation is that the "last mile" of the center is served by copper connections; not good either, but far from as bad as a 130-mile copper connection.

EDIT: I should add that if they really had a link running on copper lines it would have repeaters, which would be sitting in datacenters. In New Jersey there must be thousands of km of dark fiber floating around, so it would be trivial to convert at least the majority of the link to fiber.


Telephone providers lease "dry loop" lines for point-to-point signalling.

I've used them for telemetry systems with acoustic modems on both ends.

Also for sending audio between broadcast studios. I recall that it was priced by bandwidth (in the literal, analog, sense), e.g.: 5kHz (~AM radio) was less expensive than 15kHz (~FM radio). For comparison: A normal phone call is 3kHz.

So yes, copper and repeaters. But very inexpensive and quick to provision. :)


Not sure I understand your point; a 15 kHz analog line gives you nowhere near enough bandwidth to transmit even just 100 Mb/s.
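
A quick Shannon-capacity sanity check (SNR assumed, purely for scale): even with a generous 40 dB SNR, a 15 kHz analog channel tops out around 200 kb/s, several orders of magnitude short of 100 Mb/s.

    import math

    bandwidth_hz = 15e3    # the "FM-grade" leased analog line mentioned above
    snr_db = 40.0          # assumed, quite generous for a long copper loop
    snr_linear = 10 ** (snr_db / 10)

    capacity = bandwidth_hz * math.log2(1 + snr_linear)   # Shannon-Hartley limit
    print(f"~{capacity / 1e3:.0f} kb/s upper bound")       # about 200 kb/s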


I am guessing that the data feed requirements of ATC telemetry are fairly modest, well within the range that a pair of analog modems could handle. You don't have to send a raw radar feed to the airport, just put the intelligence (object detection, delta computation, etc) at the endpoints and transmit the results.
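
A back-of-the-envelope to the same effect (every number below is an assumption, in keeping with the guessing): a track feed of a few hundred targets updated every few seconds is only tens of kilobits per second, i.e. modem territory.

    tracks = 300             # assumed simultaneous targets
    bytes_per_track = 40     # assumed: ID, position, altitude, velocity, flags
    update_period_s = 4.7    # assumed radar rotation period

    bits_per_s = tracks * bytes_per_track * 8 / update_period_s
    print(f"~{bits_per_s / 1e3:.0f} kb/s")   # ~20 kb/s, within reach of analog modems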

I'm also guessing that they do this every day at hundreds of other airports, and it works just fine.

Moving to fiber gives you greater bandwidth, reliability, multipath redundancy, etc. But I bet the real motivation is access to newer and more general communications equipment. Maintaining the old, perfectly adequate but increasingly unusual hardware/software is more of a pain than replacing with new stuff that exceeds requirements but is maintained just like all the other modern stuff everywhere.

All guesses. I do not work in ATC, but I've seen the pattern in other industries with similarish requirements.


Yes, and I believe this is one of the most problematic aspects of representative democracies. Policies are often complex, so hardly anyone has the time to educate themselves on all of a party's policies; instead many revert to the other extreme and simply cheer for their team.

An interesting concept to alleviate this problem, pioneered in Melbourne local politics, is citizen councils.


My Google skills are failing me here — can you provide a link, and/or some more search terms, regarding these citizen councils? Is it specific to Melbourne municipal council or other Victorian LGA’s?


The correct term seems to be citizen assemblies (apologies). Here's a link: https://www.newdemocracy.com.au/independent-citizens-assembl...


I only ever hear this used by people who insist that bad faith interlocutors just need to be given a 10th chance to make their point.

No one has infinite time or attention.


> > a) I am currently a victim of other people's choices.
>
> You're not a victim in a democracy when your candidate loses and you have to put up with the policies you don't like but which are popular with the majority of the voting population. It's the feature of democracy, not a bug. Grow up.
>
> > Acting like this is just "politics as usual"
>
> IT IS politics as usual. Always has been when you look at history. The difference is now you have Twitter and social media to rile you up for the sake of monetizing engagement. Lay off social media and your TDS will heal itself naturally.
>
> > Do you support his rise to dictator
>
> As someone born in a dictatorship, you have no idea what a dictatorship actually is. If you were in a dictatorship, a black Volga would show up at your door and arrest you for your previous comment. You don't have the right to complain in a dictatorship, let alone to vote.

Funny that you mention it, this is exactly what is happening.

https://www.google.com/url?sa=t&source=web&cd=&ved=2ahUKEwit...


Left wing propaganda if such an event makes a dictatorship. Deporting foreign political agitators and aggressors who break the terms of their visa is just proper law enforcement, not dictatorship. Hope to see more of this.

