Apologies--still working on everything (which is why I didn't post the URL).
But here's a technical description:
1. Imagine we build a platform abstraction layer on top of the typical nodes of a datacenter: web servers, database storage, etc.
2. We also create a remoted UI layer on the platform that renders on browser clients as HTML/JavaScript. Think of it like X Windows but higher-level and rendered on a browser.
3. Now we build a VM that runs on the platform layer and executes bytecode. We can now write programs that run entirely on the server/datacenter but render their UI on the client. From the program's perspective, it has full control over a machine; the platform layer takes care of remoting as appropriate.
4. We also have abstractions for shared memory, so that multiple programs can share data even if they happen to be running on different nodes. Multiple programs, each potentially run by a different user, can collaborate.
Effectively, all programs think that they are running on a single, giant, global computer.
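To make point 4 a bit more concrete, here's a minimal single-node sketch of the shared-memory idea using Python's standard `multiprocessing.shared_memory`: two independent processes attach to the same named segment and see each other's writes. This is just an analogue under my own assumptions, not the platform's actual API; in the platform described above, the segment would be addressable across nodes as well.

```python
# Single-node sketch of the shared-memory abstraction (point 4):
# two separate processes attach to the same named memory segment.
from multiprocessing import Process, shared_memory

def writer(name):
    # A second process attaches to the existing segment by name
    # and writes into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def main():
    # First process creates the segment...
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    # ...and observes what the other process wrote.
    data = bytes(shm.buf[:5])
    shm.close()
    shm.unlink()
    return data

if __name__ == "__main__":
    print(main())  # prints b'hello'
```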
Happy to answer questions if this is at all interesting.
Actually, I have a question. Why not just use any old container on the back-end and focus mostly on the protocol between the web browser and the backend? Why bother building an abstraction layer between the cloud service provider and your VM?
1. I want developers to have a unified, integrated API. For example, the UI controls can connect to a database object and get data change notifications. That makes it easy to, e.g., show a table control that updates when rows change.
2. I want to (eventually) handle data and processing that exceeds a single node's capacity. For example, a map function over an array should automatically be distributed over as many cores/machines as needed. In a sense, the program should be able to tap into as many resources as it needs/can afford without special code.
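The kind of wiring in point 1 can be sketched with a plain observer pattern: a table "control" subscribes to a data object and re-renders only when rows change, with no explicit refresh code in the program. All the names here (`ObservableTable`, `TableControl`, etc.) are hypothetical illustrations, not the platform's real API.

```python
# Sketch of point 1: a UI control subscribing to data-change
# notifications from a database-like object.
class ObservableTable:
    def __init__(self):
        self.rows = []
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def insert(self, row):
        self.rows.append(row)
        # Notify every subscribed control about the change.
        for cb in self._subscribers:
            cb("insert", row)

class TableControl:
    def __init__(self, table):
        self.rendered = []
        table.subscribe(self.on_change)

    def on_change(self, kind, row):
        if kind == "insert":
            self.rendered.append(row)  # re-render just the new row

table = ObservableTable()
control = TableControl(table)
table.insert({"id": 1, "name": "Ada"})
# control.rendered now mirrors table.rows without any polling or
# explicit refresh in the "program".
```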
Of course, I could have done both of those things in different ways, without a unified platform layer, but where's the fun in that?