Why is this a language, not just a framework for JavaScript or whatever?
I suppose you could say the same about Terraform using HCL, but Terraform makes extensive use of being declarative, which is so unusual that it would be picking an obscure, probably-new-to-the-user language anyway. This just looks JS-ish, a bit like anything, really.
I suppose I'm asking: what are the language features that make Wing great for 'cloud', `new Bucket()` etc. aside?
If you look at its 'design tenets', the only ones that are about the language rather than the SDK just say 'be good/familiar like other languages'... OK, so why not be another language? https://docs.winglang.io/reference/spec#02-design-tenets
re: Why is this a language, not just a framework for JavaScript or whatever?
One product that's taking this approach is Shuttle -> https://www.shuttle.rs/ (disclaimer: I work there). It's built around Rust rather than TS, but we have some updates coming soon that are going to expand on this.
We've been thinking about this DSL question for a while, and it's pretty much a trade-off. By having your own language and parser you can build language primitives that get you closer to your domain, as in the HCL example you gave. It will be interesting to see where Wing goes with this concept; it could offer a great developer experience.
For us, we've chosen to stick with a robust and performant programming language which is growing in popularity and has enough meta-programming capability to make this 'infrastructure from code' paradigm feel natural: the framework blends in well with the language. The advantages for us are that users of the language face a very small learning curve and can use incredible Rust libraries off the shelf. We can also piggyback off the Rust compiler and ecosystem to provide a first-class developer experience (those compiler errors are amazing).
We're excited to see more companies in this space - optimising for a beautiful developer experience.
I hecking love the concept of shuttle. The only thing keeping me from using it is unfamiliarity with rust, but it's enough to make me rethink things.
I think the future isn't as much Lo-code as much as it will be (and goshdarn should be) Lo-DevOps.
There are so many advances in programming, WASM for example, that will keep driving this Lo-DevOps movement; but honestly, a lot of it is just digging into the language and using the stuff that's already baked in for existing business purposes.
But things like encore.dev and shuttle.rs (I'm associated with neither) are just mind-blowingly straightforward with no language, infrastructure, or other cognitively burdening information.
I'm not sure about it. If you're going to have Rust-specific deployment workflows, you should reuse the solutions that have been developed for the embedded ecosystem. There's no reason why "cargo flash" could not be extended to upload your binaries to the cloud, and support the same "probe run" workflow for remote logging and debugging.
The main disadvantage of a custom language, for a user, is having to learn a bespoke language for a single tool. Very few are willing to do this. As a company, it is generally a distraction from delivering value. Engineers like it because writing a language is interesting work.
Most should not use HCL as an argument for forcing bespoke languages on users. (1) HCL is not a programming language; it's more of a config language. (2) It was the right thing at the right time and place, but it has shown rough edges as it has aged and feels more hacky now.
The good arguments for "This should be a new language" are either:
1. You need to enforce a relatively unique set of invariants, which in turn come with new constructs to interact with them. For example, if you want a system for managing a distributed system, enforcing the lack of global state can be a lot more natural by using a language that cannot express it.
2. You want to provide some sort of novel language construct that cannot be expressed well in existing languages (function argument pattern matching, for example, cannot be retrofitted onto an existing language by a library; maybe by a pre-processor or a sufficiently advanced macro system).
This is how we think about this too. Applications that use the cloud run on a new kind of "computer", one with different characteristics than the model of a computer exposed by existing programming languages. There are some unique invariants and language constructs that cannot be implemented as libraries.
The first of these unique primitives is what we call "inflight functions". You can think of `inflight` as "remote `async`". They are async functions that can be executed on a remote system (such as inside a container fleet or on a FaaS). Inflight functions can interact naturally with cloud resources around them (by simply calling `inflight` methods on the resources such as `bucket.put()`).
Interaction of inflight functions with the outside world poses a unique set of invariants. Currently, inflight code can reference immutable serializable data and call `inflight` methods on resources defined outside the closure.
The compiler then analyses these interactions and hands control over to the resources to take care of mechanics like wiring up deployment information, synthesizing security policies, and anything else that can be deduced from this high-level intent.
Defining the cloud architecture of the app and being able to naturally cross these distributed boundaries is the essence of what we call "cloud-oriented programming", and where we think a lot of the friction and pain of the cloud comes from today: every time I need to interact with "The Cloud", I leave the safety and comfort of my compiler, and I am out in the wild, having to understand all the mechanics and layers involved.
Almost all existing languages/compilers make the fundamental assumption that the entire program runs inside a single machine. This, in our view, is the impedance mismatch of cloud development today. This is where we believe language innovation can dramatically reduce the cognitive load and barrier to entry for building and delivering cloud applications that fully take advantage of the cloud. Inflight functions and resources are only the first step. Think first-class support for things like defining and consuming API endpoints, writing distributed workflows, emitting metrics, raising alarms and other things you would expect from your friendly neighborhood cloud programming language.
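A minimal sketch of what this looks like today (syntax is pre-release, so exact method names may differ):

```wing
bring cloud;

// preflight code: runs at compile/deploy time and defines the architecture
let bucket = new cloud.Bucket();
let queue = new cloud.Queue();

// inflight code: packaged by the compiler and executed remotely (e.g. on a FaaS);
// calling bucket.put() here lets the compiler infer wiring and permissions
queue.setConsumer(inflight (message: str) => {
  bucket.put("latest.txt", message);
});
```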
1. Could a distributed language really know what I'm using data for? I.e., if I put an object in an object store, is it global or not? Seems like a really complicated analysis. What if this happens across services (separate code bases)?
Well, there already exists a language meant for distributed systems (Erlang/Elixir).
It doesn’t support global state. Instead you’d write a “process” that responds to requests with the state at the time of the request. Another request could mutate the state, but each request is processed one at a time. So a “bucket”-type data store (object store, etc.) would likely be implemented as a “server” that returns the state. If the objects were large, you could pass a URL/handle to the file, which you’d retrieve later.
These processes/servers/whatever are running inside the language VM and are SUPER lightweight. Multiple language VMs can be tied together so the actual processes can be distributed across a fleet of physical hosts in a data center - this would be transparent to how you write the code.
If you wanted to implement that with native AWS primitives, you’d probably run a Lambda function to transform an HTTP request into an S3 bucket address, then go make an S3 request to download/modify the file. Or the Lambda would make some modifications to the file itself.
So, that’s the prior art on a hypothetical distributed language handling global state, which fwiw is basically how “cloud” systems work today.
I am aware of prior art in distributed systems. My point was that being able to enforce certain guarantees is likely impossible, i.e. your use case of global state prevention. Working with global state and making it easier is a different goal with different tradeoffs
Erlang and Elixir both do not have any way to express a global variable, or even a variable that can be accessed by two different processes (green threads, lighter than OS processes). All inter-process communication happens through messages. It's not checked by the language, there is simply no way to express the concept in the language itself.
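As a rough analogy only (JavaScript can't enforce this the way the BEAM does), the "state lives in a process, messages handled one at a time" pattern looks something like:

```javascript
// Rough analogue of an Erlang process: the state is private to the
// closure, and every message goes through a single promise chain so
// requests are handled strictly one at a time, in arrival order.
function spawnCounter() {
  let state = 0;                   // unreachable from outside
  let mailbox = Promise.resolve(); // serializes message handling

  // send() is the only way to interact with the state
  function send(msg) {
    mailbox = mailbox.then(() => {
      if (msg === "incr") state += 1;
      return state; // reply with the state at the time of the request
    });
    return mailbox;
  }
  return { send };
}

const counter = spawnCounter();
counter.send("incr");
counter.send("incr");
```

In Erlang/Elixir this isolation is a property of the language itself, not a coding convention.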
I'm very excited about Shuttle, I've been watching it for a while and it looks fantastic. The only thing I'm looking for is support for GCP, but I don't see that happening until Google makes official Rust client packages.
> The only thing I'm looking for is support for GCP
Glad you like Shuttle! What do you mean exactly by support for GCP? Shuttle right now is built on AWS but that is mostly abstracted away from you. Do you want to use GCP products (say BigQuery) or do you want to host Shuttle on your own GCP project?
Really both. I'd prefer to be a bit more familiar with the underlying infrastructure and also I'd like to use GCP products, BigQuery is definitely a big one though!
Because lifecycle management, access controls, auditing, monitoring, delegation/federation, budgeting, capacity planning - all of these other things together, which these tools may or may not help you with - are still far less important than troubleshooting.
Seriously, when shit's on fire, the last thing you want is to be deciphering the magic that brought it to its current state. How many layers to dig thru are there, between "thingctl deploy" and a critical S3 bucket having somehow been deleted?
Not trying to shoot down the idea - I'm sure there are people 10x smarter than me working on this, who are more than able to make a lot of this work well in practice. But that's the problem with using stuff made by people 10x smarter than you - when everything goes wrong, it tends to require being 10x-smarter-than-average to fix it.
The true power and success of Terraform is in the blunt simplicity of its interface. Every time you run it, it will spell out in very large writing, what exactly is it going to do: add this, change that, DESTROY this - it even uses the word "destroy" to signify the danger. I really appreciate any tool that makes it harder to do the wrong thing.
> Because lifecycle management, access controls, auditing, monitoring, delegation/federation, budgeting, capacity planning - all of these other things together, which these tools may or may not help you with - are still far less important than troubleshooting.
I think you're right here and it's going to be something that's really hard to get right. I think prioritising the developer experience is the only way to get this right - even at the expense of other things you want to optimise for like costs.
> The true power and success of Terraform is in the blunt simplicity of its interface. Every time you run it, it will spell out in very large writing, what exactly is it going to do
I agree with this statement 100%. The explicit, simple and clear information on what exactly is being modified is one great piece of devex. This precise workflow is something that's inspired us and we want to bake into shuttle.
I don't think having your infra with your code precludes a lot of these things, depending on how it's done. They're good points, and as engineers we'll likely need to be able to do this for a while (until maybe an AI can).
> Seriously, when shit's on fire, the last thing you want is to be deciphering the magic that brought it to its current state. How many layers to dig thru are there, between "thingctl deploy" and a critical S3 bucket having somehow been deleted?
I see it as analogous to how higher-level languages abstract machine code. When it was a newer technology, people absolutely needed to debug and analyze the "magic", but as the space has matured that's become less and less common.
> I see it as analogous to how higher-level languages abstract machine code. When it was a newer technology, people absolutely needed to debug and analyze the "magic", but as the space has matured that's become less and less common.
The fundamental difference is that you can keep analyzing a program in isolation as much as you like. Infrastructure is a living organism - if you shoot it in the head, you can't copy-paste an old working version over it; you have an outage.
> until maybe an AI can
Why is the solution to every problem always more layers, and never less? We understand running production infrastructure far better than we understand AI.
It's not that I don't appreciate ML/AI - it's fairly impressive what it can do, given you keep nudging it in the right direction - but I would never delegate unsupervised authority to it.
Mostly because the two should be separated. Your platform is closed, so now you have very specific shuttle.rs code inside your "business logic". What if I want to run my code in a Docker image somewhere else? Now I have to strip the provider integration out of the service.
Well, I'd be curious whether that means it has more permissions than necessary to run in production because of that, e.g. the ability to create an arbitrary bucket, queue, or IAM policy.
Deployed services don't currently have permissions to provision or modify infrastructure. This is done by our build system while your services are being compiled. It is an orthogonal control plane with its own permission system.
> Why is this a language, not just a framework for JavaScript or whatever?
I'm one of the founders of Klotho (https://klo.dev), and we're in the camp of expanding existing programming languages with cloud native building blocks. We’re building Klotho in that spirit.
There are a few reasons to opt for a compiler rather than SDKs:
- Being able to catch errors as early in the development process as possible, rather than in runtime or production.
- Being able to use existing language features and popular libraries. For example, we’re able to turn a standard Node.js EventEmitter into an asynchronous invocation on a different compute engine (basically, spinning off an AWS Lambda invocation or adding a message to an SQS queue). Not only does this reduce the barrier to entry, but it also often enables fast iteration by supporting a local development workflow (no more waiting for deployments between changes).
- Being able to trim the runtime to the minimal set that is necessary for that execution unit alone. In SDK land you include the entire library, which means more lines of code to reason about in a live-site incident; alternatively, going fully granular means developers now need to reason about what to include and what not to. Compilers, and specifically static analysis, can reason about that automatically.
- Being able to use all of that logic across languages: rather than having to write a separate SDK per language, we can write a single tool that works the same across multiple languages. In addition to aesthetic niceness, that could in the future let us handle polyglot code bases in a more cohesive way.
We support JavaScript/TypeScript/Python, and are developing Golang/C# support.
It seems to be because of the compiler toolchain? E.g. this language can be “compiled” to Terraform manifests.
To do that with JavaScript or Python, you’d need some really strong static analysis to work out what to do if the user, say, writes a function that returns a list of new buckets to create, or whatever.
E.g. the argument seems to be the ability to statically deduce the required underlying cloud resources the code is talking about.
But you can generate e.g. Terraform manifests from any language without actually compiling (or transpiling) to it. That seems barely more than an implementation detail: I get .tf or whatever I want at the end of it, and that's all I care about, isn't it?
I’m a fan/user of Pulumi, but this doesn’t feel similar to me. In Pulumi you write code that describes infrastructure, separate from the application layer code you’re deploying.
This appears to be using application code to derive infrastructure config automatically. It’s a bit like the https://www.shuttle.rs/ approach.
Edit: after taking a deeper look, a more accurate description might be that it allows you to embed application code within IAC.
If a library is provided for developers to express the cloud stuff (including custom types), then I believe zero or close to zero static analysis will be needed.
Not necessarily, at least in their JS SDK they have a clever way of actually compiling your Pulumi code directly into things like APIGW and handler Lambdas.
HCL in particular has so many annoying edge-case issues and limitations due to its immaturity that it becomes intractable beyond simple use cases.
Thankfully Pulumi[0] allows me to write Terraform configuration in a language I know without reinventing the wheel. It feels like Pulumi's approach is better in every single way I can imagine. I simply do not see the reason for HCL whatsoever.
There's no reason the code can't be sandboxed, Turing-completeness is a feature not a bug (nor is undecidability a real problem), and non-deterministic behavior where it's undesirable is just a potential hazard in the real world. No system can avoid it unless that system never interacts with the outside world.
What about the lack of a mature language server for code completion and at-your-fingertips documentation? What about the lack of mature tooling for static analysis, linting, code formatting, etc?
What about the poor support for defaults and partial overrides on object-type (read: namespaced without redundant_underscore_thing_*) configuration directives?
What about the very limited module capabilities?
What about the limited set of deterministic functions available (e.g. you get base64, yaml, csv, and otherwise GLHF)?
HCL is its own language so Hashicorp can "own" the experience, tie you into an Enterprise toolchain, and scratch their "I've always wanted to build a language!" itch
But it works and ops people love it because most ops people in industry today cannot even write a simple Python script. So...tools like Pulumi are actually out of their reach.
Not exactly, because you need to manage state or something (you probably don’t want to reprovision every time you run the code). So you’ll need to do some sort of “compilation”.
also terraform has a lot of ergonomic warts from being a language not an sdk -- for example: the minimum unit of reuse is a folder with a file in it, and loops and conditionals were impossible for a while
pulumi is the 'terraform as a library' option, haven't tried but it makes sense
> loops and conditionals were impossible for a while
Yeah, they were. In what version were those added? 0.11 and 0.12, I think. Terraform was admittedly beta at the time and under active development, and those features have been there for a while now. Regardless, it's a moot point, so not worth discussing.
Still, conditional statements are limited to ternary-style operators as opposed to complex multi-line if-statements. That would be a stronger argument, and one I could agree with. However, this was a design decision made by the team, not a technical limitation. The goal is not to make complex logic decisions inside Terraform, since that takes you away from the declarative (documentation-style) markup it stands for.
Between locals, modules, and data sources you can still accomplish almost anything despite not having multi-line conditionals. The end result is cleaner, which is a design decision that was made.
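To illustrate, the ternary-plus-locals pattern looks roughly like this (resource and variable names are just examples):

```hcl
variable "environment" {
  type = string
}

variable "ami_id" {
  type = string
}

locals {
  # conditionals are expressions, not multi-line statements
  instance_count = var.environment == "prod" ? 3 : 1
  instance_type  = var.environment == "prod" ? "m5.large" : "t3.micro"
}

resource "aws_instance" "app" {
  count         = local.instance_count
  ami           = var.ami_id
  instance_type = local.instance_type
}
```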
> the minimum unit of reuse is a folder with a file in it
I don't really understand the problem with this. This is essentially how Go(lang) and other full-blown languages work too. It's another design decision, not really a limitation, and there are significant advantages to it: modules are folders, and everything inside them gets compressed into a single unit for replication purposes. Within that folder you can break your code up however you wish to improve readability and maintainability. This gives the utmost flexibility to the author and standardizes consumption for the consumer. You can import a module and use it as a "packaged" unit without needing to understand how the author created it.
These just sound like opinions you have more than "ergonomic warts". If anything I could argue that both of these design decisions of the language actually make it more ergonomic to use (and consume) not less.
It's also worth noting that I mentioned authors/creators and consumers multiple times. Terraform is very much designed around being consumed and read, almost like documentation. If you talk to Hashicorp, this is a huge motivation behind the DSL that is HCL. It was designed so that someone with minimal Terraform knowledge could use modules to create powerful infrastructure within a company. This is how we use it at our company: the SRE/platform engineers (where I work) build very advanced, complex, and powerful modules that are easy to consume with a handful of variables.
We have developers who can spend a few hours learning Terraform and know enough to build production infrastructure within the company that meets our design standards and security compliance requirements. They often don't even have to know what those are; they just declare a network, a database, and a cluster of compute with a handful of parameters that are auto-documented, and they end up with some incredible infrastructure as a result.

This is what Hashicorp is designing Terraform around. If you meet with them or go to their conferences, this is the direction and purpose of Terraform: a handful of Terraform experts at a company create modules that are consumed by developers with minimal knowledge of infrastructure or Terraform. It's a powerful system.
in golang I can write a function and call that function in the next line of the same file. the body of the function is reusable
in terraform I can't do this
don't get me wrong, terraform is a great accomplishment. BUT there are normal 'every programming language does this' features that would make it better
re 'design decision not a technical limitation' -- design decision by the designer of the tool, technical limitation on me
New languages == zero ecosystem—no third-party libraries, community discussion (e.g. StackOverflow), real-world repositories, etc.
A brand new language thus must be extremely compelling at a syntactical level to make it worth leaving behind existing ecosystems. The new language’s syntax must provide capabilities that are impossible to implement in any other commonly used language. Otherwise, just implement the core language functionality using the syntax of another established language.
I just don’t see why that couldn’t be done here. Everything in the demo code could easily be implemented as a Python or JavaScript API.
> New languages == zero ecosystem—no third-party libraries, community discussion (e.g. StackOverflow), real-world repositories, etc.
Bootstrapping a language ecosystem is a challenge we knew we needed to face from day 1. To avoid starting from scratch, we're designing the language to compile to JavaScript as an intermediate format, and adding native syntax for users to import JavaScript/TypeScript libraries and leverage that existing ecosystem. Our type systems are not the same, but we're trying to make sure users can still be productive and that things "make sense" out of the box, in case you want to import libraries like axios, express, etc.
We are also designing it to interoperate with the existing ecosystem of "CDK" libraries (https://constructs.dev/) that tens of thousands of developers have already been using to write abstractions for cloud resources based on Terraform, CloudFormation, and Kubernetes.
> A brand new language thus must be extremely compelling at a syntactical level to make it worth leaving behind existing ecosystems.
For many niche languages the runtime or standard library is the main show, and syntax is built around making the standard library more accessible. In this case it looks like they've built a standard library that abstracts away a lot of cloud components (i.e. SNS, S3) and made them very easy to call. This one actually looks like they may be on to something because of the level of inconsistency between cloud APIs.
> Everything in the demo code could easily be implemented as a Python or JavaScript API.
There are very few domains where this is not true.
> This one actually looks like they may be on to something because of the level of inconsistency between cloud APIs.
I still don't see why this requires a totally new language. You could easily write a high-level API in a commonly used language that under the hood generates cloud provider-specific API calls. Indeed, this is exactly what Pulumi does, with API bindings to many commonly used languages. There is no need for a custom runtime, much less a new language specifically designed for that runtime.
> I still don't see why this requires a totally new language.
It doesn't require one, but never underestimate the perceived value of a wheel, except round.
> There is no need for a custom runtime, much less a new language specifically designed for that runtime.
I think the bet here is that Wing could be easier to learn/use, more productive, etc... that they can get enough developers into their product that they can make money selling and supporting it. There's a ton of products like this, and they do make money and provide value - just not to everyone.
Yeah, the example looks like an API demonstration rather than a language demonstration. I would have assumed the API functions were keyword-oriented IF it was a programming language.
I really dislike advertisements that want you to "request access." I've submitted them on occasion before, but have never heard back. And the information they want to collect is invasive. I suspect they're all vapourware harvesting information about the gullible.
[Wing team member here] I promise you this is a real project with a very large code-base already.
You are welcome to request access, and I promise to let you in very quickly so you can see.
The reason it is not yet completely open is that we want a way to engage more with our first community members and get some data about their needs.
Vapourware harvesting exists, but tends to have less substantial content. This looks like a startup that doesn't feel ready to open their wares.
Still, you're not alone in disliking this kind of post. "Request access" is a bad basis for a solid HN thread*, because users expect to be able to actually try out the product, and need more information about what it is and how it works in order for the comments to be substantive.
When this product reaches that state, then would be a good time to discuss it on HN.
* When YC startups want to launch on HN, I tell them they need to be past this stage.
This is a common tactic for startups to measure interest and collect leads pre-launch. They probably aren't contacting you because the product isn't ready yet and/or you aren't a "qualified" lead (someone they think is worth spending the time to sell to)
A cloud-oriented language should have a way to surface the financial cost of every function: there should be a cost estimator or inspector that provides estimated cost ranges for execution, or at least some indication of potential cost impact. The language should optimize for cost reduction instead of performance, since the cloud presents a cost/performance tradeoff.
Exactly. The cloud orientation that differentiates it from a non-cloud orientation is not simply a matter of execution abstraction; that is already handled through libraries and scripts, etc. What would be cloud-oriented are the economic aspects of the cloud.

I usually tell people that the cloud is really two separate ideas: the abstraction of infrastructure and computing on the one hand (what used to be called "grid computing"), and the economic model of pay-per-consumption, pay-per-use, pay-per-service that shifts capital expenses (the stuff you own and have already paid for that depreciates over time) to operating expenses (stuff you have to keep paying to use). The CIOs like the technology-on-demand, easy scale-up/scale-down aspects of the cloud, while the CFOs like the shift of CapEx to OpEx and the pay-as-you-go aspects.
[Wing team member here] We are looking into adding cost evaluation and cost optimization abilities in Wing in the future. Right now we are focusing on the basics of the language.
You are welcome to join our GitHub repo and vote on the features that are important for you.
Infracost is OK. It's not great, it's OK. Not to be mean: it's not always accurate, it still has some bugs, and it doesn't cover everything. But it's still the best tool in its class and I use it.
If the value proposition of a cloud language included "infracost but better", I might be more inclined to listen.
I'm co-founder of Infracost, happy to chat more about how we can make it great :) feel free to join the community chat if you want to DM me: https://www.infracost.io/community-chat
None of the links to the GitHub repositories for the language resolve. I checked the link for the compiler [1], the SDK [2], and the VSCode extension [3], and all of them 404. It seems the repositories are all still private, which is odd, given that the page claims:

> However, it's definitely ready for those brave of hearts who would like to be involved at this early stage, influence our roadmap and join us for the ride.
Kind of hard to do that when all the source code is closed.
[Wing team member here] Sorry about the 404s. They are displayed to anyone who has not been given access to the repo yet.
You are welcome to request access, and I promise to let you in very quickly.
The reason the repo is not yet completely open is that we want a way to engage more with our first community members and get some data about their needs.
Seems like a reasonable effort and we probably need more of this kind of stuff. But still I think the industry should focus more on educating developers to prevent them from introducing mostly unnecessary accidental complexity to today's typical apps rather than inventing new layers to manage it.
In general, while it's clear that somehow cloud complexity needs to be abstracted away from developers, I don't think it makes sense that new programming languages or frameworks are necessary to do this well. It makes migration of existing apps and backwards compatibility with the existing software ecosystem too challenging.
At Coherence (withcoherence.com - I'm a cofounder), we believe that a new category anchored by tools like replit, AWS CodeCatalyst, and our products is the solution to this problem, because it does not have the same issues. Instead, it offers best-in-class versions of the same workflows and toolchains that teams are using now, while radically reducing the investment required to get there.
This feels like inventing a technology for the sake of it and then trying to fit a problem to it.
Abstracting over cloud resources is inherently very leaky. Yeah, S3 compliant buckets work, but that's the simplest example possible. Even then, if you're working at scale, you still need to keep in mind features like (AWS) Intelligent Tiering, GET/POST/PUT costs, cross-region costs. This can be the difference between a 15k and 150k bill, you don't want to abstract over it. What's the point of a cloud language if I have to care about the specifics if I'm doing something at scale with it? I can just keep using Java or Python and the respective SDKs.
I don't want to write all of my stack in a cloud programming language. Especially one that is completely new and not cross-compatible with any other language. This isn't just a small thing -- it's a complete dealbreaker. There are no tools, no libraries. It's been a decade since Nim started development; look at its progress now, with so much interest behind it. Creating PLs and compilers follows the 80/20 rule: it will take mountains of work to make the compiler truly optimized and usable, and that's a basic prerequisite.
The cloud simulator is cool.. but there's already localstack which will simulate AWS services much more faithfully. If you don't have faithful simulation (and you can't do that for every cloud service), you can't use the simulation for anything besides playing around anyway. In which case, why not have a dev/testing environment and kill two birds with one stone? There's no point to unit testing cloud things, that's basically all integration testing anyway. You can unit test the code that interfaces with the cloud using the same language-specific tools that have always been used.
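The "unit test the code that interfaces with the cloud using language-specific tools" point can be made concrete. A minimal TS sketch, with all names (`Storage`, `saveReport`, `FakeStorage`) invented for illustration: the cloud-touching logic depends on an interface, so a test can inject a fake client instead of spinning up a simulator or LocalStack.

```typescript
// Hypothetical example: business logic depends on a Storage interface,
// so unit tests can inject a fake instead of a real S3 client.
interface Storage {
  put(key: string, body: string): Promise<void>;
}

// Code under test: derives a key and writes a report to storage.
async function saveReport(
  storage: Storage,
  name: string,
  data: string
): Promise<string> {
  const key = `reports/${name}.txt`;
  await storage.put(key, data);
  return key;
}

// Test double: records writes in memory instead of calling the cloud.
class FakeStorage implements Storage {
  public objects = new Map<string, string>();
  async put(key: string, body: string): Promise<void> {
    this.objects.set(key, body);
  }
}
```

In production the same `saveReport` would be passed a thin wrapper around the real SDK client; the key-derivation and error-handling logic gets tested either way.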
> Yeah, S3 compliant buckets work, but that's the simplest example possible.
Agree; you can’t just say “give me a queue”. You need to tune in-flight messages, processing times, FIFO or not, dead letters — the list goes on. These details are not 1:1 for different clouds, and they can’t be ignored for anything beyond a PoC. If I need to mess with all that in Wing…I could just use the AWS SDK. I can also trust that the SDK has the complete, latest feature set.
[Wing team member here] We feel that the cloud has matured to a point where the basic services have enough in common with each other to allow us to successfully abstract their _functional_ parts. We do not abstract away the non-functional ones. You are welcome to check out the implementation of the standard library on GitHub and let us know what you feel is missing in the different abstractions.
[Wing team member here]
We also believe that creating good, non-leaky abstractions is hard, but we feel that the cloud has matured to a point where the basic services have enough in common with each other to allow us to successfully abstract their _functional_ parts. We do not abstract away the non-functional ones.
It is true that you would get more benefits from Wing as you write more of your code in it and give the compiler more visibility into your code.
But you are not required to write everything in Wing, and there is interop to other languages. Since the language compiles to TF and JS at the moment, you can import JS libraries to use in your code and take advantage of the huge JS ecosystem.
The cloud simulator is not meant to simulate the cloud, including its non-functional concerns - you have LocalStack for that, indeed. The idea of the simulator is to give you the most lightweight and fastest way to test the functional aspects of your code while you develop it. You definitely need to further test it with LocalStack and/or the cloud.
> I don't want to write all of my stack in a cloud programming language. Especially one that is completely new and not cross-compatible with any other language. This isn't just a small thing -- it's a complete dealbreaker. There's no tools, no libraries. It's been a decade since Nim has started development and look at its progress now with so much interest behind it. Creating PLs and compilers follows the 80/20% rule.. it will take mountains of work to even make the compiler truly optimized and usable, and that's a basic prerequisite.
I think this is especially important in this space. If I use Terraform, in addition to tons of tools and experience in the community, there are a variety of mature tools for things like compliance and cost checks. If I use AWS CloudFormation, the same is true. That doesn't mean I wouldn't consider a new tool but it would have to be especially valuable to make up for the cost of losing that baseline.
We're still working on it. You are welcome to join our GitHub and help influence what shape it will take.
Additionally, you can create your own resource, or a target for a resource if you find the one that comes with the standard library lacking.
I am of a similar mind... I will say the work that Cloudflare, Deno and a handful of others have done has been really interesting/compelling. I mean, if you don't like TS/JS, there's still WASM as an option. Similarly I think WASM target frameworks will become very interesting in terms of cloud scaling apps in the longer term.
IMO, I think considerations for general search is probably one of the biggest shortcomings of most cloud first solutions.
I haven’t looked through everything, but I opened a random TS comparison:
Wing
let x = 1; // x is a num
let v = 23.6; // v is a num
let y = "Hello"; // y is a str
let z = true; // z is a bool
let w: any = 1; // w is an any
let q: num? = nil; // q is an optional num
TS:
const x: number = 1;
const v: number = 23.6;
const y: string = "Hello";
const z: boolean = true;
const w: any = 1;
const q: number | undefined = undefined;
Except you’d never do that in TS unless you were being extra verbose (which you could with Wing too) because the types for x, y, and z are superfluous and don’t change the inferred type. So if you remove those types, the examples become identical (except for using const vs let).
Intentionally showing TS in a worse light than reality in a comparison is not a good look in terms of intellectual honesty.
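For reference, here is what the idiomatic TS side of that comparison looks like when you lean on inference, which is the commenter's point: the compiler assigns the same types without annotations, so only `any` and the optional need to be spelled out.

```typescript
// Idiomatic TS relies on inference; annotations below would be superfluous.
const x = 1;        // inferred as number (literal type 1 under const)
const v = 23.6;     // inferred as number
const y = "Hello";  // inferred as string
const z = true;     // inferred as the literal type true; with let, boolean
const w: any = 1;             // any must be annotated explicitly
const q: number | undefined = undefined; // optionals are union types
```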
I'm on the fence about this one. When I heard it was announced, and that it was created by none other than the creators of the amazing AWS CDK, I was really excited by what could be possible. Having worked on complex infra automation using CDK (we use it extensively for our open source project for analyzing security logs on AWS: https://github.com/matanolabs/matano), I was excited because of the room for improvement with an integration that is language-native.
But after having looked into the abstraction that Winglang, and other "infrastructure-from-code" providers, have come up with, I'm admittedly very skeptical. As others have mentioned, cloud primitives are almost by nature a leaky abstraction with many bells and whistles to be tuned. So I'm not sure it is a good idea, or feasible in a complex production application, to build on these very high level primitives such as cloud.Queue without limiting yourself to the lowest common denominator of features. But perhaps this issue is solvable by creating a nicer SDK.
What bothers me the most is having to write code in a completely new language, one that kind of treats runtime code as a second-class citizen to be embedded in a configuration-oriented language that looks like TypeScript with some magic added in. Imo, this is far too much friction and risk vs. the benefit that could come from something like this over using your language of choice along with CDK.
I'm still rooting for Wing, and hoping they can figure out these issues, because the problem they are solving is a massive one. I think Winglang has the potential to do for cloud, what Rust did for memory safety by doing smart things at compile time and enforcing policies that could easily be missed by developers. For example, automatically deriving least privilege and minimal permissions for all infrastructure could be a great way to improve security out of the box.
If anyone is interested in Infrastructure as Software, I also recommend giving Pulumi a try. I've been using Terraform for a couple of years, Pulumi for about a year and I think that Pulumi is on a completely different level.
It also supports higher level abstractions (Component Resources). For example: check out AWS EKS provider.
I work on the solutions engineer team for Pulumi. We absolutely offer enterprise support, I'm one of the engineers that is part of that team. If you'd like to discuss further, you can contact me on lbriggs[at]pulumi.com
The example on the homepage is a good example of why I don't think I'd find this useful. Using Node or Go or whatever to create a file in an S3 bucket based on an incremented value from DynamoDB isn't that hard. The code necessary is maybe a few tens of lines using the official libraries. Actually setting up all the infrastructure to get it to work, with monitored quotas to check things won't fall over, with properly secured IAM profiles, etc, is where all of the pain lies for me.
The latter part is exactly what this is going to be doing. Same way that in AWS CDK you can get higher level constructs that can vend more secure resources. The goal is to make that even more fluent, across different infra deployment architectures
They include their motivation in their language spec:
"What makes wing special? Traditional programming languages are designed around the premise of telling a single machine what to do. The output of the compiler is a program that can be executed on that machine. But cloud applications are distributed systems that consist of code running across multiple machines and which intimately use various cloud resources and services to achieve their business goals.
"Wing’s goal is to allow developers to express all pieces of a cloud application using the same programming language. This way, we can leverage the power of the compiler to deeply understand the intent of the developer and implement it with the mechanics of the cloud."
Software stacks keep becoming so unnecessarily complex. I suspect this approach will hide more stack details and therefore promote even more pico-services madness.
I believe in the advantages of using language-based techniques for dealing with distributed systems, but this is buying into the AWS ideology. This ideology can be summarized as: your application is composed of infinitely scalable functions. Which sounds great until you account for the overhead. Then, you wake up and realize you are being nickel-and-dimed to death.
Unfortunately, there is no awakening: you look over at your AWS bill and triple-redundant HA K8s cluster that serves your 3kb SPA blog to 3 users a month.
I may have only looked too quickly, but the cloud-ness of this looks like more of a library thing than something requiring first class language support.
On the language side, it would be helpful to have a comparison with other async-first runtimes (js, go) in order to understand how/whether the fundamentals here differ.
Wing has a concept of `inflight` which you can think of as a "remote async function". It's an `async` function that can be executed on a remote system, such as inside a container fleet or on a FaaS. Inflight functions can interact naturally with cloud resources around them (by simply calling inflight methods on the resources e.g. `bucket.put()`). The compiler analyses these interactions and inverts the control over to the resources to take care of the mechanics like wiring deployment information, synthesizing security policies and anything else that can be deduced from this high level intent.
Defining the cloud architecture of the app and being able to naturally cross these distributed boundaries is the essence of what we call "cloud-oriented programming", and where we think a lot of the friction and pain of the cloud comes from today: every time I need to interact with "The Cloud", I leave the safety and comfort of my compiler, and I am out in the wild having to understand all the mechanics and layers involved.
Almost all existing languages/compilers take a fundamental assumption that the entire program runs inside a single machine. This is, in our view, the impedance mismatch of cloud development today. This is where we believe language innovation can dramatically reduce the cognitive load and barrier to entry for building and delivering cloud applications that fully take advantage of the cloud. Inflight functions and resources are only the first step. Think first-class support for things like defining and consuming API endpoints, writing distributed workflows, emitting metrics, raising alarms and other things you would expect from your friendly neighborhood cloud programming language.
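The "inversion of control" idea described above (the compiler handing interactions over to the resource, which synthesizes wiring and security policies) can be sketched roughly in plain TS. All names here (`Bucket`, `grant`, `synthesizePolicy`, the policy strings) are invented for illustration, not Wing's actual output: the point is only that once a resource knows which handlers touch it and how, least-privilege policies can be derived mechanically.

```typescript
// Hypothetical sketch of deriving least-privilege policies from declared
// handler/resource interactions, in the spirit of the comment above.
type Op = "put" | "get";

class Bucket {
  private grants: Array<{ handler: string; op: Op }> = [];

  // Recorded at "preflight" time, when a handler declares an interaction.
  grant(handler: string, op: Op): void {
    this.grants.push({ handler, op });
  }

  // What an IaC backend could derive from the recorded interactions.
  synthesizePolicy(): string[] {
    return this.grants.map(
      (g) =>
        `allow ${g.handler}: s3:${g.op === "put" ? "PutObject" : "GetObject"}`
    );
  }
}

const bucket = new Bucket();
bucket.grant("onUploadHandler", "put");
bucket.grant("reportHandler", "get");
const policy = bucket.synthesizePolicy();
```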
Interesting. Reminds me of Pulumi, which uses Python/JS to define cloud resources.
The differentiator is the simulator, that could be a huge advantage - you could test your terraform and app with a single tool.
No need for terratest and local lambda sim with AWS SAM.
However, it does look very limited at the moment - we'll need more than just buckets and queues. And the more you add, the more complicated things get. Look at Pulumi and what a mess it is to define an env (dependsOn, apply(), waiting for resources, etc etc).
Perhaps it should be a module/lib rather than a lang on its own.
How will I write complex functions? Will I need to import scripts from other files? Odd.
Roadmap leads to a 404 and I can't find any information on what kind of cloud support I could expect from this. I don't really see a reason to care about this over something mature like Serverless or CloudFormation templates if it only supports AWS. I also don't necessarily see how well Cloudflare Workers / KV / D1 translates to AWS Lambda / DynamoDB / RDS so that I can write stuff once and get it working on both backends.
My future perfect "cloud" language has grammar for idioms like queues and counters. Ditto pubsub, logging, metrics.
And structured concurrency. Not async/await, promises, and so forth.
--
Along the lines of "we were promised jetpacks", I'm still stuck on 1990s-era future perfect notions like software agents and grid computing. Like the cloudlet (vs applet, servlet) manifestation of Sun Microsystems' Jini and JXTA.
Disclaimer, I work at Monada, the company behind wing
I completely agree with you. A language that considers cloud development a first-class citizen must embrace something like OpenTelemetry for distributed logging (spans, traces, metrics, logs). When you log something or measure something in your code, it should always be part of some distributed trace. We want to embrace this thinking for localhost development and allow our customers to choose their APM provider when they deploy. In a sense, we don't see a difference between a cloud resource called a logger (can be implemented by DataDog, New Relic, Coralogix, etc.) and a cloud resource called a Queue (can be implemented by SQS, Redis, RabbitMQ, etc.)
Also, as you mentioned, because in practice all the `inflight` (https://docs.winglang.io/concepts/inflights) code is async by nature, the language should wait for any API call to be resolved (i.e., "await") by default; deferring an API call is the exception.
My main problem with uniform solutions like this (or Shuttle) is that they lose one of the main strengths of distributed systems: the ability to mix and match different technologies and platforms (both for development and deployment) in the same system.
I like the idea of creating languages that capture a specific problem really well and making them easier to reason about. At a glance, I couldn't understand why this language makes anything in the cloud easier to do. The front page should focus on what the language does better than others. From what I can tell by going deeper into the docs, the selling points are a unified API for cloud resources across different vendors, inflight functions, and the simulator. Hopefully this language continues to grow and becomes compelling enough to use instead of just TypeScript (which seems to be its closest relative)
It's worth noting that the creator of Wing is Elad Ben-Israel - formerly the creator of AWS CDK, which has by all accounts been quite popular in the AWS infra-as-code world. I kind of view Wing as "CDK++" - if CDK was such a success in flexibly defining and testing infrastructure, what could you do if you merged it further into your language?
I think there would be a lot of value in having an imperative, very high level language for cloud workflows that abstracts the underlying implementation or even cloud provider away. Think something like "when authenticated user uploads file of type image: resize to width 800px and store in bucket user-images". Given the lack of specific references to S3, IAM, or other concrete services, this could then be executed by different cloud providers.
Nothing really goes "wrong" but it has a lot of historic footguns and was designed around finite worker polling not ultra-wide scaling lambdas. IMO, when most people think they want SQS, they actually want SNS. Rarely, Kinesis Streams is what they wanted. SQS + Lambda is an "almost never" when you're building something greenfield. It's a complexity you shouldn't take as a default, but have as an option, IMO.
So a quick aside: SQS is one of the oldest async "serverless" services in AWS. Historically, web applications ran into issues with large processing jobs, so they would offload this work from the web server, which would synchronously handle requests, and instead do the work in an async manner that aligned with capacity. Job queues were designed to have workers pull messages in a FIFO manner, and then capacity for those jobs could be planned separately from web capacity. A major contender in this space was RabbitMQ, and something like Rails' ActiveJob is an ideal user of SQS.
So SQS is valuable, but why isn't it right here? Well, the short answer is: it is valuable, but in the conditions laid out (use with lambda) there are significantly better options. So let's look at the pattern they're using and talk about it. They're calling SQS a queue here, which is an easy mistake to make since it's in the name. I'm going to toss an asterisk on that for a few reasons: first, because lambda will consume more quickly than most producers, so messages won't really sit idle for the retention window; second, because SQS (at low scale) and single-instance MQs like Rabbit will handle messages in a FIFO manner, but SQS is a service and abstracts scale away. At some point, the FIFO aspect of it becomes "best effort" unless you configure it to be FIFO (really, avoid doing this). To me, this is a *major* design consideration, and arbitrarily calling things a queue when really it's more of a pubsub is a bit more than just pedantry. At scale you *will* see out-of-order, potentially duplicated messages with SQS without FIFO on. Handling duplicates should be at the front of your mind.
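The at-least-once point above has a standard mitigation: make the consumer idempotent. A minimal sketch (all names hypothetical; in production you'd use something durable like a DynamoDB conditional write rather than an in-memory set):

```typescript
// Idempotent consumer sketch: duplicate deliveries of the same message ID
// are detected and skipped, so the side effect runs exactly once per message.
const processed = new Set<string>();
let sideEffects = 0;

function handleMessage(messageId: string, _body: string): boolean {
  if (processed.has(messageId)) {
    return false; // duplicate delivery: skip the side effect
  }
  processed.add(messageId);
  sideEffects += 1; // stand-in for the real work (DB write, API call, ...)
  return true;
}

handleMessage("msg-1", "hello");
handleMessage("msg-1", "hello"); // redelivered duplicate, ignored
handleMessage("msg-2", "world");
```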
So... we're using a pubsub pattern which is still maybe pedantically different from a queue, you're still asking why is that so bad? It works with lambda right? Yeah it does. Here's a list of services that work with lambda in a pubsub manner and would allow a user to publish a message and invoke an async lambda with that message:
* Kinesis Streams
* SQS
* SNS
* Event bridge
* Lambda (use Invoke and set "async" to true)
* DynamoDB / S3 (this one is abnormal but still possible -- there are more that I'd categorize here but these are common)
A lot of people look at this list and quickly eliminate options. DynamoDB table entries and S3 object drops are clearly misuses or special-case jobs (good call). We want a DLQ via config, not writing it ourselves, so async-invoke lambdas are out (bad call - normally DLQs are /dev/null but more expensive). Kinesis Streams has an idle cost (true, but lower than most think), and SNS and EventBridge aren't reliable or don't hold messages ** (this is false). The assumption is that SQS is the only remaining option that has:
* Reliable delivery
* Dead letter queue
* Roughly ordered delivery
* Isn't a "write it ourselves" or "misuse" of service (sorry dynamo/s3).
But... that's not the only service that does that. SNS and EventBridge both meet those qualifications. That surprises a lot of people. Both of these services have at-least-once semi-ordered delivery (same as SQS) with lambda and feature retry and DLQs. They also have a really really really huge feature for any operator who runs big serverless pipelines:
* They're emitter/publisher focused, not subscriber focused. You create a queue for every notified party, you create an SNS topic for every notifying item. This means you can add a secondary consumer (like... a network tap?) and see messages flowing through the system without changing the system. This is huge for debugging.
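The "add a secondary consumer without changing the system" property can be sketched with a tiny in-process pub/sub (hypothetical names; real SNS/EventBridge subscriptions are configured in infra, not code, but the shape is the same): the publisher never changes when a debugging tap is attached.

```typescript
// Emitter-focused fan-out: subscribers attach to the topic, so a debugging
// "tap" can observe traffic without the publisher knowing or changing.
type Handler = (msg: string) => void;

class Topic {
  private subscribers: Handler[] = [];
  subscribe(fn: Handler): void {
    this.subscribers.push(fn);
  }
  publish(msg: string): void {
    for (const fn of this.subscribers) fn(msg);
  }
}

const topic = new Topic();
const delivered: string[] = [];
topic.subscribe((m) => delivered.push(`worker:${m}`)); // the real consumer

// Later: attach a network-tap-style observer; the publisher is untouched.
const tapped: string[] = [];
topic.subscribe((m) => tapped.push(m));

topic.publish("order-created");
```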
At this point you might say: but what if I want disparate services to be able to consume from the SQS queue and steal each other's work? Well... don't use lambda -- the lambda is going to steal all the work -- it can scale up to your account limit without adding more settings that aren't set here. Mostly, SQS and worker queues aren't what you want if you want lambda. The real limit for burst-ability is set at a fairly large account-limit level. There are *some* instances where queuing might be right, but at those scales you'd actually be asking whether Kinesis Streams is a better choice (usually it is).
TLDR: SQS is usually not the tool you want with lambda as a consumer. It works, but SNS or Kinesis are much more common. SQS comes with a lot of knobs and whistles that can dramatically run up your bill when using lambda with naive settings (I've seen this a lot, where the DLQ is setup for redrive and it just fail loops).
Hope this helps a bit. Note: I have about 5 years experience working with serverless/event driven application architecture.
Great work! Seeing the demo on the front page really makes me think the cloud architecture as a whole may just be an operating system in itself.
Heck, I once even thought of comparing microservices to the microkernel in OS design, and that might actually make some sense, time and time again.
Going to assume "cloud" is the package/module and "bring" is similar to something like "import", etc. So it's probably because "cloud" in this case is a module/package that you need to import to use; the language itself isn't actually cloud-oriented, but rather the standard framework (if you can call it that) is cloud-oriented.
The team is top notch (of CDK fame). I found a non-trivial bug; it was assigned and fixed in 30 min. In general, a joy to interact with. Very strong focus on dev experience. After a very subpar experience of building a very large enterprise project using the AWS serverless stack, Wing is the first thing that gives me hope that things can be way better.