I appreciate the letter and trying to work with Hashicorp -- I used to have a ton of respect for Hashicorp. But honestly... at this point...
...just fork it into a foundation. Don't wait for Hashicorp's response. I get wanting to have the appearance of working with Hashicorp, but we've been shown again, and again, and again, and a-fucking-gain that private corporations cannot be trusted to maintain public goods. Only community governed non-profit foundations can do that.
Private corporations will put the bottom line first every single time. And in the case of investor funded enterprises, the bottom line is never ending exponential growth or bust.
Even though I strongly believe the OpenTF fork could open up incredible possibilities for the community (I could go on and on about it), it is the equivalent of a civil war. It doesn't serve the community, and our only interest is in the continued strength of the community that we continue to build for.
Based on my immense respect for what's been built under Hashi's umbrella I'd rather see a change of mind, and an opportunity to honor our pledge of resources (5 FTEs for 5 years) to the common rather than partisan cause.
I really appreciate that, and I do think it's right of you to at least make the attempt.
That being said, I don't expect this attempt to work and I fully believe that a fork is going to be inevitable. I also think a fork is an amazing opportunity to standardize the language and prioritize the features developers want.
It isn't just about the license, but the way that Hashicorp has maintained the Terraform project. The GitHub insights show that they don't have nearly as many people working on it as I would expect, and most of them are split between it and Terraform Cloud. At the same time they don't work with the community that well: there are open issues and pull requests that just get ignored, as Hashicorp clearly doesn't see value in open source contributors. This isn't just a Terraform issue either: my company had to move off of Nomad due to the lack of development and support (as well as broken features).
I have strong concerns about the future of these projects in general beyond just the licensing. An open foundation that had multiple companies involved would by definition need to find a way for those people to collaborate, and once they do that it becomes easier for them to invite community collaboration. So while I do appreciate that it is a drastic step, I think it's one that would also be far better for the ecosystem and project as a whole.
That said, maybe this is the wake-up call Hashicorp needs to fix these problems. If you provide five FTEs, that basically doubles the size of their Terraform development team (they have more people working on it than five, but those people are split across other projects), and once they start working with other groups maybe they'll work with the community more as well. I'm not holding my breath though.
Smells like the end of Chef. Management doesn't understand how much it takes to maintain the open source project, is just pouring resources into sales and marketing and products that they can charge for, and doesn't see how that erodes goodwill and the technological foundation of the company.
I also saw that parallel with Chef. I think it's the story of all VC-funded software that attempts to be "Open Source". For them, Open Source means "you can read the source code, and potentially fix a bug"; for us, it means community, transparency, and fixing bugs beyond those your paying customers have.
I looked at github.com/chef/chef and github.com/inspec/inspec and it's the same as it was shortly after I left. The only changes are from the one person who carried over after the sale to Progress, and the contracting team out of India, with dozens of unanswered queries and pull requests from the community.
What really ruffled my feathers was when they had us define oss-practices (https://github.com/chef/chef-oss-practices); clearly nobody outside our small team read (or understood) those words and goals. It feels like it was work to make us look better in OSS in order to bolster the company sale.
There was a whole lot of community window dressing going on. I still wonder if they weren't trying to ship maintenance of the open source code off onto the community, thinking that if all that work appeared (or thinking that it was actually going on, believing their own bullshit about how involved the community was) they could just leech that work.
There's probably some manager at Hashi right now trying to argue that they should offload TF maintenance entirely onto the community and they should pivot to hosting services and consulting and making money off of all that free work.
Chef said, did, and tried some really dumb stuff, and lots of it failed for obvious reasons. It's like Docker took a chunk of their playbook, and their business went the same way.
Hashicorp isn't going to budge here. The same argument that you've made about Terraform being the underpinnings and needing to be open source can be applied to their other important products like Vault, Consul and Nomad as well. The ecosystem of those three is plainly a direct competitor to Kubernetes, which is open source.
There's really no move for them to make here. It's unfortunate.
Tons of organizations run Vault and Consul as part of their k8s ecosystem, so they don't directly compete. The Vault CSI driver might be the single most installed CSI driver across all the orgs I've worked for.
If you are running Nomad as your orchestrator, because of the tight integrations you are almost certainly running Vault for secrets and Consul for service discovery/service mesh. The ecosystem of the three is the competitor to k8s.
While the Nomad stack is a direct competitor to k8s, Consul and Vault are both heavily used alongside k8s. In fact, Consul had features that were only for k8s the last time I checked.
Genuinely curious: other than Vault, what other product is there for secret management in the cloud infrastructure space? I get that CyberArk Conjur is big in the enterprise space, but I thought cloud users, even with k8s, mostly went with Vault.
It's far more likely to be Vault as a base, actually. The MPL would allow someone like Amazon to use Vault's source as a base, and so long as the core source wasn't modified Amazon would be under no obligation to make their modifications public.
The MPL is a lot more "business" friendly than the GPL.
The very fact that HashiCorp is changing their license and restricting use clearly indicates they see this threat as reality. Amazon/Microsoft/Google/Whoever using HashiCorp's work/effort but keeping all the money to themselves.
There's a lot more reasons to run Vault than those.
Having a standardized way to "do secrets" for any team, any service, any app within the organization is very nice. Becoming cloud-agnostic with your secrets (connecting your local Vault with the cloud provider's vault) is another great benefit. Automatic secret rotation is another. Secret versioning and auditing... etc.
It's not just "can't have this secret in VCS or viewable via kubectl".
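To make the standardization point concrete, here's a minimal sketch using the hvac Python client (the Vault address, token, and secret path are invented placeholders, not anything from this thread):

import hvac

# Connect to a (hypothetical) Vault server; in practice the token would
# come from an auth method like LDAP, Kubernetes, or AppRole.
client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# KV v2 keeps versions of every secret, which is what enables the
# versioning and auditing benefits mentioned above.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
print(secret["data"]["data"])  # the key/value payload of the current version

Every team and app reads secrets through the same call shape, regardless of what sits behind the mount.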
> It's not just "can't have this secret in VCS or viewable via kubectl".
That is exactly what it is.
You seem to misunderstand (and thus downvote?) the statement I made. I'm not saying "haha vault bad", I'm answering "what other product" (from ghshephard) with the reality of today.
This has nothing to do with what Vault is or isn't, but simply with what is being used right now for storing secrets in a uniform way in clouds, for use with cloud workloads.
I did not downvote you, no. Downvoting because we disagree isn't how that's supposed to work, even though some use it that way.
Regardless, the use of Vault is not exclusive to cloud environments.
All of the listed features of Vault have benefits within larger organizations even if they don't use the "cloud" and deploy monolithic applications.
Most frameworks have built in ways to fetch secrets/config from Vault, making it an easy standardized way to do things across all of your applications/teams.
It doesn't mean you need to use it, of course, but it has a lot of perks for many different situations.
I totally agree that Vault is more than a glorified password manager (because that's what most clouds have in their implementation of a secrets store), but the thing is, everywhere I go, I don't see people use Vault; I see them use whatever AWS/Google/Azure happens to have (and often badly).
I'm not sure if that's even what ghshephard meant when he asked about 'products', since technically all those cloud-integrated services aren't really stand-alone products for that matter.
In AWS for example, with or without EKS (and then something like External Secrets Operator in the EKS case), it's all just AWS Secrets Manager and sometimes Parameter Store. In a few cases people do manual encryption (using KMS), but in no case was HashiCorp Vault used.
Often, it's even worse: no secrets management at all. Stuff just gets pumped into environment variables (more often than not they get committed as .env files to Git), and there's just no drive to change that, even when a business policy is in place. Some even 'work around' this by storing secrets in password managers like 1Password and LastPass so they can check the compliance box without actually protecting the secrets (since they also live in plain text in VCS and at runtime in the environment).
In terms of 'products', I'd say Vault and the cloud ones don't really compare, but reality is depressing and secrets are often not as secret as the name implies. From a developer perspective, they might compare them because they desire the secrets to be injected into the environment either way, and as such the source doesn't matter much. I'm not sure if we should see that as a feature or a bug.
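As a rough sketch of the AWS-native pattern described above (the secret name and region are made up for illustration):

import json

import boto3

# Workloads typically read credentials straight from AWS Secrets Manager.
sm = boto3.client("secretsmanager", region_name="eu-west-1")
resp = sm.get_secret_value(SecretId="prod/myapp/db")
creds = json.loads(resp["SecretString"])  # e.g. {"username": ..., "password": ...}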
We use Vault as a framework that associates authentication with secret engines via a policy framework. The secret engines could be AWS, a Postgres database, PKI, an SSH certificate signer, key-value stores, etc., and the authentication framework might be LDAP, Okta, or plain tokens. The policy framework is pretty dynamic and has many thousands of possible policies mapping various authenticated entities to various authority (read, list, write, etc.) over various secret engines. Combine that with the syntactic niceties of template-rendering integration with the chosen secret store, and maybe some clever stuff around single-use token wrapping, and I think of all of those features as belonging to a single product.
I'm relatively new to this field, and see tons of Vault at colleagues' companies, and have friends who run/support Conjur (Enterprise more than cloud). Those are the only two secret-management frameworks/products I'd heard of, so I was interested in knowing what else had mindshare.
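For a rough illustration of that auth-to-policy-to-secret-engine mapping, here's a hedged sketch with the hvac Python client (the policy path, token, and LDAP group name are all invented for the example):

import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# A policy granting read-only authority over one secret engine path.
policy = '''
path "database/creds/readonly" {
  capabilities = ["read"]
}
'''
client.sys.create_or_update_policy(name="db-readonly", policy=policy)

# Map an authenticated LDAP group onto that authority.
client.auth.ldap.create_or_update_group(name="analysts", policies=["db-readonly"])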
I wonder if some of this also depends on how the secrets are consumed (and created), I'd imagine that if you store things like an API key and secret for a third party API, someone needs to 'enter' that data at some point in time and then set an ACL to allow a person or system to then consume it.
But if you have two programs that exchange secrets between multiple instances of each other (one can do CRUD, the other only read), you'd have much more interaction. Same with a system creating secrets and a human reading them.
As for where it would make no sense at all: automated workload identities where you get time-limited temporary credentials that represent a role; most public clouds have some sort of link-local API, an injection method or mount method to provide ever-rotating secrets which gets picked up by the client SDK automatically. If you are using something like AWS, you'd be able to consume hundreds of services without ever persisting a secret anywhere.
This is also where my 'cloud' (and K8s) remarks are based on; when your workload and your resources speak the same authn/authz with a centrally coordinated policy system, there really isn't much value in adding something in the middle of that, and as such you don't see a lot of Vault and Vault-like implementations.
That said, as soon as you add something disconnected like local virtual machines, on-prem stuff, etc., authentication has historically been extremely bad, and unless you brought a proper Kerberos setup you were screwed beyond mitigation. That's where Vault (when it came out) delivered a lot of value. It's probably also why we see AWS, IBM, GCP, Azure in the same list with Vault and CyberArk. I'm surprised VMware doesn't have anything yet, but perhaps they recognise they lost this one already.
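To illustrate the automated workload-identity point above: on AWS, the default credential chain makes this invisible to the code. A minimal sketch (assuming an ambient IAM role, e.g. on EC2 or EKS):

import boto3

# No keys are passed anywhere: boto3 fetches ever-rotating temporary
# credentials from the link-local metadata endpoint automatically,
# so no secret is ever persisted to disk or environment variables.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])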
Why? It is open source. A fork should be no big deal, and definitely not a “civil war”. I think the community should be quicker to fork open source projects that are not serving the needs of the community.
The corporations are trying to have the benefits of open source without the responsibility. Forking is a normal, acceptable part of open source and we should normalize it.
What would it mean to “normalise” forking? The costs of maintaining a fork are significant, and if one group of programmers is being funded to work on the project then it can be very difficult to fork it in any meaningful way without significant resources behind it.
Also IIUC most of the parties in this conversation are corporations. They’re all trying to enjoy the benefits of open source development for a variety of reasons.
Currently forks are painful because they aren't normalized, i.e. our tools and workflows don't expect them. I'm saying that rather than discouraging forks we should adapt our tools and workflows to expect them.
But the real work is all the hard work that goes into a fork. I've watched open forks die all the time; all it takes is no one stepping up to do/pay for the work, which is basically the default, because it is in everyone's interest if someone else is the one to do that.
I think that's really the crux of the problem: there are plenty of folks willing to maintain software for money, and a whole lot of people who'd rather it not cost money, and if it does, not their money.
If the tooling is better, who is going to maintain this?
I recognize you're in an interesting position at Spacelift. Per your recent analyses you may not be impacted, and in this case you probably do not want to piss off Hashicorp folks :)
In reality though, force is best responded to with force, and showing Hashicorp that what was Terraform will be successful as an open source project with or without them is the best way to get them to reconsider.
hashicorp has decided they don't want to contribute to open source any more
they're totally within their rights to do so, and it doesn't harm anybody; it's not the equivalent of going around blowing up buildings, raping women, and napalming children. at most we can wish they had continued doing the beneficial things they were previously doing
maybe they'll change their minds, as you say, but that's no reason for the community to sit around twiddling its thumbs hoping for such a change. what's important now is that the people who are still willing to cooperate can do so successfully, and that's what opentf is about
that's even more obviously not the equivalent of going around blowing up buildings, raping women, and napalming children. it's very much the opposite, in fact
it's unclear to me which of the parties you intend to accuse of doing the moral equivalent of burning innocent people alive en masse, but either way, maybe you should think about walking back that rhetoric a bit
my objection is not that mass graves, piles of mangled bodies, your close friends unexpectedly disappearing into pink mist, and terrible stenches are too sacred to be used as a metaphor for something else
my objection is that warfare involves people intentionally harming each other, and that doesn't seem to be what's going on here. it's not that war is a more extreme version of the situation; it's that it's directionally different
rather, hashicorp is struggling to not go bankrupt, so they've decided to switch to making a proprietary product instead of an open-source product; and terraform users, naturally enough, are reluctant to make their infrastructure vulnerable to a proprietary software license. hashicorp is not intentionally harming terraform users, and terraform users are not intentionally harming hashicorp
they're just not continuing their previous cooperation
Not sacred enough for you to invoke the Holocaust in a discussion about data protection legislation and the "right to be forgotten" within mere hours of this statement, however...
Such hypocrisy, attacking others for "rhetoric" that needs to be "dialled back a bit" when you're every bit as guilty of the exact. same. thing.
please note that the comment you are replying to says the opposite of what you are implying it does; it says 'my objection is not that mass graves, (...) are too sacred'
the comment you are referring to, for anyone who is interested, concerns the question of whether or not there is a higher standard of morality to which legislation can be held, or whether legislation itself is the ultimate moral authority, or whether there is in fact no objective standard of morality at all. anyone who is interested in that kind of thing can read it at https://news.ycombinator.com/item?id=37147305
it's not a possible schism. hashicorp has clearly and unmistakably abandoned the open source community. conceivably they'll change their minds, but their communication doesn't have any ambiguity in it
i have no idea what you could possibly mean by 'focus on the "civil"'. it's good that people are being civil to one another, isn't it? then why are you criticizing them?
You're missing the point: the OpenTF group wants to mend the schism if possible, by getting Hashicorp to change their licensing back. If they immediately fork then that's not likely to happen, so they're attempting this first.
I don't think it will work, but I think it's good of them to try.
mending the schism would be great, but we probably can't do that by pretending it doesn't exist like https://news.ycombinator.com/item?id=37139929, or analogizing it to blowing thousands of children's extremities off, or analogizing acknowledging its existence to blowing thousands of children's extremities off
I'm still not sure which of the latter two was the intent of the comment I was responding to
Don't you think you should check yourself before attacking others?
> "it's unclear to me which of the parties you intend to accuse of doing the moral equivalent of burning innocent people alive en masse, but either way, maybe you should think about walking back that rhetoric a bit"
... but only earlier today you were equating a data protection law with the Holocaust... seems you could do with a bit less projection and rhetoric yourself, Kraggy.
this is careless reasoning; rather than equating a so-called 'data protection law' with the holocaust, i said the justification others were using for that law was incorrect, because if it were correct, it would also justify the holocaust
this is not an extremely advanced form of logic, but i understand that it is not within everyone's grasp
my objection to the war rhetoric in this case is that it casts people as opponents who are not, in fact, opponents, just different parties pursuing largely independent interests. in that context playing 'let's you and him fight' seems unlikely to improve the situation
Why use so-called? It's literally a law concerning data protection. Unless you feel that the empirical truth somehow shouldn't be used to describe the GDPR and the colloquial "right to be forgotten" aspects it entails, you're just trolling for trolling's sake; either that, or you don't even know that you're using "so-called" improperly.
Once again you jump to mansplaining and condescension, followed by failing to even get a username correct when it's literally on your screen.
Your repeated attempts to tell people what they think, what they do or do not know, and where they live, show that every observation of you being arrogant, condescending, and disingenuous, is patently correct.
Unsure if you need a shovel to get out of that hole you've put yourself in, but you're sure backpedaling quickly, yikes.
As a user of and collaborator on TACOS, agreed that it could open up opportunities. Though I echo the trade-offs (as in other replies): it could start a civil war that makes things difficult for end users and collaborators. Reminds me of Python 2 -> 3, Presto/Trino, and many other stories. Pledging resources is a great approach.
Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do? They are now way more likely to pick a different tool that is genuinely open source. The same is true of every dev considering where to build their career, every hobbyist, every open source enthusiast, every vendor, etc. In the end, no matter which fork wins, everyone will be worse off: the community will be smaller and more splintered.
So we opted to ask HashiCorp to do the right thing first. If they choose to do the right thing, we can avoid a fork, and avoid splintering the community. We still think that's the best option. But if that doesn't work, then a foundation + fork it is.
> Imagine a future CTO trying to pick the IaC tools for their company. They see Terraform as an option, but then learn there are multiple forks, licensing questions, and a big battle happening in the community. What do they do?
I truly believe that a CTO who sees Terraform as an option and who isn't scared off by the BSL, but then has all of these other concerns, exists only in fantasy.
> You may make production use of the Licensed Work, provided such use does not include offering the Licensed Work to third parties on a hosted or embedded basis which is competitive with HashiCorp's products.
Read benevolently it's a prohibition from spinning up a service based on HashiCorp's code and undercutting HashiCorp's pricing.
On the other hand, if I build a product with HashiCorp-owned BSL'd code, then HashiCorp releases/acquires a product that competes with mine, then my license is void.
Redis is 3-clause BSD; BSD does not have a "your license is void if you sell a product that competes with us" clause. Redis does have enterprise products that are licensed in a manner similar to BSL, but Redis itself is not.
MongoDB and Elastic are SSPL. SSPL approaches the problem like the AGPL; it compels licensees who sell a service derived from the software to make available under the SSPL the source of all supporting tooling and software so that a user could spin up their own version of the service.
There's an argument to be made that SSPL is de facto "you can't compete with us" since it would be more challenging to make a competitive SaaS offering if your whole stack is source available. I don't disagree. However, as distasteful as SSPL is, at least it doesn't grant licensing to a product conditionally on the unknowable future product offerings of HashiCorp.
Thanks for the explanation. My understanding is that they are all after limiting competition in various ways, while still trying to maintain the mantle of open source.
We are certainly in interesting times around the monetization / financial sustainability of open source.
SSPL has no provision even close to the reach of the "anti-competition" clause Hashicorp is using. While SSPL is not considered open source, it isn't that far off from the AGPL. The difference between SSPL and AGPL is that SSPL (1) is in effect regardless of modification of the service and (2) extends copyleft virality to all programs which support running the service, including those that interact with the software over a network.
MongoDB, Elastic, etc. cannot stop you from running a competitor based on the terms of their licenses, they just ask that you publish the source code for whatever service you're running in its entirety (I acknowledge there are disagreements about how far "entirety" extends). The clause in Hashicorp's license actually revokes the right to use their software at all if you're a direct competitor.
OK, no one is going to build an open source competitor to Elastic or MongoDB because then you have no moat and your business will probably fail, I get it, but it's still possible to do without repercussion. It's not like the AGPL is that far off in terms of limitation, either, which is why you don't see many copyleft services run by large corporations unless they've been dual-licensed.
Just went with Elastic cloud after evaluating both Elasticsearch and OpenSearch. It was an easy choice to stick with the incumbent/creator that I was familiar with. No complaints so far.
Pulumi has a few languages other than YAML, and Pulumi is declarative[1], and the programs you write are only as complex as you want them to be. This Python program declares an S3 bucket and declares ten objects to exist in it.
from pulumi_aws import s3

bucket = s3.Bucket('bucket')
for i in range(10):
    s3.BucketObject(
        f'object-{i}',
        s3.BucketObjectArgs(
            bucket=bucket.id,
            key=str(i),
        ),
    )
Even so, Pulumi YAML has a "compiler" option, so if you want to write CUE or jsonnet[1], or other[2] languages, it definitely supports that.
Disclaimer: I led the YAML project and added the compiler feature at the request of some folks internally looking for CUE support :)
I'm aware of the SDKs, but we don't want them because they are an imperative interface, no matter how you want to spin it as "declarative". I have access to all the imperative constructs in the underlying language and can create conditional execution without restriction.
Even if I use the YAML compiler for CUE (which we did) I still have to write `fn::` strings as keys, which is ugly and not the direction our industry should go. Let's stop putting imperative constructs into strings; let's use a better language for configuration, something purpose-built, not an SDK in an imperative language. These "fn::" strings are just bringing imperative constructs back into what could have been an actual declarative interface. Note, Pulumi is not alone here; there are lots of people hacking YAML because they don't know what else there is to do. CEL making its way to k8s is another specific example.
This cannot be the state of the art in ops; we can do much better. But I get that Pulumi is trying to reach a different set of users than devops, and will end up with different choices and tradeoffs.
The imperative part of that code appears to be analogous to templating. The actual work done under the covers is not imperative, but is based on the difference between the result of the template execution and the current state of the system. That's what makes it declarative.
It really depends on the interaction between the user's Pulumi script and the Pulumi engine.
If there is more than one back and forth, you become imperative. Even if you imperatively generate a "declarative" intermediate representation (not really sure how a state file at a point in time could ever be imperative), you then get back some data from the engine and make choices about what to send off to the engine in the next request.
It's important to understand that with Pulumi, you can end up in either situation. "You have to be careful to not become imperative overall" is probably the better way to consider this.
Another way this can break down is if the user writes code to call the same APIs in the middle of a Pulumi script. I meant to try this myself to verify it works, but I would assume that Pulumi is not stopping me from doing something like this.
In general maybe, but in the specific context above, I think calling that loop declarative is accurate, and laughing at that classification is a poor response rooted in a deep misunderstanding.
import pulumi
from pulumi_gcp import storage

bucket = "hof-io--develop-internal"
name = "pulumi/hack/condition.txt"

cond = False
msg = "running"
cnt = 0
# Poll the bucket object until its content says "exit", making
# imperative decisions mid-run based on responses from the engine.
while not cond:
    cnt += 1
    key = storage.get_bucket_object_content(name=name, bucket=bucket)
    print(cnt, key.content)
    if key.content == "exit":
        msg = "hallo!"
        break

pulumi.export('msg', msg)
pulumi.export('cnt', cnt)
---
769 exit
770 exit
771 exit
772 exit
773 exit
774 exit
775 exit
Outputs:
cnt: 775
msg: "hallo!"
Resources:
+ 1 to create
info: There are no resources in your stack (other than the stack resource).
Do you want to perform this update? [Use arrows to move, type to filter]
yes
> no
details
----
Of note: all but the last exit had a newline, until I `echo -n`'d the file I copied up.
TF might be susceptible to the same file-contents manipulation between plan & apply as well, but then again, you can save a plan to a file and then run it later, so maybe not? Another experiment seems to be in order.
I think this is an advantage of Pulumi, here are two use cases:
1. Creating a resource where created is not the same as ready. This is extraordinarily common with compute resources (a virtual machine, a container, an HTTP server, a process) where attempting to create follow-up resources can result in costly retry-back-off loops. Even when creating Kubernetes resources, Pulumi will stand up an internet-connected deployment more quickly than many other tools because you can ensure the image is published before a pod references it, the pod is up before a service references it, and so on. (The Kubernetes provider bakes some of these awaits in by default.)
2. Resource graphs that are dynamic, reflecting external data sources at the moment of creation. Whether you want to write a Kubernetes operator, synchronize an LDAP directory to a SaaS product, or, one of my favorite examples: when I set up demos, I often configure the authorized public IPs dynamically:
import * as publicIp from 'public-ip';

new someProvider.Kubernetes.Cluster('cluster', {
    apiServerAccessProfile: {
        // allow access only from wherever I'm demoing right now
        authorizedIPRanges: [await publicIp.v4()],
        enablePrivateCluster: false,
    },
});
Of course you think it is an advantage; you work for Pulumi.
I'm telling you this is not how a potential user sees the same situation; to them it is a disadvantage, and it was one of the reasons we are not making the switch.
The example above is exactly the kind of code we don't want in ops: it depends on the user environment and physical location at the time they run the command. Bad practice. Thanks for an extra talking point though.
The claim above is that Pulumi uses an imperative interface and that it is quite easy to slip past the declarative guardrails, so in most cases Pulumi is imperative, not declarative. The fact that Pulumi makes this separation opaque can be discussed, as can an alternative that makes the separation clear and shows its benefits.
The claim I keep seeing from Pulumi folks is that Pulumi is declarative, which it is not, as shown in multiple posts by many people. Please stop calling it such; it demonstrates dishonesty towards users.
The claim above was that a for loop implied that the code couldn't be declarative.
> Please stop calling it such
I'm not claiming it is always declarative; I'm only claiming that a declarative example above can contain a for loop, and that laughing at that is the wrong response. That's it.
When someone tries to make a sophisticated argument that up is down and white is black, dismissive and shallow is the right response.
> The actual work done under the covers is not imperative
Having a declarative layer somewhere in the stack doesn't make something declarative, if that's not the layer you actually use to work on and reason about the system. See the famous "the C language is purely functional" post.
You can have loops and still be declarative. CUE has loops, though more technically they are considered comprehensions, and there is no assignment or stack in CUE.
One of the interesting aspects of CUE is that it gives us many of the programming constructs we are used to, but remains Turing incomplete, so no general recursion or user-defined functions. There is a scripting layer where you can get more real-world stuff done too.
The CUE language is super interesting, has a very unique take on things, and comes from the same heritage as Go, containers, and Kubernetes.
Nagios used to be open source only; then they created the Enterprise version and left the open source core lagging behind. It was forked a billion times or more :) creating the "Nagios Effect". A lot of monitoring software / companies then removed / replaced the Nagios core in their products.
I didn't know either, so I did some Googling and found an old announcement[1] from 2009:
> A group of leading Nagios protagonists including members of the Nagios Community Advisory board and creators of multiple Nagios Addons have launched Icinga – a fork of Nagios, the prevalent open source monitoring system. This independent project [is based upon a] broader developer community. [...] Icinga takes all the great features of Nagios and combines it with the feature requests and patches of the user community.
It also looks like in 2014, Nagios centralized and appropriated a domain name and website used for hosting Nagios plugins, away from the community (its plugin developers)[2]:
> In the past, the domain "nagios-plugins.org" pointed to a server maintained by us, the Nagios Plugins Development Team. The domain itself had been transferred to Nagios Enterprises a few years ago, but we had an agreement that the project would continue to be independently run by the actual plugin maintainers.¹ Yesterday, the DNS records were modified to point to web space controlled by Nagios Enterprises instead. This change was done without prior notice.
> To make things worse, large parts of our web site were copied and are now served (with slight modifications²) by <http://nagios-plugins.org/>. Again, this was done without contacting us, and without our permission.
> This means we cannot use the name "Nagios Plugins" any longer.
> [Icinga developer]: "Six months before the fork, there was a bit of unrest among Nagios' extension developers [...] Community patches went unapplied for a long time[.]"
> [...]
> Two years ago, more or less when the split happened, [Nagios author] was having problems resolving [trademark] issues with a company called "Netways".
I'm still not sure what the effect is supposed to be tbh.
I don't get this one: you pick OpenTerraform and get on with your life. It's the same as picking OpenSearch over Elastic. I can use the proprietary version that locks me into a single profit-seeking vendor and doesn't have community backing, or the one run by a foundation made up of companies that use and are heavily invested in Terraform.
How dare a vendor come up with an idea, pay people to execute on it, give it away for free to the world, acquire users and soak in all the community contributions from people who thought they were using and contributing to a public good, try and fail to indirectly monetize a hosted version because other people were better at it than them, then rug-pull out from under everyone and use copyright/government-stick to kill their competition because they can't compete on even terms.
Then a group of people who are users of the idea and actually making money off it with value-adds step up to maintain it as a community project, ensuring that it stays open for everyone -- yeah, those guys are the assholes. Terraform would have gone nowhere if it wasn't OSS, and Terraform would be nothing without its outside contributions, which make up far more than the code of Terraform core itself. There's a trail of bodies to prove it.
And you should love this: projects that are stewarded by their own users are incentivized to be the best they can be, instead of rejecting contributions because they compete with their cloud offering [1].
The guys at Pulumi must be having a field day right now. It's exactly as you describe it for us. We're long overdue for an upgrade of our Terraform config from pre-v1.0. We most likely have to rewrite a big part of our HCL code, so why not try a competitor?
With Vault however, that's another story; I've yet to find another secrets management system that has tight integration with Kubernetes and AWS, and supports providers for things like PostgreSQL to have ephemeral database credentials.
I totally agree. I do not think pleading with Hashicorp to reconsider will result in changing back the license.
Doing the fork and showing it IS sustainable and has broad community support can encourage Hashicorp to make concessions.
After taking this unilateral hostile step I do not think Hashicorp deserves the community's trust, and what the industry needs is a foundation-governed Terraform-like solution, whatever name this solution ends up with.
You can see an example in Confluent, which builds proprietary solutions around Kafka, where Kafka itself is an Apache project.
Why not get it under the umbrella of either the Linux Foundation or CNCF? Things like this and Ansible should really be kept under neutral organizations, not companies like Red Hat and HashiCorp that have shown that all they care about in open source is the free work they get from contributors.
They can't retroactively take source code away from people they already granted access to it under the MPL. The old code is still available under the MPL forever: even if they take down all of their own public copies of it, anyone with the old Terraform code is still free to upload their copy for the creation of a new fork. That's kinda the whole idea with these open-source licenses :)
I've heard some people discuss that the contribution agreement that Hashicorp makes people sign gives them the right to change the license for existing contributions, but I'm not a lawyer so I really couldn't say for certain either way.
HashiCorp makes its external contributors sign a CLA to basically hand over the copyright.
However, the MPL, and licensing in general, is irrevocable. They have irrevocably licensed Terraform 1.5.5 under the MPL and an enterprise license (dual license). Anyone can use, modify and distribute version 1.5.5 under the terms of the MPL.
Since HashiCorp retains full copyright they can release the next version under the BSL.
Note that many free software projects (like Linux) don't have a CLA which makes relicensing impractical since every contributor would have to agree to it.
> In economics, a public good (also referred to as a social good or collective good)[1] is a good that is both non-excludable and non-rivalrous. For such goods, users cannot be barred from accessing or using them for failing to pay for them. Also, use by one person neither prevents access of other people nor does it reduce availability to others.
Any free open source product qualifies as a public good. It is free for all to use, and one person using it does not exclude anyone else from using.
Speaking of nuclear options: we need to get the providers to pledge to follow the fork, maybe via some kind of API incompatibility. Terraform is useless if the providers don't work with it, only with the fork. Focus on disrupting the ecosystem.
This is the interesting part of all of this. The meat of Terraform is in its provider ecosystem. Anyone can make a new frontend (or even fork the existing one?), get rid of all the warts, add the missing encryption features gated under enterprise and have a much better tool.
Hashicorp left the provider frameworks under the original licenses, probably because they don't want to scare provider developers off. So for now both Terraform and a potential fork can continue sharing the same providers without issue.
We at Oxide were honored to be asked to add our name to the OpenTF Manifesto. Our statement:
At Oxide, our vision has been that on-premises infrastructure is deserving of a system consisting of both hardware and software, at once integrated and open. Ensuring Terraform users can easily deploy to Oxide has been essential for realizing this vision: we want customers of an Oxide rack to be able to use the tools that they know and love! And while HashiCorp's move to the BSL does not immediately affect Oxide (our Terraform provider is and remains MPLv2), we recognize that the ambiguity in both the license and HashiCorp's language has created widespread concern that gives customers pause. We support the OpenTF efforts to assure an open source Terraform. It is our preference to see an MPLv2 Terraform as led by HashiCorp, and we join the call from the OpenTF signatories for HashiCorp to renew its social contract with the community by reverting the change of Terraform to the BSL. That said, we also agree with OpenTF's fallback position: a BSL-licensed Terraform is not in fact tenable; if Terraform must be forked into a foundation to assure its future, we will support these efforts. Open source comprises the foundation of the modern Internet, and is bigger than one company: it is the power of us all together to determine our fate. But we cannot take that foundation for granted -- and we must be willing to work to exercise that power to assure an open source future.
Thank you to the consortium here that is coming together to guide Hashi to see the wisdom in an open source Terraform!
I can rest easy now that my take on this aligns with Oxide's, and I'm not being sarcastic. I've been following the podcast for 2 years now, and you guys are spot on with your stance on opening up and keeping things Open.
It would be ironic though if Hubris ever gets relicensed.
Just scrolled through your open job postings, and the Control Plane opening is jaw-droppingly interesting. Sounds like a marvelous mission you're on. I'm wishing you all the best and will make sure to check back in with Oxide's progress!
If any Hashicorp people are reading, can you please tell your middle and senior management that this decision has deeply soured my entire DevOps cohort on continuing to use Terraform in the future.
We're already exploring alternatives. Future client projects may not use Terraform at all.
Languages and frameworks must remain open or they will wither and die.
As a Free Software advocate and supporter, I'm thinking about the answer to this question:
- MPL is a weak-copyleft license, which allows companies to grab and run the Terraform codebase, and provide it as-is (as Terraform) or as a white-labeled Terraform-compatible feature/layer. This is alright (because the license allows this).
- These people also contribute their own fixes upstream, which is great, and maintain their own patches if Hashicorp decides to reject them (which is fine, too, this is how ecosystem works).
But HashiCorp says that the thing we develop (i.e. Terraform) is used by others and generates revenue for them; this is great, but we can't generate enough revenue from it to keep the company afloat and continue providing Terraform development, and sell it as a product at the same time.
What should they do? I'd advocate AGPL, but xGPL licenses are avoided like a plague, because Free Software is not "closed forks" friendly, and companies hate to open everything like that.
BSL is neither Free nor Open, which we all hate, but it allows HashiCorp to survive to a degree (this is not a fact, this is what HashiCorp is thinking).
So, just because people adopted it, and HashiCorp cannot survive, should they say: we're closing shop, it's all MIT now, do whatever you want, bye!?
Weak copyleft licenses are not designed for software this big. Or they assume that the developing party is untouchable. Strong copyleft solves this, but companies hate it because of its unrelenting transparency.
What should we do?
P.S.: I neither endorse, nor support BSL, or HashiCorp's decision (or any company treads the same path).
Edit: I mistyped MPL as permissive instead of weak-copyleft. Corrected, sorry.
> But HashiCorp says that the thing we develop (i.e. Terraform) is used by others and generates revenue for them; this is great, but we can't generate enough revenue from it to keep the company afloat and continue providing Terraform development, and sell it as a product at the same time.
> What should they do?
Suck it up, open source Terraform under a non profit foundation, find a new source of revenue. Or stop developing Terraform, cut expenditures, and move on with life.
There's no universe where "bait and switch customers who wanted open source into paying us by switching licenses" is a viable option.
> So, just because people adopted it, and HashiCorp cannot survive, should they say: we're closing shop, it's all MIT now, do whatever you want, bye!?
Exactly, you know the answer, you just don't like the implications. People somehow think that a business which started an open source project "deserves" to profit from it. They do not. Open source is a great way to get people to know who you are and build things that are interoperable with your (proprietary, closed source) SaaS offerings. It is not in itself a revenue source.
If the viability of your business is predicated on being the only one able to provide your project as a service and earn that service revenue gravy, just leave it closed source and proprietary. Sure, you won't get adoption at anywhere near the rate, but that's the tradeoff you make.
> Or stop developing Terraform, cut expenditures, and move on with life.
How would that be an improvement to anyone in any way? If you want, you can just pretend that Terraform is dead and Hashicorp will never push another commit for it. The people who can make the compromises BSL has can continue to use it.
> If the viability of your business is predicated on being the only one able to provide your project as a service and earn that service revenue gravy, just leave it closed source and proprietary. Sure, you won't get adoption at anywhere near the rate, but that's the tradeoff you make.
I don’t understand why this is a binary. If the conditions of the BSL are unacceptable to YOU that’s fine, just pretend it’s closed source if you wish. For others, that the BSL isn’t completely proprietary is useful for them - let it be useful. Your wishes need not dictate everyone else’s.
I don't know which implications you're talking about; it was a question without prejudice or load.
> People somehow think that a business which started an open source project "deserves" to profit from it.
I do not agree. I don't hold a position stating that "TF should stay open, and companies should profit from it while giving it patches if they feel like it". I'm the opposite, and I find the "Ooo... free tool to build a new product on and profit from" approach to permissive licenses unethical to begin with. I license anything and everything I put out as A/GPLv3 or GFDL, because I produce that code for myself, on my free time, and I don't have a secret desire for it to be forked and closed down for internet cookie points.
If you want to use my tool for any reason (which are not very sophisticated to begin with), comply with GPL, or roll your own. I. Don't. Care.
I also pay for tons of things. Docker, cloud storage, programming fonts I use, anything I deem worth the money they're asking for.
I also use Vagrant a lot, share my Vagrantfiles (again under GPLv3), and if they begin to charge like Docker, I'll pay for it, if I deem it's worth the money they ask for.
However, at the end of the day, I'm a Free Software advocate. I use Free Software to the extent possible, and develop my software as Free Software. Not Open Source software under some permissive license to be grabbed and forked to death.
> What should they do? I'd advocate AGPL, but xGPL licenses are avoided like a plague, because Free Software is not "closed forks" friendly, and companies hate to open everything like that.
They can offer it under more than one license if they want. I, for one, would be pretty happy if they offered it under both BuSL and AGPL.
- business users who are afraid of copyleft could use Terraform under the terms of the BuSL
- the F/OSS community could distribute it under the terms of the AGPL and freely build on it
- competitors to Hashicorp would have a choice:
+ open their whole stack and compete in the market based on the quality of their services and support alone (admirable but very tough)
+ negotiate and pay HashiCorp to license Terraform under special, proprietary terms
Probably, many of the same companies who want to fork Terraform now would still want to. But I'd be satisfied and it would likely shift the conversation in a way beneficial to Hashicorp's reputation.
MPL is a copyleft license, actually, just like EPL and EUPL. What it is not is a viral copyleft license. Anyone making changes to TF code and productionizing it is expected to contribute back, but only the direct changes to TF itself, not the extensions around it. That's my reading of EPL/MPL/EUPL. Does that match your reading?
You're right. MPL is a weak-copyleft license. I mistyped it, I don't know why. Fixed my comment with a note, thanks.
However, it still allows your code to be bundled inside a bigger work. The bigger work can be under any license (maybe not GPL, need to check), but the MPL part stays MPL. This doesn't prevent white-labeling the codebase, though.
As far as I read the MPL, the contribution back part is not mandatory, but encouraged by not affecting the larger work. It keeps the MPL part open, but doesn't enforce a "send patches back" policy.
> As far as I read the MPL, the contribution back part is not mandatory
That's not how I read it.
> the contribution back part is not mandatory
In that case weak copyleft would not be any different from MIT/BSD.
First, assuming you are distributing only the larger work:
"3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. [...]"
Assuming the covered (OSS) work is distributed as an executable:
"3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then: (a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1 [...]"
Finally, 3.1:
"3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License."
The only wiggle room I see is not triggering the distribution clause by a SaaS-only offering. By the way, EUPL is a non-viral copyleft license that closes the SaaS loophole in MPL/EPL/LGPL.
True, sending back to upstream is not required by the license. But even AGPL does not require this.
Rolling up a tar of your source tree, however, will be required if you get a request from one of your customers by email. The difference from GPL is that you will only have to tarball a portion of the tree that was MPL-licensed. That tar must include your patches, as §3.1 requires.
As I said before, many companies argue that using a private fork of an MPL/EPL/LGPL software in a SaaS does not trigger the distribution clause as customers never get a source or a binary of the program that runs in the cloud. EUPL closes that loophole.
It's also possible that many companies violate EPL/MPL/LGPL terms (knowingly or unknowingly).
One good example of non-viral copyleft working as intended is the Eclipse IDE. There are many closed-source tools that use Eclipse IDE under the hood (the part that Eclipse calls RCP), but there are no commercial forks of the IDE itself because any such company would have to open-source their changes to the fork.
IMO: HashiCorp is in this situation because they want to grow too big, too fast. A lot of their software is tooling, tooling that doesn't necessarily make sense to sell in a SaaS offering, or if it does: it's going to be commoditized. However, SaaS is where the $ is. They have investors they must please, and they want a big return on investment. It's probably too late to fix this, but IMO if they took the slow and steady route of providing support and professional services HashiCorp could easily be profitable while maintaining all of their products, but perhaps to a lesser degree.
HashiCorp has plenty of revenue and that revenue is growing fast. They are losing money, but that's to be expected during the high-growth phase of their lifecycle. If they are having trouble growing as fast as their investors want it to, changing their licensing is not the way to fix it. This change is unfortunate because it won't bring serious new revenue to the company but it is a major blow to the reputation they had built.
It ain't like Terraform itself was ever a huge moneymaker. Terraform Cloud (i.e. the fancy-shmancy Terraform backend), maybe, but the actual language, the providers, the modules, etc. are all pretty universally free-as-in-beer. The only way to make money with them is to use them to build something else that then makes money.
Put simply:
> we can't generate enough revenue from it to keep the company afloat and continue providing Terraform development, and sell it as a product at the same time.
They were never selling it as a product in the first place. It's more of a loss leader if anything; Costco has its $1.50 hot dogs to attract customers to buy memberships, and HashiCorp has Terraform (and its provider/module ecosystems) to attract customers to buy Terraform Cloud (and Vault, and Nomad, and everything else they actually do sell).
I'm not Hashicorp, but I mean look at Docker. They built the most valuable devops tool of the last generation and can barely muster a viable business. Why would Hashicorp give the slightest worry to losing thousands and thousands of non-paying customers? The upside is lots of money and the downside is loss of halo. Honestly, it's an unfortunate game of expectation setting. If I wrote an open letter decrying Salesforce for not open sourcing their codebase, nobody would take me seriously. But we expect better from Hashicorp for some reason.
> Why would Hashicorp give the slightest worry to losing thousands and thousands of non-paying customers?
Because it goes hand in hand with losing hundreds of unpaid developers, testers, bug-reporters and evangelists.
> If I wrote an open letter decrying Salesforce for not open sourcing their codebase, nobody would take me seriously. But we expect better from Hashicorp for some reason.
Because the community contributed to the codebase. Salesforce was never open source, Terraform was and took community contributions, that's the reason we have different expectations.
> [Docker] can barely muster a viable business
Docker split their development and enterprise offerings into two companies years ago and both are making money.
I think that once you have more than X customers, you don't really need OSS testers and bug-reporters that much – your customers will be the first ones to file a ticket if anything is wrong. And as we saw with CentOS Stream, OSS users are not generally keen to be beta-testers.
Ditto for earnings: once your company is publicly traded or at least lands in a Gartner report, you don't need evangelists that badly anymore.
Free pull requests are always welcome, but I guess a calculation was made in this case.
> I think that once you have more than X customers, you don't really need OSS testers and bug-reporters that much – your customers will be the first ones to file a ticket if anything is wrong. And as we saw with CentOS Stream, OSS users are not generally keen to be beta-testers.
In this case the customers they're losing (as well as the devs, testers, reporters, etc) aren't end users but companies who have built software offerings on top of Terraform. That's the class of user principally impacted by the change to the license. OSS users aren't stoked about testing, but developers who build on top of Terraform in order to eat will submit bug reports and PRs all day.
> They built the most valuable devops tool of the last generation
'They' as in Docker Inc.? They made it accessible for the masses, which counts for a lot, but they built on a lot of pre-existing Linux kernel tech that other people who envisioned containers put in place before Docker came in and seized on the opportunity.
> can barely muster a viable business.
They pursued what proved to be the wrong business model until 2019; nowadays they're more than 'barely' viable.
> 'They' as in Docker Inc.? They made it accessible for the masses, which counts for a lot, but they built on a lot of pre-existing Linux kernel tech that other people who envisioned containers put in place before Docker came in and seized on the opportunity.
100%. Docker's UX has always been its killer feature, and it counts for a lot. That's a very real contribution. But this is absolutely still a 'shoulders of giants', 'it takes a village' situation.
> Why would Hashicorp give the slightest worry to losing thousands and thousands of non-paying customers?
The photoshop effect*. That is to say, devs that are familiar with it will push their workplaces, because it's a pain to pick up yet another tool when you know one that will work.
*A largely hypothesized reason Photoshop was so easy to crack back in the day was that Adobe knew that if kids grew up using Photoshop, businesses wouldn't be willing to spend the money re-training their employees, and would just buy a Photoshop license.
> We're already exploring alternatives. Future client projects may not use Terraform at all.
I'd wait and see what happens with OpenTerraform. If the fork gains some good momentum, it would be the easier choice. Usually, you should be okay with using the latest FLOSS version for a few weeks/months until things settle anyway.
The fantastic cost of running TF in the cloud is painful. Several years ago it was already clear that it would be difficult for HC to survive once they went public.
> Languages and frameworks must remain open or they will wither and die.
Terraform is neither a language nor a framework, and I certainly don't think it will wither and die if they transition to BSL. Case in point, docker's revenue grew by 12x once they started taking control of their code and stopped caring about community. Same with postman or nginx or many other companies.
What? Docker is still completely open source apart from the desktop GUI. The engine and (I'm pretty sure) all components are completely free and if anything, they have pushed for the standardization of the container runtime. Buildkit is free, compose is free, no feature is paywalled apart from Mirantis-centric stuff (not part of docker inc)
You can absolutely bet that they would get dropped like a rock if they moved to changing the engine's license. Even the docker desktop code wasn't ever open to begin with anyways.
That's just hosting though iirc. Docker hub is very important, but it's not really part of Docker the software. As in, you could deploy your own container registry with 0 licensing issues. They just didn't want to pay the bandwidth costs anymore, though I think they walked back on that for open source images.
>Imagine if the creators of Linux [] suddenly switched to a non-open-source license that only permitted non-competitive usage.
Linux cannot even successfully switch from GPL2 to GPL3 because of the sheer number of contributors and the fact that not all of them have transferred their copyright ownership to any given organization. This patchwork of different copyright owners has historically been seen as a potential weakness for Linux, but it seems like perhaps license inflexibility is a strength for open source.
I thought Linus and others believed GPLv2 was fine and that the improvements in GPLv3 did not outweigh the potential problems it introduced. It never came to a point where all authors were asked to agree, or to sign away their ownership.
My understanding was that some people in the community believed that GPLv3 was better, and one of Linus's criticisms was that it was essentially impossible to switch even if it were better. I also believe Linus was opposed to the switch, which would make it unlikely anyway, but even if he had approved, I still think it would be practically impossible.
Torvalds considered the anti-TiVo clause to be changing the deal and he didn't want to do that, and there's no way in GPLv3 to opt-out of the clause[0].
This is less "locking down devices is a human right" and more him being angry that the FSF was trying to butt into his project's affairs. He's also similarly angry about "GNU/Linux" as it sounds an awful lot like Stallman just demanding everyone stick "GNU" onto the name of Linus's kernel project.
Anyway all of this is going to seem really quaint in 2027 when Broadcom gets sued under DMCA 1201 by a rogue kernel contributor for evading the Linux linker's license checks[1] and they have to hurriedly rewrite them out of the kernel and relicense anyway.
[0] Granting a blanket exception doesn't work because others can just remove the exception. "No further restrictions" is an ironclad law of copyleft.
[1] The Linux kernel checks the declared license of loaded modules and refuses to link non-GPL-compatible code against any kernel symbol not marked as a user-space equivalent. This works because Linux ships under GPLv2 plus an exception saying user-space APIs don't trip copyleft, so you can legally load code built to those APIs into the kernel, but anything else might violate the GPL.
Since this is enforcing an interpretation of the GPL, this is a DMCA 1201 technical protection measure. You absolutely could make a DMCA 1201 anticircumvention claim in court against a proprietary driver developer that tried to evade the checks. Though Linus usually just bans their modules in the next kernel revision since he's mainly worried about keeping proprietary modules from generating spurious bug reports in Linux. But the lawsuit is still possible, since they're on GPLv2. If they had relicensed to GPLv3, this wouldn't be an issue.
Are you arguing that the GPL check “effectively controls access to a work” or “effectively protects a right of a copyright owner [..] in a work or a portion thereof”? Either way, the bar of how “effective” a measure needs to be to count may be low, but probably not that low.
We changed the license[1] of a project which had 10 contributors, and we got every single one of them to do an Acked-by (by email) which took some weeks. That was on the advice of our lawyers. Can't imagine the impossible hassle of doing the same for something like Linux.
And that's assuming that all the contributors are even alive in the first place; unless you've got a Ouija board handy, you're gonna have one hell of a time changing Linux's license.
Can't speak for Linux, but for a few projects I've contributed to I've had to sign a CLA, which negates that problem (but causes the one in this thread).
Hey folks, so Terraform did a thing. They changed their license type, and a lot of people aren't too happy about it. There's this OpenTF Manifesto now where people are speaking up about wanting Terraform to be truly open-source again. Some are even thinking of making a new version if HashiCorp doesn't switch back. Just a heads up for anyone using or thinking of using Terraform.
We, Terrateam, do not believe we violate the new license, but we support Terraform being open due to how important it is to the ecosystem. Unlike Vault or Waypoint, Terraform is closer to a language compiler like Go or Java and benefits from a robust community that can build on top of a stable ecosystem. As such, we have announced our support of the OpenTF Manifesto[0].
As an end-user, not competing with HashiCorp, this change doesn't worry me. According to their FAQ [1]:
> 10. What are the usage limitations for HashiCorp’s products under BSL?
> All non-production uses are permitted. All production uses are allowed other than hosting or embedding the software in an offering competitive with HashiCorp commercial products, hosted or self-managed.
> 24. Can I host the HashiCorp products as a service internal to my organization?
> Yes. The terms of the BSL allow for all non-production and production usage, except for providing competitive offerings to third parties that embed or host our software. Hosting the products for your internal use of your organization is permitted.
Even if you don't mind abiding by the terms of the BSL, the licensing change is a signal that Hashicorp is in dire straits and doesn't know how to operate as a sustainable business. They're flailing about trying to increase revenue, and in so doing they're removing one of the core components (the open source licensing) that made their tools ubiquitous to begin with. And what will their next cash grab be?
Here's the kicker though... before the change to BSL, the future of Hashicorp didn't really matter as much, since somebody could fork their projects and keep them going. But with this licensing change, if Hashicorp shuts down one day, nobody could create a fork for several years.
So to me, whether or not I can use the software as currently licensed isn't the biggest issue. I want the ability to have an "escape hatch" should Hashicorp continue its downward trajectory or shut down completely.
Giving away most of your product for free and selling commercial services on top when it's very easy to compete with you on that front is.... well it's not a sustainable business model.
It would be troublesome if any of the vendors at $work past or present went bankrupt, this is the nature of having external vendors. I am not particularly concerned.
I was not the biggest fan of Terraform in the first place, I don't like some of the language choices, but it works better than anything else that exists out there.
I think it should, to some extent. A really quick example comes to mind. Some of the best documentation on how to use Terraform properly comes from folks who provide competitive offerings.
I could also see someone like Amazon eventually launching a CloudFormation-like tool that works natively with Terraform, but now that's off the table, which I think is a net negative.
It also sounds like projects like Atlantis would run afoul of the BSL, including self-managed installations of the tool.
How do you know you're not competing with HashiCorp?
That's not meant to be a redundant or snarky question. The key issue with the BSL and that FAQ is that the wording is intentionally vague. What does "competing" mean? What does "hosting or embedding" mean? Who decides?
In order to really know if you're a competitor, you have to reach out to HashiCorp (as the FAQ tells you to do). So whether your usage is valid is not controlled by the license terms, but is instead entirely at the whim of HashiCorp. So they switched from a permissive open source license to a HashiCorp decides license: they get to decide on a case by case basis now—and they can change their mind at any time.
That is very shaky footing on which to build anything.
It should worry you - it hurts the ecosystem. Terraform is just a tool. The providers and modules, not supported by HashiCorp, are what make Terraform useful. If the ecosystem dies, Terraform becomes useless.
The ecosystem outside of providers is far less important than people like to claim. Open source modules are almost all poorly scoped, often just wrapping a single resource completely unnecessarily - simultaneously over- and under-abstracted. It's also a huge security risk to pull them in.
The only providers I have ever used in production, or would likely ever consider using would be published by Hashicorp or the software vendor for the resource being managed (for example [1]). Much would need to be done to trust any other third party without good reason.
I have had similar experiences poking around other tf providers which were of apparently low quality.
That's really not the case. Most of the providers I use are third-party - Datadog, Cloudflare, GitHub, PostgreSQL, RabbitMQ, MySQL, and tons more. Regarding modules - you should choose them the way you choose any other third-party library. I use reputable modules for many things, and they save me tons of work.
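To make the "like any other third-party library" point concrete, here's a minimal sketch of pinning a registry module to an exact, reviewed version so upgrades are deliberate (the module name and version are illustrative, not an endorsement):

```hcl
module "vpc" {
  # Treat the module like a library dependency: an exact version pin
  # means upgrades only happen when someone reviews the new release.
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2" # illustrative pinned version

  name = "example"
  cidr = "10.0.0.0/16"
}
```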
It won't affect you unless you're selling tooling that embeds TF in some form. That, unfortunately, covers too wide a space and there is no telling when your offering is going to be in competition with Hashi's.
As I see it Hashicorp has failed to create a viable business model in an environment where there isn't unlimited perpetual VC money. Now they're at the stage of giving up and simply trying to shake down those who have managed to make better business models.
It's usually not a good idea to be near a company flailing like this since who knows what their next rent seeking approach will be. A company with nothing to lose is a dangerous partner to have.
The worst part is they did create a viable business model. They were profitable when they had their IPO. They then pretended that the IPO was just another Series X investment, blew all the money, and went negative on their cashflow.
Hashicorp's problem isn't that their business model doesn't work; it's that they are really bad at their jobs. They ignore customer feedback, laid off support people, and then jacked their prices up. It's a self-inflicted wound, and instead of trying to fix it they just keep making it worse.
I've worked for three companies now that went to Hashicorp to buy TFE and left with a quote equal to a quarter or more of revenue, with the sales rep acting like they're the second coming and we're obviously stupid for not thinking they bring that much value to our org. No org invests 1/4 of its revenue in a single tool. So we used Atlantis or Spacelift or hand-rolled GHA scripts and saved a fortune.
We are trying to pay them and they are being so unreasonable with pricing that we can't give them our money.
It’s been that way for years with them. It’ll be interesting to see if they go the road of a private equity buyout, or get acquired by a technology company for the copyright/trademark.
Definitely, the fact they're rent seeking against similar small companies clearly shows that. Sad that rather than looking inward to improve themselves they've decided to just attack others.
Great to see your commitment, but I'm also curious why you, unlike some other companies, have chosen not to pledge any full-time employees? It seems your business is largely based on Terraform, and saying pretty much "we'll contribute code" doesn't signal much commitment.
I realize my comment might sound like an accusation but that's not my intention, I want to hear your reasoning about it!
We say OpenTF is (or will be) a fork, and forks are bad, the nuclear option, etc., but really, Hashicorp is the one who made a breaking change; the "fork" merely maintains that which already was, but for legal reasons is not allowed to continue using the name.
We need a short sound-bite for the non-technical that makes clear that the entity which caused the problem is the one that took the action.
OpenTF did not (and is not preparing to) fork this project; Hashicorp did.
If it was me and I wasn't legally prevented by something actually binding in writing with signatures, I'd even keep using the original name and duke that out.
It can be rebranded, that's pretty straightforward for something that's such an industry standard. "Oh yeah? Earthworks? That's the open source fork of Terraform" pretty simple. If it were a lesser known technology it would be an issue but most of the (modern) internet runs on it, whatever they name the fork will be well known pretty much instantly
Hashi got really good at ignoring PRs if they weren't their own. They even ignored the PRs coming from the dev teams of their own customers (ie users of TF Cloud and Enterprise) which speaks volumes about their willingness to listen to the community...
I did see https://github.com/diggerhq/open-terraform but have no idea if it is related. And I’m sure there are others. What I’ll be interested to see with the forks is how they will practically be maintained. All the bug and security fixes that HashiCorp is writing can’t just be cherry-picked into these forks (I think?), so what exactly are they supposed to do?
> All the bug and security fixes that HashiCorp is writing can’t just be cherry-picked into these forks (I think?), so what exactly are they supposed to do?
Indeed, any fork will need to implement their own bug fixes.
Ideally they should do this "clean room" and not even look at the BSL'd code, to help defend against any accusations of copyright infringement.
Hi! OpenTF is not connected with a single company. This is a united, community-driven effort.
You can check who is behind it on the manifesto site. And we welcome all support!
Which is interesting, since Digger (the company that created that fork) is one of the OpenTF Manifesto signatories. Maybe they're recreating it under a different name / without the Hashicorp/Terraform branding all over the place?
These people have seriously contributed back to the Terraform community. Terraform doesn't have a test suite - Gruntwork made Terratest, as well as many other tools. In many ways they've given the ecosystem more than Hashicorp has.
Beyond that, I know some of these companies tried to be contributors to Terraform itself but were ghosted by Hashicorp.
At the same time there's only a handful of regular contributors to Terraform[1]. It would not be hard for these companies to provide more resources to Terraform than Hashicorp is.
Probably not difficult for these competitor organizations to fund an OpenTF team to hire some of these people away from HashiCorp and continue it on as FOSS either. I can't imagine Liam turning down .5M/year to do so.
Marcin here, co-founder at Spacelift. We are open to funding 5 FTEs; feel free to reach out to us via the pledge page if you're interested in OpenTF becoming your full-time job.
Sorry, this may be a miscommunication. Terraform itself has a test suite, yeah, but it doesn't have a testing framework that users of the language can use to test their own code.
To make a python metaphor, if cpython had a testing suite but pytest didn't exist then people wouldn't be able to test their own python code. That's kind of the situation with Terraform right now- you can't test your code using just the Terraform tools, you have to rely on Terratest which was written by Gruntwork. Hashicorp has spent years relying on the open source community to fill those gaps, which Gruntwork has done very nicely.
It’s actually not a guesstimate about what actions will be taken, it’s a guesstimate of the state that will result from those actions without any reference to ordering or “actions” as a cloud API would understand them - the plan is purely in terms of CRUD on Terraform provider resources and provisioners.
This may seem like a nit, but it fundamentally changes what Terraform is capable of doing in a single pass without external coordination.
"Astroturfing"... lol OK. I don't think I've ever made it a secret that I worked for HashiCorp in the early days (leaving in 2017), and have been critical to the point of being banned by the CEO from speaking at HashiConf.
Saying "Terraform doesn't have a test suite" is not miscommunication, it is misinformation, plain and simple - the same as most of the other things I have been correcting this week (not least from the same poster in this thread - someone who's clearly has an axe to grind).
You and the OP are referring to different things. Terraform the codebase has a test suite. Terraform the app does not have a test suite/runner as in a way to run tests against your Terraform files.
It doesn’t have a testing tool built in. No one legitimately understands the phrase “terraform doesn’t have a test suite” to mean anything except “there are no tests”. A runner and a suite are quite different.
I can understand the miscommunication in the first post, but I clarified in a direct comment to you what I meant. I was even comparing it to Terratest, which is not used for testing terraform core. At this point you're just being belligerent for the sake of being belligerent.
This is the reality of any successful Open Source ecosystem - the folks who contribute the most (code, bug reports, marketing) to the project tend to be those who are making money on it, and these are the same folks who compete with you.
Coopetition is the name of the game in Open Source, and it's too bad an increasing number of companies want to capture all the economic value from an ecosystem they created with help from so many others.
As a long time Gruntwork customer, contributor, and fan, it is really nice to see them stepping up as thought leaders here. They run a great open source community already. Our DevOps team has been buzzing all day with what we are going to do. For now, we are staying pinned to the last open source version of Terraform and will likely follow Gruntwork's lead when the time comes.
We use Terragrunt to manage thousands of Terraform configurations. If it and Terraform drift apart, we will have to go one way or the other eventually. Separately, a new license for Terraform means it's gotta go back through legal and compliance, so we will be paused for months anyway.
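For other teams taking the same pin-and-wait approach, the pin itself is a one-line constraint; a minimal sketch, assuming (as was widely reported at the time) that 1.5.5 is the last MPL 2.0 release:

```hcl
terraform {
  # Stay on the last MPL 2.0 series until the licensing dust settles;
  # releases after 1.5.5 reportedly ship under the BSL.
  required_version = ">= 1.5.0, <= 1.5.5"
}
```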
There is zero chance of Hashicorp donating Terraform to an open source foundation. If there was they would have never even considered this change in license. Honestly it's not a bad thing, maybe the maintainers of the Terraform fork will actually listen to feedback from the community of people who use it instead of ignoring them.
Anyway, the devil’s in the details of both the license and the internal architecture of our system. I can’t share more here, but if you’d like to learn more please reach out via our chat or email. You can also expect more updates on our blog.
I like Terraform and will continue to use it. I'm just an end user that isn't involved in building other product offerings on it or a user of other derivative products.
Even though this really doesn't affect my use case, it does feel like kind of a dirty bait and switch. I do hope for a future where there's a version (and Terraform provider/module versions) that is actively maintained under a true open source license. I'll favor using those over the official BSL version as much as possible.
I guess it's the CLA that all of the contributors signed that allows this to happen? I wonder if there's a way for open source licenses to address this, and disallow the use of CLAs, or require some CLA clause that doesn't allow sudden switches to non-permissive licenses?
One thing I particularly hate about the license change is the lack of notice. If you operate in good faith, you would want to give the community time to make arrangements, whether that's negotiating an agreement with you or looking for alternatives. The lack of notice means everyone who embedded Terraform has had their customers put at risk immediately: if any CVEs are discovered, they will not be able to ship security fixes to their customers.
Having recently picked up Rust (yes, sorry for mentioning it, I promise it's relevant), I picked up Terraform the other day. I was shocked by how weak its language-level developer experience story is.
I am working in VSCode, which by and large tends to be the editor supported best, with the most mindshare. Terraform has static and mostly strong typing, yet some testing revealed I was able to pass an argument of the wrong type to a variable I declared. This is type safety 101: a variable declared `str` shouldn't accept `int`, ever. Yet `tf validate` was silent, and so was all IDE-integrated tooling (whatever the VSCode TF extension does); see the repro snippet after this comment.
Jumping to/from symbol definitions/usages was also flaky (but not entirely absent).
Really disappointing! My excitement of diving into TF went poof. Maybe I'm overly sensitive, but I was so excited to escape YAML hell (Ansible).
Now I'm even firmer in the boat of just using a regular old language, like Pulumi with Python (with full typing).
Did I do something wrong or can anyone confirm my findings?
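For what it's worth, the silence appears to be by design rather than a tooling gap: Terraform's type system performs automatic conversion between number, string, and bool where possible. A minimal repro sketch (variable and output names are hypothetical):

```hcl
# Declared as a string but given a number: `terraform validate` accepts
# this because Terraform automatically converts the number 3 to "3"
# under its type conversion rules.
variable "bucket_name" {
  type    = string
  default = 3
}

output "bucket_name" {
  value = var.bucket_name # renders as "3"
}
```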
You can, for most of the cases. You just need a way to tag resources as belonging to the tool. This can be done via prefixed (or suffixed) names, tags, etc.
That… is state, just stored elsewhere. It’s also not usable for lots of important parts of AWS, which does not have consistent tagging support and would leave you running afoul of API rate limits.
Terraform having state wasn’t some easy button decision, it was absolutely required and carefully considered.
The state is stored in the resources themselves, to be precise.
> It’s also not usable for lots of important parts of AWS, which does not have consistent tagging support and would leave you running afoul of API rate limits.
As someone who worked on tagging inside AWS, I don't believe that there are any major AWS services left that don't support tagging. These days, tagging-on-creation also guarantees that you won't have untagged resources if your provider happens to die between "CreateResource" and the "TagResource" operations.
You can also sometimes use prefixed names to signal that a resource belongs to the infrastructure-as-code.
API limits for "describe" calls are also pretty lax. And you need to use them anyway to check if the current state of the world matches with the saved state.
State will be needed for some integrations that don't support tagging/naming (Okta, I'm looking at YOU!), but at least for AWS it's not needed (sketch of the idea below).
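For illustration, a minimal sketch of the tagging-on-creation idea using the AWS provider's `default_tags` block (tag keys and values here are hypothetical): every taggable resource the provider creates gets an ownership marker, which a stateless tool could later use to rediscover "its" resources via tag-filtered API calls.

```hcl
provider "aws" {
  region = "us-east-1"

  # Stamped onto every taggable resource at creation time, so ownership
  # can be recovered later without a local state file.
  default_tags {
    tags = {
      ManagedBy = "my-iac-tool" # hypothetical ownership marker
      Stack     = "prod-network"
    }
  }
}
```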
There may be no services at large that don’t support tagging (that certainly wasn't the case historically, though), but there are hundreds of resources that don’t.
Furthermore, tagging is restrictable by IAM, is often co-opted by finance for cost allocation, and is subject to often-bizarre limits about what the content can be (even more so across providers).
Finally, how would you manage tags as an actual resource themselves in this model?
For resources that don't support tagging or user-defined naming, you'll need state.
> Furthermore, tagging is restrictable by IAM, is often co-opted by finance for cost allocation, and is subject to often-bizarre limits about what the content can be (even more so across providers).
You can fix your IAM. Cost allocation tags are treated specially.
> Finally, how would you manage tags as an actual resource themselves in this model?
I think the problem is that if Hashicorp thinks you are a competitor, you and your clients now have legal/operational issues. I.e., you are now a competitor because we are releasing a product just like yours; here is a letter from a lawyer telling you to stop using Terraform.
This is precisely the problem with the new BSL license. Whether your usage of Terraform complies with the license isn’t determined by the legal terms, but instead is entirely at the whim of HashiCorp. And they can change their mind at any time. It makes it impossible to build anything on top of Terraform.
This covers really well why I think the BSL license is a non-starter for things like TF. I get trying to prevent AWS from competing with you using your own open source code, but it creates this ambiguity where it's not clear whether lots of uses are or are not competing with HashiCorp.
> For example, if you’re an independent software vendor (ISV) or managed service provider (MSP) in the DevOps space, and you use Terraform with your customers (but not necessarily Terraform Cloud/Enterprise), are you a competitor? If your company creates a CI / CD product, is that competitive with Terraform Cloud or Waypoint? If your CI / CD product natively supports running Terraform as part of your CI / CD builds, is that embedding or hosting? If you built a wrapper for Terraform, is that a competitor? Is it embedding only if you include the source code or does using the Terraform CLI count as embedding? What if the CLI is installed by the customer? Is it hosting if the customer runs your product on their own servers?
The answer is at the whim of HashiCorp and subject to change at any point in the future. Even ignoring the attempt to dilute the meaning of "open source", the practical implications of the BSL license are more than enough reason to coalesce around a truly open source fork IMO.
I worked at a financial institution that heavily utilized terraform. Their business is banking and they do not offer automation, orchestration or IaC as a service. They're fine.
This seems to affect only those places that attempt to build a business off terraform.
I am not saying those businesses can't be mad at the rug getting pulled out from under them, but it's important to be accurate that this doesn't affect end users of TF directly.
Is the financial institution made up of separate legal entities which bill each other for services, and does one of those entities provide tech infra for the other legal entities?
The messiness of the real-world unfortunately doesn't play well with ambiguity in licences :)
It'll be a headache for every large company, which now has to send the licence to their legal teams, who have to ask these kinds of questions (another interesting one is "can contractors touch our Terraform setup?"). In fairness to Hashicorp, they've tried to address some of these issues in their FAQ, but the FAQ isn't legally binding, so legal teams have to go on what's actually written in the licence.
Massdriver was designed to be infrastructure-as-code agnostic from day 1.
Our goal has been to help companies get great operations, compliance, and security posture from day one.
While Massdriver is not a competitor to HashiCorp, the license language is extremely vague and leaves any infrastructure company running containers for their customers wondering if HashiCorp will consider them a competitor tomorrow.
We are proud to be providing development and community support for this initiative.
The weaponisation of open source by the cloud vendors combined with devops culture encouraging only paying for operations and commoditising development is going to lead to constant pointless migrations of this kind (such as Docker/podman etc.)
Devops people need to find a viable way to reward the developers of the tools they make a living from operating. Failing that they will wake up finding no one is willing to make them or that those that do have an ulterior motive.
I actually love the weaponization of OSS. It eats away at the technical gap between proprietary systems and their FOSS equivalents. See elasticsearch + openAI (although open models are still quite far behind)
As a regular end-user of Terraform, what difference does BSL vs MPL make to me? From reading this article it seems not very much? Perhaps I'm misreading this.
You can continue to use plain Terraform forever. It would only affect you if you use a tool like Env0, Spacelift, Gruntwork Pipelines, etc. instead of something like Terraform Cloud.
Those tools won't be usable with new Terraform versions (though they can always use the current or any previous version, or the fork these companies are jointly supporting).
Then there are open source tools that don't directly compete with Hashicorp that are in a bit of a gray area, but I've seen Atlantis, Pulumi, OTF, and other tools all claim that this does not affect them. I would presume this could also apply to things like Terratest, Terragrunt, etc. but I don't know. I am not a lawyer.
And if none of these company/product names are familiar to you, then you shouldn't have any noticeable difference :)
> Those tools won't be usable with new Terraform versions
As discussed in the other thread, we believe that we are not in violation of the new license, you can find more details in our today's announcement[0].
It doesn't, and it will not make any difference, the same way the Sentry, Elasticsearch, etc. license changes made no difference for regular users. Those changes target cloud providers like Amazon, Google, etc.
It means that whether you can use Terraform at any future company you work for will be determined... by HashiCorp.
That's because the BSL license is intentionally vague. What does "competing" mean? What does "hosting or embedding" mean? Who decides?
In order to really know if you're a competitor, you have to reach out to HashiCorp (as the FAQ tells you to do). So whether your usage is valid is not controlled by the license terms, but is instead entirely at the whim of HashiCorp. So they switched from a permissive open source license to a HashiCorp decides license: they get to decide on a case by case basis now—and they can change their mind at any time.
That is very shaky footing on which to build anything.
And the legal team at every company you work for will have to take that into account before deciding you can or can't use Terraform.
It will affect categories of business users (programmers) who currently embrace Terraform: Amazon, Google, Microsoft, Oracle, Alibaba Cloud.
Like it or not, cohorts of engineering organizations like the above (cloud providers) have a very outsized weight and already have contender products they can choose to vigorously fund tomorrow.
From the article:
> The license does not allow you to use Terraform if you meet both of the following conditions:
> You are building a product that is competitive with HashiCorp.
> You embed or host Terraform in your product.
My $0.02: the management of Hashicorp is following a stupid trend and should have thought about their customers more.
It will come out to what lawyers think, I guess. Lawyers usually say no to things with poorly established precedent.
> As usual in OSS-goes-private events these all just sound like “keep building our critical infrastructure tool for free or we will go elsewhere”.
If Terraform was 100% developed by HashiCorp employees, that would be a fair description. It's more like "We're the only company allowed to make money off of the codebase you contributed to."
> What is hashicorp or any other company in their position to do?
I'd suggest paying developers to write proprietary code. That way they could just sell a product they fully own instead of having to pull this bullshit re-licensing of an open source codebase.
Someone correct me if I'm wrong, but if I use TF in a Petstore-as-a-Service to provision new machines for my users, does that not count as embedding TF? So if HashiCorp decides to do Petstore-as-a-Service tomorrow, no matter how shitty the offering, no matter how insincere, even if it's just a single intern working on it, I would have to, overnight, rip TF out of my entire offering, no?
IANAL but it doesn’t seem like much, unless you’re planning on building a terraform-as-a-service company to compete directly with hashicorp. Honestly I’m not surprised given hashicorp’s enterprise offering are basically just… hosting the .tfstate file for you? I still can’t figure out what their upsell is.
Hosting the statefile in a secure way, state-locking, injecting secrets so you're not keeping them locally or in env vars, and pre-built integration with the rest of the Hashicorp suite (the DIY equivalent is sketched below).
I see the use case for it if you don't want to use a 3rd party or open source tool (Atlantis) but the pricing seems prohibitive.
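For context, the self-managed equivalent of that core feature set is roughly an S3 backend with DynamoDB locking and encryption at rest; a minimal sketch (bucket and table names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tfstate"      # hypothetical state bucket
    key            = "prod/network.tfstate"
    region         = "us-east-1"
    encrypt        = true                   # server-side encryption at rest
    dynamodb_table = "terraform-locks"      # hypothetical table; enables state locking
  }
}
```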
Long term? Possibly less adoption (teams may elect to go with Pulumi or some other alternatives), less 3rd party tooling available (what if Hashicorp decides your tool is their competitor?), etc.
It seems very similar to the spat the community had with Red Hat: third parties captured too much value from the company's own offering, and leadership responded by changing the license model and making things less open-source-y. Perhaps this will become the new normal for OSS/former OSS? IDK.
As a regular end-user, the main difference in the licenses is that it forks the ecosystem. If the fork goes ahead, and you were using some HashiCorp products and some software that is moving to OpenTF, eventually you won't be able to use that combination of tools any more. So you will have to pick what license you are going with, even if you don't care about the license directly.
I think you will be affected by the bigger picture. Mongo made this move too, but there they were mostly in control before and after. Here there is a huge community of plugins. Where before AWS shared a provider without hesitating, now they will ask themselves why they should contribute to a closed and possibly competing garden.
AWS has had a Terraform competitor for a long time; CloudFormation. I'm not sure how much they contribute to Terraform, some I think, but most of the AWS provider for Terraform to my knowledge is developed and owned by Hashicorp. If AWS just stopped contributing to Terraform development, I think it would have effectively zero difference.
It depends on what you're using it for, and whether it competes with anything Hashicorp does -- or might do in the future. If you can guarantee that's "none", you're in the clear. But as the man said, prediction is hard, especially about the future.
This is similar to other open-core debacles: Hashicorp wants you to use their managed Terraform rather than someone else's integration. This means that if you choose TF today for managing your IaC, you'll potentially be locked into using Hashi products down the road. The community is all but guaranteed to fork TF, which means that over time the two forks will diverge and you'll have a bad time trying to read docs, debug, or contribute fixes.
> When any company releases their tool as open source, the contract with the community is always the same...
There is no contract. Try to enforce it. Even non-binding expectations differ widely among projects.
> We believe that HashiCorp should earn a return by leveraging its unique position in the Terraform ecosystem to build a better product, not by outright preventing others from competing in the first place.
Nobody at Hashi cares how their competitors think they should make money. As for competition, Hashi just blew the whistle for an all-comers product pace-race against its formerly free-riding rivals. The old code remains MPLv2-licensed. That's the starting line. Their new BSL automatically releases new code under MPLv2 four years after it's published. That's Hashi committing to a minimum pace. They clearly foresaw a fork.
They are betting their maintenance commitment, expertise, new development pace, and existing book of business will make their new, less than four-year-old versions the versions users want, despite the license. Hashi's announcement and FAQs try to minimize perceived cost of the license change by emphasizing they intend no change for users, customers, and contributors, as distinct from product-service competitors. This new fork announcement tries to maximize uncertainty about the license and throw shade on future development prospects. It's all in the game.
Customers can watch the runners run. Eat popcorn.
I think it's highly unlikely Hashi's rivals will make enough marketing pain on this to force them to reverse the change. The database companies made far bigger moves, with more complexity and fewer marketing lessons learned. They held out. So the war's on the product dev and product marketing fronts.
The real test will come in January, after Hashi says it will stop backporting fixes to the current MPL release. At that point, the rivals are under their own power only. Will any MPL-today-licensed fork be so competitive with Hashi's version at that point that customers bet on it over Hashi's long-term? It will have to bear its own development and maintenance costs for whatever differentiates it.
I'm familiar with the products, but not an active user. My main question is whether there's substantial new development still to be done on the most popular projects, or whether it's really a maintenance war. I'd be looking for whether Hashi's new versions break compat, either tactically or as a consequence of new development.
Turns out the proliferation of open source was never because of "collaboration" or "community", it was actually just a result of zero interest rates and "growth".
That's a possibility, but a collection of companies trying to drive FOSS even after the original company stops is a great argument that it is in fact a collaborative thing
> This is similar to how Linux and Kubernetes are managed by foundations (the Linux Foundation and the Cloud Native Computing Foundation, respectively), which are run by multiple companies, ensuring the tool stays truly open source and neutral, and not at the whim of any one company.
> We strongly prefer joining an existing reputable foundation over creating a new one. Stay tuned for additional details in the coming week.
Joining an existing foundation sounds like the right move to me. Many organizations need this fork to take off very quickly, since they are facing legal uncertainty. Make sure it is clear how to support the project, and those organizations will be happy to do so.
How exactly do the companies involved plan to fund a fork? It would require at minimum 3-4 full time engineers, and no one is going to do that work for free.
It’s also telling that this manifesto blithely suggests TF could become Apache 2, which is wholly untrue.
An earlier version of the manifesto contained pledged resources from each company (you can still find it in the commit history). It totalled ~10 full-time engineers just from the founding orgs. It was removed to simplify adding entries for new pledgees.
Omitting commitment details may simplify pledges, but it also makes them almost meaningless. I'd take the pledges much more seriously if they still had details.
I am a co-founder of Terrateam and we have made a pledge for OpenTF. I understand where you are coming from, but right now it is difficult to know exactly what to pledge, as we also want a dialogue with HashiCorp on what they need, if they want to donate to a foundation.
Do the founding orgs have public disclosure of their finances? It seems the vast majority of them are VC backed companies that probably don't even have two years worth of runway, let alone be in a position to meaningfully commit to funding engineers for five years.
From Hashicorp’s perspective this license change is less about Terraform and mostly about Vault. Terraform Enterprise isn’t very successful, and their cloud offering for Terraform isn’t competitive with a basic GitHub Actions workflow. They’ve learned this the hard way.
With Vault they’ve just recently launched their new cloud secrets service. Most of their revenue comes from Vault, and nearly all of their future revenue growth is Vault. Any competitor could relatively easily provide the same service with Vault FOSS today. Vault is feature complete. The only moat they have is their license.
Best for the ecosystem if both Vault and Terraform are maintained by a foundation. Not best for Hashicorp, but best for the industry for sure.
He doesn't work at Hashicorp anymore, and even quit the board. Since the company is public he could have just completely cashed out at this point (and I wouldn't blame him for it).
Only Hashicorp employees are allowed to comment here? How about the person that actually built it? Maybe he has interesting things to say. Also, he's been silent on HN as a whole, not just Hashicorp-related threads.
Maybe he signed a gag order and cashed out. We may never know.
After thinking about it for a few days, I think Hashicorp's move is actually net great for everyone. The products that matter will be forked and maintained in the open. Hashicorp would be wise to merge mature forks back into their product, if they can, and then you have a pretty standard model of upselling enterprise features on an open product that's developed in the actual open. It frees Hashicorp up to focus on the enterprise, which they have already been doing while neglecting their OSS, like Terraform (what the actual fuck is going on with the development of Terraform over the last two years?)
This is great. I'm really looking forward to using freed forks that pop up in the months ahead.
This is Hashicorp's sink or swim moment.
If you asked me when they announced the license change, I'd say bet the horse on Pulumi and move on. But now I actually think this could really rejuvenate the TF ecosystem.
We just moved the signatures to a table format, so individuals can now add themselves to the table: just set the "type" column to "Individual." Thank you!
What if Microsoft buys it for their devtools business: NPM, GitHub, VSCode + Terraform, Consul, Vault... all just more gateways into Azure, but, like all the others they own, they'd let you use it how you want, open source.
Long time YC reader here. Created an account just to make this comment: maybe this idea of a fork is a good thing. We may be able to explore developing HCL further: further abstractions that are provider agnostic, at least for the most common resources like instances, security groups, etc.
Obviously there would be tradeoffs involved. But if we can cover the 80% situation, that would be a good start.
I don't get the argument that there's legal ambiguity. It was Mozilla licensed through some version; as long as you use that version it will be fine, right?
Obviously, there's the argument that Hashicorp might sue you even for that, but it feels like the reductio ad absurdum that any company can sue you for anything, and not remotely plausible.
I'm looking forward to seeing the creation of the foundation. Honestly, given the huge number of people that use it, the open source activity, etc., closing the source is a huge deal and Hashicorp could not really expect anything other than a big response to keep the open source version going. Am I wrong here?
While this won't affect 99.9% of us, I am very happy about what this will mean for further Kubernetes adoption.
VMs are a concept of older times and should be replaced by containers.
YES; there is still use for VMs, but everyone and their mother migrating to k8s will do wonders for portability of applications and more.
Ask Roblox employees how they feel about Hashicorp products. Terraform is probably the most solid product they have, seconded by Vault, but after hearing the Consul and Nomad horror stories, I don't think I could ever take their products seriously, not when Kubernetes is sitting right there.
TL;DR The outage was caused by (a) enabling a new streaming feature in Consul under unusually high read-and-write load, (b) those load conditions triggering a pathological issue in the third-party BoltDB storage layer upon which Consul relies, and (c) all of that being exacerbated by having one Consul cluster supporting multiple workloads.
I'd use quotes to catalog this as a "horror" story, because this was clearly a very specific issue triggered by a specific and complex set of circumstances.
The blogpost also mentions that Roblox worked closely with Hashicorp engineers to mitigate the issue and work towards structural solutions; the post also affirms their choice to manage their infra themselves rather than moving onto a public cloud.
Sure, Kubernetes covers loads of territory. But there definitely are niches where products like Consul & Nomad do add value.
Hashicorp enterprise support is rock solid for Nomad, Consul, Vault. If there is a P0 problem, they will root cause, and usually have a fix identified in < 48 hours. All three of those products are taken very seriously - running 10,000+ servers in a single cluster.
Enjoy spending the rest of your life trying to get etcd to cooperate. If you think operating Kubernetes at scale is a cakewalk, you don't have the scale problems you think you do.
I'll take consul over etcd ten million times out of ten.
The OP was suggesting that it's just obvious to use Kubernetes instead of Nomad.
I was saying that anyone who operates large scale Kubernetes knows that you will forever be dealing with tuning etcd and fighting to keep etcd alive. It's an underpinning service of Kubernetes.
Roblox's outage was related to the intricacies of running consul and mistakes that they made.
The point I was making was that I would rather, at this scale of operation, be running Nomad and optionally Consul and optionally dealing with the intricacies of Consul than running Kubernetes and being _forced_ to deal with what a miserable pain in the ass etcd is.
I was saying that running Kubernetes is _not_ the obvious choice -- at least once you're at 10^5+ systems.
Until you have 3 days of downtime and end up on a special build of Consul that nobody else has, while nearly decimating your company's stock price, reputation, and employee morale, the alternatives start looking better. The point I'm making is that it doesn't handle their scale today, and that projects with larger communities, support, and usage exist today. No one said k8s was a silver bullet; it's just an alternative to an already failing infrastructure.
If you truly read and understood the Roblox post-mortem, you would understand that the problems they had with Consul were partly and unintentionally self-inflicted and partly due to BoltDB. The "special build" was just early access to Hashi's already-ongoing work to replace BoltDB with bbolt, which has long since shipped.
The hyperbole applied to the description of the harm done to Roblox here I'm just going to ignore. Roblox is still popular. The company still has the reputation of a rock-solid engineering department. Roblox's stock isn't doing much differently than the rest of the technology companies on the market, and the "damage" you mention is vastly overstated anyway. In fact, Roblox's stock had a huge rally and PEAKED within two weeks after the outage.
iojs comes to mind. This is the beauty of open source in action.
Long-lived forks and fragmentation suck, but now HashiCorp has to react to this and provide compelling benefits if they don't want to lose the whole project altogether.
Spacelift co-founder here.
Not going to comment on the legal aspect, but I'm actually curious when it became acceptable in polite society to say "we just want to kill the competition".
Not sure what sort of thought process went into Hashicorp thinking that going BSL was a good idea. I am exaggerating, but almost all of their code is community driven. So the biggest issue is that this will likely kill all of their profit.
Perhaps if they had gone down the certification strategy it would've been a safer gamble. Certified Hashicorp Terraform Practitioner. 650 a cert, probably would've saved their arse.
I don't think the license change is unwarranted. At a previous employer we used Terraform but the pricing on the cloud/enterprise offerings was prohibitive enough that we instead had a dev create simple wrapper scripts in our CI/CD system to run the deploy jobs. Significantly cheaper, but I spent years pushing for us to eventually move to the paid offerings as the developer experience was significantly lacking (and to support Hashicorp), up until I left the company. I think they're still using those wrappers today despite how awful they were to use.
There was definitely room for improvement around using Terraform to do actual deployments. From better UX around doing PR's -- showing not only the commit diff but the output of a "tf plan" as well to see what it might actually do -- to actually running the deployments on isolated build machines that could hold the sensitive cloud API keys and provide a deployment audit trail, these were all features that teams absolutely needed to use Terraform sanely.
As a solo developer you don't really need those features, but if you're on a team you definitely did, and were almost certainly willing to pay for it. Hashicorp recognized that need and created the cloud/enterprise offerings to provide that.
At some point the thought even crossed my mind of creating some open-source tool that could provide a nice enough web interface for dealing with Terraform for teams, building on what we had and providing the features I listed above, but the main reason I didn't was because it would be biting the hand that feeds. Such a tool would take away people's incentives to use Hashicorp's paid offerings and ultimately reduce their investment in Terraform and their other fantastic tools, and in my opinion, be disrespecting the tremendous work Hashicorp had done up to that point. I've been a user of their stuff since they only had Vagrant, and of course have loved seeing them succeed.
It seems others, however, had different opinions and saw a business opportunity thanks to the permissive licensing and the high costs of Hashicorp's paid offerings. Plenty of money to be made from making it easy to use TF in teams, especially when you're not obligated to contribute back or maintain the underlying software [1]. Any time I saw a "Launch/Show HN" post from a company that was offering such TF wrapper web interfaces, I kept being surprised that Hashicorp hadn't yet clamped down on preventing lower-cost offerings of their paid services. It was only a matter of time.
[1]: I realize this reads as overly harsh to some of these companies, especially as some of them are in here replying and pledging to give back, so let me try to explain my reasoning. When I use a product, I like it when the source is available for me to learn from and understand how it works [2] and to contribute back to for needed features or bugfixes [3].
When a company makes a product open-source, that's great! But if that product is the core of that company's business model [4], and another company starts competing with that company using the same open-source product, then I see a problem down the line. While you can make the argument that the competition is good and motivates the two companies to compete on the value they bring to their customers, which is a net-benefit to the open-source ecosystem as a whole as the open-source product is improved, it eventually turns into a race to the bottom. Pricing will be used as a core differentiator, reducing the overall R&D spending on the open-source product because ultimately the two companies have to maintain non-R&D staff like sales, finance, and support. If the Total Addressable Market is fixed (obviously not, but work with me), then that's two or more companies with the same fixed non-R&D costs diverting revenue that could be spent instead on improving the open-source product. Sure, the reality is that a lot of that revenue isn't going back to the open-source product, as a lot of people are complaining about in the comments, but that diversion is probably going to happen anyway whether there's 1 company or 20, so I'd accept it as a cost of doing business.
If instead the competition were on providing a better but different open-source product in the same space (e.g. Pulumi), rather than working off the same base, that would be a different story. But if developers keep seeing businesses take open-source projects and directly compete with their creators, then I think we're going to see a net harm to the open-source community as it creates a sort of chilling effect as it'll demotivate them from going the open-source route so that they can find a viable way to sustain their efforts. I think licenses such as the BSL and SSPL are valid enough compromises, considering that even mentioning the AGPL inside of a lot of companies seems to be like someone saying Voldemort's name. We can't rely on large corporations sponsoring open-source projects, either with money or developer time, if we want them to succeed.
We grant inventors 20 years of exclusive use of an invention, provided they explain how to reproduce it through the publishing of a patent. What's the difference between that and the BSL? I see a lot of complaints about bait-and-switches, but I don't really see the issue. If you contributed to the project under the old license, it's still available under the old license! You just don't get any of the new changes starting from the license change. If you decided to use Terraform in a non-competing way [5] solely because of the old license, and are concerned about the new one, then you have to recognize that Hashicorp is now another addition to a long line of "open-core" companies dealing with the reality that companies will make money any way they legally can. This is where the industry is currently headed, and whatever replacement you find will probably be next.
If you believe differently, then make an open-source offering, and don't just make a public statement saying it'll be open-source forever. Public statements are great and all, up until there are doubts about meeting payroll. Find a way to make the statement legally binding and then we're talking. Which is, I guess, why there's so much consternation, since the way to do it is through the license, but the OSI doesn't recognize any of these other licenses as "open-source" and the AGPL is a non-starter at most companies.
[2]: Reading the source code for libraries I use has been incredibly valuable in my understanding of how to use the libraries properly, much better than any documentation could. And of course, makes me a better programmer in the process.
[3]: At one point, Terraform was missing a feature that I badly needed. With the source available, I could easily get a new version of it running locally with that feature to unblock me, and then everyone benefited when I contributed it back to the project. It's also been invaluable having these locally modifiable builds to understand the quirks of products from cloud vendors, and to work around them. Ever had multiple deployment pipelines fail because Azure decided to one day change the format of the timestamps they returned in API calls, without publishing a new API version? I have.
[4]: As opposed to supplementing their business model. Google open-sourcing K8s was great for them because it drove adoption of their cloud VMs. Their cloud business makes money off the VMs, not GKE, so sponsoring K8s is essentially a marketing expense. But for Hashicorp, their core business model is paid offerings of their products.
[5]: Yes, I get that the license is currently unclear, for all their products. But let's simply say that you're not trying to directly sell a wrapper around running Terraform.
Terraform has a bad design. It's a configuration management tool, first and foremost, and configuration management tools need to do one thing well: fix things. Not just "change state", but functionally, actually fix some software to make it work again. Terraform is really bad at this. It's difficult to configure, difficult to operate, and it likes to find any reason at all to just blow up and force you to figure out how to make the software work again.
Configuration management tools should make your life easier, not harder. You shouldn't have to hire a "Terraform Admin with 3 yrs experience" who has learned all the bizarre quirks of this one tool just to get your S3 bucket to have the correct policy again. You shouldn't have to write Go tests just to change said policy. It's like it was invented to be a jobs program for sysadmins.
I have a laundry list of all the stupid design decisions that went into the damn thing. And because the entire god damn industry is stuck on this one tool, no other tool will ever replace it. Its providers are so large and there are so many modules created that it would take years of constant development to replace it. So it doesn't get changed or improved, and it can never be replaced. It is the incumbent that blocks progress. A technological quagmire we can't extricate ourselves from.
The essential purpose of this tool is really to be a general interface to random APIs, track dependencies in a DAG, pass values into resources when it has them, attempt to submit a request to the API, and then die if it doesn't get a 200 back. We can accomplish this in a simpler way that is less proprietary and more useful. And we can ramp up on specific functionality to give the solution actual intelligence, like default behaviors for specific resources in specific providers, hints on how to name a resource, more examples, canned modules that are easier to discover or publish, ability to use different languages or executables, etc. But we need to put forward those alternatives now, or we won't get the chance again for a long time.
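To make that concrete, here's a minimal sketch of that core loop in Go (Terraform's own implementation language). Everything here is invented for illustration; a real tool would add topological sorting, parallelism, and persistence of the state to disk:

    // A hypothetical skeleton: walk a DAG of resources, call the remote
    // API for each once its dependencies are done, feed outputs into
    // state, and die on the first failure.
    package main

    import "fmt"

    type Resource struct {
        Name      string
        DependsOn []string
        Apply     func(state map[string]string) (string, error) // the API call
    }

    func ready(r *Resource, done map[string]bool) bool {
        for _, d := range r.DependsOn {
            if !done[d] {
                return false
            }
        }
        return true
    }

    func applyAll(resources []*Resource) error {
        state := map[string]string{} // the "state file", in memory
        done := map[string]bool{}
        for len(done) < len(resources) {
            progressed := false
            for _, r := range resources {
                if done[r.Name] || !ready(r, done) {
                    continue
                }
                out, err := r.Apply(state) // submit the request
                if err != nil {
                    return fmt.Errorf("%s: %w", r.Name, err) // didn't get a 200 back
                }
                state[r.Name] = out // pass values to dependents
                done[r.Name] = true
                progressed = true
            }
            if !progressed {
                return fmt.Errorf("dependency cycle")
            }
        }
        return nil
    }

    func main() {
        vpc := &Resource{Name: "vpc", Apply: func(map[string]string) (string, error) {
            return "vpc-123", nil // pretend this HTTPS call returned 200
        }}
        subnet := &Resource{Name: "subnet", DependsOn: []string{"vpc"},
            Apply: func(s map[string]string) (string, error) {
                return "subnet-in-" + s["vpc"], nil // consumes the upstream output
            }}
        if err := applyAll([]*Resource{subnet, vpc}); err != nil {
            fmt.Println("apply failed:", err)
        }
    }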
This is perhaps the most incorrect post you will ever find on HN. I am not a huge fan of Terraform. However, TF is made for infra, not config. There are several tools out there to manage config, like Salt, Ansible, and so on.
Terraform is literally a program that looks at a declarative configuration file, looks at a state file, queries some APIs, and then submits some API calls. That is all it does.
There is no "infrastructure", or "config", or "cloud". It's literally just calling HTTPS APIs, the same way it would call a system call or a library function. Call function, pass input, receive output.
There is no magic sauce. There is no difference between it and any other tool that has a declarative configuration, a state, and operations to try to change things to match a desired state.
It's all configuration management. The words "infra", "orchestration", "cloud", etc. are marketing bullshit. It's all just software.
Yes and no. Yes, all terraform does is wrap APIs and you could easily write a Terraform provider for just about anything.
But there is a very real difference between "Deploying a server" and "Modifying configuration files on that server". The former used to require actual physical actions in a data center, and it's only in the world of modern virtualization and clouds that it has become possible to do it through an API. Whereas the latter used to require secure access to an individual physical machine, often over SSH, after someone had done the physical work of setting it up. Again, it's only in the world of modern virtualization and clouds that you can start to do that through APIs.
It is only modern clouds that have blurred the lines between these two, by abstracting away the difference between the physical server and the software running on it behind APIs.
Conceptually, it can still be useful to think of "infrastructure orchestration" and "configuration management" as different things and different categories. Like I said, in many cases cloud offerings significantly reduce the utility of those categorizations, because they often abstract both steps behind a unified API where you are launching virtual infrastructure (still largely using the same conceptions that applied when it was physical) and defining its configuration at the same time, through the same interface.
None of this is marketing speak. It's just definitions and categorizations. Sometimes useful, sometimes not. And all of it is orthogonal to what terraform does do or should do. Whether or not terraform is "infrastructure orchestration", "configuration management" or both is neither here nor there for the definition of those terms and considerations of their utility.
> there is a very real difference between "Deploying a server" and "Modifying configuration files on that server"
Yeah: latency. Everything else is identical, from the software perspective. Even the distributed aspect is identical: multiple copies of software running in one OS, or multiple machines running one copy of the software, are treated virtually identically.
> it's only in the world of modern virtualization and clouds that you can start to do that through APIs
I've worked in multiple companies, starting nearly 20 years ago, that had automated the process of provisioning and re-provisioning both hardware and software across tens of thousands of machines in multiple datacenters. Without virtualization, without the cloud. Know how we did it? Same way Terraform does it. Make an API, make a tool to call it, API backend does some magic, returns result, tool does something with result. Nothing has changed except the buzzwords (and the programming languages).
Configuration management is "a systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life." [1] It is not Puppet or CFengine. It is an engineering practice that is nearly 70 years old. Terraform is an implementation of it, as are many other tools, and many things that aren't software at all.
> None of this is marketing speak. It's just definitions and categorizations. Sometimes useful, sometimes not. And all of it is orthogonal to what terraform does do or should do.
On the contrary, the categorizations are made up by people who don't understand the history and practice of the field and confuse designers and practitioners into thinking that what they're doing is correct because "that's just what things in this category do". It's throwing out systems thinking and replacing it with a cargo cult of buzzwords and generally useless concepts.
Every week I see somebody talking about "Infrastructure as Code" as if it's a real thing. It's not. IaC just means somebody put a shell script and config file in Git. Yet they treat it like it's both revolutionary and specific to this one corner of tech. Like we haven't been version-controlling or change-managing the provisioning of computing devices for decades. People who weren't aware of standardized practices for management of fleets of devices basically had to stumble upon it, and not having any other reference, decided to give it a new name and pretend it was novel, and in the process did not learn the lessons from past decades of similar practice.
This is not just an "old man yells at cloud" rant - the point is that tech people keep refusing to learn their history, and then poorly implementing something that could have been designed much better if they'd learned their history. It's like the history of medical practice, where some areas of the globe (cough western europe cough) were embarrassingly backward because they never reached out to learn about the history, research, and best practices outside their sphere. They just did what everyone else around them did. People suffered for decades as a result. We don't suffer quite as much now, but the advancement of technology does suffer as a result of the industry's stodgy refusal to improve on its cargo cult mentality. (Repeating whatever you read in a blog post on HN is what makes things like "Infrastructure as Code" seem like a real and novel idea to people; repeat an idea enough and people just believe it and repeat it too)
Another example: "declarative configuration". All configuration is declarative. Even imperative configuration is declarative. This tautology is debated in blog posts the same way you'd debate the use of types of butter in cooking. It's all just butter. Yeah, some comes without salt; just add some salt to your dish. Yeah, some comes with salt; just withhold some salt from your food. We don't need to go on long-winded writings about the use of different butters. But some people create entire software projects dedicated to one kind of butter, because they think it's super important to only use unsalted butter.
It's better framed as infrastructure orchestration vs infrastructure configuration. Orchestration is more about herding the resources while configuration is about delving into the instance/server/resource environment to make changes. The terms are pretty arbitrary IMO, but those are the vocabulary used in the industry.
> I have a hard time thinking of how terraform as a piece of software can do much more than it already does to fix things?
Oh, that one's easy: have the "plan" phase actually consult the underlying provider so it can surface, up front, the errors that are otherwise going to fail 60% of the way through your "apply" phase. I thought about including an example, but I don't care to try lobbying for it unless the community fork takes off, because Hashicorp gonna Hashicorp _their_ baby
Look, I know the TF community is allllllllllll about that Omniscient .tfstate file but (related to the sibling comments about the tool _being helpful_) the real world is filled with randos in an organization doing shit to underlying infra or humans fat-fingering something and it is not a good use of anyone's life having to re-run plan and apply due to some patently stupid but foreseeable bug
1000%. The state file causes way more problems than it solves. The tool makes no attempt to look for an existing resource, or import existing resources, or absorb or ignore changes; you have to manually intervene. Meanwhile production is broken because only half the apply succeeded, and you had no idea it would blow up until you applied. No idea if you've set the necessary lifecycle policy correctly for this resource; you'll need to destroy the resource or rename something and see what happens. It's ridiculous.
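One concrete instance of that foreseeable-but-unchecked failure mode: S3 bucket names are globally unique across all AWS accounts, and `terraform plan` will happily report "1 to add" for a name some other account already owns. The collision only surfaces as an error mid-apply, possibly after half the run has already mutated production. A minimal repro:

    # plan:  "Plan: 1 to add" -- looks fine.
    # apply: BucketAlreadyExists -- the name is taken by another AWS
    #        account, something the provider could have checked at plan time.
    resource "aws_s3_bucket" "logs" {
      bucket = "logs" # short generic names are almost certainly taken globally
    }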
> So it doesn't get changed or improved, and it can never be replaced. It is the incumbent that blocks progress. A technological quagmire we can't extricate ourselves from.
This describes almost every tool in my toolchain that's over a decade old; this is just what happens. If you want to kill terraform and replace it with something better, the usual bar is that it must be 10x better. If that's something you think you could do, I'd be (genuinely) excited to see it :)
Want to start a GitHub repo? I'll work with you. My list of must-haves for configuration management:
Hierarchical state and provider management. The difficulty of hooking a kubernetes provider to an EKS or GKE provider in a one-shot apply is pretty terrible. Nesting the helm provider under kubernetes isn't quite as painful, but still not great, and there isn't a way to get the necessary CRD manifests in place before the dependent resources need to be created.
Diffs as a first-class citizen throughout the layers of providers as opposed to situations like helm_release where helm diffs are completely opaque to terraform and especially to tools like Atlantis.
Slightly more real programming-language concepts (pure functions at least), or else insanely good configuration flexibility. Sane defaults with simple overrides should be the standard for all providers and modules. I think deep merge with reasonable conflict resolution is all terraform needs (plus a rewrite of how configuration works in a lot of places), but I want to be able to define a template configuration for e.g. a cluster and be able to instantiate a new cluster with just a few overrides:
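(A hypothetical sketch; the template path and every attribute name are invented to illustrate the idea:)

    module "prod_cluster" {
      source     = "./cluster_template" # template carrying full default configuration
      name       = "prod"
      region     = "us-east-1"
      node_count = 5
      # everything else deep-merged from the template's defaults
    }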
And have deep merging successfully override the default configuration with those values, plus that kind of generic function capacity to turn verbose/complex configuration blocks into a simple definition.
I've been using terraform for 10-ish years, and this is very much not how I feel about it. Terraform absolutely makes life easier; I've managed infrastructure without it and it's a nightmare.
Yes, it can be awkward, and yes the S3 bucket resource change was pretty bad, but overall its operating model (resources that move between states) is extremely powerful. The vast majority of "terraform" issues I've had have actually been issues with how something in AWS works or an attempt to use it for something that doesn't map well to resources with state. If an engineer at AWS makes a bone-headed decision about how something works then there isn't much the terraform folks can do to correct it.
I've actually been pretty frustrated trying to talk about terraform with people who don't "get it". They complain about the statefile without understanding how powerful it is. They complain about how it isn't truly cross-platform because you can't use the same code to launch an app in aws and gcp. They complain about the lack of first-party (aws) support. They complain about how hard it is to use without having tried to manually do what it does. Maybe you do "get it", and have a different idea of what terraform should do. Could you give a specific example (besides the s3 resource change) where it fails?
It's a complicated tool because the problem it's trying to solve is complicated. Maybe another tool can replace it, and maybe someone should make that tool because of this license change, but terraform does the thing it intends to do pretty well.
I'm no Terraform expert but it's been in my resume and toolbox since ~2016.
Up until these changes, I would always pick Terraform for managing AWS. I have my gripes with it but it has been the best choice (as the saying goes, anybody that uses a tool long enough should have complaints about its limitations).
Now, however, I'm finally thinking of going with the CDK to insulate myself from more seismic shifts in the "OSS ecosystem" of devops tools.
I think it's not only an issue with terraform but also with the underlying infrastructure. AWS should never have had imperative APIs in the first place. Or at least it's time for AWS V2 APIs.
I agree. Cloud infrastructure should be versioned and immutable. If I have an S3 bucket and make 4 changes to it, there should be V0 (making the bucket) and V1-V3 (each subsequent change). I should be able to tell the bucket API to restore the bucket to V2. Terraform is a hack to fill that gap. The AWS bucket service itself should be doing it, not Terraform. Several classes of software that we all maintain ourselves would go away if cloud infra were versioned & immutable.
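Sketching what that could look like, with entirely hypothetical endpoints (nothing like this exists in the real S3 API):

    # Hypothetical versioned bucket API -- no such endpoints exist today.
    PUT  /buckets/logs                    -> V0 (bucket created)
    PUT  /buckets/logs/policy             -> V1
    PUT  /buckets/logs/lifecycle          -> V2
    PUT  /buckets/logs/encryption         -> V3
    POST /buckets/logs/restore?version=2  -> bucket is exactly as it was at V2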
That is the trivial part (and any tool even worth talking about already implements it).
The problem is things like “create this instance in parallel as a replacement for this one over here, then shut down the original, detach a volume from the original and attach it to the replacement then run command X on the replacement, stopping for manual intervention at any phase the running system reports it is running at reduced redundancy”.
This is not an atypical requirement for infrastructure as code beyond the basics, but none of the declarative tools come close to addressing it without a bucket load of external coordination.
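Spelled out imperatively it's only a handful of AWS CLI calls (the instance and volume IDs here are invented, and the reduced-redundancy checks and manual-intervention pauses are omitted), which is exactly the sequencing the declarative tools can't express:

    NEW=$(aws ec2 run-instances --image-id ami-12345678 --instance-type m5.large \
          --query 'Instances[0].InstanceId' --output text)  # replacement, in parallel
    aws ec2 wait instance-running --instance-ids "$NEW"
    aws ec2 stop-instances --instance-ids i-0original       # shut down the original
    aws ec2 wait instance-stopped --instance-ids i-0original
    aws ec2 detach-volume --volume-id vol-0data             # detach from the original
    aws ec2 attach-volume --volume-id vol-0data --instance-id "$NEW" --device /dev/sdf
    ssh "admin@$NEW" 'sudo mount /dev/sdf /data && command-x'  # then run command X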
For most services it's abundantly clear that, under the covers, it's calling the same imperative APIs you use as an external customer, because it gets stuck so often. Well, more often than you would expect if declarative management of resources had been at the top of Amazon's mind when designing these services.
Oh god I would be writing for hours. Short version, this is not nearly everything:
- Bad UX
- Tool does not have interactive mode to provide suggestions or simple solutions to common problems
- Lack of options or commands for commonly-used tasks, like refactoring resources, modules, sub-modules, etc. (Using 'state mv' and 'state rm', etc. is left as an exercise for the user and takes forever; see the sketch after this list)
- Complains about "extra variables" found in tfvars files, making it annoying to re-use configuration, even though having "extra variables" poses no risk to operation
- (NEW) The plan output shows you what has changed outside of Terraform, followed by what will *actually be changed* if applied; both look the same, so you get confused into thinking the first part matters when it's actually irrelevant.
- Bad internal design
- HCL has a wealth of functions yet is too restrictive in how you can use them. You will spend an entire day (or two) tying your brain into knots trying to figure out how to construct the logic needed to append to an array in a map element in an array in a for_each for a module (which was impossible a few years ago).
- Providers are inconsistent and often not written well, such as not providing useful error messages or context.
- Common lifecycle policy conventions per-resource-type have to be discovered by trial-and-error (rather than being the default or hinted) or you will end up bricking your gear after it's already deployed.
- The tool depends on both local state and optionally remote state. Local state litters module directories even though nearly everyone who uses it at scale uses modules as libraries/applications, not the location they execute the tool from. Several different wrappers were invented and default to changing this behavior because it has been a problem for years.
- Default actions and best practices (such as requiring a plan file before apply or destroy, automatically running init before get before validate, etc) are left to the user to figure out rather than done for them (again, wrappers had to solve this).
- Some actively dangerous things are the default, like overwriting backup state files (if they're created by default).
- Version management of state is left up to the user (or remote backend provider)
- Not designed for DRY code or configuration; multiple wrappers had to implement this
- You can't specify backend configuration via the -var-file option, and backend configuration can't be JSON ... why? They just felt like making it annoying. Some "philosophical" development choice that users hate and makes the tool harder to use.
- Workspaces are an anti-pattern; you end up not using them at scale.
- You can't use count or for_each for provider sections, so if you wanted a configurable number of providers (say with different credentials each), tough luck. ("We're Opinionated!")
- Can't use variables in a backend block. ("We're Opinionated!")
- Can't have more than one backend per module. ("We're Opinionated!")
- Lots of persistent bad behavior has only been fixed in recent releases, like not pushing state changes as resources are applied, and others I can't remember.
- Global lock on state, because again, ya can't have more than one backend block per module.
- All secrets are stored as plaintext in the state file, so either you don't manage secrets *at all* with Terraform, or you admit that your Terraform state is highly sensitive and needs to be segregated from everyone/everything and nobody can be given access to it.
- No automatic detection of, or import of, existing resources. It knows they're there, because it fails to create them (and doesn't get a permission error back from the API), but it refuses to then give you the option of importing them. The *terraformer* project had to be invented just to get a semblance of auto-import, when they could have just added 100 lines of code to Terraform and saved everyone years of work.
- Not letting people write modules, logic, providers, etc in an arbitrary executable. Other tools do this so you can ramp up on new solutions quickly and make turn-key solutions to common needs, but Terraform doesn't allow this; write it in Go or HCL or get bent.
- You have to explicitly pass variable inputs to module blocks, so you can't just implicitly detect a variable that has already been passed to the tool. But this isn't the case if you're applying a module; only if you create a sub-module block. This just makes initial development and refactoring take more time without giving the user an added benefit.
- You have to explicitly define variables, rather than just inherit them as passed to the tool at runtime. Mind you, you don't have to actually include the variable type; you just have to declare *the name* of the variable (see the sketch after this list). So again, it wastes the user's time when trying to develop or refactor, for absolutely no benefit at all.
- You have to bootstrap the initial remote backend state resources *outside* of Terraform, or, do it with local state, and then migrate the state after adding new resources or using a separate identical module that has a backend configuration. Does that sound complicated? It is, and annoying, and unnecessary.
- You have to be careful not to make your module too big, because modules that manage too many resources take too long to plan and apply and risk dying before completing. (If you're managing resources in China, make the module even smaller, because timeouts over the great firewall are so common that it's nearly impossible to finish applying in a reasonable time)
- Tests. In Go.
- Schema for your tfvars files? Nope; write some really complicated logic in a variable to validate each variable in a different way.
- Providers don't document the restrictions on things like naming convention for required parameters, so you have to apply to the API and then get back a weird error and go try to dig up some docs that hopefully tell you the naming convention so you can fix it and try again.
- Terraform *plan* will give you 'known after apply' for values it very easily could tell you *before* the apply, but for whatever reason doesn't. You never really know what it's going to do until you do it and it blows up production.
- It's very difficult (sometimes near impossible) to just absorb the current state of the infrastructure into TF (as in, "it's working right now, please just keep it the way it is"). Import only works if you've already written the HCL for the resources, and then looked up how the provider wants you to import each resource.
- Version pinning is handled in like 5 different ways, but it's still impossible to pin and use different sets of versions when applying different state files for the same HCL module code and values.
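To illustrate the refactoring item from the list above: moving a resource into a module means editing the HCL by hand and then performing state surgery yourself, with nothing in the tool to suggest or automate it (the resource addresses here are invented examples):

    terraform state mv aws_s3_bucket.logs module.storage.aws_s3_bucket.logs
    terraform state rm aws_dynamodb_table.sessions  # stop managing, without destroying

And the variable item: the declaration carries no information beyond the name, yet every module must repeat it for every variable it receives:

    variable "region" {}  # mandatory, yet declares nothing but the name
    variable "name" {}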
It's really funny because I needed to create terraform-like functionality and went in depth into both building it into terraform (which ended up not working) and building a new tool. It's not THAT complex. There are a few gimmicks in HCL, like it being an actual language, that create some interesting features.
But it could just be a YAML file. In essence, the requirements section says "here are the builders it needs & their versions", which identifies the types of jobs. Each entry is a job with a job type, a unique name, and some config info. Each builder is just a series of CRUD operations.
Like you said, it builds a directed acyclic graph, queues up the ready jobs, and executes them, updating the infrastructure's "state" with info from the completed jobs and adding new jobs when their dependencies are finished. The state files are just a dump of that structure as JSON.
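As a sketch of that shape (the field names are invented; only the structure matters):

    # Hypothetical format illustrating the described design.
    requirements:
      builders:
        - name: aws              # "the builders it needs & their versions"
          version: ">= 2.0"
    jobs:
      - type: aws/s3_bucket      # job type
        name: logs-bucket        # unique name
        config:
          bucket: my-logs
      - type: aws/iam_policy
        name: logs-policy
        depends_on: [logs-bucket]    # an edge in the DAG
        config:
          bucket: ${logs-bucket.id}  # value from a completed job's output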
It's not that hard. I think of myself as a junior-level dev, and I built something for myself in my side time in a month, with a full test suite, and it's 3/4 of the way there. CLI, builder dependency injection, type checking, relationship dependencies: it took me a few weekends.
I think a senior engineer could build out an enterprise-grade functional core product in a few weeks. Building and maintaining the CRUD APIs is the real headache, but I think vendors would take care of that themselves if there were a popular enough open-source solution.
There’s a lot of inertia in ops tooling. Switching costs are very high for an existing project, and once you learn the quirks of one tool or another, it takes a lot to justify something else for a new project even if it’s better, since you know the new tool will have its quirks too.
The cost-benefit analysis of new stuff is also different for ops compared to pure development. You tend to care more about stability and predictability than productivity and elegant design. Problems in pure dev land cause bugs that mostly aren’t super urgent; problems with ops tools bring down whole systems and wake everyone up at 2am. For these reasons, ops is always going to have a more conservative mindset that shuns the shiny new thing to some extent.
People have this kind of reaction to Fuchsia a lot: wow, isn't this great! Then they learn why it doesn't run on anything and why the Linux kernel has 1201530 commits. The real world is imperfect. You are trying to make an abstraction of "everything" and then complain when it's leaking.
I completely agree that it has problems. I use terraform a lot. Compared to nothing, I love terraform. However, it's so overwrought that it has to be hidden from anyone not in infra for a living.
1. IaC description should be format agnostic and transformable (e.g. definable in YAML, JSON, whatever; see the sketch after this list).
2. Something about provider interfaces here, but it's already super messy and not sure if it's an improvement or just a shift
3. State files were the wild west last time I checked. And there should be a default database interface provider at minimum. Maybe there is now?
4. Forcing the apply -> statefile cycle as the default requires all of compute, interface, and a human. This should have been an abstraction over a raw interface for automated use.
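For point 1, the same resource definition ought to round-trip losslessly between formats. A sketch with an invented schema:

    # YAML form (hypothetical schema)
    bucket:
      name: logs
      versioned: true

and the identical definition in JSON:

    { "bucket": { "name": "logs", "versioned": true } }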
While I agree with you about TF having a lot of issues, the comment isn't helpful. What would you suggest otherwise? Kind of a moot point now that the license is fubar'd, but what could be improved to make it better? If you could have a do-over, what would that look like?
Right now there is Pulumi as an alternative that supports different clouds. Otherwise AWS CDK or Azure Bicep come to mind.
If I could do a do-over, I'd want the solution to look and feel like AWS CDK, but without the CloudFormation in the background, and with support for GCP and Azure.
I've worked with CDK for 2 years now, and being able to define your infrastructure in TypeScript is quite handy and drastically reduces the effort it takes for new people to learn how our deployments work. It's also quite nice to be able to directly bundle and deploy the application together with the infrastructure with very little effort.
How? I've always viewed TF as good at anything except metal; the best I would know to do is remote-exec but at that point you might as well drop to raw shell.
I mean that the only way I can think to use terraform to provision bare metal is to remote-exec a shell script (ex. to `apt install foo`), at which point you might as well skip terraform and `ssh targethost apt install foo` or `scp ./my-install-commands.sh root@targethost: && ssh root@targethost sh my-install-commands.sh`
Sure. That's effectively what Ansible does as well. You could even just have TF call that and be done.
The point that I'm trying to make is that I see a disconnect between deployment and provisioning.
I want both in a single tool (a la Pulumi), even with bare metal. Ideally in a programming language like TS or Go that's easy to get up to speed with and that wraps up the complexity of getting servers up and running (as well as maintaining them over time).