IBM to buy HashiCorp in $6.4B deal (reuters.com)
dang 10 days ago [-]
Recent and related:

IBM nearing a buyout deal for HashiCorp, source says - https://news.ycombinator.com/item?id=40135303 - April 2024 (170 comments)

calgoo 10 days ago [-]
Well, it was nice while it lasted! HashiCorp always felt like a company made by actual engineers, not "bean counters". Now it will just be another cog in the IBM machine, slowly grinding it down, removing everything attractive, just like RedHat and CentOS.

Hopefully this will create a new wave of innovation, and someone will create something to replace the monopoly on IaC that IBM now owns.

jbm 10 days ago [-]
A lot of the people I respected from Heroku went there; glad they got a chance to use their skills to build something useful and profitable, gladder still that they got their payout.

Sadly I echo your sentiment about the future, as someone who has heard second-hand about the quality of work at modern Redhat.

I am wondering how many more rounds of consolidation are left until there is no more space to innovate and we only have ossified rent-seeking entities in the IT space.

glenngillen 10 days ago [-]
Heh at “got their payout”. HashiCorp IPO’d at $80, employees are locked up for 6 months. This sale is at $35.
shreezus 10 days ago [-]
Wow IBM got quite the discount!
dudus 10 days ago [-]
The stock was at $31. The $80 level was just shortly after the IPO. They paid fair market price
busterarm 10 days ago [-]
They IPO'd in 2021.
glenngillen 10 days ago [-]
Yes. And many of the Heroku employees you speak of would have got RSUs that owed taxes on an $80 basis, been trading far below that for most of that time, and now have a maximum expected value of $35.

This is not a pay day for many people. Anybody who got a pay day were those that could liquidate in the IPO.

epolanski 9 days ago [-]
Yeah okay, if you had 0.15% stock you're still walking away with $10M.

Smaller and bigger percentages will be different, but that's retirement money for hundreds and hundreds of people, unless you insist on living in a very high CoL area. Also, most of them will likely have to keep working there for years before cashing out more millions.

kensey 9 days ago [-]
It's a little more complicated than that.

First of all, your assumed ownership percentage is unrealistic. I joined in November 2019 and got a grant of a few thousand RSUs that fully vested before I left, and that I still have most of, plus I bought some shares in a few rounds of our ESPP when that became available -- as of today I have just under 5,000 shares. HashiCorp has nearly 200 million shares issued, so I own a hair over 0.0025% of the company. Really early employees got relatively big blocks of options, but nobody I knew well there, even employees there long enough to be in that category (and there were very few of them still around by December 2021), was looking at "fuck-you money" just from the IPO.

Second, the current price isn't the whole story for employees. I had RSUs because of when I joined so the story might have been different for earlier employees who had options, but I don't think it differs in ways that matter for this discussion. As background for others:

* On IPO day in December 2021, 10% of our vested RSUs were "unlocked" -- a bit of an unusual deal where we could sell those shares immediately (or at any later time). Note "vested" there -- if you had joined the day before the IPO and not vested any RSUs yet, nothing unlocked for you. (Most of the time, as I understand it, you don't have any unlocked shares as an employee when your company IPOs -- you get to watch the stock price do whatever it does, usually go down a lot, for six months to a year.)

* At a later date, if some criteria were met (which were both a report of quarterly earnings coming out and some specific financial metrics I forget), an additional tranche of vested shares (I think an additional 15%) unlocked -- I believe this was targeted at June 2022 and did happen on schedule.

* After 1 year, everything vested unlocked.

At the moment of the IPO the price was $80, but it initially climbed into the $90's pretty fast. At one point, during intraday trading, it actually (very briefly) broke just above $100.

So, if you were aware ahead of time that the normal trajectory of stock post-IPO is down, and if you put in the right kind and size of limit orders, and if you were lucky enough to not overestimate the limit and end up not selling anything at all, then you could sell enough shares while it was up to cover the taxes on all of it and potentially make a little money over that. I was that lucky, and managed to hit all of those conditions while selling almost all of my unlocked shares (I even managed to sell a small block of shares at $100), plus my entire first post-IPO vesting block, and ended up with enough to cover the taxes on the whole ball of already-vested shares, plus a few grand left over. Since then, I haven't sold any shares except for what was automatically sold at each of my RSU vesting events.

For RSUs not yet vested at the IPO, the IPO price didn't matter because they sold a tranche of each new vesting block at market price to cover the taxes on them when they vested -- you could end up owing additional taxes but only, as I understand it, if the share price rose between vesting and sale of the remaining shares in the block, so you would inherently have the funds to pay the taxes on the difference. (And if the price fell in that time, you could correspondingly claim a loss to reduce your taxes owed.)

There were a fair number of people who held onto all their shares till it was way down, though, and had to sell a lot to cover their tax bill in early 2022 -- I think if you waited that long you had to sell pretty much all your unlocked shares because the price was well down by tax time (it bottomed out under $30 in early March 2022, then rose for a while till it was back up over $55 right before tax day, so again, if you were lucky and bet on the timing right, you didn't end up too bad off, but waiting till the day before April 15 was not something I bet a lot of people felt comfortable doing while they were watching the price slide below $50 in late February). I even warned one of the sales reps I worked with, while the price was still up, about the big tax bill he should prepare for, and he was certain I was wrong and that he would only be taxed when he sold, and only on the sale price. (He was of course wrong, but I tried...)

The June unlock was pretty much irrelevant for me because by that point the share price was down under $30 -- it spent the whole month of June after the first week under $35. The highest it went between June 30, 2022, and today was $44.34. The entire last year it's only made it above $35 on three days, and only closed above $35 on one of them. I figured long-term the company was likely to eventually either become profitable or get bought, and in either case the price would bump back up.

I was thinking about cutting my losses and cashing out entirely when it dropped below $30 after the June layoffs, and again in November when it was below $20, and then yet again when I left the company in January of this year, but the analyst consensus seemed to be around $32-34 through all of that so I held on -- kinda glad I did now instead of selling at the bottom.

hughesjj 9 days ago [-]
> if you had 0.15% stock you're still out with $ 10M.

... Barely any employees could have that much stock. There are 2,200 employees per the most recent data I see. Even if the outstanding shares were 100% employee owned, a uniform allocation would give each of them at most about 0.045%. Obviously, the shares are not uniformly distributed across employees, nor is HashiCorp 100% employee owned.

glenngillen 9 days ago [-]
You've misunderstood my point. RSUs became taxable at the $80 stock price for many. Depending on where you're based, that could mean you owe(d) anywhere from $22 to $38 per share in taxes. At the top end of that range, if you're still holding any stock, this acquisition has just permanently crystalised a capital loss for you. There's no upside that gets you above what you owe/paid in taxes.

There are many many people who made a loss on this, even before the acquisition announcement.

Also I think your ownership % is way off. There's a pretty small group of people, most of them the earliest employees + execs, who would have got out with $10M. HashiCorp currently has thousands of employees and would have churned through thousands more over the years.

lbotos 9 days ago [-]
I don't know how pre-IPO RSU grants work, but let's do some math assuming IPO day is the day the RSUs vest:

IPO day arrives and you get 1000 RSUs unlocked/vested. The share price is $80, so you made $80k in gains. For simplicity, let's say you owe $40k in taxes.

One of two things happens:

- Hashicorp auto-sells to cover and you get 500 fewer shares.

- You need to pay your taxes on your own and earmark $40k.

Let's pick the easy one: If Hashicorp sold for you that day you are now sitting on 500 shares with a cost basis of $80.

Fast-forward to today: IBM buys and the person held. The 500 shares are now worth $35 each, so the value is $17,500.

You cash out -- getting $17,500 in your account, and a capital loss of $22,500.

Sure, 17K isn't as cool as 40K, but the person still "made money" just _less_. You make it sound like this person is now "underwater" because they had a capital loss.
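
A rough sketch of that arithmetic, using the hypothetical numbers above and the same simplified flat 50% tax rate:

    package main

    import "fmt"

    func main() {
        vested := 1000.0  // RSUs vesting on IPO day (hypothetical)
        vestPrice := 80.0 // price used as the tax basis
        taxRate := 0.5    // simplified flat rate from the example
        salePrice := 35.0 // acquisition price

        taxOwed := vested * vestPrice * taxRate            // $40,000
        soldToCover := taxOwed / vestPrice                 // 500 shares
        remaining := vested - soldToCover                  // 500 shares, $80 basis

        proceeds := remaining * salePrice                  // $17,500
        capitalLoss := remaining * (vestPrice - salePrice) // $22,500

        fmt.Printf("tax withheld: $%.0f (%.0f shares sold to cover)\n", taxOwed, soldToCover)
        fmt.Printf("cash out today: $%.0f, capital loss: $%.0f\n", proceeds, capitalLoss)
    }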

=====

And kids at home, this is why you sell some/all of your RSUs as you get them. No one company should be more than 15% of your portfolio. Even the one you work at.

throwaway2037 9 days ago [-]

    > No one company should be more than 15% of your portfolio. Even the one you work at.
Tell that to the guy who went all-in on the NVIDIA employee share purchase plan and is worth more than 50M USD. (I think it was a Register article posted here recently.) Sometimes the gamble is worth it. That said, for every one of those once-in-a-lifetime stories, there are many, many more about engineers who walked away from post-IPO start-ups with very little wealth gained. As so many have posted here before, it just isn't worth it.
glenngillen 9 days ago [-]
I don't need to make any assumptions about anything here; other former colleagues have gone through the specifics in other replies. Nothing was auto-sold at IPO to cover taxes, and a maximum of 10% of what had vested was allowed to be sold before the 6-month lockup expired. There were only a few weeks before a combination of trading blackout windows, the lockup, and the market crash conspired to make it easy to be underwater if you hadn't elected to sell everything you could coming into the IPO.

_A lot_ of people ended up with a loss.

lbotos 9 days ago [-]
Ok -- I need your help. I'm missing something here.

People got RSUs. They owed tax on said RSUs. The tax cannot be higher than the value of the RSU at the time of vest.

If people did not have enough cash to pay their tax bill, and did not sell enough RSUs to get cash to pay said tax bill, then yes, I can see those people being "at a loss" because they had a "surprise" tax bill, the RSU price went down, and now they have a cash problem. Is that what you mean happened?

They shouldn't have had to "sell everything" -- at most like 50%.

I'm arguing with you here because this stuff is complex, and many people shy away from trying to understand it, and that's a huge disservice for those in our industry.

For anyone reading along -- It's as simple as this: understand the tax implications of the assets you own, pay your taxes.

glenngillen 5 days ago [-]
That's part of the surprise - I can't speak to the specifics for US citizens more than others in this post have as I'm not based there. Tax definitely wasn't determined _at time of vest_ for anybody though, it was time of liquidity.

In Australia we were granted options, which ordinarily are taxed at time of exercise. Lots of people were surprised to discover, almost a full year after the IPO, that those options were also subject to a tax deferral scheme and any tax already paid at exercise wasn't sufficient. The actual taxable amount determined by HashiCorp and the ATO was the $80 IPO price. If you sold the full amount you were entitled to (10% of your vested holdings) at the IPO you were probably fine. If you sold nothing, because you thought you had already paid the required taxes, by the time you received the tax statement the value of your stock would have been less than what you owed in taxes.

sgerenser 9 days ago [-]
I’m pretty sure U.S. law requires companies to withhold at 22% (or optionally higher) for any bonus/non-salary payments, which includes RSU vesting. Companies can choose to either “sell to cover” or just issue a proportionally lower amount of shares (e.g. you vested 1000 shares but only 780 show up in your brokerage account).

The problem occurs when 22% isn’t enough, which is often the case.

acchow 9 days ago [-]
The taxes are computed using the IPO price, not the price at opening or closing on the first day of trading.

IPO price was $35.

glenngillen 9 days ago [-]
IPO price was $80. Briefly touched slightly above $100, and then crashed with the rest of the market and has spent most of its time since below $30.
kensey 9 days ago [-]
What are you talking about? The December 2021 IPO price was $80.
hobs 9 days ago [-]
What? What's their strike price? If they are above the sale price their return is 0.
esprehn 9 days ago [-]
RSUs are regular shares, folks with options would have a different story.
mogwire 9 days ago [-]
It always amazes me how people play telephone about Red Hat and how bad the quality of life supposedly is post-IBM.

When they show the service awards they don't even cover the 5-year ones, because they don't have all day.

If it was so bad then you wouldn’t see engineers with 10, 15, or 20 years experience staying there. They already got their money from the IBM purchase so if it were bad then they would leave.

Oh but they don’t innovate anymore.

Summit is coming. Let’s see what gets announced and then a live demo.

MajimasEyepatch 9 days ago [-]
> If it was so bad then you wouldn’t see engineers with 10, 15, or 20 years experience staying there. They already got their money from the IBM purchase so if it were bad then they would leave.

Every big, old, stagnant company is full of lifers who won’t move on for any number of reasons. The pay is good enough, at least it’s stable, the devil you know is better than the devil you don’t, yada yada yada. There are people in my life who work in jobs like that. They will openly admit that it sucks, but they are risk averse due to a combination of personality and family circumstances, so they stick it out. Their situation sucks, and they assume everything else sucks too. And often, because they’ve only worked in one place so long, they have a hard time finding other opportunities due to a combination of overly narrow experience and ageism.

The movie Office Space is about exactly the sort of company that is filled with lifers who hate their jobs but stay on the path of least resistance.

(I know absolutely nothing about working at Red Hat, so I’m not trying to make a specific claim about them. But I’ve known people in this situation at IBM and other companies that are too big for their own good.)

steelframe 9 days ago [-]
> they have a hard time finding other opportunities due to a combination of overly narrow experience and ageism

I too know several lifers at IBM. One thing I've realized is that staying loyal to a company over several years won't save you from ageism.

Your best defense against ageism may be to save more than 50% of your tech income for about 20 years, then move into management and build empires until the music stops.

frost_knight 9 days ago [-]
Red Hat Principal Consultant here, July will be 7 years at the company for me.

Before IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements

After IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements

At least on my side of the aisle I haven't noticed any notable changes in my day to day work for Red Hat. IBM has been very light touch on our consulting services.

throwaway2037 9 days ago [-]

    > Oh but they don’t innovate anymore.
IBM was #4 in the US last year for patents here: https://www.ificlaims.com/rankings-top-50-2023.htm
me_again 9 days ago [-]
Patents are a stronger signal of a company focused on financial engineering than of one focused on innovation.
nomat 10 days ago [-]
Our current economic model kind of depends on the idea that we can always disrupt the status quo with American free-market ingenuity once it begins to stagnate, but maybe we have reached the limits of what Friedman's system can do or account for.
buxtehude 9 days ago [-]
The American market is highly over-regulated, and most market libertarians would argue it hasn't been "free" in a long, long time.
rank0 10 days ago [-]
I don’t understand people’s beef with IBM. They have been responsible for incredible R&D within computing. I even LIKE redhat/fedora!

HashiCorp had already been sold out since waaaay before this acquisition and I also don’t understand why their engineers are seen as “special”…

Rinzler89 10 days ago [-]
People's beef here with IBM is they don't make shiny phones and laptops and don't create hip jobs where you're paid 500k+ to "change the world" by selling ads or making the 69th messaging app.

They just focus on tried and tested boring SW that big businesses find useful and that's not popular on HN which is more startup and disruption focused.

redserk 10 days ago [-]
This is unnecessarily dismissive.

While Hashicorp hasn’t been exciting for a while, I fail to see how an acquisition from IBM will invigorate excitement, much less even a neutral reaction from many developers.

Hashicorp had a huge hand in defining and popularizing the modern DevOps procedures we now declare as best practices. That’s a torch to hold that would be very difficult for a business like IBM.

Perhaps I missed some things but the core of Ansible feels like it’s continuing it’s path to be much less of a priority over the paid value-adds. I can’t help but to think the core of Hashicorp’s products will go down this path, hence my pessimism.

darkwater 9 days ago [-]
> This is unnecessarily dismissive.

No, it is not. HN has both a "greybeard" audience that will cheer at "go boring tech" posts and a "hipster" audience that is heavily startup- and disruption-focused, as GP was saying. When talking about IBM and acquisitions or similar topics, it's usually the second audience that speaks more.

That's not to say that some acquisitions don't really kill the product, but you don't need to be as big and old as IBM to do that.

JojoFatsani 9 days ago [-]
Do you mean Terraform, not Ansible?
roland-s 9 days ago [-]
IBM owns Ansible, redserk is saying Terraform will go a similar route. Although I don't see what they mean by core being lower priority than paid. The paid features are all available for free via AWX, which is the open source upstream of the paid product AAP.
candiddevmike 9 days ago [-]
Red Hat's business model is "Hellware" -- the open source versions are designed to be so difficult to install/manage/upgrade, or so lacking in stability, that you're forced to pay for their versions.
nomat 10 days ago [-]
There are a number of valid criticisms of IBM.
magnetowasright 9 days ago [-]
IBM repeatedly cleaning house of anyone approaching (let alone in or even rarely beyond) middle age is abhorrent.

It's funny to characterise people's beef with IBM as that they're boring, old, and stale when IBM are apparently allergic to anyone over 40.

Also their consultants have been some of the most weaponised incompetence laden, rude, and entitled idiots I've ever had the sincere displeasure to deal with.

IBM are an embarrassment to their own legacy imo.

rank0 8 days ago [-]
Yeah, I mean, I feel you, but imo this is just what the world is. I've been fucked over many times in my career... people just have to learn to fuck back.

I was more so commenting on the HN hate for the technology/products aspect. IBM has accomplished FAR more than HashiCorp, and everyone here acts like they were god's gift to software.

op00to 10 days ago [-]
My beef with IBM as someone who worked for a company they acquired is that they would interfere with active deals that I was working on, force us to stand down while IBM tried to sell some other bullshit, then finally “allow us” to follow up with the customer once it’s too late, and the customer decided to move on to something else. Repeatedly.

Fuck IBM.

alemanek 10 days ago [-]
You have obviously never been the victim of IBM's consulting arm. I caution anyone against buying anything IBM now. Absolute nightmare to work with.
coredog64 9 days ago [-]
IBM’s consulting arm was finally so radioactive that they spun it out into a new company (Kyndryl). What I’ve seen is that customers still have a low opinion of the new company and they continue to refer to it as IBM.
Foobar8568 9 days ago [-]
Kyndryl is IBM??
pragmatick 9 days ago [-]
Yes, and you wouldn't believe how bad they are. We had multiple incidents where colleagues had to explain basic stuff to them and hold their hands. I was in a couple of calls with their engineers, and those instantly reduced my impostor syndrome.
Foobar8568 9 days ago [-]
I worked for several years with IBM solutions and the like. I thought they ended up opening nearshore centers in Europe to "sell" "local" resources, but it was just detached Indian employees from the upper caste, billed more than us because they were "IBM experts".
tempest_ 10 days ago [-]
or just work anywhere within IBM
wredue 9 days ago [-]
Nah dude. Their internal business is a dinosaur in both girth and age. If they estimate 2 years for you, put away budget for 10. And all you're gonna get is excuses and blame.
altairprime 10 days ago [-]
IBM took away the ability of CentOS to be a free and trivial to swap-in alternative to the paid product RedHat Enterprise. That RedHat was already in financial trouble due to self-cannibalizing their own paid product is irrelevant; emotionally, “IBM” – not “RedHat” – made the decision to stop charging $0 for their custom enterprise patchsets and release trains, and so IBM will always be the focus of community ire about RedHat’s acquisition.

I expect, like RedHat, that the Hashicorp acquisition will result in a lot of startups that do not need enterprise-grade products shifting away from “anything Hashicorp offers that needs to charge money for Hashicorp to stay revenue-positive” and towards “any and all free alternatives that lower the opex of a business”, along with derogatory comments about IBM predictably assigning a non-$0 price for Hashicorp’s future work output.

kensey 9 days ago [-]
* Red Hat wasn't ever "in financial trouble" -- their revenue line was up-and-to-the-right for a ridiculous number of consecutive quarters. Even when they missed overall earnings estimates, it was rarely by much and they still usually beat EPS estimates for the quarter.

* IBM had little to do with Red Hat's maneuvers around CentOS (I worked at Red Hat for several years and still have friends there, and nothing anybody there said publicly about CentOS in 2020 or 2023 was materially different from things people there were saying about it internally in 2012). Some people have tried to blame IBM for a general culture shift but as far as I've seen, every bit of the CentOS debacle was laid squarely at the feet of Red Hat staff by most in this industry -- as it should have been, since most of those involved were employed there well before IBM bought the company.

IBM's reputation as an aging dinosaur was well-earned long before it bought Red Hat, and continues to be earned outside it. That earned reputation was why they bought RHT in the first place: IBM Cloud market share was (and still is) declining and they wanted a jumpstart in both revenue and engineering credibility from OpenShift in particular.

jmspring 10 days ago [-]
IBM was taken over by bean counters years ago. There were researchers and others who would literally skip being in, or find a way to avoid the bean counters, when they walked through IBM research labs (like Almaden Research Center) years ago (heard from multiple people years back who were working on contracts/etc. there -- mainly academics).

Also, IBM has been extremely ageist in their "layoff" policies. They also have declined in quality by outsourcing to low cost/low skill areas.

datadrivenangel 9 days ago [-]
I knew a guy who was laid off from IBM specifically for being older, which came out years later as part of the class action lawsuit...
jmspring 9 days ago [-]
There is a former column that ran under multiple writers (all using the same name) that did a great exposé on IBM and age discrimination, but I don't want to give said column its due since the columnist had other issues.
prewett 9 days ago [-]
If it's really their due, you should give it to them. This value system where you have to punish people if they don't have the "right" views needs to stop. Would you like someone to do that to you? If they did good work, it doesn't get infected by whatever "issues" they had.
dullcrisp 9 days ago [-]
Like Bourbaki? Or they all happened to share a name?
michaelcampbell 9 days ago [-]
I never worked there, but I worked at a security company that hired a bunch of ex-IBM X-Force security guys, and they hated IBM with a passion.

Self selection, to be sure, but their beefs were mostly about the crushing bureaucracy that was imposed on what was supposed to be a nimble type domain; (network) security is, after all, mostly leapfrog with the black hats.

blacksmith_tb 10 days ago [-]
I just got to spin down a bunch of infra that was originally in Softlayer, which IBM acquired years ago. IBM were terrible to work with, they frequently crashed services by evacuating VMs from hosts and then not powering them back up, and only notifying us long after our own monitoring detected it. Won't miss that.
me_again 9 days ago [-]
IBM is to software as Boeing is to planes.

I will not be taking questions ;-)

elevader 9 days ago [-]
I have the "honor" of getting to use IBM $PRODUCT at $COMPANY.

- It uses some form of consensus algorithm between all nodes that somehow manages to randomly get the whole cluster into a non-working state simply by existing, requiring manual reboots

- Patches randomly introduce new features, oftentimes with breaking changes to current behaviour

- Patches tend to break random other things, and even the patches for those patches often don't work

- For some reason the process for applying updates randomly changes every couple of patches, making automation all but impossible

- The support doesn't know how $PRODUCT works, which leads to us explaining to them how it actually does things

- It is ridiculously expensive, in both hardware and licensing costs

All of this has been going on for years without any sign of improvement, to the point that $COMPANY now avoids IBM wherever possible

akashcoach 9 days ago [-]
Look at what they did with the Phoenix project for the Canadian government. They are not the same IBM they were 50 years ago. Now they are a consulting firm that employs cheap labor.

https://news.ycombinator.com/item?id=15303555

TheCondor 10 days ago [-]
IBM has always been a punching bag.

I had been wondering who would buy HCP. I sort of figured it was going to be AWS, Google, or Azure, and that the other vendors would then have support removed (maybe gradually, maybe not).

coredog64 9 days ago [-]
It could have been worse: It could have been Oracle.
mr_person 14 hours ago [-]
Or Broadcom...
kensey 9 days ago [-]
One of the reasons I left when I did was that it was starting to get really obvious that an acquisition was likely and I desperately did not want my work e-mail address to end in oracle.com.
akashcoach 9 days ago [-]
You talk about beef: look at what they did with a project for the Canadian government. They are not the same IBM they were 50 years ago. Now they are a consulting firm, and a shitty one.

https://news.ycombinator.com/item?id=15303555

rank0 8 days ago [-]
So which of the other potential buyers of HCP is the magical non-shitty $BIGCORP you would’ve preferred?
akashcoach 8 days ago [-]
I would have liked Microsoft to buy it.

neurostimulant 10 days ago [-]
It was special when Mitchell Hashimoto was still at the helm.
paulddraper 10 days ago [-]
Watson
renegade-otter 10 days ago [-]
Hashi code, such as Terraform, was (is) an amazing example of a good reference Go codebase. It was very hard for me to get into Go because, outside of the language trivia and hype, it was hard to learn about the patterns and best practices needed for building even a mid-sized application.
hpeter 10 days ago [-]
That's interesting. I found Go to be a very productive and easy language, coming from Typescript.

But I had a similar experience like yours with PHP, I just couldn't get into it.

janosdebugs 10 days ago [-]
After having written probably over 100k lines of Go code, my impression is that Go is simple, but not easy. The language has very few features to learn, but that results in a lot of boilerplate code, and there are more than a few footguns buried in the language itself. (My favorite: [1])

I find it very hard to write expressive, easy to read code and more often than not I see people using massive switch-case statements and other, hard to maintain patterns instead of abstracting away things because it's so painful to create abstractions. (The Terraform/OpenTofu codebase is absolutely guilty of this btw, there is a reason why it's over 300k lines of code. There is a lot of procedural code in there with plenty of hidden global scope, so getting anything implemented that touches multiple parts typically requires a lot of contortions.)

It's not a bad language by any stretch, but there are things it is good at and things it is not really suited for.

[1]: https://gist.github.com/janosdebugs/f0a3b91a0a070ffb067de4dc...

epgui 10 days ago [-]
I’ve always found that the Go language is simple in all the ways that don’t matter.

(In contrast to languages like Haskell and Clojure, which are simple in most of the ways that matter.)

nine_k 9 days ago [-]
Compilation speed matters, among other things, and monomorphization is often costly.
djbusby 10 days ago [-]
Is it because secondSlice is a reference (pointer?) to firstSlice?
MoOmer 9 days ago [-]
Slices are structures that hold a pointer to the array, a length, and a capacity!

So, when you slice a slice, if you perform an array operation like “append” while there is existing capacity, it will use that array space for the new value.

When the sliced value is assigned to another variable, it’s not a pointer that’s copied, it’s a new slice value (with the old length). So, this new value thinks it has capacity to overwrite that last array value - and it does.

So, that also overwrites the other slice’s last value.

If you append again, though, you get a (new) expanded array. It’s easier to see with more variables as demonstrated here: https://go.dev/play/p/AZR5E5ALnLR

(Sorry for formatting issues in that link, on phone)

Check out this post for more details: https://go.dev/blog/slices-intro
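
A minimal, self-contained illustration of the behaviour described above (not the exact example from the linked gist):

    package main

    import "fmt"

    func main() {
        // Backing array has room for 4 ints, but the slice only "sees" 3.
        first := make([]int, 3, 4)
        first[0], first[1], first[2] = 1, 2, 3

        // Re-slicing copies the slice header (pointer, len, cap), not the data,
        // so both slices share the same backing array.
        second := first[:2]

        // len(second) < cap(second), so append writes into the shared array,
        // silently clobbering first[2].
        second = append(second, 99)
        fmt.Println(first, second) // [1 2 99] [1 2 99]

        // Once an append exceeds the capacity, a new array is allocated and
        // the two slices stop affecting each other.
        second = append(second, 100, 101)
        second[0] = 7
        fmt.Println(first, second) // [1 2 99] [7 2 99 100 101]
    }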

geoka9 10 days ago [-]
It's because slices have underlying arrays which define their capacity (cap(s)).

Both slices start out having the same underlying (bigger) array -- so appending to one slice can affect the other one.

In the "bonus" part, though, the appends outgrew the original array, so new underlying arrays were allocated (i.e. the slices stopped sharing the same backing array).

Thanks for the heads-up, janosdebugs :)

janosdebugs 10 days ago [-]
Yes-ish? Slices are this weird construct where they sometimes behave like references and sometimes not. When I read the explanation, it always makes sense, but when using them it doesn't. For me the rule is: don't reuse slices and don't modify them unless you are the "owner" of the slice. Appending to a slice that was returned to you from a function is usually a pretty good way to have a fun afternoon debugging.
renegade-otter 10 days ago [-]
I find the claims that Go is easy just wrong. It's actually a harder language to write in because without discipline, you are going to end up maintaining massive amounts of boilerplate.

That's from someone who did a bunch - Perl, Ruby, Python, Java, C++, Scala.

Syntax is one thing, assembling an application with maintainable code is something else.

NomDePlum 10 days ago [-]
What in particular did you find difficult about building a maintainable codebase in Golang? I'm not quite understanding the boilerplate reference.

Code generation in Golang is something I've found removed a lot of boilerplate.

renegade-otter 10 days ago [-]
I am not used to writing code where 2/3 of it is "if err" statements.

Also, refactoring my logging statements so I could see the chain of events seemed like work I rarely had to do in other languages.

It's a language the designers of which - with ALL due respect - clearly have not built a modern large application in decades.

aaomidi 10 days ago [-]
Yes, because other languages just hide errors from the user.

I think the reason people find Go a bit annoying with the error handling is because Go actually treats errors as a primary thought, not an afterthought like Python or Java.

freedomben 9 days ago [-]
I assume you're talking about languages with exceptions when saying "other languages just hide errors from the user." I think that's a gross over-simplification of exception-based error handling. I generally do prefer explicit handling, but there are plenty of cases where exceptions are clearly more elegant and understandable.

My preference is a language like Elixir, where most methods have an error-code-returning version and a ! version that might raise an exception. Then you (the programmer) can choose what you need. If you're writing a controller method for production-critical code, use the explicit version. If you're writing tests and just want to catch and handle any exception and log it, use exceptions. Or whatever makes the most sense in each situation.

hughesjj 9 days ago [-]
I've never gotten the explicit argument. Java checked exceptions are also part of the function signature/interface and nothing prevents one from making a language where all exceptions are checked then just doing

    try {
       maybeError := FunctionThrowingValueError()
    } catch (ValueError e) {
       // do stuff
    }
I get at the end of the day it's all semantics, but personally I kinda like the error-specific syntax. If you want to do the normal return path, that's fine, but I prefer the semantics of Rust's Result type (EITHER a result OR an error may be set).

To each their own, it's not something I really worry about.

freedomben 9 days ago [-]
Yeah same, Go's explicit argument never resonated with me either. In Elixir it's similar to a Result type, being a tuple such as either `{:ok, return_val}` or `{:err, err_msg}`, which is perfect for using with `case` or `with` depending on your situation.
renegade-otter 9 days ago [-]
You can't hide an exception if it crashes your program. You can definitely ignore a return from a function, essentially swallowing it. It's the definition of an anti-pattern.
hpeter 9 days ago [-]
I prefer to handle errors than ignore them. "If err" is actually one of the best things about Go
renegade-otter 9 days ago [-]
In most web applications I write, I have one error-handling block.

Access forbidden? Log a warning and show a 403 page. Is it JSON? Then return JSON.

Exception-handling in general is a pretty small part of most applications. In Go, MOST of the application is error-handling, often just duplicate code that is a nightmare to maintain. I just don't get why people insist it's somehow better, after we "evolved" from the brute-force way.

hpeter 9 days ago [-]
Errors usually happen during IO, but not in the main business logic and those two can be neatly separated.

But if you are coming from Java, I can understand that the single error-handling block is more comfortable; coming from JavaScript/TypeScript, it's much easier to check if err != nil than to debug errors I forgot to handle at runtime.

NomDePlum 10 days ago [-]
I understand where you are coming from, but I actually like the explicit error handling in Golang. Things being explicit reduces complexity for me a lot, and I find it easier to spot and resolve potential issues. It's definitely something that I can understand not working for everyone.

I agree on the logging point, but my experience was that the explicit error handling, combined with good test coverage, meant we rarely got into non-deterministic situations where we relied extensively on logging to resolve issues. But we also went through several iterations of tuning how we logged errors. It's definitely a rough edge in what is readily available in the language.

takeda 9 days ago [-]
> I understand where you are coming from, but I actually like the explicit error handling in Golang. Things being explicit reduces complexity for me a lot, and I find it easier to spot and resolve potential issues. It's definitely something that I can understand not working for everyone.

This sounds a lot like Apple users' arguments about the iPhone 1 missing copy & paste over a decade ago.

I am very pedantic about checking responses for errors, but from my experience working with a team on an existing project, people notoriously forget to check the result. TBH it is a pain to keep repeating the boilerplate `if err != nil ...`.

What's worse is that even documentation skips checks. For example, the `Close()` method: it almost always returns an error, but I've almost never seen anyone check it.

The reason is that if you want to use `defer` (which most people do), you end up with very ugly code.

The other alternative would be to make sure you call (and properly handle the error from) Close in multiple places, but then you risk missing one.

Another solution would be using `goto` in a similar way as it is used in the Linux kernel, but some people have a big problem with that. I had a boss who was religiously against goto (and did not seem to understand Dijkstra's argument) and asked me to remove it even though it made the code more readable.

xyzzy123 9 days ago [-]
I think go makes more sense if you imagine spending more time reading MRs and code than writing it.

Standard go error handling maximises for locality. You don't see many "long range" effects where you have to go and read the rest of the code to understand what's going to happen. Ideally everything you need is in the diff in front of you.

Stuff like defer() schedules de-alloc "near" to where things get allocated, you don't have to think about conditionals. If an MR touches only part of a large function you don't have to read the whole thing and understand the control flow.

The relative lack of abstraction limits the "infrastructure" / DSLs that ICs can create which renders code impenetrable to an outside reader. In a lot of C++ codebases you basically can't read an MR without digging into half the program because what looks like a for loop is calling down into a custom iterator, or someone has created a custom allocator or _something_ that means code which looks simple has surprising behaviour.

A partial solution for that problem is to have a LOT of tests, but it manifests in other ways, e.g. figuring out the runtime complexity of a random snippet of C++ can be surprisingly hard without reading a lot of the program.

I personally find these things make go MRs somewhat easier to review than in other languages. IMHO people complaining "it's more annoying to write" (lacking stronger abstractions available in many other languages) are correct but that's not the whole story.

P.S: For Close(), you're right that most examples skip checking the error and maybe it would be better if they didn't. It only costs a few lines to have a function that takes anything Closable and logs an error (usually not much else you can do) but people like to skip that in examples.

  type Closable interface {
    Close() error
  }

  func checkedClose(c Closable, resourceName string) {
    if err := c.Close(); err != nil {
      log.Printf("failed to close %s: %v", resourceName, err)
    }
  }
takeda 9 days ago [-]
Thanks for the Close() example, that's a nice solution, although would it work if you wanted to handle an error (not just log it?)

> Standard go error handling maximises for locality. You don't see many "long range" effects where you have to go and read the rest of the code to understand what's going to happen. Ideally everything you need is in the diff in front of you.

I'm assuming you're comparing to exceptions.

I don't know about that. I think this relies on discipline of the software engineer. I can see for example someone who is strict and only uses exceptions on failures and returns normal responses during usual operation.

With Go you can use errors.Is and errors.As which take away that locality. Or what's worse, you could have someone actually react based on the string of the error message (although with some packages, this might be the only way).
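
A minimal sketch of the errors.Is pattern being referred to, with a hypothetical sentinel error and function:

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrQuotaExceeded = errors.New("quota exceeded")

    func reserve(n int) error {
        if n > 10 {
            // %w wraps the sentinel so callers far away can still match it
            return fmt.Errorf("reserve %d: %w", n, ErrQuotaExceeded)
        }
        return nil
    }

    func main() {
        if err := reserve(42); errors.Is(err, ErrQuotaExceeded) {
            fmt.Println("backing off:", err)
        }
    }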

I still see your point though, but I also think Rust implemented what Go was trying to do.

You get a Result type, which you can either match to get the data and check the error, or you can pass it downward (yes, this will take away that locality, but then the compiler will warn you if you have a new unhandled error downstream), or you can choose to unwrap without checking the error, which will trigger a panic on error.

xyzzy123 9 days ago [-]
Good points, I think it's fair to claim Result and Option are technically better (when combined with the necessary language features and compile-time checks).

Re: Close() errors yeah most times you would be better off writing the code in place if you really need to handle them. You can make a little helper if you find yourself repeating the same dance a lot. Usually there's not much you can do about close errors though.
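
If you really do need to propagate (not just log) a Close error, one common workaround is a deferred closure that writes to a named return value; a sketch with a hypothetical function:

    package config

    import "os"

    func writeConfig(path string, data []byte) (err error) {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer func() {
            // Only surface the Close error if nothing else failed first.
            if cerr := f.Close(); cerr != nil && err == nil {
                err = cerr
            }
        }()

        _, err = f.Write(data)
        return err
    }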

NomDePlum 9 days ago [-]
Not really understanding the iPhone reference or how it relates here.

Sounds like the problem you have with the error checking relates more to the development practices of colleagues than to the language.

We used defer frequently. Never considered it ugly.

'goto' (hypothesising here, as I haven't used it) and exception handling that is expected to be handled at boundary points in the codebase can be elegant, but in my experience it needs careful thought and design. It can hide all sorts of issues and lead to a lot of spurious error handling from those who don't understand the intent. That's the biggest issue I have with implicit (magical) error handling -- too many people do it poorly.

hughesjj 9 days ago [-]
Everything is explicit until someone decides to introduce a panic() somewhere... (I get that exists in more or less any language)

That said, in practice I see it following a similar philosophy to java checked exceptions, just with worse semantics.

Personally, I don't like high-boilerplate languages because they train me to start glossing over code, and it's harder for me to keep context when faced with a ton of boilerplate.

I don't hate go. I don't love it either. It's really good at a few things (static binaries, concurrency, backwards and forwards compatibility). I hate the lack of a fully-fleshed out standard library, the package management system is still a bit wonky (although much improved), and a few other aesthetic or minor gripes.

That said there's no language I really love, save maybe Kotlin, which has the advantage of the superb Java standard library without all the structural dogma that used to (or still does) plague the language (OOP only, one public class per file, you need to make an anonymous interface to pass around functions, oh wait now we have a streaming API but it's super wonky with almost C++-like compilation error messages, hey null pointers are a great idea right oh wait no okay just toss some Lombok annotations everywhere).

End of the day though a lot of talented people are golang first and sometimes you just gotta go where the industry does regardless of personal preference. There's a reason scientists are still using FORTRAN after all these years, and why so much heavy math is done in python of all things (yeah yeah I know Cython is a thing and end of the day numpy etc abstract a lot of it out of the way, but a built in csv and json module combined with the super easy syntax made it sticky for data scientists for a reason)

throwaway2037 9 days ago [-]

    > I am not used to writing code where 2/3 of it is "if err" statements.
I don't write Go, but I have seen this a lot when reading Go. It seems hard to escape. The same is true for pure C. You really need to check every single function output for errors, else errors compound, and it is much harder to diagnose failures. When I write Java with any kind of I/O, I need careful, tight exception handling so that the exception context will be narrow enough to allow me to diagnose failures after unexpected failures. Error handling is hard to do well in any language.
ZealousIdeal 9 days ago [-]
Disagree. K8s is written in it just fine, plus tons of other modern large applications in enterprise settings.
renegade-otter 9 days ago [-]
K8s was famously written in Go by ex-Java developers, and the code base was full of Java patterns.

Which kind of proves my point. Even Google struggled to write clean, idiomatic Go.

karmajunkie 9 days ago [-]
> Code generation in Golang is something I've found removed a lot of boilerplate.

Not a gopher by any stretch, but to my way of thinking code generation is literally boilerplate; that's why it's generated. Or does Go have some metaprogramming facilities I'm unaware of?

NomDePlum 9 days ago [-]
I took the comment to relate to writing boilerplate.

So unrelated to generated code, if that makes sense. The generated code I'm sure has lots of boilerplate; it's just not code we needed to consider when developing.

janosdebugs 10 days ago [-]
Not the parent, but I find that doing dependency injection or defensive programming results in a lot of boilerplate. Custom error types are extremely wordy. The language also doesn't allow for storing metadata on types, only on struct fields as tags, which seriously hampers the ability to generate code. For example, you can't really express the concept of a slice of slices of integers that need validation metadata. You'll need to describe your data structure externally (OpenAPI, JSON Schema, etc.) and then generate code from that.
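
A small illustration of that limitation (hypothetical struct; the `validate` tag is in the style of common validator libraries, not any specific API):

    package config

    type Server struct {
        Port int     `validate:"min=1,max=65535"` // fine: metadata lives on the field
        Grid [][]int // no place to say "every int must be between 0 and 255"
    }

    // Not valid Go -- type declarations cannot carry tags:
    //   type Grid [][]int `validate:"dive,dive,min=0,max=255"`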
NomDePlum 10 days ago [-]
My experience of Golang is that dependency injection doesn't really have much benefit. It felt like a square-peg-in-a-round-hole exercise when my team considered it. The team was almost exclusively Java/TypeScript devs, so it was something we thought we needed, but I don't believe we actually missed it once we decided not to pursue it.

If you are looking at OpenAPI in Golang I can recommend having a look at https://goa.design/. It's a DSL that generates OpenAPI specs and provides an implementation of the endpoints described. Can also generate gRPC from the same definitions.

We found this removed the need to write almost all of the API layer and a lot of the associated validation. We found the generated code including the server element to be production ready from the get go.

janosdebugs 9 days ago [-]
For OpenTofu specifically, having DI for implementing state encryption would have been really nice. If you look at the PR, a lot of code needed to be touched because the code was entirely procedural. Of course, one could just make a global variable, but that is all sorts of awful and makes the code really hard to test efficiently. But then again, this is a 300k-line project, which in my opinion is way beyond what Go is really good for. ContainerSSH with 35k+ lines was already way too big.
NomDePlum 9 days ago [-]
Out of interest what language do you think would have been more appropriate and why?

For that size of codebase I'd have thought code structure and modularisation would be more important than language choice.

janosdebugs 9 days ago [-]
I wish I had an answer to that, but I don't know. I only worked on projects of comparable size in Go, Java and PHP. Java was maybe the best for abstractions (big surprise), but it really doesn't lend itself to system-level stuff.
jnsaff2 10 days ago [-]
> HashiCorp always felt like a company made by actual engineers.

IDK about this; in 2018 I was in a position to pay for their services. They asked for a stupid amount of money and got none because they asked for so much.

Can't remember what the exact numbers were, but it felt like ElasticSearch or Oracle.

afavour 10 days ago [-]
Inability to price things correctly sounds exactly like engineer behavior to me…
freedomben 9 days ago [-]
Same. I wanted to pay them for their features, but the pricing was such that I actually thought it was a gag or a troll at first and laughed. When I realized they were serious, I was like Homer fading into the bushes.
raffraffraff 9 days ago [-]
Same. And I didn't feel like we were getting anything for that crazy money aside from "support" (which management wanted, pre-IPO, to make a bunch of security audits seem easier). We preferred to stick with our own tooling and services that we built around Vault (for example) rather than use the official enterprise stuff. Same goes for Terraform today: I don't feel like we need Terraform Cloud when we've got other options in that space, including home-grown tooling.
kensey 9 days ago [-]
Vault's client-based pricing was (is) the worst thing about selling it. When I was there, nobody in sales liked it except the SEs and account reps dealing with the largest customers (and those customers loved it because it actually saved them a substantial amount of money over other vendors' models like per-use or per-secret). All the customers except those very largest ones hated it. The repeated response from those who believed in the client-based pricing model, to those of us pointing out the issues with it, was essentially "if your customers don't like it, they must not understand it because you aren't doing a good enough job explaining it".

What I thought we really needed was a "starter/enterprise" dual-model pricing structure, so that smaller customers could get pricing in some unit they could understand and budget for, that would naturally and predictably grow as they grew, to a point where it would actually be beneficial to them to switch to client-based pricing -- but there seemed to be a general reluctance to have anything but a single pricing model for any of our products.

tecleandor 9 days ago [-]
But it's even more expensive now! There's no limit!
tithe 10 days ago [-]
The timing of this acquisition and the FTC's ban on non-compete agreements is perfect.
binarymax 10 days ago [-]
Usually during an acquisition like this, the key staff are paid out after two years on board the new company. So not a non-compete, but an incentive to stay and get their payout.

Most staff with no equity will leave quickly of course, so the invalidity of non-competes will definitely help those souls.

cratermoon 10 days ago [-]
"golden handcuffs" they call them.
unstatusthequo 9 days ago [-]
Ban isn’t yet in effect and would have started discussions a while back. Plus, FTC ban is already being litigated by business groups, unsurprisingly.
cedws 10 days ago [-]
I see this as an opportunity. Not to replace HashiCorp's products - OpenTofu and OpenBao are snapping up most of the mindshare for now - but to build another OSS-first developer darling company.
cube2222 10 days ago [-]
Btw. OpenTofu 1.7.0 is coming out next week, which is the first release that contains meaningful Tofu-exclusive features! We just released the release candidate today.

State encryption, provider-defined functions on steroids, removed blocks, and a bunch more things are coming, see our docs for all the details[0].

We've also had a fun live-stream today, covering the improvements we're bringing to provider-defined functions[1].

[0]: https://opentofu.org/docs/next/intro/whats-new/

[1]: https://www.youtube.com/watch?v=6OXBv0MYalY

joshmanders 10 days ago [-]
Onboardbase is a great alternative to HashiCorp Vault.

https://onboardbase.com/

mootpt 10 days ago [-]
I can only speak to the early days (I joined when there were around 11 folks), but the engineers then were top tier and hungry to build cool shit. A few years later (as an outsider) it seemed innovation had slowed substantially. I still know there are great folks there, but it has felt like HashiCorp's focus lately has been packaging up all their tools into a cohesive all-in-one solution (this was actually Atlas in the early days) and figuring out their story around service lifecycle with experiments like Waypoint (Otto in the early days). The IBM acquisition is likely the best outcome.
BossingAround 8 days ago [-]
Isn't that how it always is as any company matures? In a big company, you don't need just 5-star devs. You also need 3-star devs (and even 2-star devs) who work 9 to 3:30 (and maybe do emails/Slack between 3:30 and 4; bonus points if they study from 4 to 5). You need people who can take basic requirements and turn them into code that your 5-star devs are too bored to write. You need people who look at customer bugs, can do debugging, and submit a patch to fix a corner case your 5-star dev didn't think about 4 years ago when they were hopped up on caffeine, hopes and dreams.
andrewstuart2 10 days ago [-]
Honestly, Mitchell should still be very proud of what he built and the legacy of Hashicorp. Sure, the corp has taken a different direction lately but thanks to the licenses of the Hashicorp family of software, it's almost entirely available for forking and re-homing by the community that helped build it up to this point. E.g. opentofu and openbao. I'm sure other projects may follow and the legacy will endure, minus (or maybe not, you never know) contributions from the company they built to try to monetize and support that vision.
cjk2 10 days ago [-]
My personal opinion is it was a company for crack monkeys. Consul, Vault and Packer have been nothing but pain and misery for me over the last few years. The application of these technologies has been nothing but a loss of ROI and sanity on a promise.

And don't get me started on Terraform, which is a promise but rarely delivers. It's bad enough that a whole ecosystem appeared around it (like terragrunt) to patch up the holes in it.

skywhopper 10 days ago [-]
When a massive ecosystem springs up around a product, that means it’s wildly successful, actually.
rad_gruchalski 10 days ago [-]
The person you are replying to made no statement about the success of the product. Success and pita-ness are completely orthogonal.
cjk2 9 days ago [-]
Yeah I'm not saying it's not successful. It's just shit!
sureglymop 10 days ago [-]
Regarding Red Hat, I dearly hope someone will replace the slow complicated mess that is ansible. It's crazy that this seems to be the best there is...
deadbunny 10 days ago [-]
Saltstack is IMO superior to Ansible. It uses ZMQ for command and control. You can write everything in Python if you want, but the default is YAML + Jinja2. And it is desired-state, not procedural.

Not used it for about 5 years and I think they got bought by VMWare IIRC. The only downside is that Ansible won the mindshare so you're gonna be more on your own when it comes to writing esoteric formulas.

mixmastamyk 9 days ago [-]
I wrote a tool similar to Ansible in the old days. We both started about the same time, so it wasn't really a goal to compete with it. Later I noticed they had some type of funding from Red Hat, which dulled my enthusiasm a bit. Then Docker/containers started hitting it big, and I figured it would be the end of the niche, so I stopped.

Interesting that folks are still using it, though I'm not sure of the market share.

alecsm 10 days ago [-]
Why slow and complicated?

We're just starting to implement it and we've only heard good things about it.

dontdoxxme 10 days ago [-]
Ansible is great if you have workflows where sysadmins SSH to servers manually. It can pretty much take that workflow and automate it.

The problem is it doesn’t go much beyond that, so you’re limited by SSH roundtrip latency and it’s a pain to parallelize (you end up either learning lots of options or mitogen can help). However fundamentally you’re still SSHing to machines, when really at scale you want some kind of agent on the machine (although ansible is a reasonable way to bootstrap something else).

madcadmium 9 days ago [-]
When I managed a large fleet of EC2 instances running CentOS I had Ansible running locally on each machine via a cron job. I only used remote SSH to orchestrate deployments (stop service, upgrade, test, put back in service).
alecsm 9 days ago [-]
Well, that's exactly what we need. Our servers are growing in number and it's a pain in the ass to log into each one of them via SSH and do stuff.
dapf 10 days ago [-]
[dead]
sureglymop 7 days ago [-]
There is Mitogen [0], which helps a bit. Their website also kind of explains some of the issues:

> Requiring minimal configuration changes, it updates Ansible’s slow and wasteful shell-centric implementation with pure-Python equivalents, invoked via highly efficient remote procedure calls to persistent interpreters tunnelled over SSH. No changes are required to target hosts.

Then of course Python itself is not very performant and YAML is quite the mess too. With Ansible, you have global variables, group-level variables that can override them, host-level variables that can override those, role-level variables, play/playbook-level variables that can override those, and ad-hoc variables that can override all of the above. I am telling you, it can get incredibly messy and needlessly complicated quickly.
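As a rough mental model of how those layers shadow each other (just an illustration using Python's ChainMap, not Ansible's actual resolver; the variable names are made up):

    from collections import ChainMap

    # Earlier maps win, mirroring how more specific scopes shadow broader ones.
    role_defaults = {"app_port": 8080, "workers": 2}
    group_vars    = {"workers": 4}
    host_vars     = {"app_port": 9090}
    extra_vars    = {}  # -e on the command line lands at the top of the pile

    effective = ChainMap(extra_vars, host_vars, group_vars, role_defaults)
    print(effective["app_port"], effective["workers"])  # 9090 4

Once a handful of group_vars files and role defaults are all in play, figuring out which layer actually won is half the debugging.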

As I said though, it's still the best we've got even if not optimal. So I think it's a good idea to implement it to at least have something.

[0]: https://mitogen.networkgenomics.com/ansible_detailed.html

skywhopper 10 days ago [-]
It was this, but hasn’t been for a couple of years at least. The culture really shifted once it was clear the pivot to becoming a SaaS-forward company wasn’t taking off. As soon as the IPO happened and even a little bit before, it felt like the place was being groomed down from somewhere unique and innovative to a standardized widget that would be attractive to enterprise-scale buyers like VMware or IBM.
pjmlp 9 days ago [-]
What we are seeing with VC-driven "innovation" is only going to get worse when the Linux/BSD founders' generation is gone.
neom 9 days ago [-]
I think it's ok to tell this story now. Long long time ago when I was still at DO, I tried to buy HashiCorp. Well, I use "tried to buy" very loosely. It was when we were both pretty small startups, Joonas our Dir. Eng at the time was really into their tooling, thought it was very good plus Armon and Mitch are fantastic engineers. So I flew out from NYC to SF to meet with them "to talk". Well, I had no idea how to go about trying to buy a company and they didn't really seem that interested in joining us, so we stood around a grocery store parking lot shuffling our feet talking about how great Mitch and Armon are at building stuff and then I flew home. I think that's about as loosely as it gets when it comes to buying a company. Probably would have been a cool combo tho, who knows. Either way, they're great guys, super proud of them. <3
berniedurfee 9 days ago [-]
I was in a similar position at a company that _might_ have been able to make a good enough offer, but I could never convince the brass how amazing a company it was, and I never got any traction.

Disappointing to hear about this, Hashicorp was an amazing company. C’est la vie…

neom 9 days ago [-]
When I got back to NYC I said to my boss (our CEO) "we should probably buy HashiCorp" and he said "Yeah, probably" and then we never spoke of it again. We both knew the problem, even if we could have got it together to make an offer and had they been interested, we were growing considerably too quickly to integrate another business. It was a fun idea, and we had a good time entertaining it, but it wouldn't have worked.

My shopping list during those years was NPM, Deis, Hashi and BitBalloon (now Netlify). These days: I generally think startups should do more M&A!

ClassAndBurn 10 days ago [-]
Hashi never sold me on the integration of their products, which was my primary reason for not selecting them. Each is independently useful, and there is no nudge to combine them for a 1+1=3 feature set.

Kubernetes was the chasm. Owning the computing platform is the core of utilizing Vault and integrating it.

The primary issue was that there was never a "one click" way to create an environment using Vagrant, Packer, Nomad, Vault, Waypoint, and Boundary for a local developer-to-prod setup. Because of this, everyone built bespoke setups, and each component was independently debated and selected. They could have standardized a pipeline and allowed new companies to get off the ground quickly. Existing companies could still pick and choose their pieces. In both cases, you sell support contracts.

I hope they do well at IBM. IBM's cloud services strategy is about creating a holistic platform, so there is still a chance Hashi products will get the integration they deserve.

candiddevmike 10 days ago [-]
FWIW, "HashiStack" was a much discussed, much promised, but never delivered thing. I think the way HashiCorp siloed their products into mini-fiefdoms (see interactions between the Vault and Terraform teams over the Terraform Vault provider) prevented a lot of cross-product integration, which is ironic for how "anti-silo" their go to market is.

There's probably an alternate reality where something like HashiStack became this generation's vSphere, and HashiCorp stayed independent and profitable.

JohnMakin 10 days ago [-]
I was an extremely early user and owner of a very large-scale Vault deployment on Kubernetes. Worked with a few of their sales engineers closely on it - I was always told early on that although they supported Vault on Kubernetes via a Helm chart, they did not recommend using it on anything but EC2 instances (because of "security"; their reasoning never really made sense to me). During every meeting and conference I'd ask about Kubernetes support, gave many suggestions and feedback, and showed the problems we encountered - I don't know if the rep was blowing smoke up my ass, but a few times he told me that we were doing things they hadn't thought of yet.

Fast forward several years: I saw a little while ago that they no longer recommend EC2 as the only way to run Vault, they fully support Kubernetes, and I saw several of my ideas/pieces of feedback listed almost verbatim in the documentation (note, I am not accusing them of plagiarism - these were very obvious complaints that I'm sure I wasn't the only one raising after a while).

It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."

Me: "Well the majority of your customers will want to use it this way, so....."

It was just a very frustrating process, and a frustrating product - I love what it does, but there are an unbelievable number of footguns lurking in the enterprise version, not to mention it has a way of worming itself irrevocably into your infrastructure, and due to extremely weird/obfuscated pricing models I'm fairly certain people are waking up to surprise bills nowadays. They also rug-pulled some OSS features, particularly MFA login, which kind of pissed me off. The product (in my view) is pretty much worthless to a company without that.

neurostimulant 10 days ago [-]
They probably don't want their customers to use a competitor's product instead of Nomad.
kensey 9 days ago [-]
> I was always told early on that although they supported Vault on Kubernetes via a Helm chart, they did not recommend using it on anything but EC2 instances (because of "security"; their reasoning never really made sense to me).

The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.

In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).

You can work around this by rebuilding the container and setting the capability directly on the binary, but the upstream build of the binary and the one in the container image don't come with that set (because the user should set it at runtime if running the container image directly, and the systemd unit sets it via systemd if running as a systemd service, so there's no need to do that except for working around Kubernetes' ambient-capability issue).
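If it helps anyone hitting the same wall: a quick way to check whether a given pod or container can actually lock memory the way Vault wants is to call mlockall yourself. A minimal sketch (Python on Linux/glibc assumed):

    import ctypes, os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    MCL_CURRENT, MCL_FUTURE = 1, 2  # constants from <sys/mman.h> on Linux

    # Same syscall Vault's mlock support relies on. Without CAP_IPC_LOCK
    # (or a big enough RLIMIT_MEMLOCK) this fails with EPERM or ENOMEM.
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        err = ctypes.get_errno()
        print(f"mlockall failed: {os.strerror(err)} (errno {err})")
    else:
        print("mlockall succeeded: memory locking is available here")
        libc.munlockall()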

> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."

> Me: "Well the majority of your customers will want to use it this way, so....."

Ha, I had a similar conversation internally in the early days of Boundary. Something like "Hey, if I run Boundary in Kubernetes, X won't work because Y." And the initial response was "Why would you want to run Boundary in Kubernetes?" The Boundary team came around pretty quick though, and Kubernetes ended up being one of the flagship use cases for it.

JohnMakin 9 days ago [-]
Thanks for the detailed explanation - some of what you say sounds familiar, but this was nearly 5 years ago so my fuzzy recollection of their reasoning - I recall it being something like they didn't trust etcd being compromised on kubernetes. My counterargument to that internally was "if your etcd cluster is compromised by a threat actor you have way bigger problems to worry about than secrets"
kensey 9 days ago [-]
My vague recollection is that that concern was that the etcd store (specifically the keys pertaining to the Vault pod spec) could be modified in some way that would compromise the security of the encrypted Vault store when a Vault pod was restarted. It's been a long time since I remember that being a live concern though, so I've mostly recycled those neurons...
JojoFatsani 9 days ago [-]
(I have no idea what your infra is so don’t take this as prescriptive)

My feeling is that for the average company operating in a (single) cloud, there’s no reason to use vault when you can just used AWS Secret Manager or the equivalent in azure or GCE and not have to worry about fucking Etcd quorums and so forth. Just make simple api calls with the IAM creds you already have.
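For reference, "simple api calls" really is about this much (a minimal sketch with boto3; the secret name is a placeholder):

    import json
    import boto3

    # Uses whatever IAM credentials are already attached to the instance/task/role,
    # no Vault cluster or etcd quorum to babysit.
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId="my-app/db-credentials")
    creds = json.loads(resp["SecretString"])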

leetrout 9 days ago [-]
Caveat: the HCP hosted vault is reasonably priced and works well.

However, strong agree on using your home cloud's service.

We used Vault with Heroku and were happy.

candiddevmike 9 days ago [-]
> Caveat: the HCP hosted vault is reasonably priced and works well.

HCP hosted Vault starts at ~$1200/month, you'd have to use a metric shit ton of secrets in AWS or GCP to come close to that amount. Yes Vault does more than just secrets, but claiming anything HC sells as reasonably priced is a reach.

leetrout 9 days ago [-]
Ah, they have changed the public pricing page. Maybe we were on a grandfathered in deal. They had a starter package between free and enterprise with configurable cluster options that was $60ish a month. We heavily used the policies, certs and organization features that made it a no brainer for that price point for things outside AWS like Heroku.

We were running about $12/mo in aws secrets with no caching and no usage outside our aws services. I taught the team how to cache the secrets in the lambda function and it dropped to a buck a month or less.
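The caching wasn't anything fancy, roughly this pattern of module-level state reused across warm invocations (a sketch; the names are placeholders):

    import json
    import boto3

    _client = boto3.client("secretsmanager")
    _cache = {}  # survives across invocations while the execution environment stays warm

    def get_secret(secret_id):
        # Only hit the Secrets Manager API on a cold start or for a new secret id.
        if secret_id not in _cache:
            resp = _client.get_secret_value(SecretId=secret_id)
            _cache[secret_id] = json.loads(resp["SecretString"])
        return _cache[secret_id]

    def handler(event, context):
        db = get_secret("my-app/db-credentials")  # placeholder secret name
        ...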

If they killed off the starter package then you are right, there are only outrageous options and HCP would not be worth considering for small orgs.

elzbardico 9 days ago [-]
This^ Unless you're in a hybrid/multi-cloud environment, there's not much point in using Vault.
JohnMakin 9 days ago [-]
ime that’s a way better product to use for secrets management unless you’re trying to do very advanced CA stuff.
downrightmike 10 days ago [-]
We really need a 2.0 version that actually delivers on the promise these tools never reached because of legacy decisions.
brian_herman 10 days ago [-]
Community fork https://opentofu.org/
ohad1282 10 days ago [-]
Indeed. Owned by The Linux Foundation, so this will remain OSS forever/no rug pulls are possible.
takeda 9 days ago [-]
Tomorrow's title:

"Linux Foundation joins IBM to accelerate the mission of multi-cloud automation and bring the products to a broader audience of users and customers." ;)

pjmlp 9 days ago [-]
I wouldn't bet on that. Some Linux Foundation-hosted projects, like Zephyr, not only have nothing to do with Linux, they are also under licenses that are quite business friendly.

So yeah, one can always fork the last available version, but whether it then survives to an extent that actually matters beyond hobby coding is seldom the case.

How many Open Solaris forks are actually relevant outside the company that owns those forks?

Also IBM, Microsoft, Oracle,.... and others that HN loves to hate are already members.

orf 10 days ago [-]
Back in 2015 I discovered a security issue with some Dell software[1]. I vividly remember getting an email about a job opportunity, based entirely on this, from a company with a strange name that, after some googling, turned out to make a thing called Vagrant. They seemed super nice, but I was far too young and immature to properly evaluate the opportunity, so after a few emails I ghosted them out of fear of the unknown. In 2015 they had 50 employees and had just raised a $10 million Series A[2].

Regardless of various things that have happened, or things that could have been, the company has pushed the envelope with some absolute bangers and we are all better for it, directly or indirectly.

Regardless of what the general opinion is of Hashicorp’s future post-IBM, they made an impact and that should be celebrated, not decried or sorrowed over for lack of a perceived picture perfect ending.

Such is life.

1. https://tomforb.es/blog/dell-system-detect-rce-vulnerability...

2. https://www.hashicorp.com/about/origin-story

neurostimulant 10 days ago [-]
I guess you weren't active on Hacker News around 2013, because Vagrant was absolutely popular here back then. Mitchell Hashimoto showed up a lot too when we were talking about Vagrant. If only you had procrastinated more, you might have ended up as employee #51 :)
wmf 10 days ago [-]
Official: https://newsroom.ibm.com/2024-04-24-IBM-to-Acquire-HashiCorp...

Confirming what everybody knows, IBM views HashiCorp's products as Terraform, Vault, and some other shit.

amateurhuman 10 days ago [-]
bogantech 9 days ago [-]
But what about the dozens of us using Nomad and Vagrant?
aragilar 9 days ago [-]
"Additional products – Boundary for secure remote access; Consul for service-based networking; Nomad for workload orchestration; Packer for building and managing images as code; and Waypoint internal developer platform" - Vagrant doesn't even get a mention...
foxandmouse 10 days ago [-]
I expected this when the Terraform license changed. Not IBM specifically, but it was obvious they weren't interested in (or able to) continuing with their founding vision.
paxys 10 days ago [-]
Hashicorp had a $14 billion IPO in Dec 2021 and was trading at ~$4.7 billion right before the acquisition announcement. At that point it doesn't matter what the company or its founders want or what their long term vision is. Shareholders are in charge and heads are going to roll if the price doesn't get back up quick by any means necessary.
bigstrat2003 10 days ago [-]
Yet another example of why I think it's a mistake to take your company public. If I put in the work to build up a successful business, no way would I ever let it be turned into a machine that ignores long term health for the sake of making the stock price go up.
typeofhuman 10 days ago [-]
You have no idea what decisions you'll make if you ever were to get that successful.

I'm sure you've broken many promises to your younger self.

financetechbro 10 days ago [-]
If companies didn’t go public regular people would not be able to invest in innovation. As much as people hate it, public markets democratize access to investments
tootie 10 days ago [-]
True but no company has a vested interest in the democratization of investment. IPOs are purely about getting paydays for founders.
freedomben 9 days ago [-]
*and early investors. Mostly early investors in many cases.
ZealousIdeal 9 days ago [-]
crux of the problem is the SV model is completely broken and leads to these cycles. wish it were more about sustainable progression and not rapid half-baked innovation to achieve paydays for greedy founders/investors
geodel 9 days ago [-]
Huh, they won't get a payday if no one uses their products. And there are plenty of examples of failed products. If people have the ideas and execution capability for sustainable progression, they can very well try outside the valley. It is not like companies don't start outside the valley.
ZealousIdeal 9 days ago [-]
Which is why the majority of startups fail and then a lucky unicorn comes along and funds the next cycle. Look at how many poor ideas got massive investment on the bet of a payout; so many blockchain companies, and none solved a real-world problem. Lots of potential investment in things that could have greatly helped many more people in the world was instead poured into a technology looking for a problem.
freedomben 9 days ago [-]
I agree that the vast majority of the blockchain companies were "technology looking for a problem," (or at least, technology looking for another problem besides money ledger) but blockchain really was (is) a pretty damn good technology. The most unfortunate part of it is that the only thing it may really stick for is DRM :-(
tootie 9 days ago [-]
I guess it's not possible to fuel "hypergrowth" this way, but why not just issue debt? Let the market buy in to your growth with a healthy dividend and reduced risk.
financetechbro 7 days ago [-]
This is misguided and myopic. There are many valid reasons for a company to go public besides “pay day to founders”, here are a few

1. Easier access to capital markets and liquidity in general
2. Marketing/publicity provided by equity research coverage
3. Legitimacy/transparency and trust building for customers (public filings, so outsiders can gauge the health of the business)
4. Thanks to number 3, companies have an easier time getting larger corporations as clients or partners

Just because you don’t understand something doesn’t make it bad

krainboltgreene 9 days ago [-]
Yeah, they sure innovated with all that public money they got over... three years? What exactly did they release in the last three years?

Also, what "democratic access" did people get? The ability to buy at $80 a share and then eventually sell it at $30?

Does anyone really believe this kind of stuff anymore?

portaouflop 9 days ago [-]
What is there to believe?

Capitalism is not a religion, there is no belief involved.

krainboltgreene 9 days ago [-]
I'm referring to the parent comment, but pithy reply.
berniedurfee 9 days ago [-]
If I put in the work to build a successful business and someone offered me a hundred million dollars for it, I’d have a hundred million dollars.
paxys 10 days ago [-]
While I would think that, realistically most of my principles are for sale for a few billion dollars.
chrishare 10 days ago [-]
If anyone is listening, I'll gladly undercut this guy by a few orders of magnitude
elzbardico 9 days ago [-]
You need to let some head space for the inevitable bargaining, let's act as sensible cartel members and not undercut ourselves in a race to the bottom, capisce?
JeremyNT 9 days ago [-]
> Yet another example of why I think it's a mistake to take your company public. If I put in the work to build up a successful business, no way would I ever let it be turned into a machine that ignores long term health for the sake of making the stock price go up.

It's a mistake if you care about the long term health of a company. But... why should you?

Hashicorp had a great run, and contributed a lot of great open source products over the years. Today, their products have large user bases and healthy forks seem likely. The founders and early employees cash out, and it's a win for everybody involved.

Nothing lasts forever.

ekianjo 9 days ago [-]
If you go thru VC you are expected to go public to generate returns for the early investors. It's baked in.
vdfs 10 days ago [-]
My fear of missing out by not using any HashiCorp product is officially over
nkotov 10 days ago [-]
Certainly an interesting turn of events. I really enjoy using Terraform (and Terraform Cloud) for work, but the license changes made me cautious about integrating it any further.
mywittyname 10 days ago [-]
What was the licensing changes? I see a lot of references to it as though it was common knowledge, but I'm not aware of them.

Edit: found something: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...

rahkiin 10 days ago [-]
Nobody else is now allowed to make a public offering of a Terraform-using product. That is, you cannot provide Terraform as a service. GitLab, Azure DevOps, etc. all have to move to something else, as they cannot provide Terraform builders without a special license.

This was a major blow to the participating open source community. The license now used is also vague and untested.

williamDafoe 10 days ago [-]
Also you should know that while the terraform language is okay (albeit a little too dogmatic in a functional programming sense for my tastes), the terraform cloud product (runners for terraform executions) is pretty terrible, slow, and overpriced, snatching defeat from the jaws of victory based on the terraform language.

This encouraged at least 4 companies to launch terraform-cloud-like products, and rather than compete and provide better service, Hashicorp responded by saying "take it or leave it, internet!" and they closed the open-source license on the interpreter (BUSL)... At my previous company we were driven away from terraform cloud and into the arms of env0 ... when it often takes 10 minutes for an execution to begin and you have no other executions in progress you realize that the terraform cloud SAAS product is just a total joke...

thedougd 10 days ago [-]
Totally agree. I had to switch to Scalr. I’m now paying more than I did with Terraform Cloud, but I’m happier and finally have all the features I needed.

Those who take this to the next level by offering enterprise-like features such as change windows and approval gates from Jira/ServiceNow will land whales.

kevindamm 10 days ago [-]
Yeah, they went from a more permissive license (Mozilla MPL) to a less permissive one (BUSL) but I can kind of understand why. I can also understand why the OSS community is upset, and after Hashicorp went after OpenTOFU recently, I'm siding more with the OSS community here.

Before the license change, another project (Pulumi) built something that was basically a thin wrapper on Terraform and some convenient functionality. They claim they tried to submit PRs upstream. Hashicorp loudly complained about organizations that were using their source without making contributions back when they changed to BUSL. I wasn't close enough to be aware of details there, but maybe there were other groups (I can think of Terragrunt, too, but I'm not sure they're included in the parties Hashicorp was complaining about. Terragrunt did side with OpenTOFU after the license change, though). This also means cloud providers can't stand up their own Terraform cloud service product as it could interfere with the BUSL license.

When the license was updated to BUSL, several contributors forked the last MPL-licensed version into OpenTF, then renamed to OpenTOFU. Some say that Hashicorp should have gone full closed-source to own their decision. I think they knew they were benefitting greatly from several large corporations' contributions for provider-specific configuration templates and types.

Then, earlier this month (two weeks ago?) Hashicorp brought a case against OpenTOFU, claiming it had stolen code from the BUSL-licensed version, with OpenTOFU outright denying the claim. We'll see how that shakes out, but it shows that Hashicorp wasn't merely worried about copyright and business/naming concerns (a big part of why other BUSL-licensed projects chose the license). I don't know if the upcoming M&A had anything to do with their license decision, but I kind of doubt it? Maybe others here have more context or are more familiar with these matters than I am.

glenngillen 10 days ago [-]
It’s been widely speculated, months ago when the change happened, that Terraform has become the scapegoat for this licensing change. The actual impetus was IBM reselling Vault. IBM then helped push the OSS fork of Vault (OpenBao) and this acquisition just brings this whole license change thing to a convenient conclusion for IBM.
kensey 9 days ago [-]
Almost all the talk I saw internally, from well before to well after the license change, about competitors "taking advantage" of our open-source versions was about TFC competitors like Spacelift, Scalr, etc. and Terraform OSS. The Vault competitor mentioned most often was Akeyless but for reasons less like the TFC competition. I saw IBM Cloud Secrets Manager mentioned maybe once or twice.

I'm sure IBM Cloud's Vault offering was part of the decision, but from where I was sitting, it didn't look like the reason or even the primary reason.

geodel 9 days ago [-]
Well, folks are already migrating from Terraform to OpenTofu. I am sure similar open source projects for HashiCorp's other products, unencumbered by IBM's business model, will be out pretty soon.

So all in all I think it's another big win for open source, even if a little indirect.

hbogert 10 days ago [-]
> By joining IBM, HashiCorp products can be made available to a much larger audience, enabling us to serve many more users and customers.

I'm really wondering who is kidding who here. Is it IBM or Hashi?

empressplay 10 days ago [-]
IBM has its mitts into finance, defence, aerospace -- and these industries generally stick to IBM / IBM sanctioned products. So with IBM selling Vault / Boundary (in particular) they will get better adoption.
op00to 10 days ago [-]
In my experience IBM uses the sexy stuff (used to be OpenShift) to get meetings then sells the same old boring IBM software and services after the initial meetings.
JojoFatsani 9 days ago [-]
Dude some 28 year old marketing rep wrote that copy, don’t take it seriously
ilrwbwrkhv 10 days ago [-]
It's a shame that HashiCorp gave up. The govt bans foreign competition like TikTok, and homegrown competition doesn't have the stamina. Doesn't bode well for capitalism.
dralley 10 days ago [-]
"Give up" is not really the appropriate terminology, the board of directors are the only ones that really have a say in acquisitions, and if the offer was given with a sufficient premium their own choice is limited by willingness to face shareholder lawsuits if they turn it down.
lma21 10 days ago [-]
> IBM will pay $35 per share for HashiCorp, a 42.6% premium to Monday's closing price

Is that an insane premium or what?

NewJazz 10 days ago [-]
I think typical premium is about 20% for acquisitions.

The amount may have been negotiated prior to this month's downturn, which Hashicorp was hit pretty hard by (they had about a 10% fall based on what I'm seeing).

mdasen 10 days ago [-]
Yea, I think it often depends on where a company's stock has moved recently. IBM's offer is still below HashiCorp's 52-week high. That means there's probably a lot of current investors who likely wouldn't approve a deal at a 20% premium. If your stock is near its 52-week high, then a 20% premium looks a lot more reasonable.

April-August of last year, HashiCorp was regularly above a 20% premium over Monday's close. Many investors might think it would get back there without a merger - and it had been higher. IBM is offering $35/share which is close to the $36.39 52-week high. In some cases, investors are delusional and just bought in at the peak. In other cases, a company's shares have been under-valued and the company shouldn't sell itself cheaply.

I don't think one can really have a fixed percent premium for acquisitions because it really depends. Is their stock trading at a bargain price right now? Maybe people who believe in the stock own a lot of the company and don't have more capital to buy shares at the price they consider to be a bargain - but would vote against selling at that bargain price even if they can't buy more. They're confident other investors will come around. An acquiring company wants to make an offer they think will be accepted by the majority of investors, but also doesn't want to pay more than it has to. If the stock has been down and investors think it's a sinking ship, they don't have to offer much of a premium. If the stock is up a ton and investors sense a bubble, maybe they don't have to offer much of a premium. If the stock has been battered, but a lot of shareholders believe in it, then they might need to offer more of a premium.

kensey 9 days ago [-]
Analyst consensus I've seen on long-term price has been floating around $32-34 per share. Take that with as much salt as you think it needs but it's at least interesting that it's within shouting distance of (but not over) the IBM offer.
rwmj 10 days ago [-]
It was a 63% premium when IBM bought Red Hat. Sadly I'd sold my RSUs about 2 days before :-(
dralley 10 days ago [-]
Same, but with ESPP stock, and it was a few months earlier. Ouch.
NewJazz 9 days ago [-]
I still voted no.
10 days ago [-]
jedberg 10 days ago [-]
Yeah, congrats to the people who held the stock yesterday!
mkovach 10 days ago [-]
So, will they now add JCL extensions to HCL? Will pulling TCL into the fold be the next plan?
phlakaton 10 days ago [-]
They'll do what they should have done years ago: give up on all this fuddy duddy syntax and just go with XML. ;-)
doubled112 10 days ago [-]
Was this supposed to make my eye twitch? Don’t give them any ideas.
tecleandor 9 days ago [-]
doubled112 9 days ago [-]
Oh God, of course they do.
phlakaton 8 days ago [-]
The good news is, you no longer need Dhall or some crazy scripts to generate your Terraform files. Just a bit more XML and an XSLT stylesheet oughta do it!
10 days ago [-]
liveoneggs 10 days ago [-]
Although I think they have very different use cases, this means IBM owns both Ansible and Terraform, both claiming to be IaC.
aodin 10 days ago [-]
Although there is significant overlap between the two, I prefer Terraform for resource provisioning and Ansible for resource configuration.
DerpHerpington 10 days ago [-]
Same, but now IBM will be able to merge them to create Terrible (or Ansiform). ;)
rahkiin 10 days ago [-]
I like the joke. But a better integration between terraform and ansible for config would be pretty neat.
janosdebugs 9 days ago [-]
How would you imagine that working? I think a lot of people would love that, but I have seen very few specifics so far.
freedomben 9 days ago [-]
Same. I view them like peanut butter and jelly. Terraform is my preference for new stuff and everything that isn't a stateful VM, and Ansible is my preference for managing manually created resources (which I try very hard to avoid, but always end up with some) and for managing VMs (even VMs created by Terraform). For stateful services (like a database cluster) Ansible is so much better it's not even a question, and for cloud resources (s3 buckets, managed databases, etc) terraform is a much better approach. I've never felt the two were really competitors even though there is some gray-area where they overlap.
cdchn 10 days ago [-]
Soon to be built into Ansible Automation Platform. Should only cost $100 per managed resource.
indigodaddy 10 days ago [-]
Is the implication that we won’t be able to freely use ansible-playbook anymore, and/or development will end on the “freely” available one?
angulardragon03 10 days ago [-]
No, the implication is that Terraform will become prohibitively expensive to use. AAP has been around for a while, as Red Hat’s downstream of (iirc) AWX. It’s also quite pricey, like Terraform may become.
indigodaddy 9 days ago [-]
Thank you
matthewtse 9 days ago [-]
It's really sad to me that Hashicorp never found a monetization model that worked.

100% of the companies I've worked for over the last 6 years used Terraform; there really wasn't anything else out there, and though there were complaints, it generally worked.

It really provided a lot of value to us, and we definitely would have been willing to pay.

Though every time we asked, we wanted commitment to update the AWS/GCP providers in a timely fashion for new features, and they would never commit and tried to shove some hosted terraform service down our throats, which we would never agree to anyway due to IP/security concerns.

matthewtse 9 days ago [-]
Perhaps an open source fork of Terraform, where the cloud providers themselves maintain the provider repos, is the correct end-state. AWS started doing that in the last few years, assigning engineering resources to the open source TF provider repos.

That way, the profit beneficiaries bear the brunt of the development/maintenance costs.

teeray 10 days ago [-]
Thereby really putting the Corp into HashiCorp.
candiddevmike 10 days ago [-]
I wonder how this will work with Red Hat. Traditionally, Red Hat and HashiCorp competed more directly than other IBM portfolio products, fighting over the same customer dollars.
throwup238 10 days ago [-]
Number one rule of megacorp M&A: Juice quarterly numbers first, ask capital allocation efficiency questions never.
achristmascarl 10 days ago [-]
terraform changed to business source license pretty recently too: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...
worik 10 days ago [-]
> terraform changed to business source license pretty recently

Now we know why!

glenngillen 10 days ago [-]
I suspect you have the causality on this backwards: https://news.ycombinator.com/item?id=38579175
freedomben 9 days ago [-]
Wow, I read that thread with great interest at the time, and reading it now knowing about the acquisition is quite the mind blowing experience.
jrockway 10 days ago [-]
I worked at a startup that got acquired by a big company and we switched our custom proprietary license back to Apache 2 after acquisition. The reason we switched in the first place was because it's what we thought was best when we were out on our own. Being owned by a hardware company, you can have the software for free. (We still sell licenses and have a cute little license key validator, though.)
mathverse 10 days ago [-]
IBM will gut everything to the bone and send most of the jobs to India.

There will be nothing worth using pretty soon, so we will all move to the next big FOSS thing.

op00to 10 days ago [-]
There is plenty of money to milk from existing customers using Vault. For everyone else, yes - time to move on.
geekodour 10 days ago [-]
I've spent last 3 days learning nomad for my homelab setup, hope things stay more or less the same for it :)
wmf 10 days ago [-]
Nomad will indeed stay the same if all future development ceases.
chucky_z 10 days ago [-]
Nomad has a remarkably strong community for its size. I'm almost positive it will continue to live on in some form, even if completely hard-forked.

I know if nobody else does anything I will do something myself, personally.

I love Kubernetes, however I feel like things like Nomad and Mesos have a space to exist in as well. Nomad especially holds a special place in my tech-heart. :)

orthecreedence 10 days ago [-]
> Nomad especially holds a special place in my tech-heart.

Same. I'm not a fan of the recent licensing changes and probably won't use it for any new installations, but Nomad enabled me to be an entire ops team AND do all my other startupy engineer duties as well with minimal babysitting. It really just works, and works fantastically for what it is. Nomad is like the perfect mix of functional and easy to manage.

sunshine-o 9 days ago [-]
The question is what to replace it with?

There doesn't seem to be enough momentum to create an MPL fork, but at the same time we have a gap between "Docker Compose is enough" and running Kubernetes. There are many situations where going Kubernetes (or even lighter k0s/k3s-type setups) does not make any sense.

My guess is that no organisation which can afford to dedicate resources to contribute to or create a fork needs Nomad. So we end up with a big gap in the ecosystem.

orthecreedence 9 days ago [-]
Right, it's unfortunate. Maybe IBM will open the licensing back up and pour some resources into Nomad? I doubt it, though.
achristmascarl 10 days ago [-]
terraform changed to business source license pretty recently too: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...
brian_herman 10 days ago [-]
When they did this the community forked it into https://opentofu.org/
playingalong 10 days ago [-]
In what sense did they side with OpenTofu? Genuinely curious.
lolinder 10 days ago [-]
I think you meant to reply to this one:

https://news.ycombinator.com/item?id=40149230

dralley 10 days ago [-]
Considering IBM sided with the fork, I suspect it'll be reverted for most or all of Hashicorp's projects.
rezonant 10 days ago [-]
I bet they'll organize it under Red Hat, and Red Hat will apply their open source policy to it, and that will involve reverting to OSI approved licenses
blcknight 10 days ago [-]
That doesn’t seem like what’s happening from first appearances. Looks like it’ll remain separate for now which means no RH influence to fix the licensing boondoggle.
op00to 10 days ago [-]
Red Hat is a shell of itself. There is no appetite for taking on Terraform when Ansible is their ugly baby.
tristan957 10 days ago [-]
I've found they complement each other. One provisions infra, the other customizes that infra for your needs.

But I could be totally off-base.

rad_gruchalski 10 days ago [-]
Yes. I did this a while back: https://github.com/radekg/terraform-provisioner-ansible. It received some contributions from IBM. Unfortunately, HC never wanted to maintain it, and then in 0.15 they replaced provisioners with providers or plugins (can't remember anymore). I had a couple of discussions with their OSS head for TF at the time, but the bottom line from them was "why don't you rewrite it in your spare time". The problem was their replacement didn't give access to the underlying communicator (your SSH or WinRM). So I threw in the towel.
op00to 8 days ago [-]
You’re right, I was off base with my comment. They are indeed complementary.
calgoo 10 days ago [-]
I mean, they bought Red Hat, and killed CentOS; I can say after 25 years in enterprise IT, I have zero trust in IBM to keep any open source licensing "open".
dralley 10 days ago [-]
IBM didn't kill CentOS.
calgoo 10 days ago [-]
They were under IBM ownership at the time, so IBM did kill it. The software now branded as CentOS is basically Fedora, which is fine for desktops, but never felt good on servers. CentOS was perfect for a lot of us sysadmins back in the day to use on our own servers etc., while using Red Hat at work. We also used it for anything PoC, or servers that did not require support. These days licensing is easier using models like AWS subscriptions, but we used to buy licenses in bulk, and if there were not enough licenses, we had to do the whole procurement dance.

Side note, in the 12 years that I used Red Hat at work, we used the support 2 times, and both times they forwarded some articles that we had already found and implemented. However, enterprise always demands some support contract behind critical systems to blame in case of disaster.

Honestly, who knows what would have happened if Red Hat was left as an independent entity, but we do know for sure that they did make the changes after the acquisition.

dralley 10 days ago [-]
I work at Red Hat. IBM was not involved in the decision to kill CentOS.

>The software now branded as CentOS is basically Fedora

CentOS Stream (what replaced CentOS) is vastly more similar to CentOS than Fedora.

It's CentOS with rolling patches instead of bundling those same patches into minor releases every 6 months. Only the release model is different from RHEL / CentOS, otherwise it's built the same and holds to the same policies in terms of testing, how updates are handled and compatibility.

Fedora on the other hand is very, very different. Packages are built with different flags, different defaults (e.g. filesystems), very different package versions, a different package update policy (even within one major release Fedora is much more aggressive than CentOS Stream / RHEL / CentOS), etc.

I understand that having a near-exact replica of RHEL supported for 10 years was very convenient, and that the way the EOL was announced, and the timelines, sucked massively. But CentOS Stream is suitable for a large number of the use cases where CentOS was used previously; it is not "basically Fedora". It's more like 98% RHEL-like, whereas Fedora is doing something else entirely.

dralley 9 days ago [-]
I also should have mentioned that the CentOS Stream lifecycle is 5 years whereas Fedora's is 13 months

5 years is less than 10, but it's a lot less different than 10 vs 1

colechristensen 10 days ago [-]
That is the kind of thing that could have been a negotiation tactic for purchasing Hashicorp, not necessarily done in good faith.
rezonant 10 days ago [-]
Hashicorp's relicensing could also have been a tactic to get the sale to happen.
kensey 9 days ago [-]
IBM didn't just fork Vault to make a statement -- IBM Cloud Secrets Manager was (openly) built directly on Vault OSS.
dzonga 10 days ago [-]
funny how these things are sometimes.

technically, couldn't IBM have hired Mitch when he was still doing Vagrant?

and put him in a closet somewhere. Given how Mitch cranks out products -- it could technically have been cheaper than $6.4bn, but then again IBM ain't hurting for cash.

dbalatero 10 days ago [-]
> technically, couldn't IBM have hired Mitch when he was still doing Vagrant?

That sort of vision/foresight seems fairly rare, I'd think particularly rare at an IBM type place.

objektif 10 days ago [-]
It is extremely rare I would say. Also when you can buy a proven product why risk?
primax 10 days ago [-]
Simply put, IBM doesn't have the kind of foresight and restraint to do something like that and not fuck it up
heipei 10 days ago [-]
Here's hoping they don't run great tools like Consul and Nomad into the ground somehow. If I'm ever forced to ditch Nomad and work with a pile of strung-together components like k8s I might just quit tech altogether.
devhead 10 days ago [-]
I wonder if this may mean we'll see Terraform's dogmatic approach of declining to implement much-requested functionality in the name of "it doesn't fit our ideals" go by the wayside. I hope so; otherwise, OpenTofu here I come. Or, well, I'm sure someone's got an ML infra tool in the works by now.

I always have mixed feelings when a software company like this grabs their bag and leaves the community that helped build them, high and dry; good for them but still bad for everyone else nine out of ten times.

0xbadcafebee 10 days ago [-]
So long, and thanks for all the time we spend maintaining and fixing our Terraform code rather than just deploying an instance manually once. (It's been great for my job security!)
bloopernova 10 days ago [-]
If this accelerates migration away from Terraform towards a standard, open, IaC platform, then it's a good thing. Something like the JSON version of Terraform that can be generated by different tools, but an open standard instead.
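Terraform already accepts a JSON flavor of its configuration (.tf.json), so the "generated by different tools" part mostly comes down to emitting the right shape. A rough sketch, with placeholder resource values:

    import json

    # Emit a minimal file in Terraform's JSON configuration syntax.
    config = {
        "resource": {
            "aws_s3_bucket": {
                "artifacts": {"bucket": "example-artifacts-bucket"}
            }
        }
    }

    with open("generated.tf.json", "w") as f:
        json.dump(config, f, indent=2)

The missing piece is the "open standard" part: the schema is still whatever Terraform and its providers define.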

Be "interesting" to see what happens to the recently-renamed Terraform Cloud (now Hashicorp Cloud Platform Terraform :eyeroll:)

Edited to add: I'm guessing the feature I want added to the terraform language server is never going to happen now. Terraform's language server doesn't support registries inside Terraform Cloud, it doesn't know how to read the token in your terraformrc. bleh.

vundercind 10 days ago [-]
> Something like the JSON version of Terraform that can be generated by different tools, but an open standard instead.

God, please no. The worst thing about all these tools is the terrible formats they keep choosing.

Given the directions we ("cutting edge" programmers and server ops folks) have chosen to go instead, leaving XML behind was a big mistake.

I’d prefer something better, but yaml and json are so terrible that going back to xml would be an improvement.

bloopernova 10 days ago [-]
You'd write in a language designed for humans, and that would get translated into a language for computers. In other words, JSON.

What are your reasons for disliking JSON?

vundercind 10 days ago [-]
Terrible, awful type system. And I just mean at the level of primitive types it can represent, nothing fancy. It doesn’t even have a date type, let alone things like decimals.
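A quick illustration with the stdlib json module:

    import json
    from datetime import date
    from decimal import Decimal

    for value in (date.today(), Decimal("19.99")):
        try:
            json.dumps(value)
        except TypeError as exc:
            # Neither dates nor decimals exist in JSON's type system.
            print(f"{value!r}: {exc}")

    # Numbers round-trip as binary floats, so precision quietly drifts.
    print(json.loads("0.1") * 3)  # 0.30000000000000004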

That’d be my argument specifically against using it to communicate between pieces of software—at least if you’re hand-writing it there’s the excuse that it’s kinda, sorta easy to read and write (at least, people say that—IMO it’s only true for tiny, trivial examples, but that may be a matter of taste)

My take on it as a hand-written config/data language is that it’s simply absurd. JSON-schema is terribly unwieldy, but also the lingua franca, so if you want to keep your sanity you write something better to define your data structures (probably in some actual programming language) and generate JSON schema to share structure definitions among readers. Oh my—why?

mike_hearn 9 days ago [-]
JSON isn't designed for humans. It's designed, originally, to be eval()d in a browser. HOCON is a JSON-type-system compatible language designed for human config files:

https://hocon.dev

bloopernova 9 days ago [-]
I didn't express myself very clearly, my apologies.

I meant that you'd write in a human centric language that would then be translated to JSON. Not editing JSON directly.

e12e 9 days ago [-]
> language designed for humans

Hardly - it's a hack as a data-only subset of JavaScript, as a sibling comment mentions.

It has no support for comments (even though JavaScript does). No support for optional trailing commas. No integers. No enums.

bloopernova 9 days ago [-]
I didn't write very clearly, my apologies.

I meant that you'd write in a human centric language that would then be translated to JSON. Not editing JSON directly.

0xbadcafebee 9 days ago [-]
Data formats don't typically have comments because they are (supposed to be) generated by machines and read by machines. The .DATA sections in binaries don't have comments... Network protocols don't have comments... Pickle files don't have comments... JSON is supposed to just encode data.

It goes like this:

- XML was created to allow humans to write human-friendly data encoding (AKA "markup") that had lots of features they wanted programs to take advantage of.

- It turned out the format they chose was great for the machines, but really annoying for humans.

- They refused to change the format for humans, so humans got sick of it, and decided the problem was it was "too complicated" (as opposed to merely "too clunky").

- So they created some other formats, which weren't in any way better, but were simpler, so they could ignore the fact that they made the formats too clunky.

- Some formats' designers were opinionated, and decided things like "comments are an anti-pattern in a data format", so they took those features out.

- So now humans could manage the formats better. But they still wanted programs to take advantage of useful features - like multiple data types. So they implemented multiple data types in the formats.

- But the humans forgot that humans are still pretty dumb, and that most people never read specifications. So the users of the new format would use it incorrectly, and run into the different data types accidentally (like true/false or null in JSON, or "The Norway Problem" in YAML -- see the snippet after this list), and claim the problem was the format, and not their own ignorance of it. (isn't the human ego amazing?)

- So the humans, not having learned from history, invented yet more data formats, with even fewer features, so that they would not continue to screw up the things they themselves invented. And so you get things like "restricted yaml" or "toml" (which is basically an .ini file, a format from 50 years before).
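The YAML one in concrete terms (PyYAML shown, which follows YAML 1.1's boolean rules by default):

    import yaml

    # YAML 1.1 treats no/NO/off/on/yes as booleans, so a bare country code
    # silently becomes False -- the "Norway Problem".
    print(yaml.safe_load("country: NO"))    # {'country': False}
    print(yaml.safe_load("version: 1.10"))  # {'version': 1.1} -- parsed as a float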

A data format that allows comments is called a "configuration file", and is supposed to be primarily read and written by humans, and requires a machine to implement a parser for it. Those are not always easy to write, which is why most people today have chosen to use data formats rather than configuration formats. But that has the unintended consequence of humans not understanding that types in data formats are a thing.

Back in the day we wrote the configuration format for the human, and used data formats for machine<->machine communication. Some of those data formats were very easy for humans to read, but that was largely an accident of the fact that most programs had records so simple that we separated everything with newlines... not an engineering decision as much as a "hey, it's really easy to just read an entire data record as everything up to '\n'" thing.

Over time people have sort of become confused about what each format is and how it should be used, and what for. The data format churn (and constant griping) will continue, forever, because humans never learn their history.

dralley 10 days ago [-]
Agree on YAML, disagree on JSON.
0xbadcafebee 10 days ago [-]
So basically you want OpenTofu. It's open source, you can make it do whatever you want, and there's a >0% chance your PRs will get accepted (compared to with HashiCorp)
JojoFatsani 9 days ago [-]
Maybe you’d like Pulumi?
jakozaur 10 days ago [-]
Should we migrate to OpenTofu?
wmf 10 days ago [-]
Since IBM loves the Linux Foundation it's not impossible that Terraform and OpenTofu will merge like GCC and EGCS back in the day.
jillesvangurp 9 days ago [-]
Exactly. Now that they own it, they can just roll back the license change and tap into the rest of the world doing the heavy lifting in terms of development. Hashicorp retired from that role when they changed the license.

Win win for IBM. They offer stability to their big corporate clients and they get to resell the work done by third parties. The BSL license is an obstacle to that because it means they have to reinvent wheels internally. Changing the license back means they can gut the R&D department at the price of a simple license change and focus instead on sales, support, and consulting.

ChrisArchitect 10 days ago [-]
Some more discussion yesterday ahead of the deal: https://news.ycombinator.com/item?id=40135303
dang 10 days ago [-]
Thanks! Macroexpanded:

IBM nearing a buyout deal for HashiCorp, source says - https://news.ycombinator.com/item?id=40135303 - April 2024 (170 comments)

justinsaccount 10 days ago [-]
Not unexpected, I saw a comment a ways back when they started with the BSL stuff that it had nothing to do with terraform, but was a response to IBM selling Vault.
bayindirh 9 days ago [-]
Good for them.

Also, it's probably time to archive my Vagrant Machines repository. I guess all the HashiCorp tools will be rolling downhill for personal use.

alando46 10 days ago [-]
Gg hashicorp
DLA 9 days ago [-]
Sad. Here comes HashiWatson. IBM will totally trash Hashi’s awesome products in a sea of “enterprise” trash.
declan_roberts 10 days ago [-]
Congrats to Mitchell and all others involved.

Not a bad place to end up after automating class sign-up at UW!

helloericsf 10 days ago [-]
Why IBM, not Kyndryl? It's interesting to see how it fits into the overall IBM org.
jll29 9 days ago [-]
IBM is trying to increase its "AI revenue" through acquisitions, a standard MBA playbook move (although analysts see through this and often ask specifically for "organic" revenue instead to tease that apart from revenue via acquisitions).

In the past, IBM was a technology leader, and it probably still has substantial, excellent talent in-house, but from what I'm hearing it has become less appreciative of its researchers and engineers: for instance, my IBM friends lost their patenting-related bonuses several years ago.

Also, the Watson debacle (trying to monetize the Watson brand and the (impressive) Watson Jeopardy challenge results by quickly acquiring a bunch of stuff, only to then sell it as "our Watson AI technology") didn't help bolster its reputation, but rather harmed it further.

Companies like IBM and HP should go back to their roots, value science and engineering, take on bold blue-sky projects (don't leave those only to Musk!), and lead by example. Perhaps this could happen, but only with an engineer-scientist at the top instead of professional managers or bean counters (I'm not attacking the performance of any individual here, as I have not been following the recent leadership activities of either company).

It is unlikely, IMHO, that an acquired company can change the culture of the acquirer. The only time I've seen this happening was Nokia benefitting Microsoft's culture, but that's because they made Nokia's CEO Microsoft's CEO, which is not going to happen with any likelihood in IBM's case.

dade_ 9 days ago [-]
IBM is a finance company with a tech brand. Business units are black (profitable) or red. They buy them, juice their profits, eventually they extract too much, they turn red. They bundle a few husks together, sell them off eg. Lenovo, or IPO them eg. Kyndryl.
ergonaught 9 days ago [-]
Sad.

Next up, Canonical, though they’ve been tilting sideways without an acquisition to push them.

pjmlp 9 days ago [-]
I am betting on them being acquired by Microsoft, although nowadays Microsoft has their own Linux distributions, so maybe not.
bzmrgonz 9 days ago [-]
Let's hope they don't try the Broadcom shenanigans on terraform!!
CSMastermind 10 days ago [-]
Well I will immediately be pivoting my company off of their products.
rzr999 10 days ago [-]
What happens to shares? Are people getting IBM stock or cash?
KingOfCoders 9 days ago [-]
As predicted here by several people on the license change.
renegade-otter 10 days ago [-]
The copy could not be more IBM: https://www.hashicorp.com/blog/hashicorp-joins-ibm

Accelerate! Multi-cloud! Automation!

bschmidt1 10 days ago [-]
Question: Is the tldr of companies like these that they sell enterprise server software? And often own the hardware too (data centers)? And then sell a bunch of consulting services on top of that to Fortune 500s and governments? It's tempting to think "How are these guys even relevant anymore?" but IBM's making $60B+ a year with over $10B cash on hand, apparently from mostly "consulting services".

For a lot of developers including me, I never think about IBM or HashiCorp (or Oracle, SAP, etc.) and it's hard to imagine why someone would want to use their software compared to something newer, friendlier, cheaper, and probably faster. Is it just relationships?

Just curious how customers are actually getting value from an IBM or a HashiCorp or an Oracle.

kevindamm 10 days ago [-]
Terraform does help with managing medium-large fleets, and a lot of special sauce is the structured types corresponding to cloud platforms (dubbed "providers") and the different services they offer. You could write your own configuration language and launcher but Terraform has been tested against many setups and can manage rolling restarts and other deployment methods. It's modular so you can define the configuration of a single server and then say "bring up 20 of these, use this docker image, name them thus," etc.

Vault for securely storing keys is also a convenient system component.

Both can be spun up in production without having to go through Hashicorp directly, but they also offer a service for managing the current state of the deployment (some aspects of the system are not queried at runtime and must be kept in a lock file of sorts, and coordinated with others doing any production changes). Some teams will coordinate using an S3 folder or some other ACL'd shared storage instead of relying on Hashicorp Cloud.

I find it's the closest thing to a public version of the service management tools I grew used to within Google, and it has been a driving force for the DevOps movement. I think something else could come along and do it better but it does seem like a lot of upkeep to retain parity with all the cloud services' products. I hope OpenTofu is successful, competition helps.

bschmidt1 9 days ago [-]
Yeah I know of Terraform (for me it was via AWS) but I just wonder how it's that valuable. For personal use, I never drank the koolaid on IaaS to begin with. Always found PaaS to be a nicer experience and I like that it actually simplifies DevOps, doesn't add complexity like an AWS or GCP does. I figure if I want more control over the server I can just use a Linux on-prem (no cost) or virtual server and I can fully control the machine - where IaaS like AWS/GCP just feel like expensive jargon hell with too many products. For a larger org sure, you need regional deployments, IAM, and some other stuff - but mostly stuff that is peripheral to code and its hardware requirements.

My favorite DevOps setup is my Raspberry Pi home server running Raspbian, love this thing - WiFi, touch screen so I can hold it like a mobile device or just set it down somewhere while it's serving several APIs, websites, etc. all the time including a local business in SF. Haven't stopped or restarted it in months.

I look at some of these big, old behemoths, and just don't get it. Take Oracle - when you really get into what they "do" it's like... oh... so, a database? Right now they offer clone services of the other cloud providers too, and some other things, but it's mostly just those huge consulting contracts. I just wonder how they get them (and at those rates) if not for relationships, it doesn't seem like their technology is particularly good.

Personally I run stuff like React sites on Vercel, backends on a mix of my Raspberry Pi and Heroku, and 1 thing still in GCP that I can't wait to port out of there. Still looking for a home for my LLMs. As an individual developer, I will probably embrace PaaS and convenience more and more with regards to DevOps, but yeah interesting to see where open-source Terraform goes - would be cool to see companies doing more customized infra internally instead of everyone using AWS.

JojoFatsani 9 days ago [-]
Hashicorp didn’t make much money because they gave their products away and their professional services (tf cloud, vault enterprise etc.) are inferior or not enough of a value add over rolling your own.

Setting up a remote state in S3/Dynamo takes 5 minutes with a publicly available module and solves most of the problems TF cloud does.
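
For the curious, the backend block is roughly this (bucket and table names are made up; the DynamoDB table is only used for state locking):

    terraform {
      backend "s3" {
        bucket         = "example-tf-state"       # placeholder bucket
        key            = "prod/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "example-tf-locks"       # placeholder lock table
        encrypt        = true
      }
    }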

arcticgeek 10 days ago [-]
I guess this shows us how overvalued the HCP IPO was.
10 days ago [-]
roschdal 10 days ago [-]
Overpriced, haha.
beastman82 10 days ago [-]
Agree, this number seems enormous. Maybe there is a big stream of hosted-services revenue that I'm just not participating in.
shawabawa3 10 days ago [-]
Hashicorp is public. It's about a 15% premium on what their stock was trading at, so it can't be too overpriced (or at least, no more overpriced than it already was).
dilyevsky 10 days ago [-]
It’s ~10x their revenue, which is not crazy in SaaS. If anything it’s underpriced, and only because of current economic headwinds.
paulddraper 10 days ago [-]
It's "only" 15% above market. (Which is not unusual for an aquisition.)
praveenweb 10 days ago [-]
A 12x multiple for a cloud SaaS company is not typically overpriced. I was surprised at how low this multiple is. It could be due to the current economic situation, plus the licensing changes and a lack of product moat hitting at the wrong time.
atlantasun33 10 days ago [-]
What happens to the employees of Hashicorp?
hi-v-rocknroll 9 days ago [-]
Half are laid off. The other half get briefcases and pocket protectors and are told to wear ties (excluding Jerry Garcia brand) and black dress shoes.

https://web.archive.org/web/20110220214126/http://www-03.ibm...

kickofline 10 days ago [-]
Will we see Red Hat / IBM Terraform?
op00to 10 days ago [-]
Never. Red Hat is focused on Ansible.
Khaine 9 days ago [-]
Fuck. IBM is where companies go to die
tiffanyh 10 days ago [-]
MangoCoffee 10 days ago [-]
Based on what happened to RedHat/CentOS, I hope there'll be forks like Rocky Linux for all of HashiCorp's products.
thinkmassive 10 days ago [-]
https://opentofu.org/

https://openbao.org/

Backed by the Linux Foundation

hacknews20 9 days ago [-]
Oh dear! Any buyer but them!
ceocoder 10 days ago [-]
Once again, thank you 'mitchellh for Vagrant, I'm sure you have heard this many times before but it really changed the way we worked for the better in every way.
osigurdson 9 days ago [-]
Terragone
spxneo 10 days ago [-]
so what are the alternatives now? preferably MIT licensed on github
op00to 10 days ago [-]
I’m going back to CFengine!
EMCymatics 10 days ago [-]
How is Red Hat doing?
op00to 10 days ago [-]
I left Red Hat a bit after the IBM acquisition, and in my experience the management bullshittery was encroaching about a year after the deal closed. I hear their sales team are all frustrated and leaving due to IBM’s interference in Red Hat deals.
dralley 10 days ago [-]
Contrary to the HN narrative, pretty OK. Not perfect, I have complaints, but most of them aren't related to IBM specifically.

IBM doesn't assert their will upon Red Hat anywhere near as strongly as HN seems to think they do, and in particular the whole story about IBM killing CentOS is BS.

the_real_cher 10 days ago [-]
Is OpenTofu better?
devjab 10 days ago [-]
Hashicorp does so much more than terraform, but I don’t think OpenTofu is better than terraform. I’m not sure that was ever really the interesting question, though; I think the main competition to terraform was/is things like Bicep.

I know the decision makers in our shop spent quite a lot of time deciding between the two. They finally decided on Bicep after a number of what have probably been the most boring workshops I’ve ever attended. I’m fairly certain they are very happy with that decision now, though. Not so much because big blue is evil, but because now we’re only beholden to one evil (Microsoft) and not two.

I don’t actually think Microsoft or IBM are evil. They are just not ideal from a European enterprise perspective, because they are subject to an increasing amount of anti-non-EU legislation and national/internal security issues.

hi-v-rocknroll 9 days ago [-]
TF, Vault, Packer, Consul, Nomad.

Waypoint and Boundary don't seem all that useful.

Vagrant has fallen by the wayside, supplanted by Docker and K8s. Vagrant was the origin, but it quickly went from FOSS to FOSS-washed when it made VMware support a premium-only, closed-source option.

IBM is indistinguishable from Progress and Broadcom... it buys things and milks them while they decline.

Microsoft just lacks taste and any sense of accountability for all of the vulnerability and exploit damage it has inflicted, and continues to inflict, on the world.

9 days ago [-]
DrStartup 9 days ago [-]
it's dead Jim
notnmeyer 10 days ago [-]
booooooooooo hiss
rdl 10 days ago [-]
Now to find Vault alternatives.
dangtony98 10 days ago [-]
You should look into Infisical: https://github.com/Infisical/infisical

Disclaimer: I’m one of the founders.

gagoako1995 9 days ago [-]
[dead]
8 days ago [-]
10 days ago [-]
coachEnvy 8 days ago [-]
[dead]
10 days ago [-]
10 days ago [-]
krooj 10 days ago [-]
[flagged]
yevpats 10 days ago [-]
[flagged]
ilrwbwrkhv 10 days ago [-]
Terrible news. New startups should be buying IBM. Not the other way around.
dralley 10 days ago [-]
IBM has ~280,000 employees. There's no sensible way for a company like IBM to be acquired by a startup.
soraminazuki 10 days ago [-]
Like how Google bought Doubleclick and definitely not the other way around?
miningape 10 days ago [-]
Oh if only - it seems like somehow the shittiest culture manages to outlast the other and entrench itself inside the business. I know HN likes to blame this on the stock market forcing short-term revenues, but I think it goes deeper - the good "culture" employees actively flee these environments.

Boeing acquiring McDonnell Douglas is a classic example of this exact scenario: "McDonnell Douglas bought Boeing with Boeing's money."

soraminazuki 9 days ago [-]
Both can be true at the same time, though. With one reinforcing the other.
vundercind 10 days ago [-]
Heh, interesting example because DoubleClick kinda did take over Google.
rank0 10 days ago [-]
Lmao. Which startup has ~$200B to buy out IBM? Folks are loopy in startup land!
racl101 10 days ago [-]
A startup like Microsoft? lol. IBM is pretty fucking huge still.
cqqxo4zV46cp 10 days ago [-]
Huh? HashiCorp is a large, post IPO company. They aren’t a ‘new startup’. You just think that they’re flashy.
solardev 10 days ago [-]
I didn't know IBM still had money to throw around like that. What do they even do these days? Who are their customers?
belter 10 days ago [-]
This type of comment appears here every time the name IBM shows up, but it is more symptomatic of the bubble that a part of HN lives in.

Think of the core IT infra of most developed countries, most of the e-banking and core messaging infra of your large banks and insurance companies, plus billions per year in consulting services revenue.

https://www.ibm.com/products

https://en.wikipedia.org/wiki/List_of_IBM_products

TillE 10 days ago [-]
Indeed, there's also a whole world of B2B software which you may be almost entirely unaware of if you've spent your whole career in consumer-facing web/app development.
bluGill 10 days ago [-]
It is more that IBM used to be a big name in areas where hackers would be expected to be, and they have left all of those behind. There are lots of other big companies that none of us would recognize, but IBM is a name we all know as once somewhat important and now not.
cqqxo4zV46cp 10 days ago [-]
Uhm, the term “hacker” in this context, is, itself, just a coded way of saying “cool developer in the same circles as me”.

Again, HN users are in a bubble, and HN users think that they’re very trendy.

bluGill 9 days ago [-]
Right. If you are in the HN bubble, then 30 years ago IBM was a big name in computing: the inventor of the PC and OS/2, the maker of a great keyboard, and the vendor of a bunch of weird systems we never touched. They sold their PC and keyboard businesses, and let OS/2 die. We think some of those weird systems still exist, but those were never very relevant unless you had to work on them.

HN users are trendy. If you didn't grow up in the 1990s or before, though, you may not remember just how picked on this type of crowd was. Now, while we are never exactly the "in" crowd, we are respected; being a "nerd" or "geek" is now an acceptable thing. We have come a long way, and that is trendy enough for us.

Waterluvian 10 days ago [-]
Yep. My dad sold IBM server software for most of his career. His customers were banks, railways, and governments.
busymom0 10 days ago [-]
I worked for IBM in early 2010s and Universities were also one of their customers for business analytics.
silisili 10 days ago [-]
This may be a slightly more clear wiki link, especially looking at post 2000 -

https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...

MangoCoffee 10 days ago [-]
A lot of banks still use the mainframe. IBM got out of the PC/server game but not the mainframe, and IBM is a big player in that game.
nullindividual 10 days ago [-]
Airlines, as well. COBOL is still running business and flight critical services.
lolinder 10 days ago [-]
They bought Red Hat in 2019 for $34B, which means their customers are at a minimum every RHEL customer.

https://www.redhat.com/en/about/press-releases/ibm-closes-la...

timr 10 days ago [-]
They did $14.5 billion in revenue in Q1, with a 54% gross margin, split across software, consulting and infrastructure:

https://finance.yahoo.com/news/ibm-releases-first-quarter-re...

onlyrealcuzzo 10 days ago [-]
IBM had ~$14B cash to start the year.

Additionally, you don't need the full purchase price in cash to buy the company. You can do leveraged buyouts, etc.

czbond 10 days ago [-]
IBM M&A person: "We'd like to buy you in all IBM stock! <jazz hands>"

Hopefully someone at HashiCorp: "Hell no, cash please"

Disp4tch 10 days ago [-]
IBM stock is basically cash. Limited growth potential, very stable, fat dividend.
mywittyname 10 days ago [-]
Press release says it's all cash at $35/sh using cash-on-hand for the purchase.
cqqxo4zV46cp 10 days ago [-]
Ugh, and have you actually looked at IBM's share price? Or is this just because IBM isn't cool to you? A bit rich for a community that'll go work for shares in some dinky web app startup.
10 days ago [-]
czbond 9 days ago [-]
Yes I did, it was overextended by about 30 points yesterday
frognumber 9 days ago [-]
As far as I can tell, the vast majority of the universe is completely incompetent with IT, but needs a lot of boring things done.

If you're a shipyard, an oil company, a bank, an automaker, etc. you still need software to manage things like inventory, employees, logistics, and similar, and you have zero expertise to do it in-house. You also have zero expertise to find a qualified vendor.

IBM is a safe bet.

That's a huge market.

op00to 10 days ago [-]
IBM is essentially a large bank with a side business of tech. IBM is known for financially complex deals that are highly lucrative. They make their money by taking advantage of inefficiencies in the largest enterprises' purchasing and technology teams.
playingalong 10 days ago [-]
Just guessing - large corporate bespoke software / integrations projects?
arp242 10 days ago [-]
IBM did ~$62 billion in revenue last year, with a ~$7.5 billion net income. They employ ~280,000 people.

I think they can find a few billion lying around without having to turn over the sofa cushions.

listenallyall 10 days ago [-]
Other than a period around 2012-13, IBM's market cap is higher than it's ever been.
hnthrow289570 9 days ago [-]
That's alright. HashiCorp stuff was 2nd tier compared to any offerings from the cloud providers themselves just because those providers' own solutions would get preference (obviously!). And cloud is the environment for 95% of app development these days.

If HashiCorp stuff is destined to die, something else will eventually rise to fill its niche if it's still valuable.

You can always count on technology to churn for no good reason.

To avoid sounding completely pessimistic: don't discount an IBM comeback either, for the same churning reasons.
