
In a bit related way I've been trying to push the idea of "engineers need to be grumpy."

Not so much that we should be unhappy or not enjoy our lives, but that our job is to find problems and fix them. In that setting, being "grumpy" is recognizing the problems. If you're dogfooding your product, you should be aware of its strengths and faults. Fixing the faults gives clear direction that allows you to make your products better. You don't have to reach perfection; such a thing doesn't even exist. Instead we do this iteratively, because the environment is dynamic, and so are the customer's needs.

I would say that this is loyalty to the company and the product, though not to the politics. It is clearly loyal to the product, which we as engineers are in charge of creating. But it is also loyal to the company because the things we build are the very foundation of the company itself. Being loyal to politics may keep you your job, but it is a short-term solution that reinforces a culture of politics itself.

To managers: don't dissuade engineers who raise issues or complaints. These are not "no" in the language of engineers, they are "yes, but". Encourage those conversations because that's how we resolve the issues and build the things. The managerial role is to help those conversations not get stuck or too heated. Your job is to help maintain engineers' passions because that's what pushes the product, and consequently the business (and consequently your success), forward. But that passion is fragile. STEM is a creative endeavor through and through. If not given time to "play around" and try new things then that passion dies. When that passion dies your innovation turns lazy and your only goal is to make something bland like a thinner phone for the 7th year in a row.


That's because a next-token predictor can't "forget" context. That's just not how it works.

You load the thing up with relevant context, pray that it guides the generation path to the part of the model that represents the information you want, and pray that the path of tokens through the model outputs what you want.

That's why they have a tendency to go ahead and do things you tell them not to do.

also IDK about you but I hate how much praying has become part of the state of the art here. I didn't get into this career to be a fucking tech priest for the machine god. I will never like these models until they are predictable, which means I will never like them.


Former TLM that was involuntarily reclassified as an EM because I had too many reports. I'm from old-line (pre-2011) Google, so was an engineer back when the TLM role was one of our unique competitive advantages.

I have a lot of thoughts on this. IMHO, it's appropriate for the state that Google is in now, where it is a large mature conglomerate, basically finance & managerially driven, built around optimizing 10-K reports and exec headcount & control. It's not a particularly good move from the perspective of shipping great software, but Google doesn't really do that anymore.

The reason is that software construction and management is unintuitive, and concrete details of implementation very often bubble up into the architecture, team structure, and project assignments required to build the software. TLM-led teams have a very tight feedback loop between engineering realities and managerial decisions. Your manager is sitting beside you in the trenches, they are writing code, and when something goes wrong, they know exactly what and why and can adapt the plan appropriately. Most importantly, they can feed that knowledge of the codebase into the new plan. So you end up with a team structure that actually reflects how the codebase works, engineers with deep expertise in their area that can quickly make changes, and management that is nimble enough to adapt the organization to engineering realities rather than trying to shoehorn engineering realities into the existing org structure.

Now, as an EM with 10+ reports, I'm too far removed from the technical details to do anything other than rely on what my reports tell me. My job is to take a slide deck from a PM with 10 gripes about our current product, parcel it out into 10 projects for 10 engineers, and then keep them happy and productive while they go implement the mock. It will take them forever because our codebase is complex, and they will heroically reproduce the mock (but only the mock, because there is little room for judgment calls in say resize behavior or interactivity or interactions with other features, and nobody's holding them accountable for things that management didn't have time or bandwidth to ask for) with some hideously contorted code that makes the codebase even more complex but is the best they can do because the person who actually needed to rewrite their code to make it simple reports up through a different VP. But that's okay, because the level of management above me doesn't have time to check the technical details either, and likewise for the level of management above them, and if it takes forever we can just request more headcount to deal with the lack of velocity. Not our money, and it's really our only means of professional advancement now that product quality is impossible and doesn't matter anyway.

Ultimately the value of the TLM role was in that tight bidirectional feedback between code, engineers, and management. As a TLM, you can make org-structure decisions based on what the code tells you. As an EM, you make org-structure decisions based on what your manager tells you. But at some point in a company's lifetime, the code becomes irrelevant - nobody reads it all anyway - and the only thing that matters is your manager's opinion, and by transitivity, your VP's opinion. A flattened org structure with as many reports per manager as possible is a way for the VP to exert maximal control over the largest possible organization, mathematically, and so once that is all that matters, that is the structure you get.


I see there has been a “spirited discussion” on this. We can get fairly emotionally invested into our approaches.

In my experience (and I have quite a bit of it, in some fairly significant contexts), “It Depends” is really where it’s at. I’ve learned to take an “heuristic” approach to software development.

I think of what I do as “engineering,” but not because of particular practices or educational credentials. Rather, it has to do with the Discipline and Structure of my approach, and a laser focus on the end result.

I have learned that things don’t have to be “set in stone,” but can be flexed and reshaped, to fit a particular context and development goal, and that goals can shift, as the project progresses.

When I have worked in large, multidisciplinary teams (like supporting hardware platforms), the project often looked a lot more “waterfall” than when I have worked in very small teams (or alone), on pure software products. I’ve also seen small projects killed by overstructure, and large projects killed by too much flexibility. I’ve learned to be very skeptical of “hard and fast” rules that are applied everywhere.

Nowadays, I tend to work alone, or on small teams, achieving modest goals. My work is very flexible, and I often start coding early, with an extremely vague upfront design. Having something on the breadboard can make all the difference.

I’ve learned that everything that I write down, “ossifies” the process (which isn’t always a bad thing), so I avoid writing stuff down, if possible. It still needs to be tracked, though, so the structure of my code becomes the record.

Communication overhead is a big deal. Everything I have to tell someone else, or that they need to tell me, adds rigidity and overhead. In many cases, it can’t be avoided, but we can figure out ways to reduce the burden of this crucial component.

It’s complicated, but then, if it were easy, everyone would be doing it.


We’re already at bus factor of close to zero for most bank code written in cobol lol

He also published animated video versions:

45min https://youtu.be/xguam0TKMw8

5min https://youtu.be/BB2r_eOjsPw


Indeed. With AI, juniors can create horrible software faster, and in bigger quantities. Based on my experience, AI really only enhances your existing talent, and if you have none, it enhances the lack of it, since you still need to know how your AI slop fits into the larger system, but if you're a junior you most likely don't. Couple that with skipping the step where you must actually understand what you copy and paste from the internet for it to work, and you also lengthen the time it takes for a developer to actually level up, since they do much less actual learning.

That's at least my experience working with multiple digital agencies and seeing it all unfold. Most juniors don't last long these days precisely because they skip the part that actually makes them valuable - storing information in their head. And that's concerning, because if to make actually good use of AI you have to be an experienced engineer, but to become an experienced engineer you had to get there without AI doing all your work for you, then how are we going to get new experienced engineers?


This all reminds me a lot of the early 2000's, when big corporations thought they could save a lot of money by outsourcing development work to low-income countries and have their expensive in-house engineers only write specifications. Turns out most of those outsourcing parties won't truly understand the core ideas behind the system you're trying to build, won't think outside the box and make corrections where necessary, and will just build the thing exactly as written in the spec. The result being that to get the end product you want, the spec needs to be so finely detailed and refined that by the time you get both specification and implementation to the desired quality level, it would have been the same amount of effort (and probably less time and frustration) to just build the system in-house.

Of course outsourcing software development hasn't gone away, but it hasn't become anywhere near as prevalent and dominant as its proponents would've had you believe. I see the same happening with AI coding - it has its place, certainly for prototyping and quick-and-dirty solutions - but it cannot and will not truly replace human understanding, ingenuity, creativity and insight.


To repeat my litany of shirking:

1) We don't need software tests because we have a QA team.

2) We don't need a QA team because we can just watch for user bug reports.

3) We don't need to watch for user bug reports because anything that actually matters will show up in the usage or sales stats.

4) We don't need to watch the usage or sales stats because it's redundant with our general financial picture.

5) We don't need to watch our general financial picture because we already have a canary in the form of repo men hauling furniture out of the lobby.

Different companies have different standards for when they start noticing and caring about a bug.


This site is the best visual explainer for QR code generation.

https://www.nayuki.io/page/creating-a-qr-code-step-by-step

It reproduces what the article is saying, with more detail.


The short version is I got lucky. The slightly longer version is that at the small company I was doing tech lead things because I learned I could get more done to help people by helping organize the other engineers. Then when my boss quit to go sail around the world I was offered his job. I was now a "manager" but initially I still acted like a tech lead, writing a little code, taking care of the database, that sort of thing. The nice thing about the small company was they gave me space to learn and lots of mentorship. I got to see all the numbers on the business side and changed from being the "Let's write something new in Rust!" kind of developer to being the "But what's the simplest thing we can do to help our customers now" kind of manager.

Then I got a call from a friend I had worked with many years ago who was staffing up a new org. He needed people and had a very big budget. This is where the "career" part came in. I had a job with people I really liked, making okay money and could probably work there until normal retirement age. The new job offer was much more risky for much more money and I was always bad about taking risks. So I took a lot of long walks with my wife and we talked about the upsides and the downsides (upside: _so_ much money. Downside: What if I'm no good at the job?) and in the end I took the job. The job was in another state and my son only had two more years at one of the best high schools around so I got a small apartment and flew home every three weeks.

It was an incredible learning experience. My new manager jokingly explained to me that my new job was people and if I was looking at code I wasn't doing my job. I took that to heart. I met some amazing people. I went to an insane number of meetings. I also got paged awake at 2:00am to be low-key yelled at by a group of Irish people because a computer in India wasn't getting enough network traffic and had run out of entropy. I think I helped some junior engineers with stories like "Ha! You think that was a screw up, let me tell you about my friend who turned off amazon.com for 6 minutes many years ago." And I learned the trick of going toe to toe with a senior architect in a design review meeting by asking "Okay, but what if these two things I'm picking at random happen at the same time?"

In the end it worked out for me. I saw other people go from SDE to SDM and then go back to SDE after a year because it wasn't a good fit for them. They were better engineers for having spent a year in management, but they didn't like it at all. Also I'm typing all this with the benefit of hindsight and probably making it sound easier than it was. I made lots of mistakes in my career, but going into management turned out okay for me.

And now I'm trying to write a Smalltalk VM in Rust and no one in Ireland is waking me up at 2:00am. I got lucky.


"every team working to improve the customer experience - guided by leadership priorities and initiatives"

The fundamental issue is that in almost all larger companies, upper management does not trust that their employees are either intrinsically motivated to do a good job, or are smart enough to determine what "a good job" is.

So rather than having a chain of trust from upper management to middle management to individual contributors, they seek to create a measurable control system. This inevitably replaces people's intrinsic motivation to do a good job with an extrinsic motivation, which only poorly represents the company's actual goals. At this point, most people are no longer trying to do a good job, they're instead trying to make their numbers look good.

Upper management has effectively replaced real, meaningful work with a game where everybody tries to score points, and the people who don't participate in that game are eventually stackranked out of the company.


“The fundamental issue is that in almost all larger companies, upper management does not trust that their employees are either intrinsically motivated to do a good job, or are smart enough to determine what "a good job" is.”

That’s what I concluded a long time ago. Upper management has a deep distrust of their employees and acts accordingly. They will hire consultants or external people before they will listen to their employees. I think part of it is that a lot of them don’t really believe in anything themselves and only blindly try to fulfill goals set by their CEO or board of directors.


I just fundamentally don't believe that most people in most companies have an understanding of the company's priorities that is worse than what an OKR encodes. In fact, my experience is that most people in larger companies believe, and are correct in believing, that they are forced to intentionally make worse decisions because better decisions have negative impacts on their measured performance.

I've been part of an extremely effective 200 people company that got acquired by a 4000 people company. We all understood why we were acquired, we built a platform that solved a fundamental problem the larger company had.

After the acquisition, this larger company's OKR and measurement system was implemented for our teams.

We initially all ignored the system and went on as usual, starting to implement our platform. Initially, things went well, we made steady progress and started migrating legacy projects to our platform.

Then, the annual stack ranking firings happened. Some of our best engineers were fired. Seeing this, many other top performers started looking for jobs immediately. The ones that got hired elsewhere started poaching even more top performers. The ones left started playing the numbers game to avoid being fired.

Within a year, most people went from trying to solve the larger company's problem to optimizing their numbers. Within another year, the platform initiative had completely failed and was abandoned, with most of the remaining people being fired or integrated into other teams.


  magick convert IMG_1111.HEIC -strip -quality 87 -shave 10x10 -resize 91% -attenuate 1.1 +noise Uniform out.jpg
This will strip ALL EXIF metadata, change the quality, shave 10 pixels off each edge just because, resize to xx%, attenuate, and add noise of type "Uniform".

Some additional notes:

- attenuate needs to come before the +noise switch in the command line

- the worse the jpeg quality figure, the harder it is to detect image modifications[1]

- resize percentage can be a real number - so 91.5% or 92.1% ...

So, AI image detection notwithstanding, you can not only remove metadata but also make each image you publish different from the others - and certainly very different from the original picture you took.

[1] https://fotoforensics.com/tutorial.php?tt=estq
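
If you publish more than a handful of images, a small wrapper script can randomize the resize percentage per file so that no two outputs are processed identically. This is just a sketch, not a recommendation of specific parameters: it assumes ImageMagick 7's magick binary is on the PATH, and the input/output filenames are placeholders.

  import random
  import subprocess
  from pathlib import Path
  for src in Path(".").glob("*.HEIC"):  # example input set
      # pick a slightly different real-numbered resize percentage for each file
      resize = f"{random.uniform(90.0, 93.0):.1f}%"
      out = src.with_suffix(".jpg").name
      subprocess.run([
          "magick", str(src),
          "-strip", "-quality", "87", "-shave", "10x10",
          "-resize", resize,
          "-attenuate", "1.1", "+noise", "Uniform",  # attenuate must precede +noise
          out,
      ], check=True)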


The hardest part from my perspective is keeping focus on the fact that you need actual, live customers to keep everything going.

I don't know about others, but I find myself increasingly putting tech aside and riding the ass of the sales teams at smaller tech companies. It's really hard to keep going on a roadmap when you have nothing to validate it against.

As you grow a company, it is very easy to get drawn into the politics of org charts, teams and policies. You do need some of this, but if you are adding a middle management layer at 12 employees you might be on a non-ideal path.

Everything always seemed to fall into place automatically when things were going well with the customer pipeline. Pressure from actual, paying customers is the magic trick for a healthy company. Without this, you will invent fake non-problems to solve. This is where virtually all of the drama and bad technology choices come from in my experience.


IMHO the number 1 threat to look out for is premature 'maturation'. The strength of a small business is in agility and getting things done. As you grow there will be people urging you to become 'more professional' and introduce more process, management and a formalized approach. After all, the big successful companies work this way, and you want to be big and successful, no?

Wrong! All this will achieve is an explosion in overhead, deferred responsibility and learning, and as you get mired down in the spiral of ever more CYA instead of agile GTD, your top delivery performers will leave as an MBA cadre and their process drones take over, leaving you wondering where it all went wrong.

This is the famous 'wall' businesses typically hit around 50 headcount. Meanwhile, truly successful scalers postponed 'growing up' as long as possible and then some, keeping the startup mentality even when they hit 1000+ people and many product lines.


It is best explained by common scenarios an Italian merchant in the Middle Ages experienced. The basic concept is Assets == Liabilities + Equity. Positive assets are entered on the left-hand side (debit), and positive liabilities are entered on the right-hand side (credit). In accounting, debit and credit just mean left and right.

1. Merchant takes out a loan for $5,000 and receives $5,000 in cash. • Assets (Cash) increase by $5,000 (Debit). • Liabilities (Loan Payable) increase by $5,000 (Credit). • Equity remains unchanged.

2. Merchant buys inventory for $1,000 cash. • Assets (Cash) decrease by $1,000 (Credit). • Assets (Inventory) increase by $1,000 (Debit). • Total assets remain unchanged, and liabilities and equity are unaffected.

3. Merchant sells all inventory for $1,500 cash. • Assets (Cash) increase by $1,500 (Debit). • Assets (Inventory) decrease by $1,000 (Credit) (recording cost of goods sold). • Equity (Retained Earnings) increases by $500 (Credit), representing the profit ($1,500 sales - $1,000 cost).

4. Customer1 deposits $500 in cash for future delivery of goods. • Assets (Cash) increase by $500 (Debit). • Liabilities (Unearned Revenue) increase by $500 (Credit). • Equity remains unchanged.

5. Customer1 transfers half of the future delivery of goods to Customer2. • No changes to assets, liabilities, or equity occur at this point. The merchant’s obligation to deliver goods (reflected as Unearned Revenue) is still $500 but now split between two customers (Customer1 and Customer2). Internal tracking of this obligation may be updated, but the total financial liability remains the same.
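
The same mechanics are easy to sanity-check in code. Below is a minimal sketch (Python, purely illustrative, not real accounting software) that posts the scenarios above as journal entries, enforces that every entry's debits equal its credits, and keeps signed balances where asset accounts end up positive (debit balances) and liability/equity accounts end up negative (credit balances):

  from collections import defaultdict
  balances = defaultdict(float)  # signed: debits add, credits subtract
  def post(description, debits, credits):
      # double-entry rule: total debits must equal total credits
      assert abs(sum(debits.values()) - sum(credits.values())) < 1e-9, description
      for account, amount in debits.items():
          balances[account] += amount
      for account, amount in credits.items():
          balances[account] -= amount
  # 1. Take out a $5,000 loan.
  post("loan received", debits={"Cash": 5000}, credits={"Loan Payable": 5000})
  # 2. Buy $1,000 of inventory with cash.
  post("buy inventory", debits={"Inventory": 1000}, credits={"Cash": 1000})
  # 3. Sell that inventory for $1,500 cash, booking the $500 profit to equity.
  post("sell inventory", debits={"Cash": 1500},
       credits={"Inventory": 1000, "Retained Earnings": 500})
  # 4. Customer1 prepays $500 for a future delivery.
  post("customer deposit", debits={"Cash": 500}, credits={"Unearned Revenue": 500})
  # Scenario 5 changes nothing here: the $500 Unearned Revenue liability is merely
  # split between two customers in the merchant's internal records.
  assert abs(sum(balances.values())) < 1e-9  # the books always balance

Running this leaves Cash at $6,000 (debit), Inventory at $0, Loan Payable at $5,000 (credit), Unearned Revenue at $500 (credit), and Retained Earnings at $500 (credit), matching the scenarios above.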


Worked as a PM at a well known tech company, great relationship with my director. He leaves, new director comes in and within three months I'm on a PIP. I'm given a list of work products to create for a new offering that has been discussed, which on the face of it are entirely reasonable, and the standard 30 days.

100% ghosted by my Director. Weekly 1:1s? He no-shows 2 of them. Near zero input. In "fairness", I knew what was happening, but had some tiny semblance of good faith. Hah.

Final meeting, he shows up with HR. "So we've been talking about (when?) and I have just completed my final review of the documents you created (bear in mind these have had significant input from multiple stakeholders who, not for nothing, generally approved), and I am still left believing that your output is not up to the quality or depth that we expect from our PMs, so..."

I pulled up the receipts, because why not? I think he may not even have known that GDocs provides good metrics on documents, including who has viewed, and when, and how many times. I did this with the HR person sitting awkwardly there. "You reviewed this document? GDocs says you've never accessed it. And this one? Never accessed. What about this deck? Never accessed."

At that point he turned his cam off and clumsily handed it over to the HR person. They asked if I'd like to follow up, but that the company would support the Director's decision. Fine, didn't expect any different. They did acknowledge that they could see too that he hadn't done anything to even present a token perspective that the PIP was anything other than firing with 30 days notice.

Lives and learns, we do.


And that usually breaks, as another reply said, when the manager of the two people delegates responsibility for the problem getting solved to one of those two people instead of keeping it for themselves.

If you have a boss who cares whether the task gets done, you can’t make excuses about how it violates your moat to have to do it. Shut up and help your coworker. Now. Or you’re on PIP.

The coworker who isn’t getting useful collaboration gets blamed for their soft skills, when it’s the boss’s soft skills that should have reigned over both.


>> [Why not build programmer performance measurement tooling?] It's the job of a manager to know what their reports are up to, and whether they're doing a good job of it, and are generally effective. If they can't do that, then they themselves are ineffective, and that is the sort of thing that is the responsibility of THEIR manager, and so on up the line.

Agreed wholeheartedly, but for slightly different reasons. To wit, laziness and Goodhart's law. [0]

In the absence of infinite time, automation will excuse a lack of manager curiosity, as other competing tasks absorb the freed time.

Consequently, most managers with automated dashboards showing performance metrics won't use those dashboards... plus all the person-to-person work they were previously doing. They'll only use those dashboards.

Which then slowly but inexorably turns your employees into dashboard-optimization drones via operant conditioning.

Helping a colleague doesn't show up on the dashboards? Fuck that. Digging into what looks like a security vulnerability isn't on the sprint board? Fuck that.

Which is incredibly corrosive to quality, creative system design.

And then, because this is the work reality you've created, the creative folks you really want working there bail for greener pastures, and you're left with bottom of the barrel talent and retention problems.

[0] https://en.m.wikipedia.org/wiki/Goodhart's_law


Ah, the contract relationship. Minimize your output while maximizing your ROI. I think this works fine if you plan on being in the software "service" industry, kind of like the plumber. It helps if job mobility/demand is high so that when the "product" decisions tank the company/product, you just shrug and move on. That's what the plumber would do. In fact, if your customer does the stupid thing, and ends up having to have you come back to pay even more money, hoorah, more money. It's a reverse incentive. Do it enough, and you'll be just like Boeing and the US Government. I've also observed that this type of relationship tends to cause people to optimize short term over mid to long term.

Where this deteriorates (imo), is if you're in it for additional reasons other than only the money, but want to build quality things, make the world a better place, yada yada. Or perhaps you like the company and what they do, or something about your job, and actually depend on long term viability. By somewhat strained analogy, imagine the plumber works for a housing co op, and they themselves live there. Suddenly, the plumber becomes a bit more coupled to the choices of "product" or the customers. Poor decisions could devalue the neighborhood and your own resale value, or even damage your own property.


The place I work has a list of security guidelines that is, like, ten pages long and full of links to more detailed explanations.

The exact advice depends on how you’re running your services. My starting advice, for cloud, is this:

1. Run in multiple, separate accounts. Don’t put everything in one account. This could be as simple as having a beta account for testing and a separate production account.

2. Use cloud-based permissions systems when possible. For example, if you are running something on AWS Lambda, you create an IAM role for the Lambda to run as and manage permissions by granting them to the IAM role.

3. If that’s not possible, put your credentials in some kind of secrets management system. Create a process to rotate the secret on a schedule. I’d say 90 days is fine if you have to rotate it manually. If you can rotate it automatically, rotate it every 24 hours.

4. Set up logging systems like CloudTrail so you can see events afterwards.

Finally, as a note—people at your company should always authenticate as themselves. If you are TheBigDuck234, then you access your cloud resources using a TheBigDuck234 account, always.

This is just the start of things.
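
As a concrete illustration of point 3, here is a minimal sketch (assuming AWS and Python/boto3; the secret name and JSON layout are invented for the example) of reading a credential from Secrets Manager at runtime instead of baking it into config or code:

  import json
  import boto3
  def get_db_credentials(secret_id="prod/example-app/db"):  # hypothetical secret name
      client = boto3.client("secretsmanager")
      resp = client.get_secret_value(SecretId=secret_id)
      # assumes the secret value is stored as JSON, e.g. {"username": "...", "password": "..."}
      return json.loads(resp["SecretString"])
  creds = get_db_credentials()

Because the application only ever asks for the secret by name, rotating the value (manually every 90 days, or automatically every 24 hours) doesn't require a code change or a redeploy.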


I firmly believe that monolith vs microservice is much more a company organization problem than a tech problem. Organize your code boundaries similar to your team boundaries, so that individuals and teams can move fast in appropriate isolation from each other, but with understandable, agreeable contracts/boundaries where they need to interact.

Monoliths are simpler to understand, easier to run, and harder to break - up until you have so many people working on them that it becomes difficult for a team to get their work done. At that point, start splitting off "fiefdoms" that let each team move more quickly again.


I think the overall thesis here is accurate. I can fill in the blanks where Fowler is light on anecdotal evidence. I have done many architecture reviews and private equity due diligence reviews of startup systems.

Nearly all the microservices-based designs were terrible. A common theme was a one- or two-scrum-team dev cohort building out dozens of microservices and matching databases. Nearly all of them had horrible performance, throughput, and latency.

The monolith based systems were as a rule an order of magnitude better.

Especially where teams have found you don’t have to deploy a monolith just one way. You can pick and choose what endpoints to expose in various processes, all based on the same monolithic code base.

Someday the Internet will figure out Microservices were always a niche architecture and should generally be avoided until you prove you need it. Most of the time all you’re doing is forcing app developers to do poorly what databases and other infrastructure are optimized to do well.


With software you have a situation with two problems.

First is the "gap" between those doing the work, and those writing the checks. (When it's the same person, this problem disappears.)

The guy writing the checks likes to know that progress is being made, and that the project both has an end and will be successfully completed.

The second problem is that by its nature software "never ends" and many (dare I say most?) projects fail and are simply abandoned.

The moment the check writer is not the direct manager of the development you have an intractable problem. The person in between (quite literally middle management) is often not technical. But he has to convince the bean-counters that this project is "on time and on budget".

He can't help but feel sometimes that he's herding cats. He's an irritant to those who are "doing the work" so they treat interactions with him as a waste of time. Inevitably he starts trying to measure things. (And we all know what that means.)

His job is hard. He's stuck between developers who don't want anything to do with him and higher-ups who want reassurance, but don't really trust what he's saying.

The miracle is not that this process sometimes fails. The miracle is that it ever works at all.

And sure, you may not like your meetings, but at least understanding the game might help you understand why his job is the crappiest of all of them.


Magnesium L-Threonate - has the most potent therapeutic effect because it can easily cross the blood-brain barrier. The drawback is that some people are sensitive to this form of magnesium; those people can experience nausea, vomiting, migraines, etc. IMHO, I would advise against everyday use because this form is more a medication than a supplement. It is used for serious conditions like dementia, neurological impairment, and nutritional deficiencies.

Magnesium Taurate - a combination of magnesium and taurine. A good form for people with metabolic conditions: T1DM, T2DM, hyperlipidemia, vitamin and mineral deficiencies.

Magnesium Glycinate (aka Magnesium Bisglycinate) - a somewhat less potent form of magnesium, but with good bioavailability and fewer side effects. This form is also a source of glycine, an important amino acid that is beneficial for metabolism and has a mild calming and stabilizing effect on the nervous system. It helps to cope with anxiety, panic attacks, and insomnia.

Magnesium Citrate - a cheaper but ok magnesium form for everyday use.

Magnesium Oxide - the cheapest and the least efficient form of magnesium. Unfortunately, this is the most widespread form in many countries due to its low price. Try to avoid this form if you have a choice.

Bonus point: if you have a specific condition, you can combine several forms of magnesium to reach multiple therapeutic goals. For example, some popular combinations are presented below:

  a. Magnesium Taurate + Magnesium Glycinate
  b. Magnesium L-Threonate + Magnesium Taurate
  c. Magnesium L-Threonate + Magnesium Taurate + Magnesium Glycinate

I once worked on an in-house ERP system which had been developed over about 15 years by various developers. It was the engine of the entire company, everything passed through it. The CFO and some senior leadership erroneously blamed it for their shortcomings/used it as a scapegoat. When new management took charge, an initiative was started to replace the system with an industry standard solution. Both myself and the CTO (my boss) made it clear that we strongly felt this would not only go way over budget, but ultimately fail as a project.

Having no understanding as to the technicalities involved, the project was given the go ahead by the directors after several meetings with a vendor. After the CTO and I expressed our concerns about the scale of the project and the sheer amount of functionality involved, the vendor gleefully assured us that they were experienced with "migrations of this scale" and were more than prepared, which was music to the ears of the CFO.

Daily 2-3 hour meetings followed (for many months) to define the scope of the project. Within each meeting I sort of zoned out because it became very obvious that not only did the vendor not understand the scale of the work involved, but they had started cutting corners everywhere/leaving out crucial functionality, and this was just the scoping stage; no development had even started yet.

I eventually departed the company but kept in contact with the CTO and learned that after 5 years (project was scoped for 2), the migration was abandoned costing multiple millions of dollars with nothing to show for it.


I love listening to young developers guess at the history of XML, and why it was "complex" (it wasn't), and then turn around and reinvent that wheel, with every bit of complexity that they just said they didn't like... because it's necessary.

So a bit of history from someone who was already developing for over a decade when XML was the new hotness:

The before times were bad. Really bad. Everybody and everything had their own text-based formats.[1] I don't just mean a few minor variants of INI files. I mean wildly different formats in different character encodings, which were literally never provided. Niceties like UTF-8 weren't even dreamt of yet.

Literally every application interpreted their config files differently, generated output logs differently, and spoke "text" over the network or the pipeline differently.

If you need to read, write, send, or receive N different text formats, you needed at least N parsers and N serializers.

Those parsers and serializers didn't exist.

They just didn't. The formats were not formally specified, they were just "whatever some program does"... "on some machine". Yup. They output different text encodings on different machines. Or the same machine even! Seriously, if two users had different regional options, they might not be able to share files generated by the same application on the same box.

Basically, you either had a programming "library" available so that you could completely sidestep the issue and avoid the text, or you'd have to write your own parser, personally, by hand. I loooved the early versions of ANTLR because they made this at least tolerable. Either way, good luck handling all the corner-cases of escaping control characters inside a quoted string that also supports macro escapes, embedded sub-expressions, or whatever. Fun times.

Then XML came along.

It precisely specified the syntax, and there were off-the-shelf parsers and generators for it in multiple programming languages! You could generate an XML file on one platform and read it in a different language on another by including a standardised library that you could just download instead of typing in a parser by hand like an animal. It even specified the text encoding so you wouldn't have to guess.

It was glorious.

Microsoft especially embraced it and to this day you can see a lot of that history in Visual Studio project files, ASP.NET web config files, and the like.

The reason JSON slowly overtook XML is many-fold, but the key reason is simple: It was easier to parse JSON into JavaScript objects in the browser, and the browser was taking off as an application developer platform exponentially. JavaScript programmers outnumbered everyone else combined.

Notably, the early versions of JSON were typically read using just the "eval()" function.[2] It wasn't an encoding per-se, but just a subset of JavaScript. Compared to having to have an XML parser in JavaScript, it was very lightweight. In fact, zero weight, because if JavaScript was available, then by definition, JSON was available.

The timeline is important here. An in-browser XML parser was available before JSON was a thing, but only for IE 5 on Windows. JSON was invented in 2001, and XMLHttpRequest became consistently available in other browsers after 2005 and was only standardized in 2006. Truly universal adoption took a few more years after that.

XML was only "complex" because it's not an object-notation like JSON is. It's a document markup language, much like HTML. Both trace their roots back to SGML, which dates back to 1986. These types of languages were used in places like Boeing for records keeping, such as tracking complex structured and semi-structured information about aircraft parts over decades. That kind of problem has an essential complexity that can't be wished away.

JSON is simpler for data exchange because it maps nicely to how object oriented languages store pure data, but it can't be readily used to represent human-readable documents the way XML can.

The other simplification was that JSON did away with schemas and the like, and was commonly used with dynamic languages. Developers got into the habit of reading JSON by shoving it into an object, and then interpreting it directly without any kind of parsing or decoding layer. This works kinda-sorta in languages like Python or JavaScript, but is horrific when used at scale.

I'm a developer used to simply clicking a button in Visual Studio to have it instantly bulk-generate entire API client libraries from a WSDL XML API schema, documentation and all. So when I hear REST people talk about how much simpler JSON is, I have no idea what they're talking about.

So now, slowly, the wheel is being reinvented to avoid the manual labour of REST and return to the machine automation we had with WS-*. There are JSON API schemas (multiple!), written in JSON (of course), so documentation can't be expressed inline (because JSON is not a markup language). I'm seeing declarative languages like workflow engines and API management expressions written in JSON gibberish now, same as we did with XML twenty years ago.

Mark my words, it's just a matter of time until someone invents JSON namespaces...

[1] Most of the older Linux applications still do, which makes it ever so much fun to robustly modify config files programmatically.

[2] Sure, these days JSON is "parsed" even by browsers instead of sent to eval(), for security reasons, but that's not how things started out.


The article is describing debt incurred in connection with private equity investments at two separate and distinct levels of the overall private equity fund structure. I’m trying to keep this simple and straightforward, but assume some knowledge of corporate finance - apologies if I’ve missed the mark and have either undersimplified or oversimplified.

A private equity fund is an entity - usually a limited partnership - that is created by a “sponsor” (called a “manco” in that article for “management company”) and raises money from outside investors. The sponsor manages the fund and the investments it makes using that raised capital (and in exchange for various management fees paid by the fund to the sponsor). Note that, for a variety of reasons, those investments usually take the form of an investor (limited partner) agreeing to contribute cash up to a designated amount - but not upfront. Instead, the investor funds that agreement to invest cash in the fund when the sponsor asks for a portion of it to be sent to the fund so the fund can make an investment.

It’s been standard practice for a while for a private equity fund to borrow money secured by its assets, which are generally a combination of the ownership interests in the companies it has invested in and its right to cause its limited partners to contribute additional capital to the partnership. By using proceeds funded from debt, the fund can improve the way that its returns to its equity investors are measured, which results in the sponsor receiving enhanced incentive economics.

The main new development reported by this article is that private equity sponsors are now pursuing debt at the sponsor level that is secured by the sponsor’s incentive economics mentioned above as well as any management fees paid to the sponsors by the funds that it manages.

This doesn’t mean that the companies that private equity invests in are more indebted or more likely to collapse due to leverage - instead, the article is reporting that the overall structure of a private equity investment (an operating company owned by a fund that is managed by a sponsor) is seeing additional levels of debt at the top level of that corporate structure.

