
Former TLM who was involuntarily reclassified as an EM because I had too many reports. I'm from old-line (pre-2011) Google, so I was an engineer back when the TLM role was one of our unique competitive advantages.

I have a lot of thoughts on this. IMHO, it's appropriate for the state that Google is in now, where it is a large mature conglomerate, basically finance- and managerially driven, built around optimizing 10-K reports and exec headcount & control. It's not a particularly good move from the perspective of shipping great software, but Google doesn't really do that anymore.

The reason is that software construction and management is unintuitive, and concrete details of implementation very often bubble up into the architecture, team structure, and project assignments required to build the software. TLM-led teams have a very tight feedback loop between engineering realities and managerial decisions. Your manager is sitting beside you in the trenches, they are writing code, and when something goes wrong, they know exactly what and why and can adapt the plan appropriately. Most importantly, they can feed that knowledge of the codebase into the new plan. So you end up with a team structure that actually reflects how the codebase works, engineers with deep expertise in their area who can quickly make changes, and management that is nimble enough to adapt the organization to engineering realities rather than trying to shoehorn engineering realities into the existing org structure.

Now, as an EM with 10+ reports, I'm too far removed from the technical details to do anything other than rely on what my reports tell me. My job is to take a slide deck from a PM with 10 gripes about our current product, parcel it out into 10 projects for 10 engineers, and then keep them happy and productive while they go implement the mock. It will take them forever because our codebase is complex, and they will heroically reproduce the mock (but only the mock, because there is little room for judgment calls in, say, resize behavior or interactivity or interactions with other features, and nobody's holding them accountable for things that management didn't have time or bandwidth to ask for) with some hideously contorted code that makes the codebase even more complex but is the best they can do, because the person who actually needed to rewrite their code to make it simple reports up through a different VP. But that's okay, because the level of management above me doesn't have time to check the technical details either, and likewise for the level of management above them, and if it takes forever we can just request more headcount to deal with the lack of velocity. Not our money, and it's really our only means of professional advancement now that product quality is impossible and doesn't matter anyway.

Ultimately the value of the TLM role was in that tight bidirectional feedback between code, engineers, and management. As a TLM, you can make org-structure decisions based on what the code tells you. As an EM, you make org-structure decisions based on what your manager tells you. But at some point in a company's lifetime, the code becomes irrelevant - nobody reads it all anyway - and the only thing that matters is your manager's opinion, and by transitivity, your VP's opinion. A flattened org structure with as many reports per manager as possible is a way for the VP to exert maximal control over the largest possible organization, mathematically, and so once that is all that matters, that is the structure you get.


I wouldn't actually say that, but I would say that the TLM role works at a very specific stage in a company's lifecycle, and many companies that use it (including Google itself from around 2010 onwards) have long since passed that point.

IMHO, the conditions where a TLM role is appropriate are:

1.) You need to be in the company growth phase where you are still trying to capture share of a competitive market, i.e. it matters that you can execute quickly and correctly.

2.) There needs to be significant ambiguity in the technical projects you take on. TLMs should be determining software architecture, not fitting their teams' work into an existing architecture.

3.) No more than 3 levels of management between engineer and person who has ultimate responsibility for business goals, and no more than 6 reports per manager. The mathematically inclined will note that this caps org size at 6^3 = 216, which perhaps not coincidentally, is not much larger than Dunbar's number.

4.) TLMs need to be carefully chosen for teamwork. They need to think of themselves as servant-leaders who clarify engineering goals for the teammates who work with them, not as ladder-climbers who tell others what to do.

Without these, there is a.) not enough scope for the feedback advantages of the TLM structure to matter and b.) too much interference from managers outside the team for the TLM to keep up with their managerial duties. But if these conditions are met, IMHO teams of TLMs are the only way to effectively develop software quickly.

Perhaps not coincidentally, these conditions usually coincide with the growth phase of most startups where much of the value is actually created.


Yes, there are tons of resources but I'll try to offer some simple tips.

1. Sales is a lot like golf. You can make it so complicated as to be impossible or you can simply walk up and hit the ball. I've been leading and building sales orgs for almost 20 years and my advice is to walk up and hit the ball.

2. Sales is about people and it's about problem solving. It is not about solutions or technology or chemicals or lines of code or artichokes. It's about people and it's about solving problems.

3. People buy 4 things and 4 things only. Ever. Those 4 things are time, money, sex, and approval/peace of mind. If you try selling something other than those 4 things you will fail.

4. People buy aspirin always. They buy vitamins only occasionally and at unpredictable times. Sell aspirin.

5. I say in every talk I give: "all things being equal people buy from their friends. So make everything else equal then go make a lot of friends."

6. Being valuable and useful is all you ever need to do to sell things. Help people out. Send interesting posts. Write birthday cards. Record videos sharing your ideas for growing their business. Introduce people who would benefit from knowing each other then get out of the way, expecting nothing in return. Do this consistently and authentically and people will find ways to give you money. I promise.

7. No one cares about your quota, your payroll, your opex, your burn rate, etc. No one. They care about the problem you are solving for them.

There is more than 100 trillion dollars in the global economy just waiting for you to breathe it in. Good luck.


1) DPO did exclude some practical aspects of the RLHF method, e.g. pretraining gradients.

2) the theoretical arguments of DPO equivalence make some assumptions that don’t necessarily apply in practice

3) RLHF gives you a reusable reward model, which has practical uses and advantages. DPO doesn't produce a useful intermediate product.

4) DPO works off preference, whereas desirable RL objectives could have many forms

In practice, big labs are testing all these methods to see what works best.
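
For concreteness, here is a minimal sketch of the standard DPO objective (PyTorch-flavored, not any lab's actual training code). Point 4 is visible in the signature - the loss consumes only pairwise preferences - and points 1 and 3 show up as absences: no separate reward model, no pretraining-gradient term.

  import torch
  import torch.nn.functional as F

  def dpo_loss(pi_chosen_logp, pi_rejected_logp,
               ref_chosen_logp, ref_rejected_logp, beta=0.1):
      # Inputs: summed per-token log-probs of each completion under the
      # trained policy (pi) and the frozen reference model (ref).
      # Implicit reward of a completion: beta * log(pi / pi_ref).
      chosen_reward = beta * (pi_chosen_logp - ref_chosen_logp)
      rejected_reward = beta * (pi_rejected_logp - ref_rejected_logp)
      # Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).
      return -F.logsigmoid(chosen_reward - rejected_reward).mean()

  # Toy usage with made-up log-probs for a single preference pair:
  loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.9]),
                  torch.tensor([-13.1]), torch.tensor([-14.2]))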


It's a really large company so, as you pointed out, this is just my experience.

Pros: It has provided compelling compensation in Oregon (significantly more than my previous position, but I'm junior so large jumps are normal) and work (leading edge tech in my field, plenty of external eyes looking at it). All my managers all the way up to and including the CEO are very successful engineers and understand the technical detail of what they're managing extremely well. Pat is very capable of talking about details of upcoming processes during internal meetings, for example. I also feel empowered by my manager and colleagues to point out how we could improve our methodology, although that often results in extra projects for me, requiring some sacrifice of personal time if I want to complete them all (which is what I'm looking for right now but not sustainable in the long run). I quite like the OKR system and I feel like my manager really understands what I'm doing and provides useful feedback.

Cons: I feel there's a bit of a "fire-fighting mentality" in parts of the foundry, especially as you get closer to the fab. There's an unspoken pride in spending your day working late to fix some issue. Many teams are constantly in meetings debugging issues caused by human error and no one seems to ask the question "how can we automate this so it doesn't happen again?" because they're HW not SW people. A lot of my automation side projects require buy-in from other teams and there's been some friction in showing them the value of their cooperation ("this is how we've always done it" kind of attitude). Thankfully I have more senior engineers and managers who have a vision to fix these issues so it's going in the right direction. I believe this is not unusual in HW manufacturing.

If you get placed in a team that has a bad methodology and has a manager that has no vision on how to completely revamp it, then it's going to be a real slog.

If you're thinking of accepting a position at Intel Foundry, feel free to drop contact info and we can talk further about my experience here.


> humans do not need 10,000 examples to tell the difference between cats and dogs,

I swear, not enough people have kids.

Now, is it 10k examples? No, but I think it was on the order of hundreds, if not thousands.

One thing kids do is they'll ask for confirmation of their guess. You'll be reading a book you've read 50 times before and the kid will stop you, point at a dog in the book, and ask "dog?"

And there is a development phase where this happens a lot.

Also kids can get mad if they are told an object doesn't match up to the expected label, e.g. my son gets really mad if someone calls something by the wrong color.

Another thing toddlers like to do is play silly labeling games, which is different than calling something the wrong name by accident; instead this is done on purpose for fun. E.g. you point to a fish and say "isn't that a lovely llama!" at which point the kid will fall down giggling at how silly you are being.

The human brain develops really slowly[1], and a sense of linear time encoding doesn't really exist for quite a while. (Even at 3, everything is either yesterday, today, or tomorrow.) So who the hell knows how things are being processed, but what we do know is that kids gather information through a bunch of senses that are operating at an absurd data collection rate 12-14 hours a day, with another 10-12 hours of downtime to process the information.

[1] Watch a baby discover they have a right foot. Then a few days later figure out they also have a left foot. Watch kids who are learning to stand develop a sense of "up above me" after they bonk their heads a few times on a table bottom. Kids only learn "fast" in the sense that they have nothing else to do for years on end.


> “Grief, I’ve learned, is really just love. It’s all the love you want to give, but cannot. All that unspent love gathers up in the corners of your eyes, the lump in your throat, and in that hollow part of your chest. Grief is just love with no place to go.”

Saw that quote I think on HN a while ago.

Grief sucks. It's different than our other emotions. You can do all the right things and have everything going for you after, but it's still always there and never goes away. It's something you truly have to learn how to live with, and not be afraid to face or run from. This tragedy is different because in a way it is ongoing. I found the post extremely inspirational. Best of luck on the new journey. Seems like they'll figure it out.


Notes:

- In mice.

- Tested 16:8 pattern.

- Sustained consistently for nine weeks.

- No exploration in this study of the effects and/or benefits from the gene expression.

N=1 but I've been doing ~16:8-ish along with strict keto for the past five years and the net benefits for me have been transformative in fitness (Weight: -100+ lbs, BMI: Obese->Fit), long-term health (A1C, LDL/HDL, Trigs, BP, resting heart rate all from bad to great), cognitive acuity and emotional stability. First 12 weeks required serious effort/focus to transition habits, palate and metabolism (must RTFM and be rigorous) but after that it's been surprisingly easy to sustain long-term, requiring no will power or conscious effort.

Other surprising experiential learnings: Dietary intake impacts long-term mental/emotional states FAR more than I ever suspected. Food & taste prefs I had since childhood are not innate. Many things I loved no longer even taste good. Hunger pangs and cravings are driven by my blood sugar cycle. Once I stabilized that I no longer get hungry or feel food deprived/obsessed. (<--- all N=1 of course.)


Pizza Hut pan pizza is the easiest pizza to make yourself at home. You don't need a stand mixer, it's a no knead recipe (long overnight rise builds all the gluten). You don't need a fancy super hot oven, it just cooks at 400 degrees F in a cast iron skillet. Give this recipe a shot, it's unbelievably good and very accessible: https://www.seriouseats.com/foolproof-pan-pizza-recipe or https://m.youtube.com/watch?v=-srfPL5CWZs

I tried learning a bit more about the project and couldn't understand the design decision and how it led to the claimed throughput numbers. Then I found Shoup's take[1] and it all made sense. Highly recommend reading it.

[1]: https://www.shoup.net/papers/poh.pdf


(I love this question. I have a Ph.D. in the making of semiconductor devices, and I once worked as a troubleshooter in a factory that was making transistors with a twenty-year-old process.)

The first fallacy that's tripping you up is marginal cost. Just because it's cheaper to buy an 800nm-process chip today than it was in the 1990s doesn't mean that it's cheaper to build the factory, employ the packaging engineers, or source the materials (let alone stuff all those things into a refrigerator-sized box). The finished parts are cheaper because the R&D, factories, processes, and HR procedures were bought and paid for in the 1990s, and those things are all still there, so long as a market is there. The workers are very happy to keep doing their jobs, and the marginal cost to keep them working is relatively low, particularly because the yield on a mature process can be really high.

The second fallacy is the physical-plant fallacy. You look at the factory and the machines and you think that's what it takes to make semiconductors. But if I gave you the keys to a shiny new Intel factory today, you would not succeed in making 80486 processors in a few weeks. Even if I gave you a new factory and its staff and the services of the world's leading experts in semiconductor devices and went back in time to arrange the delivery of a steady stream of raw materials, you would still not succeed in making working 80486 processors in a few weeks, although the Dream Team might manage to make some things that looked like working devices right up until you tried to turn them on... or until you tried to turn them on three weeks later.

The expensive part of manufacturing is the learning curve. Every one of those shiny machines has five hundred knobs, and every one of those knobs needs to be set correctly or the products won't work. Your experts can guess the approximate settings for everything, but the crucial final 5% needs to be dialed in by trial and error. You must exercise the factory, then correct for the mistakes.

That's expensive because the feedback is expensive. The difference between a broken part and a working part might take weeks to manifest, and it's literally microscopic, so you need an entire little team of highly trained QA scientists with thermal-cycling ovens and electron microscopes and Raman spectroscopes and modeling software and coffee in order to develop hypotheses about the problems with your process, hypotheses which must be tested by running more doomed wafers through that process.

(I've watched a few thousand people come within a hair of losing their jobs because we couldn't make this iteration converge fast enough.)

This is where economy of scale comes from: Practice. The Nth wafer coming out of a fab has high yield if and only if the (N-1)st wafer had high yield, so you have to bootstrap your yield up from zero one batch at a time. Your fab is only as valuable as the number of wafers it has made, or tried to make. The factory needs practice, and practice takes time, and time costs money.

---

So, here's how your refrigerator-sized fab is going to work. You'll take delivery and set it up. Unfortunately, shipping being what it is, parts will have slipped or gotten bent or stretched. Your humidity and temperature cycles will be different than they were back in Shenzhen. Your ambient dust level will be different. The batch of photoresist that you pour into your hopper will have been manufactured on a different week than the batch that the manufacturer used to calibrate the machine, and your sputtering targets will contain a different mixture of contaminants.

All of these things can probably be calibrated out – if the knobs are well-built enough to stay where you set them, and your environmental controls are comprehensive enough that the conditions remain constant, and you aren't forced to change suppliers, and you have the operational discipline to resist the urge to get blind drunk and start twiddling settings at random while sobbing. But how do you know which experiment to run, on your microscopically-flawed parts, in order to converge on working parts? You need to order the optional "electron microscope" kit, which ships in a slightly smaller box. The box next to that one will contain the materials scientist that you ordered. Hopefully they remembered to drill the air holes!


Good execution pretty much is a continuous stream of good ideas. "The idea" is merely the top level idea, and it matters most because it is at the top level. Good execution is rarer because it requires not just the possession of a single idea, but the propensity to come up with them day in day out.

So she lives in San Francisco for 6 months and feels qualified to explain what is 'wrong' with Silicon Valley? (which is nominally the Santa Clara Valley btw) I've lived (and worked) in the actual silicon valley for over 25 years and I can tell you that evaluating the area based on a 6 month snapshot is worthless.

In 1984 I was at Intel and the 'problem' was that there wasn't any real use for personal computers. A 1024 x 768 color CRT monitor was huge and cost about $3,000. Running at 640 x 480 in monochrome with a 'Hercules' graphics card didn't come close to the experience you could get with a decent minicomputer 'workstation.' But a workstation cost $50,000 and up.

In 1994 I was at Sun Microsystems (the Liveoak project, aka Java) and I would have told you that the 'problem' was that you couldn't do business over email, and without an economic engine, what would fund all the work? I told Eric Schmidt (who owned Sun Labs at the time) that so called 'e-commerce' was where all the money would be in 1995 and if Sun wasn't able to participate it was toast. (sounds pretty lame in retrospect, but they did sell a lot of servers to Amazon :-)

In 2004 I was at NetApp (after having my startup acquired by a company which would later be folded into Motorola in a deal which was reminiscent of one of those trades in baseball where you get cash and a draft pick and oh by the way this guy over here.) I would have told you that the problem was that technologists had been pushed aside by MBA types who had lasered in on the 'rent seeking' business model and killed off innovation along the way.

There isn't a 'problem' with Silicon Valley, it simply exists like a beaker sitting over a bunsen burner. Over time different chemicals are available in the beaker and sometimes something magical happens, and sometimes noxious fumes come out, but the place is an engine. A lot of startups are endo-thermic with respect to cash but a few are wildly exo-thermic. Oftentimes the byproducts of those become the ingredients of the next round of innovation.

I'm not sure the author has had time to appreciate that while she may have encountered dozens of GroupOn clones, she seems to have missed the significance of the fact that there are dozens of GroupOn clones. If you were in, say, Minneapolis, how many GroupOn clone startups are there? Energy makes reactions possible; the SF Bay area is full of energy (and resources), which makes it easy to create a new company. That the companies that have currently been created are boring to you is merely a side effect.


Wow, I feel like this is the most interesting thing on HN right now by far.

Looks good, just curious why they didn't do it 5 years ago when this stuff was still hard.

* Incorporate with Stripe Atlas (just did it for the 2nd time and it was even smoother than the first)

* Mercury bank is incredible, I'm sure Blue Ridge would be a step down.

* Carta for equity

* Fundraising would be interesting, but PartyRound seems to be moving fast here.


There are a lot of misconceptions and misinformation coming out in this thread. I'm going to try and shed some light on them. (HN says my post is too big, so I'll have to split it up a bit).

There are four major groups of physicians:

-Residents. These are the folks that just completed medical school, and are doing four-plus years of training in a hospital setting to become independently practicing physicians. In year one they are called interns. By year three or four they have various amounts of independence: in internal medicine, family medicine, etc. they are basically practicing as full physicians, with some light supervision (the heavy supervision is years one and two). They are one of the hospital's most valuable employees: taking into account supervision costs, they are producing about 80-90% of the revenue of a "real" physician, for less than 1/4 the cost. These are the guys who work 80+ hours per week without exception, do all the scut, etc. These are not "mid career physicians". This is where "old physicians had to go through it, so young physicians have to go through it."

--Resident Training: the AMA has been pushing to expand resident training spots for years. The funding is part of Medicare legislation, and no one has been willing to back expanding medicare spending in the name of training physicians. I know the AMA has been backing this because I've attended the Region 7 and national meetings where the resolution to push for it has been passed, repeatedly. Literally, hit DDG and enter "AMA restricted residency training funding" and your entire page of results is the opposite. They may have done so more than a generation ago, but... let's move on to things that were done by, and affect, people not currently retired, eh?

-Hospitalists. These guys have completed their residency training, and elected to work for a hospital, doing in-hospital medicine. Their specialty is "hospital medicine." They have no private clinic, no private patients, and are paid a salary by the hospital. Whether this is an integrated system like Kaiser, or ... every other hospital in the market, they're very common. Their practice patterns are heavily dictated by the hospital, which is heavily dictated by the Centers for Medicare/Medicaid Services and the major insurers. Their work is increasingly focused strictly on documentation, since documentation is the way that CMS and insurers (a) find excuses to refuse reimbursement, and (b) outsource collection of "quality" information, by forcing docs to structure their input in very discrete ways. These physicians don't have to deal with billing directly, but they are constantly being pulled into trainings for the ways documentation requirements are constantly evolving, the ways in which payors want them to order tests and in what order, etc. They constantly get phone calls from "helpful billing people" raking them over the coals whenever there's a mistake. The hospital keeps running tallies and reports on doctors' mistakes in this arena, aiming for public pillorying and, ultimately, withheld wages. (Docs don't generally get bonuses, they get withheld wages - except for high-revenue services like procedures, where they may get a bonus for very high productivity.) These are "mid career physicians." They tend to work an official 10-12 hour day, ten days on, ten days off. In reality, due to documentation requirements, and the fact that they get more patients than anyone could ever see and document in 10-12 hours, they tend to work 14+.

-Private Practice. These guys completed their residency and either opened their own private practice (almost no one can do that these days, with the complexity of the documentation and EMRs required by CMS and insurers, and attendant overhead costs) or have become employed by such a practice with the medium-term goal of buying in as a partner. They are likewise having their arms heavily twisted by insurers and CMS, without any sort of leverage to fight back and negotiate better terms. These guys are going out of business left and right. These are "mid career physicians." Hours worked here are highly variable, depending on the specific practice pattern, number of employees and partners, etc.

-"Private Practice." Because of the complexities and overhead that are now required to stay open, many practices... can't. They sell to a local hospital - often at cost - and become hospital employees. The hospital offers solid salaries for the first couple of years, and then drives them out, replacing them with younger employees. Many of the "private practices" you go to are thus actually practices run by the hospital, with an employee acting as the physician. These are "mid career physicians." These tend to work 9-5 with one evening hour a week, or none. The spread of this is why no one can find a doctor to see in the evenings anymore.

Key to Understanding Medical Reimbursement:

This is not a free market. It is fee for service. You get a patient visit, it is coded as a particular service (usually a Level 3 Evaluation & Management), and a fixed amount is reimbursed, assuming you meet various documentation requirements. If you do not, the amount is decreased or denied altogether. Private insurers peg their fee schedules to CMS, so CMS - directly or indirectly - drives all physician reimbursement. If you own a geographic area (such as part of a sweeping hospital network), that network will negotiate better reimbursement (e.g., "112% of Medicare"), but that is not passed along to employee physicians. Total revenue for a physician is amount of work-time per year divided by time-per-average-service, times reimbursement-per-average-service.

That's it; that's your cap.
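
To make that concrete, a back-of-envelope with invented numbers (the formula is from above; every figure here is hypothetical):

  # All numbers are hypothetical, purely to illustrate the cap formula.
  work_hours_per_year = 2000         # ~50 weeks x 40 patient-facing hours
  hours_per_avg_service = 0.5        # ~30 min per Level 3 E&M visit, incl. documentation
  reimbursement_per_service = 110.0  # dollars, pegged (directly or indirectly) to CMS

  max_revenue = work_hours_per_year / hours_per_avg_service * reimbursement_per_service
  print(max_revenue)  # 440000.0 - and no amount of ambiance moves this number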

Thus, most services patients want are strictly cost centers. The sort of things that other businesses compete on - e.g., ambiance, good front desk staff - are problematic for physicians, because you can't pass that along to patients in moderately higher prices. The only way you can compete on service, and be free to set your prices accordingly, is to refuse all insurance and only take cash patients. There are vanishingly few such patients, largely due to a cultural expectation that insurance = healthcare. Actually paying cash for a primary care physician, at least, isn't that expensive, but since that doesn't cover all of your other healthcare costs, who can afford to pay that extra premium? Only upper-middle-class and up.


I remember reading this published insight[1] from Marissa Mayer a few months ago:

Burnout is caused by resentment

Which sounded amazing, until this guy who dated a neuroscientist commented[2]:

No. Burnout is caused when you repeatedly make large amounts of sacrifice and/or effort into high-risk problems that fail. It's the result of a negative prediction error in the nucleus accumbens. You effectively condition your brain to associate work with failure.

Subconsciously, then eventually, consciously, you wonder if it's worth it. The best way to prevent burnout is to follow up a serious failure with doing small things that you know are going to work. As a biologist, I frequently put in 50-70 and sometimes 100 hour workweeks. The very nature of experimental science (lots of unknowns) means that failure happens. The nature of the culture means that grad students are "groomed" by sticking them on low-probability of success, high reward fishing expeditions (gotta get those Nature, Science papers). I used to burn out for months after accumulating many many hours of work on high-risk projects. I saw other grad students get it really bad, and burn out for years.

During my first postdoc, I dated a neuroscientist and reprogrammed my work habits. On the heels of the failure of a project that I have spent weeks building up to, I will quickly force myself to do routine molecular biology, or general lab tasks, or a repeat of an experiment that I have gotten to work in the past. These all have an immediate reward. Now I don't burn out anymore, and find it easier to re-attempt very difficult things, with a clearer mindset.

For coders, I would posit that most burnout comes on the heels of failure that is not in the hands of the coder (management decisions, market realities, etc). My suggested remedy would be to reassociate work with success by doing routine things such as debugging or code testing that will restore the act of working with the little "pops" of endorphins.

That is not to say that having a healthy life schedule doesn't make burnout less likely (I think it does; and one should have a healthy lifestyle for its own sake), but I don't think it addresses the main issue.

Then I finally realized how many times I've burnt out in my life, and I became much better at avoiding it. Which is really hard to do.

And it seems to me that this is one of the many points that Ben Horowitz talks about in his What’s The Most Difficult CEO Skill? Managing Your Own Psychology[3]

[1] http://iamnotaprogrammer.com/Burnout-is-caused-by-resentment...

[2] http://iamnotaprogrammer.com/Burnout-is-caused-by-resentment...

[3] http://bhorowitz.com/2011/04/01/what%E2%80%99s-the-most-diff...


"A large factor of it would actually be that our brains don't store repetitive events very well."

My theory, which is logically consistent with yours, is that as we get older we spend more of our cognitive cycles thinking in terms of abstractions rather than using our senses to process direct experience. This is partly because as we get older we tend to immediately place things we experience into categories and then move on to processing these abstractions rather than continuously observing reality directly and unfiltered. So the other part of the theory would be that when we think in terms of abstractions, time goes by much faster than when we focus on direct sensory experience. It's likely that novelty also plays a role here as a mediating variable, in that we tend to pay more sustained attention to novel phenomena rather than immediately placing them into conceptual buckets and moving on to the next thing.

I also like this theory because it's consistent with how drugs that alter our sensory perceptions and make us focus more on our perceptions also tend to subjectively slow down time. Anyway I have no idea whether this is correct, but IMHO it's a much better theory than the original, which isn't especially logical and doesn't even really make sense.


I have a tip on finding a cofounder. I've been looking for the last 12 months and I tried a lot of things, but the thing that worked best is this:

Post a job ad.

That's it. My ad sounded like a normal job in every way, with a title like "Director of X". A couple of specific notes: 1) the first line said "cofounder" in it somewhere, and 2) it specified "part-time to full-time, heavy on equity compensation" somewhere else.

The result: Lots of inappropriate candidates applied. But also, my 2 job posts resulted in three excellent team members: an equity-only cofounder, a mostly-equity cofounder/early hire, and an amazing advisor. I have been working with these folks for months and they are very solid.

I also recommend my vetting process: just like a job. That is, I did a phone screen, a multi-hour interview, a little take-home, and I winnowed down the candidates at each stage. With the chosen few, I kicked off a "let's do this" conversation. My pitch was, "I know it's ridiculous, but let's get business married, we'll set up an offsite and start working together full-time... and, if we really need to, we'll get an annulment". That's what vesting is for!

Hope that helps :)


The 10 minute rule: if you want to do something, really force yourself to do it, but only for 10 minutes. If you want to build a daily habit, commit yourself every day no matter how you feel, but only for at least 10 minutes.

More generally, if you have a bunch of tasks, commit yourself to at least one. Don't necessarily push yourself to do everything but force yourself, no matter how hard, to do at least one. If it's a big task, break it up into smaller tasks.

This one rule is seriously like a magic bullet. 90% of the time you'll end up doing everything. Somehow your brain just goes from thinking "I hate this, it's so hard" to "wow, this is easy, let's keep going". The other 10% of the time, you can keep your pride, because 1) you at least tried, and 2) it's sometimes a sign of an actual issue (e.g. you feel bad running because you're actually sick). I've heard this advice many times, and somehow I've never yet heard a single person say it didn't work for them.


Being a hedgehog is useful on the journey to domain mastery because sticking to frameworks saves you time and headache compared to not having any frameworks at all.

The 3 stages of domain mastery:

Stage 1 - No knowledge or structure to approach a domain (everything is hard, pitfalls are everywhere)

Stage 2 - Frameworks that are useful to approach the domain (Map to avoid pitfall areas)

Stage 3 - Detailed understanding of the domain (in which you can move through pitfall areas freely and see where frameworks fall short)

Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks lead you to more inefficiencies compared to your own understanding.

All good decisions must be made by stage 3 persons, but ironically, training is most efficiently done by stage 2 persons. Hedgehogs get more limelight because 90% of the population is at stage 1 and values the knowledge of stage 2 (and can't grasp the complexities and nuances of stage 3).

Many hedgehogs struggle to touch stage 3, and instead see stage 2 as mastery. This is compounded by the positive feedback loops of success - the frameworks save time, it gives them reputation, it allows them to save stage 1 persons from their ignorance, and it's the foundation of their current level and achievements. Frameworks are also convenient and broadly applicable to many problems; detailed domain mastery in contrast is difficult, time consuming, and highly contextualized.

All of this makes it hard to move beyond stage 2 into stage 3.


Replying to myself to collect some observations and responses.

- Someone noted downthread that meditation is dose-sensitive. This is an incredibly useful observation. Most folks doing 30-60 mins of meditation a day are not at risk, but doing an intensive weeks-long retreat with 8-10hrs/day of cushion time is in a different risk bucket.

- Second, there were many objections to treating meditation as a singular monolithic phenomenon vs. distinguishing between the varieties of meditative experiences. I have struggled with this in the past too. For the sake of this discussion let's define meditation as a purposeful focus on an object of meditation with the goal of single pointed absorption.

This definition is incomplete and inaccurate in measures. However I do think it encompasses a fair variety of meditative experiences. The object might be the breath, the flow of thoughts, mantras (prayer), visualizations or an analytical meditation. I include conscious breath work in this category - pranayama, Wim Hof - since you are consciously moving the breath/attention to create a heightened psychological state. Yogic/Vedic/Buddhist meditation practices - even Zazen - all aspire to the state of one-pointed absorption, nay, union of the object and subject (shamatha/shiney).

- More importantly for us, all meditations - Vipassana, mantra-based and Yogic - can create breaks, as evidenced by experiences related elsewhere in the thread. For the sake of this discussion we can transcend the varieties and just lump them together as meditation.

- Based on many conversations, I believe that meditation first amplifies one's psychological makeup. If someone's mind has nervous tendencies or a tendency to mania then meditation will amplify it and bring it out. The analogy of Crossfit and related intense regimens is useful here. Crossfit done without consideration for the body's deficiencies (e.g. lack of mobility in certain joints) will lead to injury centered on where the deficiencies lie. So if I have bad shoulder mobility, doing overhead cleans will probably cause a tear in my rotator cuff. Intense meditation has to be tailored to the individual and just like intense workout regimens, there has to be concurrent rehab of deficient areas. I unfortunately am not qualified to speak more about this.

- Benefits of Meditation: There is a notion that meditation is beneficial and can cure things that ail us. I think this is a very mistaken idea brought about by commercialization and packaging of meditation. I break down benefits from meditative practice, especially retreat, into the following components but would appreciate hearing more on this:

-- Greater focus. Focusing on the breath or another object greatly helps sharpen our ability to focus. This has obvious benefits in daily life rife with distractions. Greater focus also manifests as "mindfulness" or just awareness. A person who is mindful, as a result of meditation, is perceived in an interaction as present and focused. AFAIK, the word mindfulness does not occur anywhere in the Buddhist teachings and is an artefact of translation ;)

-- Dopamine fasting - Removing all stimuli for a few days in retreat and simple sustenance amplifies the senses and rewinds the hedonic adaptation treadmill. Upon exiting retreat, this leads to feelings of euphoria on encountering everyday stimuli like food or a cup of tea or a flower.

-- Meditative absorption - Single pointed absorption in an object is a flow state and can be very pleasurable. Watching a sports event or a great movie is a flow experience where you forget the distinction between object and subject. Meditation makes this experience reachable on a daily/weekly basis to an accomplished meditator.

-- Finally, behavior change from meditative realizations. This is a function of the framework of meditation - Vipassana, Yogic, Mantra, etc. For example, if I can realize through experience that my states of mind are transient, then perhaps I will be more detached from them, which might give me a superpower of acting far more rationally in the face of my own fleeting passions.

In summary, meditation in an established tradition can bring transformation, but approach it cautiously. A meditation retreat will not cure you, and meditation is a journey like exercise: it has to be repeated, ideally daily, to keep the benefits. The benefits might lie on the other side of a long journey.

Meditation in all traditions is a means to an end - the end being realization or one-pointedness. Meditation in the modern context is stripped from its goal and made an end in itself. Meditation on its own, sans realizations, will not heal or solve any of our problems but can offer transitory benefits.


As a former physicist I couldn't agree more. In fact you can construct a Lagrangian almost uniquely from the underlying symmetries of the system. It's literally the most used tool in theoretical physics, and it's so powerful and elegant that it actually convinced me to do physics in the first place.
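
The simplest worked example of that symmetry-to-dynamics link, in the usual notation:

  % If L(q, \dot{q}) is invariant under the translation q -> q + \epsilon, then
  \frac{\partial L}{\partial q} = 0
  \quad\Longrightarrow\quad
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q} = 0,
  % so the conjugate momentum is conserved:
  p = \frac{\partial L}{\partial \dot{q}} = \text{const.}

Run in reverse, demanding invariance under translations, rotations, boosts, gauge transformations, etc. pins down which terms are allowed in L - which is the "almost uniquely" above.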

How to Criticize with Kindness: Philosopher Daniel Dennett on the Four Steps to Arguing Intelligently [0]

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

3. You should mention anything you have learned from your target.

4. Only then are you permitted to say so much as a word of rebuttal or criticism.

[0] https://www.brainpickings.org/2014/03/28/daniel-dennett-rapo...


Most useful article I've read probably this year.

After selling our last company I was surprised that the acquirer went on an even bigger spending spree just months after acquiring us. As a bootstrapper this blew my mind.

This article helps shine a light on how they pulled it off. They acquired us for the free cashflow the company threw off (uncommon in our industry) and then leveraged that to further their expansion.

I've always looked at accounting as "backwards facing" (meaning it looks at what has happened vs where a company is going) but this article has changed my perspective dramatically.


Former Uber engineer/EM here: I worked on the Rider app.

The “there are only a few screens” claim is not true. The app works in 60+ countries, with features shipped in the app that are often for a single country and - in rare cases - a single city.

The app has thousands of scenarios. It speaks to good design that each user thinks the app is there to support their 5 use cases, not showing all the other use cases (that are often regional or just not relevant to the type of user - like business traveler use cases).

Uber builds and experiments with custom features all the time. An experimental screen built for London, UK would be part of the app. Multiply this by the 40-50 product teams building various features and experiments outside the core flows you are talking about (and those core flows are slightly different per region as well).

I worked on payments, and these are the payment screens and components in the Uber app:

- Credit cards (yes, this is only a few screens)

- Apple Pay / Google Pay on respective platforms

- PayPal (SDK)

- Venmo (SDK)

- PayTM (15+ screens)

- Special screens for India credit cards and 2FA, EU credit cards and SCA, Brazil combo cards and custom logic

- Cash (several touch points)

- AMEX rewards and other credit card rewards (several screens)

- Uber credits & top-ups (several screens)

- UPI SDK (India)

- We used to have Campus Cards (10 screens), Airtel Money (5), Alipay (a few more), Google Wallet (a few) and other payment methods I forget about. All with native screens. Still with me? This was just payments. The part where most people assume “oh, it’s just a credit card screen”. Or people in India assume “oh it’s just UPI and PayTM”. Or people in Mexico “oh, it’s just cash”. And so on.

Then you have other features that have their own business logic and similar depth behind the scenes when you need to make them work for 60 countries:

- Airport pickup (lots of specific rules per region)

- Scheduled rides

- Commuter card functionality

- Product types (there are SO many of these with special UI, from disabled vehicles, vans, mass transport in a few regions etc)

- Uber for Business (LOTS of touchpoints)

- On-trip experience business logic

- Pickup special cases

- Safety toolkit (have you seen it? Very neat features!)

- Receipts

- Custom fraud features for certain regions

- Customer support flows

- Regional business logic: growth features for the likes of India, Brazil and other regions.

- Uber Eats touchpoints

- Uber Family

- Jump / Lime integrations (you can get bikes / scooters through the app)

- Transit functionality (seen it?)

- A bunch of others I won’t know about.

Much of the app “bloat” has to do with how business logic and screens need to be bundled in the binary, even if they are for another region. E.g. the UPI and PayTM SDKs were part of the app, despite only being used for India. Uber Transit was in a city or two when it launched, but it also shipped worldwide.
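
A schematic sketch of that mechanism (Python pseudologic for brevity - nothing like Uber's actual mobile code, and every module name and region here is made up): all modules are compiled into the binary, and a runtime check decides what any given user ever sees.

  # Hypothetical illustration: everything ships in the binary regardless of region.
  PAYMENT_MODULES = {
      "credit_card": "*",                # available everywhere
      "paytm":       {"IN"},             # India only
      "upi":         {"IN"},
      "cash":        {"IN", "BR", "MX"}, # a few cash-heavy markets
  }

  def visible_modules(region: str) -> list[str]:
      # The user sees a handful of options; the binary carries all of them.
      return [name for name, regions in PAYMENT_MODULES.items()
              if regions == "*" or region in regions]

  print(visible_modules("IN"))  # ['credit_card', 'paytm', 'upi', 'cash']
  print(visible_modules("US"))  # ['credit_card']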

And then you have the binary size bloat with Swift that OP talks about.


I disagree on the choice of benchmarks as they're not really comparable. A more comparable benchmark for angel investments in Internet / SW startups would be a broad-based ETF that covers those. Picking a couple of the larger ones, I looked at the same periods (2012-2019 and 2016-2019) for each of them:

FDN: 4.31x / 1.86x

IGV: 4.41x / 2.27x

Overall mean: 3.2x

Not much different than QQQ's 3.04x, but SPY is not a good benchmark due to big differences in underlying constituents.

I'd want to get a better handle on the timing of investments as well to make the benchmark more comparable - e.g., if 50% of the capital was deployed in 2012 (hypothetically) and 10% in each of the following 5 years then I'd weight my benchmark performance in the same fashion.
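
A quick sketch of that weighting (the deployment schedule is hypothetical; the multiples echo the FDN figures quoted above):

  # Capital-weighted benchmark multiple: sum over vintages of w_y * M_y.
  def weighted_benchmark(deployments, multiples):
      assert abs(sum(deployments.values()) - 1.0) < 1e-9  # weights sum to 1
      return sum(w * multiples[year] for year, w in deployments.items())

  # Hypothetical: half the capital deployed in 2012, half in 2016.
  print(weighted_benchmark({2012: 0.5, 2016: 0.5},
                           {2012: 4.31, 2016: 1.86}))  # ~3.085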

Finally, I would want to discount the angel investment portfolio for lack of control and liquidity. Much depends on the specifics of the recent fundraising - is the valuation using the pref figure or is it a reasonable approximation of the valuation of the seed paper (adjusting for structural differences)?

Personally, I'd rather have a well-diversified liquid ETF return of X than a portfolio of illiquid minority stakes that are marked to 1.2X. At 2X, I'd be happy with the restrictions.

Please don't misinterpret this as dumping on the result - IMO I believe this to be an above-average outcome and I congratulate the author on their success. Angel investing is harder than most asset classes if the top few % of investments are not in your portfolio.


This article has a large gap in the story: it ignores sensor data sources, which are both the highest velocity and highest volume data models by multiple orders of magnitude. They have become ubiquitous in diverse, medium-sized industrial enterprises and it has turned them into some of the largest customers of cloud providers due to the data intensity. Organizations routinely spend $100M/year to deal with this data, and the workloads are literally growing exponentially. Almost no one provides tooling and platforms that address it. (This is not idle speculation, I’ve run just about every platform you can name through lab tests in anger. They are uniformly inadequate for these data models, everyone relies on bespoke platforms designed by specialists if they can afford the tariff.)

If you add real-time sensor data sources to the mix, the rest of the architecture model kind of falls apart. Requirements upstream have cascading effects on architecture downstream. The deficiencies are both technical and economic.

First, you need a single ordinary server (like EC2) to be able to ingest, transform, and store about 10M events per second continuously, while making that data fully online for basic queries. You can’t afford the latency overhead and systems cost of these being separate systems. You need this efficiency because the raw source may be 1B events per second; even at that rate, you’ll need a fantastic cluster architecture. Most of the open source platforms tap out at 100k events per second per server for these kinds of mixed workloads and no one can afford to run 20k+ servers because the software architecture is throughput limited (never mind the cluster management aspects at that scale).
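
The arithmetic behind that, using the figures in this paragraph:

  # Back-of-envelope server counts at the stated ingest rates.
  raw_rate = 1_000_000_000         # events/sec at the raw source
  oss_per_server = 100_000         # typical open-source mixed-workload ceiling
  target_per_server = 10_000_000   # the single-server efficiency argued for here

  print(raw_rate // oss_per_server)     # 10000 servers, before replication or headroom
  print(raw_rate // target_per_server)  # 100 servers - a tractable cluster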

Second, storage cost and data motion are the primary culprits that make these data models uneconomical. Open source tends to be profligate in these dimensions, and when you routinely operate on endless petabytes of data, it makes the entire enterprise problematic. To be fair, this is not to blame open source platforms per se, they were never designed for workloads where storage and latency costs were critical for viability. It can be done, but it was never a priority and you would design the software very differently if it was.

I will make a prediction. When software that can address sensor data models becomes a platform instead of bespoke, it will eat the lunch of a lot of adjacent data platforms that aren’t targeted at sensor data for a simple reason: the extreme operational efficiency of data infrastructure required to handle sensor data models applies just as much to any other data model, there simply hasn’t been an existential economic incentive to build it for those other data models. I've seen this happen several times; someone pays for bespoke sensor data infrastructure and realizes they can adapt it to run their large-scale web analytics (or whatever) many times faster and at a fraction of the infrastructure cost, even though it wasn't designed for it. And it works.

