
Obviously using a throwaway here. Our board members got it into their heads that successful companies must have an AI "play", so they instructed the CEO to invest about 10% of our development budget in AI.

We are doing absolutely inane projects that have no hope of succeeding.

We serve a niche industry where certified professionals have to do certain tasks personally, instead of being able to delegate to secretaries. Somehow our CEO has been convinced that AI can be trained to do these tasks, at a reliability level not achievable by other humans.

Team motivation is in a weird space: everyone is relaxed because there is no pressure to succeed - we all know the project will fail unless someone develops well-performing, human-level AGI before Q4/2020. Lots of long lunches and checking out early in the afternoon.

At the same time, everyone is worried how terrible the fallout is going to be once the project reaches its inevitable conclusion.

Interesting times, but at least we can now tell investors we are a keen company with an AI play up our sleeve!


Many of us have been there. I think what struck me was how low the threshold really is for what's accepted as "AI" in most industries.

Do this:

1. Make something that has some "modern" AI such as deep learning in it, but that doesn't do anything much useful (because that's almost impossible in most cases). E.g. predict something trivial based on a time series, and report it. We all know how hard this is, so just take some hello world ML project and change some inputs (see the sketch after this list).

2. Find some easy project that actually makes business sense, and that isn't yet done, and can be done with simple logic or some old school "traditional" ML or constraint solving e.g. a scheduling/optimization problem.

3. Make sure these efforts are "merged" so that the fancy AI tech AND the measurable good outcome is the same project. The "AI" moonshot of the company.

4. You can now say you are doing "AI", and you have measurable success.
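
A minimal sketch of step 1, assuming scikit-learn is available; the sine-wave "data" and the window size are stand-ins, not anything real:

    # "AI" for step 1: predict a trivial time series with a small neural
    # network over a sliding window of past values. Hello-world ML.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    series = np.sin(np.linspace(0, 20, 200))  # stand-in for real data
    window = 5

    # Each row of X holds `window` past values; y is the value after them.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    print("next value:", model.predict(series[-window:].reshape(1, -1))[0])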


It cracks me up that on the new LG TVs (I have a C9), it has “AI preview”. Which means: when I hover the Netflix icon in the dashboard it presents a list of the most recently played shows.

I think the “Open Recent” menu debuted in the 1980s? But now it is AI!


I thought AI in C9 TVs was about improving upscaling quality?


That too is “AI”. But there is a setting to turn off “AI previews”, that is just pulling some content from services into the dashboard.


Yes, that sounds unnecessarily confusing.


Maybe the confusion is deliberate.


To the average person, AI makes everything sound more modern, "smarter". So it gets sprinkled into everything. Things that have been using the same logical algorithm since the dawn of computing are now called AI. The worst is hearing people in the field call every piece of automation and single-purpose code that does one thing the same way every time "AI".

Everyone does it just to attach their old or boring tech to the fancy new trend and have some "cool" rub off on them.


A "friend" of mine is doing exactly that in its disruptive startup. Their AI part is a joke and handle a part that nobody really care about, but hey they are doing it like every other cool kids.


genuine question here: why is using deep learning to do something useful impossible?

I can think of a bunch of potential use cases for GPT-3 alone.

or do you mean it's impossible to build useful models from scratch because all the "easy" problems are solved?

this also seems like a limited mindset.

context: I'm an ML noob


It's not impossible. But there is a very real gap between "this is cool in a research paper" and "this is deep learning that works in real life".

It's a large gap, covering everything from application topics, to data quality, to the need to actually run the damn thing in a production setting with scalability, availability, error handling, etc.

Production applications of deep learning aren't particularly glamorous, they're not the "next big thing" right now. Rather they're improvements of existing applications.

Google's on-device live captioning works really well, but it's still somewhat niche and requires special / higher-end SoCs to run.


makes sense. thanks.

models I have used seem to have their usefulness greatly outweighed by performance demands.

scaling and economics are another question entirely.

Perhaps we were spoiled by democratized web tech, and it's wishful thinking to want everything to be like that.


It will get there eventually. A lot of it is hardware / deployment constrained.

Search by image and object detection and computer vision in general is cool and potentially useful, but right now, it's cumbersome as fuck to pull out your phone, find the Lens application, take a picture etc. Needs to be baked into a wearable / neuralink type setup.

But self-driving applications of CV work because the cameras are always deployed and running. The hardware is expensive, though.


> why is using deep learning to do something useful impossible?

The premise is completely false. Here are some examples. First, Google Translate: it's infinitely useful. It's not perfect, but it's good enough for me when I want to quickly check whether my translation into my second language is okay, or when I'm not sure I got the meaning of some translation right.

Second example: my home security cameras now use object classification and only alert me when there's movement in "high risk zones" and it's a human. So many stupid false positives from shadows and stray cats, completely gone. I'm pretty sure I could fine-tune it with examples of myself and my family so it would ignore us when it's reasonably confident it's us, but I couldn't be bothered.
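
A minimal sketch of that alert-filtering logic; `detect` is a stub standing in for whatever pretrained object detector the camera actually runs, and the zone coordinates are made up:

    # Alert only when a detected person overlaps a high-risk zone.
    from typing import List, Tuple

    Box = Tuple[int, int, int, int]  # x1, y1, x2, y2

    HIGH_RISK_ZONES: List[Box] = [(0, 200, 640, 480)]  # e.g. the driveway

    def overlaps(a: Box, b: Box) -> bool:
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def detect(frame) -> List[Tuple[str, Box]]:
        # stub: a real system runs the frame through an object detector
        return [("person", (100, 250, 180, 470)), ("cat", (10, 10, 60, 50))]

    def should_alert(frame) -> bool:
        return any(label == "person" and
                   any(overlaps(box, zone) for zone in HIGH_RISK_ZONES)
                   for label, box in detect(frame))

    print(should_alert(frame=None))  # True: a person is in a risk zone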


Yes. Google Translate is obviously a useful product, and obviously Google will have thousands of areas where they can apply AI (enough that they'd even write frameworks for it). But 99(.99)% of us work in mundane jobs doing boring CRUD apps. My post was meant in this context.

You work at a company making an intranet product for the paper tissue industry. Your manager wants v3.0 to have some AI in it. No one remembers the fiasco when "Cloud" was added in v2.0.


so what you're saying is...

you either swallow the CRUD app red pill, or you live long enough to become the "AI a la carte" manager


Google Translate was good before the so-called "AI revolution".


Google Translate today is absolutely spectacular.

A fun game used to be "translate this phrase English to French, then translate it back again and laugh at how meaningless it's made by the round-trip". That doesn't work any more.


As far as I remember, it was close to unusable. Even the most basic translation to languages besides German, Spanish and French was a nightmare. That was at least true during my second year at university (~2010-2011).


From what I've read, the early translations were really rough (but on par with competing free products), until they fired all their linguists and brought in AI guys.


Not exactly. Google Translate took a statistical approach from day one. That was the reason it was better than the rules engines that dominated before it. The Translate guys really pioneered statistical machine translation.

However it was all done with hand-crafted statistical functions and code. The new stuff using deep learning is the first time it'd have been referred to as "AI".


Most of the "useful" tasks AI has helped us with are problems that are domain specific and lend themselves well to machine learning (for example object detection in images and machine translation as another poster mentioned).

However, what I think the poster meant was that just tacking AI onto a problem for the sake of it, when the problem most likely doesn't have an applicable use for it (which is currently the case for most existing IT projects or apps), is doomed to failure.


> but at least we can now tell investors we are a keen company with an AI play up our sleeve

Obviously I don't know the details of your company's situation, but I've experienced myself over the years that sometimes it can be less overall cost and trouble to pay this kind of lip service, rather than fighting an overwhelmingly fashionable trend until it runs its natural course to extinction.

> Team motivation is in a weird space: everyone is relaxed because there is no pressure to succeed

If I was leading that team, I would try to find (or insert) nuggets of goodness (smaller/non-obvious objectives and tasks) in that project which allow the team members to get some personal longer term benefit out of it.

And ideally find nuggets which have a good chance of becoming useful to the company later - beyond the scope of the official project.

And hopefully the team would become excited about doing something meaningful beyond the official project objective.

p.s. I'm speaking from experience in this kind of a situation.


Well, that sounds weird but intriguing. Do you mind telling the rest of the story?


The story would probably make little sense to most HN readers, since it's very old and what was very non-obvious back then, became very obvious 5 years later.

Around 1994/95 client/server was unassailable in board rooms (much like AI is now). And after initially speaking up against it, I ended up shutting up about my criticism of that architectural approach, since it would have just gotten me fired.

So I ended up sneaking some early Internet technologies into a big client/server project I managed. Doing that ended up benefiting those developers in future jobs, and the company, because they didn't have to re-write that part so quickly. :-)


This may be a stupid question, but what does "client/server" mean in this context?

I would interpret it to mean any networked or Internet-based application that has both client- and server-side logic, which obviously isn't anything special in 2020, but it sounds like you're referring to something else that would now be considered outdated?


> This may be a stupid question, but what does "client/server" mean in this context?

Not a stupid question at all, since terminology has been shifting over the decades.

Back in the early 1990s, the original client/server generally meant all application code (logic and UI) on the client computer, and all the data on the server. So a very fat client.

Database stored procedures (i.e. some logic on the server) only arrived on the scene later, because the original architecture of all code on the client caused performance nightmares. But don’t get me started on the trials and tribulations of early stored procedures. They often caused more project turmoil rather than smoothing things out — partly because the languages weren’t good yet, partly because they amplified developer vs. DBA conflicts.


Without any server-side logic, I suppose user permissions must have been handled entirely by the database, with clients connecting as different database users?

Did row-level security already exist in some databases at that time? Otherwise this must have been quite limiting.


Row level security was usually handled through database views.
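
A minimal sketch of the idea, using Python's sqlite3 for convenience. Classic systems filtered on the connected database user (e.g. Oracle's USER builtin); sqlite has no users, so a session table stands in here, and clients would only be granted SELECT on the view, never the base table:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE orders (id INTEGER, owner TEXT, amount REAL);
        INSERT INTO orders VALUES (1, 'alice', 10.0), (2, 'bob', 99.0);
        CREATE TABLE session (username TEXT);
        INSERT INTO session VALUES ('alice');
        CREATE VIEW my_orders AS
            SELECT o.id, o.amount FROM orders o
            JOIN session s ON o.owner = s.username;
    """)
    print(db.execute("SELECT * FROM my_orders").fetchall())  # [(1, 10.0)]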


There were alternatives to stored procedures for centralization. I remember RPC and CORBA being hot topics around that time. The tiered architecture was a thing before the Web.


I might be totally off the mark here, but in context I assumed the contrast was with doing all work on a thick "client". A client-server architecture in principle lets you run thinner clients and offload the heavy work, but in practice the network would've been expensive.

A modern analogue might be something like premature distributed computing.


If my memory serves me right, distributed computing came with the advent of smaller computers (back then often called minicomputers, like the DEC VAX or IBM AS/400, but those were more like smaller mainframes, still using dumb terminals as frontends).

But you’re right in saying that the original client/server had all of the UI and business logic on a very fat client - and the server only served the data over very low-bandwidth networks.


Maybe rather than:

A modern analogue might be something like premature distributed computing

A modern analogue might be n, where n is more familiar to the composer of this, or, the source of information determining the production of this text is most familiar with n, where n does not include the word premature


I think they mean a dedicated thick client using the OS’s UI framework, rather than a web browser.


Not only the UI, but all of the business logic - so a very thick client indeed.


I've always called these direct database applications. The implication is that the client connects directly to the database, without the benefit of a middle tier.


Something like Oracle Forms, with a custom client application talking to a database server.


Yeah - kind of, but Oracle Forms originally didn’t even have a graphical client - that’s probably why PowerBuilder and Visual Basic dominated the early 90s. Also, Oracle was still fighting it out with Sybase in the minicomputer DBMS space.


Oracle Forms 2.3 & 3 were graphical terminal apps; starting with Forms 4.0 it also included Windows (and X) clients.


Is this related to the "Network Computer" idea that Sun was pushing around that time in the 90s?

https://en.wikipedia.org/wiki/Network_Computer


Doesn't this speak more to the board's astounding foresight than inane projects?


Somehow PC Mag seems like the best reference here: https://www.pcmag.com/encyclopedia/term/clientserver

It’s hard to see the hype up close sometimes. The closest analogy I can think of (and it’s an older one) might be how in the mid-2000s everyone needed to make a “Web 2.0 mash-up”, or how for a while everyone needed “social” (remember Ping?)... if you were stuck with a stack based on jQuery and global variables, good luck transitioning to React without Facebook’s scale. Sometimes all you can do is try to stay organized, see a project through, and hope it can evolve into something greater after the fad wears off...


The PC Mag article seems mostly pretty accurate - however, I’d argue that the web isn’t classic client/server, since the server sends the code to the client just in time. Classic client/server had a totally different code deployment mechanism. It was a full app installation every time - often via sneaker network, since few corporate networks had enough bandwidth to deploy new apps over their network. And installation software infrastructure was almost non-existent. Version control systems? Seldom. Netscape was one of the very first apps that was entirely network delivered at really large scale.

If I had to describe the web to a time traveller from the 1980s, I might say the modern web browser is something like a super smart and capable 3270 terminal, while the cloud is the mainframe.


I remember just before my first web project, I worked on an Oracle client/server project (an SMDS management tool), and to install the client, you installed about 4 products via 14 floppy disks, one after the other, on the client PC.

It took two days for two of us to install it on five or six client PCs.


So you can probably imagine what it was like when we had several dozen client PCs across several customer companies across several cities across the continent.


Oh yes, in '94 that was my key takeaway: you could launch a new system as a website and save a lot on deployment costs on, say, 500 seats.

I recall discussing with the engineering centre manager how in 5-10 years the web could replace fat clients on PCs.


> Version control systems? Seldom.

SCCS ftw!


> if you were stuck with a stack based on jQuery and global variables, good luck transitioning to React without Facebook’s scale.

My team is doing exactly this on my initiative and seeing some positive effects already, even though we're only like 2% of the way there. Would you mind expanding on what you think is futile about it? I wouldn't want to be the one advocating something that with hindsight will look obviously stupid.


Well, leaving aside the React vs Web Components debate, the only issue is that some folks add technologies like React but forget to remove the old ones as they go. They half-finish the job. Then they do that up to 5 times more...

My advice would be to try to refactor the jQuery app to reduce global state first. For example, use ASTs and other techniques to programmatically determine what global state is used where. Think of it not as “porting” to React, but more as “re-writing in React” — the sooner you can finish the rewrite, the better. Take it in smaller, thin, but global (across the whole app) steps, and work to break the app dependencies back down to separate components — this is especially true for CSS, which historically has been global by design for much of its life...

If shipping during a rewrite, make sure you have some very basic tests running. Try to implementation-agnostically look for and click text or otherwise use the web app in your tests, then run the tests before and after making widespread changes... Use screenshots from the tests to quickly visually scan for errors. Pay attention to errors in browser consoles also, if not using TypeScript to its fullest yet.


There's loads written on re-writes.


I may be misreading them, but GP seems to be doing the re-write in the recommended way, small chunks at a time. I think the advice cautioning against wholesale replacement re-writes is sound, but piecemeal transition is much more achievable. I think it took about 5 years to transition Wayfair from ASP to PHP one page/system/scheduled task at a time, but the result was a working system at all times that improved with the re-write. Not that I have any love for PHP, but (Classic) ASP was even worse. (I can’t speak to ASP.NET, never used it.)


ASP.NET Core is fantastic. ASP.NET is OK but not as good.

ASP.NET Core is cross-platform, has a self-hosted HTTP server called Kestrel that is insanely fast, has a really sensible composition of middleware, built-in DI, and a strong community. Hangfire is a great job runner, SignalR is available with real-time push over WebSockets, IdentityServer4 can handle all your Authn/Authz, and Entity Framework Core + LINQ is very powerful and a lot faster than the full-framework version.


Cool, very good info thank you! I’ve heard great things about .net core but never had the pleasure of using it yet. I would love to play with F#!


I think he's saying the "foresight parts" were his initiatives, not the board's, and the board's part in it all had to be scrapped sooner rather than later while his initiatives did not.


> we all know the project will fail unless someone develops well-perfoming, human-level AGI before Q4/2020

This is hilarious, and makes me wonder how many similar corporate AI initiatives are under way in the world right now.


I have had a couple of conversations where a business guy starts off with a decent idea, then says something like "the cognitive AI module will fill out your expense report after watching you do it for a couple weeks".

What the fuck? If I had the ability to code a cognitive AI module which could do that, I would not be building it for you; I would solve self-driving and license the software for billions.


Business guys usually come together and brag about their work. Quite often they start talking about things they've heard of but don't understand much.

The other one hears about it and gets excited, but also doesn't understand much. Then in a meeting he brings the idea up and convinces everyone to do it.

The next thing you know, you're pulling your hair out trying to understand what is going on.


This happens way too often. They often read a blog post about how X was implemented at company Y and mindlessly cite it for their own related-but-different-in-important-ways ideas, and it's up to the engineers to fact-check the wackiness. It's not pleasant.

This happened to me recently where a PM shared a blog about someone who had built an “all encompassing multi cloud cost calculator” for their org and had blogged about it. The PM was naturally extremely excited but I asked him to find more details about the tool and if/how that can be used. Turns out even though it was supposedly open sourced it wasn’t really available to just anyone but the author promised he would release it in a month. That was over 6 months ago, no word from the author or PM.

These blog-fueled hype trains are extremely destructive. They make engineering appear trivial. Building a useful tool might be easier now, but it's not trivial. And if a tool sounds too good to be true, it probably is.


Yes, the best salesperson is usually the one least concerned with reality.

Or as one VC put it: Never let facts get in the way of a good story.


This makes more sense when you remember that being "concerned with reality" for a salesperson mostly means "Getting the commission paid and moving to a different role before everything blows up too badly", rather than "delivering software that meets the sales deal's contractual requirements".

Salespeople are too rarely judged/punished for the disastrous messes they leave behind, and all too quickly judged/rewarded for being able to convince a customer to sign off on a big number without any care for the long-term implications of that deal...


Hm... Vested commission?


Him selling you that as an original was itself an instance of that, as it's a Mark Twain quote. Funny how that works.


I don't have the numbers, but the success rate of that kind of "best salesperson" is small.

Most of the time the development gets discontinued or changes direction.


A few years ago at a small company after a round of golf with some other shmuck, our CEO told us we needed SSO via OAuth, because that's how we can convince people we're secure. How soon could we get it developed? Spare no expense!

We had a single website with a login already written to OWASP standards, and no external API or plans for any.


What always drives me crazy about instances like this is that managers and the C-level seem much more willing to listen to some "random" guy or blog than to their own people. People they hired, people that have a much better understanding of the issue, the processes and so on.

This effect is by no means limited to tech. And engineers, regardless of type, are by no means immune to it as soon as they reach higher management positions. I have yet to figure this one out, which drives me crazy sometimes, because my gut tells me that as soon as I do, most "problems" I have regarding management would be solvable instantly.


I think a big worry for any C-levels is "what if my employees are wrong". And there is good reason to hedge against this, because insiders have clear interests in defending past mistakes.

And if you hired your people yourself, you probably know you skipped over some qualified people who were too expensive, and weeded out some overconfident people who turned out not to be all that. In the end, you wonder what those expensive smart people would have said, and you wonder if your confident-sounding employees are just overconfident incapables that you failed to weed out.

Hence, getting an outside perspective from someone you trust has a lot of attraction. However, getting that outsider person to be knowledgeable enough, and getting them the right information, is a tough task.


When I was 17 I worked [production] in a sofa factory that made huge profits. They hired every highly specialized consultant available. I asked one of the consultants and some office folk for an explanation (I was paid 3 guilders or so per hour ($1.50)). To my surprise, both the consultant and the office folk thought it was a fascinating question and explained elaborately how [to them] it was worth every penny to have written proof for every business process. Investors could point at anything and get a pile of reports explaining exactly why the chosen method was the right one.

(When I left they continued to pay me for months. It struck me just now that cheap employees probably looked great on paper.)


Which question?


Why they paid me roughly 1.5 USD per hour and the consultants several thousand.

My bad, I originally wrote "They hired every highly specialized consultant available for .... guilders each", but I only heard the price of one, and I can't remember whether it was 5,000 or 20,000 for a 2-hour chat.


They're just seeking some outside the box solutions. If you don't course correct, how do you know/show that you're driving?


At my last job, about a month or six weeks after starting, the CTO would meet with new devs and ask them if they thought we were doing anything wrong. He was clear “I have to ask you now because in a month it’s gonna seem perfectly normal to you.”

I recall telling him they were doing builds like no one I had ever seen (“Yes, we have a plan to change it.”) and asking why they used R as the main language for the ETL pipeline (“It makes it easier for data science, and we can run it in Spark.”).


In my case, the business guy was stubborn and thought he knew best, many times.

Every time, I told him it wasn't what he thought. A few times I let it go, partly to let him learn about the reality. I thought he would change his evaluation process. But no, he's still stubborn.

I remember at Microsoft, people would need to win an argument with Bill Gates to get their idea approved. Sometimes it was a really hard argument. Maybe some people got inspired by this story too much.


Plus, how else would you create all that synergy.


I know a guy that works at a company where the CEO will sell something to customers not knowing anything about the technical details, and then come back to the business and "make them do it".


I previously worked for a Dutch fintech company that operated in the same way, but from my understanding they were (probably still are) really quite effective, profits hugely increasing year-on-year.

But this Dutch company also makes use of SWAT-style dev teams that help customers on the spot with issues, making sure the product lives up to the clients’ expectations, even if this means modifying a product delivered by the core dev teams. I.e.: if some feature was promised but not yet part of the current release, the SWAT team might hack something together quickly, often on the spot in the client’s offices. Later on, such a hack might be replaced with a proper solution.


Isn’t this the norm? I complained that sales were “demoing new features” to customers before the back-end dev team had even heard of these features. I was basically told to stay in my lane. A property developer making design mockups of new highrises doesn’t run it past the bricklayers first, so why should product/sales talk to us digital bricklayers...

In one sense, great, I don’t want to bother about what a customer’s priorities are. But it turns out that only works if you TRUST the product team.


In my previous job at a consulting company, one of the sales people mentioned how this was the first place where he had to sell the project twice: once to the customer, and once to the co-workers who'd work on it.

It was a nice company. Bosses had very limited direct power over the developers and designers, and rightfully so -- it's supposed to be a team of experts, after all.


Because he is selling buildings that are too tall to be built with bricks...


I was once sold that as a partnership advantage - that a guy would sell with a .ppt and then afterwards we'd just have to build it.


I had a moment like that once where the product manager said something like "at this stage of the process the software will go off and find the documents specifically relevant to this stage" - and I pointed out we could do a search, but that it might bring back irrelevant stuff or fail to find relevant stuff - he insisted that it would automatically find just the exact documents people would need at that stage in the process.

It was an "AI complete" feature as up to that point it required someone who knew what they were doing to decide what documents were actually relevant - not "close enough".


Oh? So the moment you no longer have people who know which are relevant the AI works?


The product was used for arranging formal approval processes for things like drugs and government contracts - the definition of which documents were relevant was often quite strictly defined in a practical sense, but less so in a sense that a bit of software could make sense of.


Fire testers and QA, disable bug reporting, et voila, the most perfect software ever to have been created!


Been there so many times.

I've been kicked off a project because I couldn't stop talking about training data (which they already had) and machine learning algorithms (NLP, text classification) that we could implement right now to start automating a couple of internal processes that are currently pointlessly manual. Think moving incoming e-mail to appropriate downstream support channels.

Not enough magical AI / AGI in there.

The countless PowerPoint presentations built after my departure described processes along the lines of "the AI will detect when you're about to miss your connecting flight and book you into your favourite hotel with your favourite dinner pre-ordered".

Surprisingly, the project survived and now they collect training data and use machine learning for text classification.
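
For reference, that kind of text classification can be sketched in a few lines with scikit-learn; the training e-mails and channel names here are invented:

    # Route incoming e-mail to support channels with plain old
    # text classification (TF-IDF + a linear model). No AGI required.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "invoice is wrong, please refund",   # billing
        "I was charged twice this month",    # billing
        "app crashes when I log in",         # technical
        "error 500 on the dashboard",        # technical
    ]
    channels = ["billing", "billing", "technical", "technical"]

    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(emails, channels)
    print(router.predict(["why was my card charged again?"]))  # ['billing']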


Solve self-driving? How about market prediction: just play the market forever. Why would you risk failure when you can do nothing and make trillions?


Ahem, actual humans cannot consistently predict the market.


Exactly why we should use AI, no?

    - your ceo


Well, to be honest self driving sounds more useful to mankind ;)


If someone offered me AI that drove me to work while I slept or watched movies, or AI that filled out my expense reports, I'd ask if the AI is able to scan the receipts by itself.


I'd ask the AI to go to work for me, then I'd go hiking in the mountains.


And possibly tell the AI to chat with my relatives and help them with their endless computer problems.


Just start a software consulting business. An outsourced programmer goes for $100+ per HOUR. No need to have the initial capital you need for market prediction.


Financier Martin Armstrong claims to have exactly such a system but it doesn't predict the market, it only makes quite accurate market forecasts.


I have heard about this guy's forecasts but have been unable to find a list of predicted vs actual. Do you know of such a list?


He started with a list of 1024^w512 sure things, but 7 years of mostly good picks later now he just promotes 8 companies ... /s


Oh, it's this guy - nothing to see here https://en.m.wikipedia.org/wiki/Martin_A._Armstrong


Dude I see hilariously absurd ideas for AI implementation in both the film and podcast worlds. There is definitely potential - such as transcriptions - but people are way overselling the efficacy and trying to “disrupt” with AI in ways no industry professional is asking for.


Most people say they want "speech recognition", but actually mean they want "speech understanding". Think computer from Star Trek, not speech-to-text.


Even transcription is hard when it's used in the way manual transcriptions are used. Whenever there is sufficient value in transcription to bring the manual process close to worthwhile the cost of mistakes will also be high. But lower value applications can have much lower quality requirements, e.g. imperfect transcription could still be used to generate a high level topic log of a conversation that might be useful e.g. for backtracking after a digression.

The pinnacle of the low failure cost principle must be ad targeting, it costs nothing besides opportunity to display the wrong ad. And the success metric can even be inverted if the mistake is sufficiently surprising: I'd probably be more likely to deliberately click an ad that is entirely off my beaten path, out of curiosity, than something that aligns with my actual interests. Who wouldn't click on an ad for e.g. curling brooms? Curlers.


Isn't that a given when you're disrupting the industry itself? Of course not every idea is good.


It’s only disruptive when it actually disrupts something, I guess, is my point. I’m not so curmudgeonly as to think “this stuff will never work”; I just see so many folks selling it as if it already does.


I find that a C suite member talking about AI without a specific goal in mind (“we will use AI to automate X, which will help us do Y better”) is a huge red flag.


It also seems to be a main selling point of a lot of logistics and supply chain start-ups these days. I guess it sounds way sexier than "give us millions to build a new forwarder and our own tech, because we don't want to use off-the-shelf software and we want to be acquired by DHL one day".


<cynical thought> At least they've stopped talking about blockchain though...


Isn't that a prerequisite to begin with by now?


Arguably most C-suite execs shouldn’t even be dictating the specific techniques used to reach an objective or solve a problem.


At least once a month I get contacted by some recruiter contracting with a company like this. They all seem to have a vague screen reader -> OCR -> AI fill-in-the-blanks, form-submission charter. They claim to be profitable and to have 10+ customers, but their examples of how their product is used sound like they are automating Sally in Accounting's job, and she's 2 years away from retiring, so if they could get the software to interface with the 1990s software she's been using for the last 10 years, they won't have to retrain her replacement when she finally retires. I'm sure Boomer Replacement is a market, but it does not seem like a 1000x unicorn growth market.


Once, the CEO asked me to build a human-level AGI in front of our only customer (a small company).

I asked them what they wanted to do (it was solvable with keyword search) and told the CEO that I’d be happy to get it into the next version. It took an afternoon or so to implement.

Gotta love the PHB effect.


It's interesting that you seem to be insulting your CEO, who managed to solve a customer problem with reasonable tech requirements for you and a grandiose upsold sales pitch. Seems like a win-win-win


If it happened like the OP described, they just got lucky, but it could have gone differently. A good (but less than honest) CEO would have colluded with the developer before talking to the customer, both agreeing to upsell an easily implemented feature. However, the mark of a PHB is that he/she will ask for random features while completely misjudging their feasibility -- sometimes you'll get lucky, sometimes you'll crash and burn horribly.


My new hobby is using "collusion" instead of "collaboration"


My rule of thumb is that if it's a plot to dishonestly deceive the customer, it's probably collusion. If instead it's an honest plan made with the understanding of the customer, it's a collaboration ;)


> grandiose upsold sales pitch

Yes, liars should be insulted.


PHB Effect?


Pointy haired boss. (From Dilbert)




"PHB" means "Pointy-Haired Boss": https://en.wikipedia.org/wiki/Pointy-haired_Boss

The relevant bit:

> He is notable for his micromanagement, gross incompetence, obliviousness to his surroundings, and unhelpful buzzword usage; yet somehow retains power in the workplace.


Certainly not. But the results they list are interesting - my gut tells me they wouldn’t be reproducible.

Can’t seem to find the full text though, pretty messed up that my tax dollars go to finance academia that I can’t even consume.


To kind of go against the flow here: _most_ of the projects I worked on seemed to be a waste of time when I worked on them, but ended up making a lot of money many years later, years after I'd moved on. I like the green-field stuff, and green-field stuff looks like a waste of time most of the time. The first three versions suck ass and don't do much, but without having them you don't get to something that doesn't suck. Both the first and the second project I worked on at MS seemed like a horrible waste of time in the early aughts, but they have likely made tens of billions of dollars by now.

Then there are projects where I did something ahead of its time (and was ultimately unsuccessful) yet other people did the exact same thing later and made a ton of money.

Finally I ran engineering at a startup which spent ~3 years largely wasting time and money, only to be acquired for hundreds of millions of dollars for reasons that are still unclear to me.

You never really know. Success is a random, nonlinear process and your effort is but one input to it. I've given up trying to predict if something will be successful if the work that's being done at the front lines is solid from the engineering standpoint. Business is mostly about being very persistent and/or being in the right place at the right time.

For all you know your company might be one PhD hire away from absolutely blowing the doors off competition. Or it may get to that point years later because it started thinking about it now. Or having at least some sort of an AI strategy (even if not very successful) might double the acquisition price in some upcoming merger. All of those things would make your work worthwhile in retrospect even if the project fails. You never know such things.

These uncertain situations are usually a blessing in disguise: because management has no clue, you can pretend that you have a clue, and steer things in a direction that's useful to you. Put some cool things on your resume, if nothing else. Move on, see if you have better luck elsewhere.


This is a great comment. If you're comfortable sharing any more information, I'd love to know just a bit more detail. What were the technologies or strategies that seemed useless at the time, but eventually succeeded?


In the first project I worked on at MS we were entering a new, established market with multiple strong competitors, and because Microsoft at the time did not have the right tech stack to do web services, the initial tech choices were dubious at best. The first version looked great, but under the hood it sucked like you wouldn't believe. Then sales somehow sold it to some pretty large/important customers, and the team had to spend the next 2 years fixing things "under the hood". Now it's one of the leading products in its market.

My second project there was about enabling the desktop version of Office to be rented and backed by online services. At the time the idea was novel, and nobody believed it'd succeed, _especially_ because Ballmer was pushing it pretty hard. But succeed it did, wildly.

I know multiple such examples anecdotally as well, from friends. The initial versions of Azure seemed like a waste of time also (VMs topped out at like 32GB, everything sucked mightily throughout the entire stack, nobody thought it had a fighting chance). Today it's the second largest cloud, and growing rapidly. Still kinda sucks though. :-)

The initial version of the Bing crawler was pretty horrible also (the index team had more time, so their code looked squeaky clean in comparison), and it spent years trailing behind Google until they figured out how to close the gap (with the "hiybbprqag" thing - the press got it wrong, they weren't "stealing" Google results per se; they augmented training data with clicks that came from Google search pages IIRC). Today Bing powers DuckDuckGo among other things. It's still not a wild success, but it's most definitely not a waste of time either.


Thank you - it's nice to have a considered view on these things


Ha. CTO at my last place did something similar. He hired an AI/ML team. I asked him what projects he had in mind and he said once the team was on board they'd find things to do. He also said we needed a voice app because he read an article about the size of the market. I quit before seeing how any of that worked out.


Previous employer did something similar. Hired a crazy expensive data scientist, spent truckloads of money on a data swamp, er, lake, and then didn’t really have any goals for what to do with all that capacity.

The new hire got lucky, as there was some low hanging fruit (optimizations we all knew about). The obvious solution was implemented and then he ex post facto claimed 100% credit for the revenue enhancement. Since it’s like 10x his salary every year he’s pretty much on autopilot.


So you all knew about optimizations 10x a data scientist salary, but didn't act on them?

Actually good that the company hired him then.


> So you all knew about optimizations 10x a data scientist salary, but didn't act on them?

Sadly, most engineers aren't allowed to pursue that kind of stuff. Normal dev jobs can be stifling.


If you walk into your CEO's office today and say "I can increase revenue from our site by 1 million dollars", you will get a hearing. If three devs on that site do it, with a PoC, they will get the chance to do it 95% of the time.

But the CEO who hires a data scientist is explicitly asking that person to walk into their office and say "I can increase revenue by X" - that is their job. The website dev does not have that job.

And there are two views on this problem. One: the CEO did the right thing by hiring someone to explicitly optimise the site - there are improvements to be made, so create the right organisation so that those improvements flow to the company.

Or there is the other argument: that the organisation is stifling innovation from below, and that the CEO should have been actively soliciting improvements from the dev team and elsewhere.

Generally, both are right.


Not this place. Our department produced multiple designs and products that had substantial positive revenue. We even did some ML projects with our existing team and/or SaaS products. Anything that didn't come from the head office was treated as a rogue experiment.


That's why you have management: to listen, decide direction and argue about it. Seems like engineering management wasn't doing its job, so someone hired a Data Scientist outside their hierarchy to actually get things done.


Yeah, I thought this exactly. If he had any brains he'd negotiate with management a 5x bonus for solving it, tied to achieving the 10x optimisations.

I did this several times in my early career and was paid a lot and it was win/win

You have to phrase it carefully, but if you phrase it as a win/win with no risk, they’ll sign off on it


I had a similar situation, but it didn't go through. I offered to reduce the company-wide monthly costs by 5% in exchange for 40k once by replacing an outside vendor with my own self-developed clone which the company would then own. Didn't go through because our investors feared an increased dependency on me as an employee.


funny how in any other type of relationship that increased dependency would be considered a strength and not a risk


Especially with investors, the incentives might be such that you're better off doing exactly what was asked for instead of going rogue to boost the company's profits. For you, the developer, that's an unnecessary risk and there's no reward for taking it.


That kind of ignores power structures. Higher-ups seem to be more willing to listen to external people than internal ones. Plus, why should they pay you more for doing "your job"? Compare that to a super expensive external guy whose hiring needs to be justified.


What exactly are you talking about?


Yeah, my fairly small non-profit recently hired someone with a math degree to do stats and "big data analysis." We are talking about a company with a total "market" of <100,000.

In my experience, to these people, big data means "the excels look really big."


In my own experience, non-technical people will often attribute things to AI that we wouldn't, e.g. any basic automation. Really anything that's in any way intelligent, or seems that way. I had a project where people were excited about a feature that was basically just alerting when one value crossed over or under another... I wonder if your team could provide real value to customers with some smart "AI" feature that isn't actually machine learning.


I had a boss not so long ago who kept selling our customers our "Conversational AI Bot". What we gave them was a fairly dumb keyword search over a library of questions and answers; the customer would type in a question and get back the closest match - wrapped in some HTML widgets displaying cartoon robot heads.

It still astounds me that not only did none of those customers sue us for blatant misrepresentation, but most of them were delighted with what they'd bought.
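
For the curious, that kind of "bot" fits in a dozen lines; the questions and answers here are invented:

    # Score each canned question by keyword overlap with the user's
    # text; return the answer of the best match. The "Conversational AI".
    import re

    FAQ = {
        "how do i reset my password": "Click 'Forgot password' on the login page.",
        "what are your opening hours": "We're open 9-5, Monday to Friday.",
    }

    def answer(user_text: str) -> str:
        words = set(re.findall(r"[a-z]+", user_text.lower()))
        best = max(FAQ, key=lambda q: len(words & set(q.split())))
        return FAQ[best]

    print(answer("I forgot my password, how do I reset it?"))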


There are a couple of lessons in that anecdote.


Yep. Still learning them...

I had a similar cognitive dissonance when doing "R&D Grant applications" a long time back. I kept thinking "Research" meant the sort of rigorous academic work you do to earn a PhD. In the government/business world, it seems pretty much any time or money spent learning how to do anything not previously 100% required for your business counts. Including things like "selling our widgets, only via the internet" being a valid and acceptable thing to get government rebates for "R&D". I still feel dirty every time I draft one of those grant applications... :shrug:


The government only cares if you're developing something that will make more money, which presumably selling via the internet will.

It doesn't have to change the world, as long as it moves the needle.


The line where AI starts is not well-defined. If you think about it, virtually all software can be considered AI if you set expectations low enough. From the other end, one could argue that once you understand the inner workings of a piece of AI, it just becomes a fancy algorithm to you. In the end, everything is just a piece of software converting input to output.

My experience is that people tend to consider things AI whenever it feels like "magic" to them, at least in non-technical circles.


On the other hand, actual AI is kinda cut-n-paste accessible now too.

I slapped together a POC javascript tinyYolo feature detector demo last weekend (using someone else's code and pre-trained model from their github project), and joked when I showed it around that "And we can tell investors we're running AI, deep learning, and convolutional neural networks in our realtime production workflow!"...

(But as we all know, all the _proper_ magic is done by the regexes buried in that Perl module dependency hidden in a deeply nested source code directory that no-one without a grey beard is ever game to peer into...)


If you have someone doing a task manually, you could technically argue that you’re using an incredibly advanced neural network. Just not an artificial one.

I was recently joking that I was training a neural network with a water spray bottle (teaching the cat not to jump on the kitchen counter). And just while I type this, my little neural network came to walk over the keyboard...


> my little neural network

Free idea for a children's TV show right there.


One interpretation of AI is "things we didn't think computers could do". Chess used to be considered an AI problem, now it's something we just expect.


Completely agree. If it's artificial (all software) and it's in some way intelligent, or appears that way (a lot of software, or features thereof), then you could argue it's artificial intelligence, based on just those two words (ignoring the definition technical people give AI). That's why machine learning is usually the more useful term from a technical discussion viewpoint.

> From the other end

Yeah, using wongarsu's example of chess: brute-forcing chess was once a highlight of AI achievement, and now it's just a crude search algorithm.


Times change, patterns stay the same. I remember when Facebook got big, so obviously everybody had to copy them.

Imagine working on an accounting application when the CEO suddenly instructs everybody to figure out how accountants could connect with each other and maybe publish their balance sheets instead of cat pictures or whatever. Details changed, but it was that weird.


Just pivot to "assisted AI", where your solution is really bad but it adds "super-powers" to all their existing employees. It may give cover to some other businesses to fire staff under the "force-multiplier" argument your software provides ("If their software makes every employee 33% better, we can fire 33% of our staff!").


And of course you're being facetious, because if everyone is 33% better, the company can fire 25% of its staff (right?).
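
The arithmetic, spelled out:

    each worker does 1.33x the work -> headcount needed = 1/1.33 ≈ 0.75
    so a 33% productivity gain justifies cutting 25% of staff;
    cutting a full 33% would require a 50% gain (1/1.50 ≈ 0.67)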


That's hilarious - I can't help but think of this dilbert cartoon on making change:

https://dilbert.com/strip/1993-03-20

(I am not putting you down)


I would have thought of this one:

https://dilbert.com/strip/1998-03-17

My dad always disliked the strip you cite, because the change Dilbert expects is $5.25, but there's no way to provide $7.14 in a way that couldn't be trivially reduced to $2.14. If the goal is to make things easy for the cashier, he should obviously provide $2.14 and just keep the $5 he already has anyway.


I'm sure that's intended; Dilbert is smart but dumb. Or dumb but smart.


I'm pretty sure it's just a mistake because Adams wasn't careful.


Oh dear. Erroneous 'grossing up' percentages and mis-non-use of percentage points is a pet hate of mine. I didn't realise I was a cartoon!


You are right, that is a much, much better cartoon as a reply! If only it had been in my faulty memory banks...

(and I agree - the first Dilbert cartoon is super contrived)


Years ago I worked at a store with registers that didn't calculate change; the other cashiers and I would need to calculate the change mentally. Once in a while I would get people like this who threw out convoluted amounts of change. Normally it was fine, but sometimes towards the end of a long day when my brain was tired I wanted to strangle them.

The store actually had really great employees overall and I suspect this little factor played a big part in that. Part of the application process was a basic math test and it was funny to see how quickly some applicants recoiled when they found that out. It seemed to be a great filter.


You don't calculate change. You just count it out, picking up coins and bills as you go. Start at the sum to pay and count towards the amount the customer gave you. Work your way up from the least significant digit using the smallest denominations. No hard mental calculations are involved, you just need to know how to count. Basic cashier knowledge they should be teaching you on day one.
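
A minimal sketch of that procedure, assuming US denominations (in cents) and the usual coin system where counting up works:

    # Compute the difference greedily, then hand it over smallest-
    # denomination-first, counting up from the amount due like a cashier.
    DENOMS = [2000, 1000, 500, 100, 25, 10, 5, 1]  # $20 bill .. penny

    def count_up(due: int, paid: int) -> None:
        remaining, coins = paid - due, []
        for d in DENOMS:
            n, remaining = divmod(remaining, d)
            coins += [d] * n
        total = due
        for c in reversed(coins):  # smallest first, counting toward `paid`
            total += c
            print(f"...{total / 100:.2f}", end=" ")
        print()

    count_up(736, 1000)  # ...7.37 ...7.38 ...7.39 ...7.40 ...7.50 ...7.75 ...8.00 ...9.00 ...10.00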


"Basic cashier knowledge they should be teaching you on day one."

About 10ish years ago I had an idea for a product that would essentially provide online training on this for cashiers. Back then I thought "but everything is moving to cashless payments, plus more and more registers calculate change, no need for this". I wonder sometimes if this was The One That Got Away for me...


"How about you tell me how much change you are expecting" and then add to check.


Is that like putting a total on a restaurant check and leaving the tip "as an exercise for the server?"


That's actually how paying in a restaurant usually works, in Germany.


The Python effect for cashiers?


On the other hand, you could have a pretty good AI which can be made amazing just by adding a human step to it.


Oh boy, does this hit hard as an engineer who recently got hired as part of the AI/automation team at a startup, and who a few months in got told by the boss that "we need an AI team because we promised the investors. Investors are not interested in tech without AI baked in". Welp.


Replace AI with ______. Some things never change!


I had completely repressed the memory of the previous fad, until sibling comment reminded me of it https://news.ycombinator.com/item?id=23536009


my butt


I wish I knew the details of the project; getting to mess around with new technologies for a year and no strict deliverables is a great position to be in. Worst case, you got to learn some new tech and pad your resume for your next job.


I used to work at a software house which makes software for a few big companies. A few years ago our CEO thought it was time to diversify our income stream by creating some SaaS. He came up with a really then-buzzword-packed (AI, ML, IoT, microservices, Kubernetes, mobile-first) product. Quite ambitious, especially for a team of 5 people, regular web devs who had no working experience with any of those. Everyone voiced their concerns, but we were told to just use our best efforts.

Long story short: after a year or so, our boss decided to scrap our half-baked system and gave up on the idea of creating some internal product entirely.


It looks exactly like my company, but I am a few months ahead. I am now at the point where investors start to ask for results. The CEO wants to please them and has started putting pressure on our team by changing priorities almost every day (which impacts the project quality). The thing is, everyone knows that the project is doomed to fail.


I've had a very similar experience. I'm surprised at how little so many managers and investors understand AI.

They talk about it like it's some sort of panacea that will make a company automatically grow 10x. Then you ask them what they actually want you to build and... Crickets.


I'm cynical about a lot of 'trends' that have popped up over the years, because oftentimes they're just compromises or marketing to get employees to join. Notably, in recent years, a LOT of employers / recruiters will stuff "IoT" or "Blockchain" into their job descriptions, knowing full well they do nothing with it - or if they do, it's a couple hours a week they give up to keep the employees happy, interested and motivated. It's a weird theater.

Most of those jobs are generic Java work, but it's hard to find motivated / qualified people to work on just that.


I worked on a project that basically was a very simple rules engine, encoding a bunch of domain knowledge in a digestible dashboard format, which happened to use a few predictive models' output as well to drive a few of the rules. The product owner insisted that "AI" be in the name, whereas I was reluctant to put lipstick on that pig. Ultimately (as usually happens if the product owner is your boss) the "AI something something" name stuck, but the whole branding of the endeavor rubbed me the wrong way (even though the product was actually somewhat useful).


> Interesting times, but at least we can now tell investors we are a keen company with an AI play up our sleeve!

To be fair, this is absolutely not unique to you; it's literally every (software) company in America right now.


Perhaps the most significant outcome is that you guys will be peppering your resumes with AI buzzwords, which the board may end up paying for through expensive staff retention.


It could be worse... you could be at a company where using such technology is banned... because marketing and sales are scared of the blowback.


Ahh, I see. You misunderstand your true customer. It is not the purchaser. It is the investor — PE. Or, "the market".


Would it be a better (even marginally) play to use AI to detect if they are passing off the work to underlings?


It almost seems like menial work that your CEO might be using to gain a better foothold for gathering external investment. My experience with CEOs says that they aren't complete morons. I hope for your sake there is another valuable reason why he has you all doing this work.


What about human-augmented AI? That is what successful AI companies use.


Yeah exactly, the play here is to make AI assistants to human decision makers.


yeah, I've resisted these fake AI projects as much as possible, but sometimes the budget is there, so you're better off spending it.

bet there's something between a full AGI and a set of ifs on the audit log that could help the human experts reduce error rates and fit within the direction constraints


Any chance at sharing what consulting company put the AI play idea in your CEO's head?


Think about open-sourcing some of the stuff you're doing. Otherwise it's likely to stay buried in the corporate IP portfolio


Is it really worth doing so if it's as inane as they say?


Machine learning requires a lot of nuts and bolts devops type stuff that isn't directly related to the problem you're trying to solve. I think it's a shame that a lot of this stuff gets reinvented at different companies and kept under a cloak of secrecy.

Any part of the problem that is not specifically related to the company's vision should be part of a communal effort to make machine learning and AI tractable.

Companies should compete on their core mission and cooperate on all the plumbing.


IBM?

