I just wish the Gemini app would stop inserting an auto-playing YouTube video into nearly every response when I'm on a mobile connection. There appears to be no way to stop it.
RouterOS 7 with the wifiwave2 package supposedly improves on this by (finally) supporting 802.11r/k/v for roaming between APs.
I don't have any MikroTik hardware new enough to support it, so I haven't tried it myself yet, and the documentation is (as usual) pretty lacking, but like you I want to believe.
Most of these are what I'd call "cloud in name only" providers - everyone uses the term, but you would have a hard time moving a cloud workload that makes use of the higher-layer abstractions to them.
There are very few workloads that require more than what you can accomplish with a handful of VMs. Using tools like Terraform makes it a lot easier to abstract away the specialised services.
I work with a couple of workloads that can’t even be completely deployed in cloud environments. Those aren’t common.
The vast majority of companies can get along with a Google office license and a Wix website.
Not everyone works at a company with hundreds of thousands of employees and hundreds of millions of users.
I agree RDS, Aurora, BigQuery, S3, load balancers, declarative security policies, artifact repositories, managed auto-scaling clusters and so on are very convenient, but they aren’t a requirement.
You “worked with a couple of workloads” and that’s what makes you an expert on infrastructure and architecture at scale?
Your LinkedIn profile is linked in your HN profile. I see you have worked at some large, well-known companies. How can you possibly not have been exposed to some large deployments?
And none of those is “the simple case” I alluded to. The vast majority of businesses need, perhaps, email, file sharing, instant messaging and, perhaps, a website. They won’t train their own ML models, nor have parallel sysplexes of mainframes spread across multiple datacenters.
So, in the grand scheme of things, how much revenue do you think all of those small businesses combined make compared to the combined revenue and compute needs of just the large companies you have worked for?
It surprises me that you have blinders on to an industry where you personally have worked for companies with implementations large enough that AWS has felt the need to brag about them.
With 20-30 people on staff, if I were responsible for that architecture today, I would need:
A MySQL instance with read replicas (we had that back then onsite)
SQL Server for some legacy projects - we had those too.
AWS Transfer Family for FTP transfers and some automation around that.
SQS and probably Lambda. Back then we used MSMQ, later MQSeries, and home-grown application servers that took care of asynchronous message processing (a rough sketch of that pattern follows this list).
Web servers - a couple of EC2 instances behind a load balancer, and these days, given how the internet is, probably WAF as well.
We would have needed something to orchestrate our ETL jobs. Back then we ran them on 15 physical machines; today we would probably use something like AWS Batch.
And of course S3.
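To make the SQS/Lambda item above concrete, here is a minimal sketch of that pattern using boto3 - the queue URL, payload fields and process() helper are all made up for illustration, not anything from the actual system:

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical queue

    def enqueue(job_id: str, payload: dict) -> None:
        # The web/app tier drops a message on the queue and returns immediately.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"job_id": job_id, **payload}),
        )

    def handler(event, context):
        # Entry point for an SQS-triggered Lambda: each record carries one message body.
        for record in event["Records"]:
            process(json.loads(record["body"]))

    def process(message: dict) -> None:
        # Stand-in for the worker logic the home-grown application servers used to run.
        ...

The point being that the kind of asynchronous processing we built on MSMQ/MQSeries maps fairly directly onto a queue trigger plus a function.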
You see how quickly your needs escalate once you are doing any real workloads?
The next smallest company I worked for had 50-60 people; this was between 2018 and 2020. We sold access to aggregated, publicly available health care provider data as well as some other health care related data. Our microservices were used by large health care companies as the backend to their websites and mobile apps, and one new customer could increase the load on the services they subscribed to by 20%.
Here we also needed multiple MySQL databases, CloudFront, WAF, Cognito for authentication, Elasticsearch, Redshift for large analytical loads, EC2 for some legacy software, S3, ECS for the microservices, and Lambda/SQS and Step Functions for some ETL jobs that scaled from zero to hundreds of thousands of transactions, etc.
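For the "scaled from zero to hundreds of thousands of transactions" part, target-tracking auto-scaling on an ECS service is roughly what does the work - a hedged sketch with boto3, where the cluster/service names and the numbers are placeholders rather than our real settings:

    import boto3

    aas = boto3.client("application-autoscaling")
    resource_id = "service/prod-cluster/provider-api"  # hypothetical cluster/service

    # Allow the service to run anywhere between 2 and 200 tasks.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=200,
    )

    # Add tasks when average CPU climbs past ~60%, drop them again as load falls.
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,
            "ScaleInCooldown": 120,
        },
    )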
You might not remember, but around March 2020 health care providers' websites were being hit hard because of a little virus that was going around; the scalability we had put in place came in handy then.
Do you propose that we should have hosted all of that on some VMs?
You need to find the right balance between an expert IT team and cheaper employees. Using pre-baked cloud services is always easier and requires less management, but the operational expenditure is higher while the staffing cost might be lower. Where the company is based will also affect the professionals you'll have access to - there are places where you can hire highly skilled people, and places where you'll struggle to even fully staff your IT operation.
> Do you propose that we should have hosted all of that on some VMs?
What do you think Amazon uses to run the services you pay for? Unicorns?
I think the point was not that we won't still use a lot of hardware, it's that it won't necessarily always be Nvidia. Nvidia got lucky when both crypto and AI arrived because it had the best available ready-made thing to do the job, but it's not like it's the best possible thing. Crypto eventually got its ASICs that made GPUs uncompetitive after all.
I think it's less about that and more about real risks - Nvidia legitimately has the earnings right now. The question is how sustainable that is when most of it comes from 5 or so customers that are both motivated and capable of taking those 90% margins back for themselves.
They don't have anything close to the earnings to justify the price they have reached.
They are getting a lot of money, but their stock price is in a completely different universe. Not even that $500G deal people announced, if spent exclusively on their products, could justify their current price. (In fact, just the change in their valuation is already larger than that deal.)
Regarding their earnings at the moment, I know it doesn't mean everything, but a ~50 P/E is still fairly high, although not insane. I think Cisco's was over 200 during the dotcom bubble. I think your question about the 5 major customers is really interesting, and we will continue to see those companies work on custom silicon until they can maybe bridge the gap from just running inference to training as well.
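Back-of-the-envelope on that P/E point, with deliberately round, made-up numbers rather than actual Nvidia figures:

    # Illustrative only - both inputs are assumptions, not real quotes.
    market_cap = 4.0e12          # assume a $4T market cap
    trailing_earnings = 8.0e10   # assume $80B of trailing twelve-month net income

    print(market_cap / trailing_earnings)  # P/E of 50: the price embeds ~50 years of current profit

    # To fall back to a more ordinary ~25 P/E at the same price,
    # earnings would have to roughly double.
    print(market_cap / 25 / 1e9)           # ~160 ($B of annual earnings needed)

So even holding the price flat, the market is pricing in a large and sustained jump in earnings, which is exactly where the 5-customer concentration question bites.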
The M1 MacBook did feel like a big jump for me because it had great performance and no fan, which was a huge thing compared to everything else at the time.
Agreed. This reminded me that I was talking about Macs and had forgotten about PCs. I have a 2018 ThinkPad (which uses Kaby Lake, a 2016 CPU) at work for specific usage and it is an absolute piece of crap. Not only is the fan constantly spinning, but all the extra security required by policy makes it absurdly slow. It is also another reminder that hardware can't out-innovate software performance issues.
You would be the best one to evaluate whether this applies in your case, but in many cases where my users say "it's not possible" I end up finding a gap that's more related to usability than to technical limitations. I often still find there's something worth learning from this kind of feedback, even when it's "wrong".