So if Kubernetes makes management simpler and more robust for teams of 500+, but is overly complex for teams of 5, what solution would people recommend for teams of 5?
AFAIK the bulbs are compatible with v1 and v2 hubs, so the bulbs do not need to be replaced. The v1 hubs will continue to work across a local area network. Philips offered a heavily discounted upgrade for v1 hub users to get a new v2 hub.
However, I still think they should support the v1 hubs for the lifespan of a typical bulb (15 years), at minimum.
Yeah, as long as it works locally and they don't pull the old app... still, it's shitty that they're trying to push people to buy new stuff they don't need. It's just creating waste to increase sales. Growth without responsibility really needs to die. Consumerism with the goal of more consumerism is a cancer.
While it's tempting to blame this on Philips trying to increase sales, I think it's just as plausible that they simply don't want to spend resources maintaining the infrastructure required for the v1 bridges anymore. The v1 was very early to market and doesn't even have the horsepower to support HomeKit, etc.
This explanation is also more likely if you consider that Philips offered heavily discounted v2 bridges to v1 bridge owners. It's probably cheaper for Philips to sell v2 upgrades to customers at or below cost than to keep maintaining support for the v1 bridge.
Having said that, it sucks for v1 owners that they have a product that's going to lose features unless they upgrade. I do have a few Hue bulbs (with a v2 bridge), but I have everything set up through openHAB, with remote access going through a server I control. That way I don't have to rely on anyone but my VPS provider to keep it all functional. Unfortunately this route is beyond most people's technical know-how, and I certainly sympathize with people who could do this but would rather use an off-the-shelf product that just works.
The v1 / v2 split was arguably Apple's fault. v1 launched before HomeKit; then HomeKit required special hardware authentication; then v2 launched to support this; then Apple walked back the special-hardware requirement and allowed it to be done in software, but Philips had already made breaking changes for v2.
Google keeps Chrome around because the alternative is just too risky for their core business. If (for example) everyone used IE then Microsoft would be able to shut out Google's ads.
It's not just ads. It's also Microsoft's ecosystem around Windows and Office: Outlook and SharePoint, Xbox, and third-party automation tools like BluePrism, UiPath, and AutomationAnywhere that are built around IE and are getting a lot of traction in corporations at the moment. And probably a lot more tools I don't know about.
That's what I thought too (i.e. that Google is critically dependent on web browsers that don't somehow discriminate against Google, and is paying for two such browsers). But so many people say that Chrome and Firefox are competitors that I'm curious to hear arguments for that.
Owning a browser is a business in its own right, since every search engine wants to be the default. I think Google forks over several billion a year to be the default on Safari, and its search deal is probably Firefox's primary source of revenue.
I don’t know if Microsoft has any motivation for Edge other than being the front end for Bing.
That's where TypeScript shines: being able to move fluidly between the no-static-checks world of JavaScript and the strong-ish type safety of TypeScript without too much work.
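For illustration, here's a minimal sketch of what that gradual migration can look like (the `User` shape and function names are made up for the example):

```typescript
// Plain-JavaScript style: with TypeScript's default (non-strict) settings,
// `user` is an implicit `any`, so this compiles with no static checks at all.
function greetLoose(user) {
  return "Hello, " + user.name;
}

// The same function after adding one interface: callers are now checked,
// and nothing else in the codebase has to change yet.
interface User {
  name: string;
}

function greetTyped(user: User): string {
  return "Hello, " + user.name;
}

// Escape hatch for code you haven't migrated: an explicit `any` keeps it
// compiling today and can be tightened later.
const legacyPayload: any = JSON.parse('{"name":"Ada"}');
console.log(greetTyped(legacyPayload));
```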
The whole point of strong encryption is to prevent adversaries (including forensic scientists) from extracting any information without possession of the key.
If the key involves a password that you, a human, have memorized in your squishy pink organ, it's privileged under the Fifth Amendment. (This hasn't been tested in court yet, of course. There's no precedent to fall back on.)
> If the differences are only minor then what does it enable that the more popular alternative does not?
Fair point. I think there's a tradeoff in there somewhere. The fact that you need to do things like wrap JavaScript in order to use it within Mint sets off all kinds of alarm bells for me. I still like the concept, but justifying an entirely new language (to myself, to my higher ups, to the company at large) is a high bar.
Yeah, Vue.js was so similar to Angular 1.0 that at first I thought, why the heck would anyone use this clone framework? But after using it just once I realized it had taken everything good from Angular 1, dropped everything that sucked (DI, decorators, etc.), and created a super awesome version of it.
So yeah it isn't bad if it looks similar to something lots of people already use.
> it doesn't even solve the problem because dependencies are just downloaded from the package manager.
The advantage of Docker is that you can verify the container works locally as part of the build process rather than finding out it is broken due to some missing dep after a deployment. If you can verify that the image works then the mechanism for fetching the deps can be as scrappy as you like. Docker moves the dependency challenge from deployment-time to build-time.
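As a rough sketch of what "verify at build time" can look like in a CI step (the image tag and smoke command below are hypothetical, not anything specific to the comment above):

```typescript
// Build the image and smoke-test it before anything ships. If a dependency
// is missing from the image, this fails here rather than after a deployment.
import { execSync } from "child_process";

const image = "myapp:candidate"; // hypothetical tag

// Bake all dependencies into the image at build time.
execSync(`docker build -t ${image} .`, { stdio: "inherit" });

// Run a throwaway container and exercise the app; any missing dep
// (native library, npm package, config file) surfaces right here.
execSync(`docker run --rm ${image} npm test`, { stdio: "inherit" });
```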
Does container mean something different to y’all than it does to me?
I ask because I read your comment as saying “the advantage of Docker is that it uses (explanation of what containers are)” and the parent comment as saying “all I want from Docker is (explanation of what containers are)” and I am confused why (a) y’all are not just saying “containers” but rather “the part of docker that packages up my network of scripts so I can think about it like a statically linked binary” and (b) why you think this is a competitive advantage over other things you might have instead recommended here (Buildah, Makisu, BuildKit, img, Bazel, FTL, Ansible Container, Metaparticle... I am sure there are at least a dozen) to satisfy the parent comment’s needs.
Is there really any container ecosystem which has write-an-image-but-you-can’t-run-it-locally semantics? How do you finally run that image?
Docker is too general, too much of a Swiss Army knife for this particular problem. The problem I'm talking about is where a C++ program has all of its dependencies vendored into the source tree. When you run Make, everything, including the dependencies, builds at the same time. All you need is a chroot, namespaces, cgroups, btrfs, squashfs--plain old Linux APIs--to make sure the compiler has a consistent view of the system. Assuming the compiler and filesystem are well behaved (e.g., don't insert timestamps), you should be able to take a consistent sha256sum of the build. And maybe even ZIP it up like a JAR and pass around a lightweight, source-only file that can compile and run (without a network connection) on other computers with the same kernel version.
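As a toy illustration of the "consistent sha256sum" part (just a sketch, assuming the toolchain emits no timestamps or absolute paths; the default `build` directory name is made up):

```typescript
// Hash a build output tree deterministically: walk files in sorted order and
// feed each relative path plus its bytes into one SHA-256. Two builds produce
// the same digest only if the layout and every output byte are identical.
import { createHash } from "crypto";
import { readdirSync, readFileSync, statSync } from "fs";
import { join, relative } from "path";

function walk(dir: string): string[] {
  // Sorted, recursive listing so the hash doesn't depend on readdir order.
  return readdirSync(dir)
    .sort()
    .flatMap((name) => {
      const full = join(dir, name);
      return statSync(full).isDirectory() ? walk(full) : [full];
    });
}

function treeHash(root: string): string {
  const h = createHash("sha256");
  for (const file of walk(root)) {
    h.update(relative(root, file));
    h.update(readFileSync(file));
  }
  return h.digest("hex");
}

console.log(treeHash(process.argv[2] ?? "build"));
```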
Again, Bazel is basically this already. But it would be nice to have something like OP's tool to integrate into other build systems.
I could just make a Dockerfile and say that's my build system. But then I'm stuck with Docker. The only way to run my program would be through Docker. Docker doesn't have a monopoly on the idea of a fully-realized chroot.
For some scenarios, most (all?) of them have write-an-image-but-you-can’t-run-it-locally semantics.
My build server is x64, but the target output is ARM, so I can't exactly just run that locally. Perhaps somebody has created a container runtime that will detect this, automatically spin up a QEMU container running an ARM host image, and forward my container run request (and image) to that emulated system, but I haven't heard of that feature. (Not that I've actually looked for it.)
At my current company we deploy almost all code as Docker images (with the exception of Lambda functions). Having talked to multiple developers, though, no one uses Docker for local development, except maybe to spin up another service that the app interacts with, and even that isn't preferred. Mainly because unless you're running Linux, Docker is quite expensive on resources since it has to run under a VM.