
I wish it were real, not clickbait.


Why? Because a Trezor is more expensive than a safe you'd keep inside your wall? What about travelling with the gold? What about moving a lot of gold, say a truckload, as the example already mentioned? I travel regularly and take my Trezor along when needed... and otherwise it sits in the same wall safe where I'd keep a gold bar or anything else valuable.


I would like to ask something: can't we get estimates or analogies of the same kind, like "internet servers now consume more electricity than China", for example? I mean, it's obvious: any system that is widely available and heavily used, whether internet services or blockchain nodes processing transactions, runs on computers, which are powered by energy, so it will spend energy to work. You folks are smart and I can't even keep up with the kilowatt math, but really, I only see this electricity consumption GROWING as we power more and more systems, be they datacenters, VMs, and containers for the internet, or blockchain networks around the globe. I don't know if the point here is "bad Bitcoin, it consumes energy" or just pure analysis. Cheers, folks.


I know the feeling. I was sick of using the company's laptop (an IdeaPad) for everything I needed, mixing personal stuff in, so I went looking for a new (used) laptop on internet classified boards. I wanted something well built, portable (13" max), and in mint condition.

I went to the Docker Slack and asked what people's experience was running Docker on a Mac. There are still some improvements to be made there, and besides, I don't want to run VMs again unless it's really needed (a Windows environment in a VM, or CoreOS on QEMU as I use here for k8s testing, but not Docker itself). I know xhyve is transparent and all, but let's face it, it's just another virtual machine running Docker; going from boot2docker to xhyve was a no-go for me. So I decided to go for a Linux laptop (I've been using Linux since '98; in fact I got excited about it).

My options for a decent, portable Linux laptop were the Dell XPS or a ThinkPad X260, so I started watching YouTube reviews of both. Doing so I found that the Dell has a coil whine issue that would really drive me crazy, a laptop screaming right out of the box. That was a no-go for me. If you want to hear what I'm talking about, watch https://youtu.be/cwR4CWzDtfQ. My boss has one, and when I listened to it, yes, it is kind of noisy.

So that left the ThinkPad. On Amazon the X260 was USD 2750, far outside my spending range right now, so I checked the classifieds for a used one. Keeping the Linux-laptop-for-Docker-development goal in mind, I found a ThinkPad X240, which is a beast: 12.5", i7, 8 GB RAM, 256 GB SSD. It's silent, portable enough, and feels well built like most ThinkPads; the memory can be upgraded to 16 GB when I have the resources for it, and it has a backlit keyboard, a really nice keyboard indeed. The trackpad is sort of weird, but I'm getting used to it. The version I got has a touchscreen HD display rather than FHD, so my max resolution is 1366x768; using the GNOME Tweak Tool I changed some font and icon sizes, and it's great now, feels like HD to me. It has two video ports (VGA and Mini DisplayPort), a card reader, and even a built-in SIM card modem. It was USD 500. I don't see myself switching to another one soon. And Docker is flying over here, as is QEMU. Cheers, mates.


Well put!


Can't believe I've lived long enough to read this news!!


This is a great idea, congrats. It reminds me of a project where people who wanted to learn English could speak with elders and seniors living in a retirement home. Will follow up. Thanks, M Matos


thanks - yes I've heard of that one! Great example of the internet promoting human connection...

Hope I can find you an interesting conversation :)


Will try Toastmasters, thanks for the info.


Hi there, good day fellows

I've been hoping for this kind of thread for a long time. Thanks, everyone, for sharing your concerns, I am learning a lot from them!

IMO we can't compare containers and VMs the way many are doing now - and I did it too when I first heard about Docker and containers.

I hold almost all the VMware certs (VCP/VSP/VTSP for 3.1/4.x/5.x, plus VCDX), and I signed off on three of the major VMware P2V migrations in Brazil/Latin America (287 P2Vs in 2005, 1600 in 2008, and 2500 in 2011). I was REALLY into VMware from 2000 to 2010, so I feel confident using it and recommending it for many environments. I still manage some of them today.

When we clone or migrate a physical machine to a virtual one, or deploy a VM from scratch (or from templates, or by copying VMDKs, etc.) into production, we aim to build the environment so it lasts "forever". We want it to be flawless, because even with the deployment tooling the major virtualization players provide (Hyper-V, VMware, Xen, KVM, VirtualBox+Vagrant, etc.), nobody wants to troubleshoot a production environment; we cheer to see the VMs always up and running. I remember doing P2Vs overnight in the projects I mentioned and having to fall back to the physical servers because the cloned VM didn't behave properly. Please, VMware, clone it right, otherwise troubleshooting that legacy shit is a pain.

On the other side, containers are made to be replaced; they are impermanent. You can tune your images to your app's or environment's needs. You can have an image for each service you run, and many images running many containers, some of them providing the services you expose. You describe an image in a text file called a Dockerfile and generate the image from that file.

So imagine we have this infrastructure to "Dockerize": a website with a DB. Does your web server run Apache? Then you can write a Dockerfile that deploys an Apache instance for you. It could build FROM ubuntu:16.10 or 10.04, depending on what's better for your app, OR we could just pick up Apache's official image (httpd) directly. You can save this image as yourid/apache. And you can do the same regardless of which DB you use: install it yourself (e.g. MySQL via apt-get in a FROM ubuntu/debian image) or use the mysql image directly. You can publish the website by cloning your site's dev repo right in the Dockerfile, or keep the site in some directory on your host and use ADD or COPY to make it available in the right container dir (e.g. /var/www/). You could even use volumes to mount a host dir (or a blob, a storage account, a partition on your storage array, anything available on the host itself); this is especially interesting for DBs in my opinion. Once your Dockerfile is OK, you can name the image yourid/db.
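
As a rough sketch of that workflow (the image names, host paths and the mysql tag here are illustrative assumptions, not from any real project), the build-and-run side could look like this:

  # build the web image from the Dockerfile in the current directory
  docker build -t yourid/apache .

  # run the DB from the official mysql image, keeping its data on the host through a volume
  docker run -d --name db \
    -v /srv/mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=change-me \
    mysql:5.7

  # run the website container and publish port 80
  docker run -d --name website -p 80:80 yourid/apache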

And if you have a main website and a blog, you could use the same Apache Dockerfile, changing only the git clone line, and save the images as yourid/apache:website and yourid/apache:blog, for example.

And when you redeploy the container, the same volume data will be available in the same container dir, even if you moved from ubuntu:15.10 to ubuntu:16.10. You can take the latest improvements from the freshest image (or patch some vulnerability) and redeploy all the containers that use that image at once.
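
A minimal sketch, continuing the DB example above (the names and host path are still just assumptions): because the data lives on the host behind the volume, replacing the container doesn't touch it:

  # pull the refreshed image (or rebuild yours), then replace the container
  docker pull mysql:5.7
  docker rm -f db
  # the new container sees the same data, because it lives on the host behind the volume
  docker run -d --name db \
    -v /srv/mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=change-me \
    mysql:5.7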

The same goes on: you can test Jenkins without knowing what OS the Jenkins image is built on. You don't have to worry about it; it will just work. You pull the image, run the container, and voilà.
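
For example (the image was simply called "jenkins" on Docker Hub when I wrote this; treat the tag as an assumption that may have moved since):

  # grab the Jenkins image without caring which distro it is built on
  docker pull jenkins
  # run it, publishing the web UI and agent ports
  docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins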

NOW, my Docker instances look like this: I use docker-machine, and locally I have the default and local environments. I also have an Azure instance in docker-machine (running on Azure), and another instance configured in Docker Cloud using Azure as the cloud provider (I use Azure thanks to BizSpark credits). So, four of them. All of those instances are VMs themselves; Ubuntu VMs, to be precise.
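
Roughly like this with docker-machine (the machine name here is a placeholder for my own setup):

  # list the machines docker-machine knows about, local and cloud-hosted
  docker-machine ls
  # point the local docker CLI at one of them and use it as usual
  eval "$(docker-machine env default)"
  docker ps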

You just replace the container (probably published in a cluster, if you care enough about your prod environment). Not the same as with VMs at all.

I see Docker as a hypervisor for containers, the same way VMware and Hyper-V are for VMs.

So my Docker host VMs get the same HA, failover, load balancing, resource allocation, and all the other capabilities VMs have. I use Docker on those VMs for easy deploys and for building and tuning images; really, guys, I was the VMware guy for so long, and I just went crazy over the capabilities Docker gives us.

Docker does have many weak points (NAT networking, the occasional need to run privileged containers, security concerns, etc.), but again, I don't see Docker erasing VMs from my life so that from now on I can deploy everything in containers and it will run happily forever. We still need HA, failover, load balancing, resource allocation, and so on. Docker needs to be used together with TOOLS that let it run smoothly and let us maintain our environments more easily.

One of those tools is a container cluster. I work mostly with Google's Kubernetes, but there are others such as Docker Swarm, Apache Mesos, DC/OS... Azure has its Azure Container Service (ACS), IBM has its Bluemix containers, etc. Using a cluster and a deployment methodology, you can deploy your containers into different namespaces such as DEV / STAGING / PROD. You can use a BUILD environment to read your Dockerfile, build the image, and deploy containers to the namespace you need, and you can configure this build to be triggered by a commit in the git repo, for example.
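
A rough kubectl sketch of those namespaces (the namespace names and image tag are assumptions):

  # one namespace per stage
  kubectl create namespace dev
  kubectl create namespace staging
  kubectl create namespace prod

  # deploy the freshly built image into DEV only
  kubectl --namespace=dev run website --image=yourid/apache:website --port=80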

So let's say we have a developer who needs our yourid/apache:website image deployed with the new website version. If the website is already updated in your git repo, you just clone it. The Dockerfile would look like this:

  # Official Apache httpd image (Debian-based); install git so the site can be cloned in at build time
  FROM httpd:2.4
  MAINTAINER Your Name <[email protected]>
  RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
  # httpd serves /usr/local/apache2/htdocs by default
  WORKDIR /usr/local/apache2/htdocs/
  RUN rm -f index.html && git clone https://github.com/yourid/website/website.git .
  EXPOSE 80
  # httpd-foreground is the image's stock foreground entrypoint
  CMD ["httpd-foreground"]

This would be named website.Dockerfile. If you change the project's git repo to any of your other sites that run on Apache, you can SAVE AS other.site.Dockerfile and always deploy that service from that specific repo.
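
Keeping the two files side by side, each image is built from its own Dockerfile with the -f flag (the tags are just examples):

  # same directory, two Dockerfiles that differ only in the git clone line
  docker build -f website.Dockerfile -t yourid/apache:website .
  docker build -f other.site.Dockerfile -t yourid/apache:blog .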

You can of course customize your Dockerfile and add support for specific runtimes, installing PHP, Ruby, Python, etc. You could even use configuration managers (CMs) such as Ansible, Salt, Chef, Puppet, or Juju to apply those changes.

Let's say we start the build now. We are developing this image together, so I just changed my git URL in the Dockerfile. When we commit, the autobuild triggers this build in our build system (in my case, Docker Cloud or Jenkins). This is what Continuous Integration (CI) and Continuous Deployment (CD) are about.

So when the build is triggered, it gets the Dockerfile from the repo, builds the image, and deploys the container into the namespace you wanted (in our case, DEV). This service could be published as website.dev.mydomain.com, for example. The same goes for the staging namespace, and as www.mydomain.com when it's ready for production, in the PROD Kubernetes namespace. Kubernetes is a distributed thing, so you can have minions (nodes) split across different datacenters or geographic locations. This reminds me a lot of VMware VMs living on VMFS storage exposed through a set of ESXi servers, all with access to the same LUNs/partitions/volumes.
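
The deploy-and-publish step could be sketched like this (the deployment, namespace and domain names are assumptions, and the DNS record for website.dev.mydomain.com is still yours to wire up):

  # roll the new image out to the DEV namespace
  kubectl --namespace=dev set image deployment/website website=yourid/apache:website
  # expose it behind a load balancer so it can be published as website.dev.mydomain.com
  kubectl --namespace=dev expose deployment website --type=LoadBalancer --port=80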

This is just my point of view, so please feel free to comment and ask me anything.

Please, just don't blame Docker because you aren't aware of the mainstream tech available nowadays. If you are comparing Docker to VMs, SSHing into your containers, and getting mad because your data vanished when you redeployed your containers, believe me, you are doing it wrong.

Being a pioneer always feels the same: in the 90s we had to explain why Linux was good for the enterprise, in the 2000s we had to prove VMware would really cluster your legacy systems, and now we have to explain what's possible with Docker. The tech is new (I know there were Solaris Zones, Google Borg, etc. before it), but I see Docker maturing its features by relying on other tools (and even copying features from k8s into Swarm, for example). Docker is just one of the skills needed to run your stuff.

Cheers!

M Matos https://keybase.io/mmatos

