Location: Seattle, WA

Remote: Yes

Willing to relocate: Maybe, for the right gig.

Technologies: Python, Rust, Kubernetes (Helm, ArgoCD, CNPG, AWS ACK), AWS, AWS EKS, AWS CDK, Terraform, Grafana LGTM (Loki, Grafana, Tempo, Mimir), GitLab, PostgreSQL, OpenTelemetry, System Architecture, Hybrid Cloud, DevOps/SRE stuff, and more!

Resume: https://www.linkedin.com/in/volf/

Email: hn [at] m1.volf.co

Been doing DevOps/SRE for the past 7-8 years. I work well in positions where I can collaborate closely with other engineers to build out cloud/distributed infrastructure and develop software in a way that isn't painful to run in the cloud.

I can take small MVP applications and infrastructure and turn them into something that can scale up as the business does. I'm comfortable working with any stack; it just depends on how much time it'll take me to brush up on the language or design.


Any chance for remote, or is it all on site?


Cool. How about you open source the application? Let the community take over if they want to.


You use EFS as a shared folder across a number of different workloads. If you want a POSIX-compatible shared filesystem in the cloud, you're going to pay for it.

For example, I set up developer workspaces that mount an EFS share on each engineer's Linux box, and anything they put there is accessible from the Kubernetes Jobs they kick off and from their JupyterHub workspace.

I can either pay AWS to do it for me, or I can figure out how to get a 250k IOPS GlusterFS setup working across multiple AZs in a region. I think the math works out to around the same cost at the end of the day.
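
A minimal sketch of what that buys you, assuming the same share is mounted at /mnt/efs in the workspace, the Job pods, and JupyterHub (the path and names are made up for illustration):

    # shared_scratch.py -- hypothetical helpers around an EFS-backed folder.
    # Assumes the same EFS filesystem is mounted at /mnt/efs in every
    # workload (dev workspace, Kubernetes Job, JupyterHub pod).
    from pathlib import Path

    SHARED = Path("/mnt/efs/scratch")

    def publish(name: str, data: bytes) -> Path:
        """Write from the dev workspace; visible wherever the share is mounted."""
        SHARED.mkdir(parents=True, exist_ok=True)
        out = SHARED / name
        out.write_bytes(data)
        return out

    def consume(name: str) -> bytes:
        """Read the same file from a Kubernetes Job or a JupyterHub notebook."""
        return (SHARED / name).read_bytes()

No copy steps, no presigned URLs: because the share is POSIX-compatible and multi-AZ, every environment just sees the same directory.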


You could spend a fraction of what you pay per month to get a rack with 3x the performance.


Operating file systems like these (GlusterFS, Lustre, etc.) requires skill and experience.

It's not something you can just throw together if you want something stable and performant.

Which means the cost of the person operating it is going to be significantly higher than if you just use AWS.


If you don’t need this level of durability, then plain old local filesystems can work, too. XFS or ZFS or whatever, on a single machine, serving NFS, should nicely outperform EFS.

(If you have a fast disk. Which, again, AWS makes unpleasant, but which real hardware has no trouble with.)
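
If you want to sanity-check that for your own workload, a crude (and hedged) comparison is to time a large sequential write against each mount; /mnt/nfs and /mnt/efs below are placeholder paths:

    # throughput_check.py -- naive sequential-write comparison.
    # Only measures streaming writes; EFS tends to hurt even more on
    # metadata-heavy and small-IO workloads, which this ignores.
    import os
    import time

    def write_speed(path: str, size_mb: int = 1024) -> float:
        """Return MiB/s for a size_mb sequential write under `path`."""
        chunk = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
        target = os.path.join(path, "bench.tmp")
        start = time.monotonic()
        with open(target, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.monotonic() - start
        os.remove(target)
        return size_mb / elapsed

    for mount in ("/mnt/nfs", "/mnt/efs"):
        print(f"{mount}: {write_speed(mount):.0f} MiB/s")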


'If you don't need all the things that make this expensive, it's way cheaper' is kind of redundant, right?


This depends on your use case, what types of storage you use, your familiarity with tuning systems, setting up RAID layouts, etc.

I love ZFS. It's incredibly powerful. It's also incredibly easy to screw up when designing your drive layout, especially if you intend to grow your storage, since a top-level vdev is hard (and sometimes impossible) to remove once it's been added to a pool. And that's not counting the effort of making your filesystem redundant across datacenters, or even just between racks in the same closet.

At the end of the day, if I misconfigure something on EFS, I can always create a new EFS filesystem and move my data over. If I screw up a ZFS layout, I'm going to need a box of temporary drives to shuffle data onto while I remake the array.


> At the end of the day, if I misconfigure something on EFS, I can always create a new EFS filesystem and move my data over. If I screw up a ZFS layout, I'm going to need a box of temporary drives to shuffle data onto while I remake the array.

True, but…

At EFS pricing, this seems like the wrong comparison. There’s no fundamental need to ever grow a local array to compete — buy an entirely new one instead. Heck, buy an entirely new server.

Admittedly, this means that the client architecture needs to support migration to a different storage backend. But, for a business where the price is at all relevant, using EFS for a single month will cost as much as that entire replacement server, and a replacement server comes with compute, too. And many more IOPS.

In any case, AWS is literally pitching using EFS for AI/ML. For that sort of use case, just replicate the data locally if you don’t have or need the absurdly fast networks that could actually be performant. Or use S3. I’m having trouble imagining any use case where EFS makes any sort of sense for this.

Keep in mind that the entire “pile” fits on ~$100 of NVMe SSD with better performance than EFS can possibly offer. Those fancy “10 trillion token” training sets fit in a single U.2 or EDSFF slot, on a device that speaks PCIe x4 and costs <$4000. Just replicate it and be done with it.
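
Back-of-the-envelope version of that claim (every number here is an assumption: EFS Standard runs about $0.30/GB-month in us-east-1, and the dataset and server prices are illustrative, not from the thread):

    # efs_vs_server.py -- rough cost comparison, all figures assumed.
    EFS_PER_GB_MONTH = 0.30   # ~us-east-1 EFS Standard storage price (assumption)
    DATASET_GB = 30_000       # 30 TB working set (assumption)
    SERVER_COST = 9_000       # one-off server with fast NVMe (assumption)

    efs_monthly = DATASET_GB * EFS_PER_GB_MONTH
    print(f"EFS: ${efs_monthly:,.0f}/month")
    print(f"Server amortized in {SERVER_COST / efs_monthly:.1f} month(s)")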


I'm aware.

Buuttt... you're comparing apples (a rack in a DC) to oranges (an AWS-native solution that spans multiple DCs). And that's before you get into all the AWS bullshit that fucking sucks, but still sucks less than doing it yourself.

A rack in a DC isn't a useful solution for people who are already in AWS.


A rack in a DC and services in AWS can securely talk to each other as easily as services in AWS can talk to other services in AWS.


https://aihorde.net/ is one that I've been running off and on.


Discord for support? That would require me to either use my personal Discord for work or make a work Discord.


Sure. Do the latter.


I have one (16GB model) and have Arch Linux on it, dual-booting with Windows! Started from https://github.com/ironrobin/archiso-x13s

  * Battery life. ~6-7 hours max as it currently stands (linux-x13s 6.4). There's a lot of room for improvement, which will come as more people tune and tweak.

  * Performance. Honestly, the only noticeable issue is lzma2 compression. It's ass slow, even when using 8 threads. Other than that, GNOME (Wayland), CLion, Firefox, and Rust have no issues. I can't think of any performance complaints compared to my HP 14t-ea000, which is an 11th-gen i7.

  * Stability. Firefox occasionally crashes; I haven't done enough digging to file a ticket on it yet. The Wi-Fi driver crashes on suspend (https://bugzilla.kernel.org/show_bug.cgi?id=217239), but that's being worked on. The GPU crashes on startup but recovers; I'll file a kernel report on that one when I get around to it. I'd put it at 95% of what I'd expect from my x86-64 laptop.

The camera doesn't work (yet), nor do some of the other GPU features. The BIOS is limited, boiling down to full access to the security options plus the to-be-expected Lenovo keyboard options. There's a "beta" Linux boot option, but I didn't need it when I first installed Linux. Actually, I'm not sure what I'd want exposed in the BIOS that isn't already there. Memory overclocking?

It runs cooler than my HP, which runs on the hot side. Port selection is limited, but having a 5G modem is cool. Arch has almost all of its packages cross-compiled to arm64, and I've yet to run into a package I needed that wasn't there.

TL;DR: A very functional laptop. I've started using it as my primary machine, but I still carry my HP with me when traveling. Give it a few months for the kernel issues to be ironed out and it'll be a very nice laptop.


Funny. Just upgraded to 6.4.1 and got another hour of battery life.


What about virtualization? Can you run qemu/firecracker on the system?


There doesn't appear to be hardware virtualization support, so those would work, but you'd be using software emulation, which will be slower.

I'll try running firecracker later and see if it'll boot.
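
For what it's worth, the cheap pre-flight check is just to look for /dev/kvm, which both Firecracker and QEMU-with-KVM need (a generic sketch, nothing x13s-specific):

    # kvm_check.py -- does the kernel expose hardware virtualization?
    import os

    if os.access("/dev/kvm", os.R_OK | os.W_OK):
        print("KVM available: Firecracker should be able to boot")
    elif os.path.exists("/dev/kvm"):
        print("/dev/kvm exists but isn't accessible; check permissions/groups")
    else:
        print("no /dev/kvm: QEMU falls back to software emulation only")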


Yes, I was hoping that KVM with QEMU would work by now. If KVM support isn't enabled, it's not likely that Firecracker would work either.


I made a very similar project in Rust that seems to mimic this idea: https://github.com/volfco/boxcar

The core idea I had was to decouple the connection from the execution of the RPC. Mats3 looks to be doing a lot more than what I've done so far, but it's nice to see similar ideas out there to take inspiration from.
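
boxcar itself is Rust, but the decoupling idea fits in a few lines of Python asyncio (all names here are illustrative, not boxcar's or Mats3's actual API): the connection handler enqueues the call and returns a ticket immediately, and a worker executes it independently of any connection.

    # decoupled_rpc.py -- sketch of separating connection from execution.
    # Python 3.10+ assumed.
    import asyncio
    import itertools

    jobs: asyncio.Queue = asyncio.Queue()
    results: dict[int, object] = {}
    tickets = itertools.count()

    async def submit(fn, *args) -> int:
        """Connection handler: enqueue the call and return a ticket at once."""
        t = next(tickets)
        await jobs.put((t, fn, args))
        return t

    async def worker():
        """Executes RPCs independently of any caller's connection."""
        while True:
            t, fn, args = await jobs.get()
            results[t] = await fn(*args)
            jobs.task_done()

    async def main():
        w = asyncio.create_task(worker())

        async def add(a: int, b: int) -> int:
            return a + b

        t = await submit(add, 2, 3)   # the connection could drop right here
        await jobs.join()             # later: poll or fetch the result by ticket
        print(f"ticket {t} -> {results[t]}")
        w.cancel()

    asyncio.run(main())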

