Meta's Segment Anything written with C++ / GGML (github.com/yavorgivanov)
233 points by ariym on Sept 5, 2023 | 31 comments



This is a port of Meta's Segment Anything computer vision model, which allows easy segmentation of shapes in images. Originally written in Python, it has been ported to C++ by Yavor Ivanov using the GGML library created by Georgi Gerganov, which is optimized for CPU rather than GPU, specifically Apple Silicon (M1/M2). The repo is still in its early stages.


Do you know how long the image embedding takes? In SAM, most of the time is spent generating a very expensive embedding (prohibitive for real-time object detection). From the timings on your page it looks like yours is similarly slow, but I'm curious how it compares to Meta's PyTorch implementation.


I am the creator of the repo.

Depends on the machine, the number of threads selected, and the model checkpoint used (ViT-B, ViT-L, or ViT-H). The video demo attached is running on an Apple M2 Ultra and using the ViT-B model. The generation of the image embedding takes ~1.9 s there, and all the subsequent mask segmentations take ~45 ms.

However, I am now focusing on improving the inference speed by making better use of ggml and trying out quantization. Once I make some progress in this direction, I will compare against other SAM alternatives and benchmark more thoroughly.
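
For anyone who wants a baseline, this is roughly how one would time the same two stages in Meta's original PyTorch implementation (a sketch; the image, checkpoint path, and prompt point are illustrative):

    import time
    import cv2
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Load the ViT-B checkpoint and build a predictor
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)

    # Stage 1: the expensive, once-per-image embedding
    t0 = time.time()
    predictor.set_image(image)
    print(f"embedding: {time.time() - t0:.2f} s")

    # Stage 2: a cheap, per-prompt mask prediction
    t0 = time.time()
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )
    print(f"mask: {(time.time() - t0) * 1000:.0f} ms")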


This is amazing. Thank you!


Well... damn. Is there a framework like this (or this directly?) which can run object detection? People, car types, makes, animals, etc?


Yes, GroundingDINO is an open-set object detector. There are some others (e.g. DETIC and OWL-ViT) as well.

We've been working on using them (often in conjunction with SAM) for auto-labeling datasets to train smaller, faster models that can run in real time at the edge: https://github.com/autodistill/autodistill
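
The flow is roughly this (the ontology and folder are illustrative, and parameter names may differ between versions; see the repo for the current API):

    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM

    # Map a text prompt for the open-set detector to the class name
    # you want in the labeled dataset
    base_model = GroundedSAM(ontology=CaptionOntology({"car": "car"}))

    # Auto-label a folder of images; the output can then train a
    # smaller real-time model
    base_model.label(input_folder="./images", extension=".jpeg")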


Would this be suitable for labeling images to search by keyword (think Apple Photos-like “car” searches to pull up photos of cars)?


I think you would want to use something like CLIP embeddings for image search.

Really enjoyed using this app for iOS: https://github.com/mazzzystar/Queryable (HN discussion: https://news.ycombinator.com/item?id=34686947)

Or explore the dataset stable diffusion was trained on: https://news.ycombinator.com/item?id=32655497
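
A minimal sketch of that kind of keyword search with CLIP (the labels and image path are illustrative):

    import torch
    import clip
    from PIL import Image

    model, preprocess = clip.load("ViT-B/32")

    # Embed the photo once at index time, and each keyword at query time
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
    texts = clip.tokenize(["a photo of a car", "a photo of a dog"])

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(texts)

    # Cosine similarity; a car photo should score highest for "car"
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    print(image_features @ text_features.T)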


I am looking for a model similar to this, but for text. I want to group text with different labels that apply to subsets of the text. Think of being able to quickly pull out related segments from a large body of text.

Let's take, for instance, a sales contract that specifies a discounted price for various goods. If you select the label "data rows", the system should be able to extract all the text pertaining to the table that specifies which SKUs are being purchased, and at what discounted price.

Moreover, this model should be capable of segmenting the content into semantically relevant chunks. One example: each row in the aforementioned table would be tagged with multiple labels. One would be just that it is a row; the data in the first column should also be labeled for what it represents, e.g. "product number". Another example: if there's a section discussing the terms of delivery or warranty conditions, selecting the respective labels would instantly extract that specific information, regardless of where it's located within the document.

It would be great for it to be able to segment into some controllable range of tokens/characters, to allow for pulling those chunks into a vector database along with the relevant tags related to each chunk.
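
The closest approximation I can think of is zero-shot matching of chunk embeddings against label embeddings, e.g. with sentence-transformers (the labels, file path, and naive chunking here are all illustrative):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    labels = ["data rows", "terms of delivery", "warranty conditions"]

    # Naive chunking on blank lines; a real system would need
    # structure-aware splitting with a controllable token budget
    contract_text = open("contract.txt").read()
    chunks = contract_text.split("\n\n")

    label_emb = model.encode(labels, convert_to_tensor=True)
    chunk_emb = model.encode(chunks, convert_to_tensor=True)

    # Cosine similarity between every chunk and every label
    scores = util.cos_sim(chunk_emb, label_emb)
    for chunk, row in zip(chunks, scores):
        print(labels[row.argmax().item()], "->", chunk[:60])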


Big fan of your work, GGML friends.


Another GGML model port that I'm pretty excited about is https://github.com/PABannier/bark.cpp.

The Bark Python model is very compute intensive and requires a powerful GPU to get bearable inference speed. I really hope that bark.cpp, with GPU/Metal support and quantized models, can bring useful inference speed on a laptop in the near future.


Hmm, wonder how this compares to stuff like FastSAM and MobileSAM. Is quantized SAM better, or are those knock-off architectures more performant?


Bravo, the demonstration is genuinely impressive!

Next Step: Incorporate this library into image editors like Photopea (via WebAssembly) to boost the speed of common selection tasks. The magic wand is a tool of the past.

I'd pay for such a feature.


While I love the efficiency of these Python-to-C++ ports, I can't stop thinking about the long tail of subtle bugs that will likely infest these libraries forever. But then, the Python versions also sit atop C/C++ cores.


Good news! Deep learning inherently has a long tail of subtle bugs (SolidGoldMagikarp, anyone?), so no one will care if C++ introduces a few more.


Just because Python silently ignores the bugs doesn't mean they're not there.


Just wait until they’re ported to C++ using AI!


I'm so glad the AI community is finally starting to ditch Python. It has held progress back for far too long.


The AI community is nowhere close to ditching Python. Most model development and training still use Python-based toolchains (torch, tf...). The new trend is for popular and useful models to be ported to more efficient stacks like C++/GGML for easier usage and better inference speed on consumer hardware.

Another popular optimisation is to port models to WASM + GPU, because it makes it easy to support a variety of platforms (desktop, mobile...) with a single API while still offering great performance (see Google's MediaPipe as an example of that).


That's why I said "starting to" not "close to".


In general, if you don't know what you are doing, it's much faster to first figure out a good strategy for a solution in a language that does not suffer from all of the encumbrances C++ brings in.

Python is really great for fast prototyping. It can be argued that most AI products so far are the result of fast prototyping. So I'm not sure there is anything wrong with that.

As practical models emerge, it indeed makes sense to port them to C++. But I would not in my wildest dreams suggest prototyping a data model in C++ unless absolutely necessary.


How has Python held it back? Most of the heavy computational lifting is done by C extensions/bindings, and the models are compiled to run on CUDA, etc. What am I missing?


Presumably what you're missing is that IshKebab probably doesn't work in AI/ML at all (no links in his profile, but you can judge his post history yourself). Anyone can voice an opinion, but that doesn't mean it's particularly well informed.


I worked for an AI startup for 5 years until recently. Nice try though.


Setting up and deploying models in production or on edge devices is much, much more complex if you have to deal with Python and Conda and whatnot.


You can compile the models to something that runs at the edge though, right? For example, TensorFlow is a C++ framework that has Python bindings and a Python library, but when the models are served they are running on C++.

Maybe the act of compilation is an extra step, but I'd much rather have my development be in a high-level language that is very suited to experimentation, probing, and testing, and then compile the final result down to something performant.

EDIT: I don't know much about the IoT world, and TensorFlow is likely a bad example as it's not designed to run at the edge. So I could understand that things like llama.cpp, GGML, and GGUF are making strides towards easier runtimes. But I still think that for dev time, Python makes sense!
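
That said, TensorFlow's edge-targeted path (TF Lite) does keep the develop-in-Python-then-compile step short. A minimal sketch, with a toy model and illustrative paths:

    import tensorflow as tf

    # Develop and train in Python...
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])

    # ...then compile down to a portable artifact for an edge runtime
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)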


Cloudflare lets you just upload the model itself: https://blog.cloudflare.com/introducing-constellation/

No idea what they're using to run it though. But there's no way I'll stop using Python for working with ML code lol. It just makes life easy.


> TensorFlow is a C++ framework that has Python bindings and a Python library, but when the models are served they are running on C++

Sure, and it's only a simple 20-step process that involves building TensorFlow from source. Yay!

https://medium.com/@hamedmp/exporting-trained-tensorflow-mod...

Let me see what the process for compiling a LLM written in Rust is....

https://github.com/rustformers/llm

    cargo install llm-cli
Oh look, it doesn't make me immediately want to give up.


llm-cli looks like it loads model files; it doesn't help with model development. It runs GGML model files. The models aren't written in Rust. Beside the point, but GGUF is the successor to GGML. There's a variety of ways to convert PyTorch, Keras, etc. models to GGML or GGUF.

I dunno, maybe we're talking about different things. I'm saying it's better to do model development in a high-level language and then export the training or runtime to a lower-level framework, multiple of which exist and have existed. It's becoming simpler to use low-level runtimes (llama.cpp vs. TensorFlow). Is that the point you're making?
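
On the "becoming simpler" point: running a GGUF file from Python is now roughly this, via the llama-cpp-python bindings (model path is illustrative):

    from llama_cpp import Llama

    # Load a quantized GGUF model file
    llm = Llama(model_path="./models/llama-7b.Q4_0.gguf")

    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
    print(out["choices"][0]["text"])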


I'd say discovery and innovation would be slower in a less relaxed language. And speed ends up comparable, thanks to the compiled parts of Python.


This is exactly the wrong way around. We've seen the progress we've seen because of the adoption of Python. Even now there are relatively few people that can write code like this and have the ML and math experience to push forward the research.



