Hacker News | dangelov's comments

Given some of the caveats I mentioned towards the end of the article I'd be a bit wary putting too much trust in LLMs for this use case at this stage. But the field is moving so fast that I don't doubt it will soon be less error prone than a human doing it.


Yeah it's not ideal, but it mostly works - at least in this case. I think each MCP tool works best when it can be kept lean and needs only a small number of arguments.


Check out https://github.com/metoro-io/mcp-golang - looks like they support strongly typed tool arguments.


Author here.

There are basically a couple of different ways to implement an MCP server - for this demo it's a local binary that communicates over stdio, so no OAuth process is taking place. It's only meant to run on your local machine.

To make the demo simpler to explore and understand, the binary loads its configuration (SnapTrade API client ID and secret, plus username and user secret) from a .env file that you populate with your credentials, which allows it to fetch the right data.


Totally understand why it’s not in the post, and it did help me understand MCP more. That said, that’s the issue: most articles I’ve seen are geared toward a local-use-only MCP server. For the ones I want to build, I need to deploy into an enterprise and know the current user, and I'm not quite clear how yet. The answers on using OAuth help though. Maybe a future post idea :)


I set up a project somewhat along these lines with a Raspberry Pi, a USB DAC, and spotifyd. Now I have a decent and convenient audio player hooked up to my sound system. It may not clear the bar for an audiophile, but the sound quality is actually fairly decent - much better than my previous Alexa setup - which is all I needed.


I find it odd to brand it as "explicitly anti-Albanian" when the very article that was linked says

> With the end of the "Skopje 2014" project, not only the Macedonian nationalist hungry spirit was fed. Its counterpart, Albanian nationalism, got its part of the city to ill-treat, so the neighbouring Skanderbeg Square was turned into a nationalist showcase for another actor of the Macedonian ethnocratic elite.

But overall agree that it's over-the-top kitsch.


I've used Ollama to run Llama 2 (all variants) on my 2020 Intel MacBook Pro - it's incredibly easy. You just install the app and run a couple of shell commands. I'm guessing soon-ish this model will be available too and then you'd be able to use it with the Continue VS Code extension.

Edited to add: Though somewhat slow, swap seems to have been a good enough substitute for the loads of RAM otherwise required. Ollama says "32 GB to run the 13B models", but I'm running the llama2:13b model on a 16 GB MBP.


Apple Silicon, especially an M1 Max Studio, seems like an interesting machine to hang on to as the models become more and more efficient, using less and less memory.

If there are any other opinions or thoughts on this, I'd be very happy to learn as well. I have considered the eGPU route, connected to a 1L PC such as a ThinkCentre m80/90.


I have a 64 GB M1 Max MBP, and I'd say unless you really have some academic interest towards messing with open models, for now accessing SOTA models via a REST API has better latency for a given quality.

Claude 1.2 instant is as fast as 3.5, follows instructions at a quality closer to 4, and has a 100k context window. Hard to compete with that with an open source model right now.


How does open source compete with the Claude API? Easy: actually let you use the model. From the signup page:

> Anthropic is rolling out Claude slowly and incrementally, as we work to ensure the safety and scalability of it, in alignment with our company values.

> We're working with select partners to roll out Claude in their products. If you're interested in becoming one of those partners, we are accepting applications. Keep in mind that, due to the overwhelming interest we've received so far, we may take a while to reply.

No thanks, I'd much rather not wait months to see if my app deserves their oh-so-limited attention, or "aligns with the values" of a company taking $400m from Sam Bankman-Fried.

To be more charitable to your underlying point, Claude 2 is free to chat with via Anthropic's website, Poe, or Slack, and the GPT-4 API is open to use. If you're building a prototype or just need a chatbot, these do have better results and dev experience, at least for now. But I don't think picking on your Claude API example is unfair. These companies could randomly refuse your prompts via some opaque "moderation API" (that all GPT fine-tuning data goes through!), train on your company's proprietary data, spy on your most intimate questions, or just not find you worth the trouble and cut you off, at any time. THAT is why open source beats proprietary hands down: My device, my data, my weights, my own business.


Perfect example of why I said academic interest.

Awkward tie-ins between SBF and value systems (?) have no effect on practical usage.

A theoretical concern they might train on my API data after saying they won't doesn't either. Amazon might be training on everything not bolted down in S3, not worth wasting brain power on that.

The moderation API isn't some magic gotcha, it's documented. They don't want to deal with people fine tuning for porn. Maybe you have some ideological disagreement on that but it's not of practical relevance when trying to write code.

At the end of the day you're not alone in these opinions. But some of us prefer pragmatism over hype. Until someone catches OpenAI or Anthropic trying to kill their golden goose by breaking their GDPR, HIPAA, and SOC 2 commitments, I'm going to take delivered value over theoretical harm.


In my opinion the risk is coupling accelerated intelligence to competitive business models.


The accelerated intelligence wouldn't exist without competitive business models.


Thanks for the insight.

I do have interest in local models (say running on a fixed list of document structures)


I recently took to rewriting what should be a very simple app from Obj-C to Swift with SwiftUI - because it's the future. The CPU usage was at 5% while idle, just for having a simple tiny pie chart that updates. Not to mention that for some seemingly basic things I still had to use AppKit anyway.

Wrote basically the exact same thing 1 day later in Swift with AppKit and NO SwiftUI and it sits at 0% CPU usage with less code complexity. Maybe in a few years I will give SwiftUI another try.


It’s depressing how few developers would even notice that performance degradation, let alone go back in and fix it.


Did you find out what was causing it? I have developed 'heavier' (~50k lines of code) applications in SwiftUI on Mac and they mostly sit idle (0-1% CPU if doing some regular background work). Heck, I just created a quick Charts-based app from an online example on Mac and it stays at 0.0%.

Is it possible it is re-rendering the view hierarchy due to some data invalidation you haven't noticed?


Was this with the debugger attached? Did you do any further profiling in instruments on a release scheme to determine the cause of the cpu usage? You should be able to narrow it down to the specific sys calls.


Is this app open source? I would love to poke around and see the difference, both in code and performance.


”Premature optimization is evil” is dogma, but you can’t peephole-optimize the architecture after the fact.

Just like cars, you can’t build a Kia Soul, and then just replace a few parts to reach Ferrari-like performance.


I've been using Mux & sqlc in a few projects and it's working out great.
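For anyone unfamiliar with sqlc's workflow: you write annotated SQL and it generates type-safe Go code for each query. A sketch of a query file in sqlc's documented annotation style (the `authors` table and query names are hypothetical):

```sql
-- name: GetAuthor :one
SELECT id, name, bio FROM authors
WHERE id = $1;

-- name: ListAuthors :many
SELECT id, name, bio FROM authors
ORDER BY name;
```

Running `sqlc generate` then produces Go methods (roughly `GetAuthor(ctx, id)` and `ListAuthors(ctx)`) with parameter and result structs derived from the schema, so query changes become compile errors rather than runtime surprises.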


Both look interesting, though note: "Mux is looking for new Maintainers"


As a financially-insecure immigrant, that line actually helped me understand his mindset, and did not come across in any way as a "hero narrative".


I use VS Code's native integration with WSL. So VS Code runs as a native Windows app with nearly everything you'd expect, but the actual files being read/saved are inside WSL. Works great with Vagrant, Docker for Desktop, Git etc.


Yep, this works really well and for directly accessing files in WSL 2's file system from Windows, @lenova's sibling comment goes over that process.

To expand on that, I also have this path in my Windows explorer "Quick access" list: \\wsl$\Ubuntu-20.04\home\nick

It's a shortcut to my home directory inside of WSL 2 for quick access. It's useful in the cases where I want to drag / drop a photo or something from Windows into WSL 2's world.

