Hacker News | new | past | comments | ask | show | jobs | submit | npollock's comments | login

it could be the "grounding with google search" feature when using Gemini models


no, but a subset of English could be


You just invented programming languages, halfway


Thought we already had that?


Yeah, let's bring back COBOL


Let me introduce to you.. python ;)


LoRA adapters modify the model's internal weights


not unless they're explicitly merged, which isn't a requirement; merging is just a small inference-speed optimization


Yeah, I honestly think some of the language used with LoRA gets in the way of people understanding them. It becomes much easier to understand when looking at an actual implementation, as well as how they can be merged or kept separate.
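A minimal numerical sketch of the point above (shapes, names, and the toy sizes are illustrative, not any particular library's API): the adapter can be applied as a separate low-rank path at inference time, or folded into the base weights once; the outputs are identical, so merging is purely a speed optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                          # hidden size, LoRA rank (toy values)
W = rng.normal(size=(d, d))          # frozen base weight
A = rng.normal(size=(d, r))          # trainable down-projection
B = rng.normal(size=(r, d)) * 0.1    # trainable up-projection (typically
                                     # zero-initialized; nonzero here so
                                     # the demo does something)
alpha = 4.0
scale = alpha / r                    # conventional LoRA scaling factor

x = rng.normal(size=(1, d))          # a single input activation

# Kept separate: base path plus adapter path, computed at inference time
y_separate = x @ W + (x @ A @ B) * scale

# Merged: fold the low-rank update into the weights once, up front
W_merged = W + (A @ B) * scale
y_merged = x @ W_merged

# Same result either way, by distributivity of matrix multiplication
assert np.allclose(y_separate, y_merged)
```

Keeping the adapter separate is what lets you swap or stack adapters on one base model; merging just saves the extra matmul per forward pass.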


Here's how I would market this:

Create "packages" of context for popular APIs & libraries - make them freely available via public URL

Keep the packages up to date. They'll be newer than the cutoff date for many models, and they'll be cleaner than the data a model slurps in using web search.

Voila, you're now the trusted repository of context for the developer community.

You'll get a lot of inbound traffic/leads, and ideas for tangential use cases with commercial potential.


https://context7.com/ is just that, in the form of an MCP server.


You can't start fresh chats with updated context, and you still need to create multiple chats in your preferred chat service and copy-paste data. But this is SO good to use within a development environment through docs or MCP! Thanks for sharing


To stay free and avoid forwarding credentials, I built an alternative for all-in-one chatting without auth, with access to the search APIs through the web: https://llmcouncil.github.io/llmcouncil/

Provides a simple interface to chat with Gemini, Claude, Grok, OpenAI, and DeepSeek in parallel


I love this. A ready-to-use library of context data can be very useful, and at the same time it's a perfect way to bring in new users. Thanks so much.


Of all the suggestions here, this is the one.


wouldn't that be just 3rd party llms.txt?


Are 7 free weekly eval runs fair compensation for sharing your evaluation and fine-tuning data with OpenAI?


"The takeover of io will provide OpenAI with about 55 hardware engineers, software developers and manufacturing experts"

6.5B / 55 = $118 million per engineer

not a cheap acquihire


Who now expect to be paid.


if you own the device, you control the flow of data and dollars


something like this that runs as a browser agent, allowing me to extract structured data from websites (whitelisted) using natural language queries


have you looked at Browser Use? https://browser-use.com/


they are in our YC batch! great product


huh interesting. we're exploring extraction from html


A quote I found helpful:

"reinforcement learning from human feedback .. is designed to optimize machine learning models in domains where specifically designing a reward function is hard"

https://rlhfbook.com/c/05-preferences.html


How do we draw the line between a hard and not-so-hard reward function?


I think if you are able to define a reward function then it sort of doesn’t matter how hard it was to do that - if you can’t then RLHF is your only option.

For example, say you're building a chess AI that you're going to train using reinforcement learning, AlphaZero-style. No matter how fancy the logic that you want to employ to build the AI itself, it's really easy to make a reward function. "Did it win the game?" is the reward function.

On the other hand, if you're making an AI to write poetry, it's hard/impossible to come up with an objective function to judge the output, so you use RLHF.

In lots of cases the whole design springs from the fact that it's hard to make a suitable reward function (e.g. GANs for generating realistic faces is the classic example). What makes an image of a face realistic? So Goodfellow came up with the idea of having two nets: one which tries to generate images, and one which tries to discern which images are fake and which are real. Now the reward functions are easy. The generator gets rewarded for generating images good enough to fool the classifier, and the classifier gets rewarded for spotting which images are fake and which are real.
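The contrast above can be sketched in a few lines (all names are illustrative; the "reward model" stand-in is a placeholder, not a real model): a chess-style reward is a trivial function of the terminal outcome, while for poetry RLHF substitutes a reward model trained on human preference data.

```python
def game_reward(result: str) -> float:
    """Easy to specify by hand: reward depends only on the game outcome."""
    return {"win": 1.0, "loss": -1.0, "draw": 0.0}[result]

def preference_reward(output: str, reward_model) -> float:
    """Hard to specify by hand: for poetry there is no formula, so RLHF
    trains a reward model on human preference comparisons and uses its
    scalar score in place of a hand-written reward function."""
    return reward_model(output)

# Stand-in "reward model" for the sketch; in practice this is a trained
# neural net scoring outputs, not a length heuristic.
toy_reward_model = lambda text: 0.01 * len(text)

assert game_reward("win") == 1.0
assert preference_reward("a short poem", toy_reward_model) > 0.0
```

The design choice follows directly: if `game_reward` exists, use it; only when it can't be written down do you pay the cost of learning `reward_model` from human labels.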


does the tool snap to I-frames when slicing?


I don't know about ffslice, but you can get frame-perfect slicing with minimal re-encoding via LosslessCut's[1] experimental "smart cut" feature[2] or Smart Media Cutter's[3] smartcut[4].

[1] https://github.com/mifi/lossless-cut

[2] https://github.com/mifi/lossless-cut/issues/126

[3] https://smartmediacutter.com/

[4] https://github.com/skeskinen/smartcut


Excellent video snipping resources. I love HN for this.


For some reason, when ffmpeg re-encodes from 23.976 fps h264 to the same fps and codec, the result looks choppy, like the shutter speed was halved or something. The smart lossless encoding you mentioned helps a lot here.


Yes, the tool snaps to I-frames when slicing. The `-c copy` flag ensures no re-encoding, and inherently limits cuts to keyframes.

TBH it's an unfortunate side-effect sometimes as you cannot cut video or audio exactly where you want.
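The trade-off described above can be seen with plain ffmpeg (a sketch; filenames and timestamps are placeholders):

```shell
# Stream copy: no re-encoding, so the cut snaps to the nearest
# keyframe (I-frame) at or before the requested start time.
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy cut_copy.mp4

# Re-encoding the video allows a frame-accurate cut, at the cost
# of encode time and a generation of quality loss.
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c:v libx264 -c:a aac cut_exact.mp4
```

The "smart cut" approach mentioned elsewhere in the thread splits the difference: re-encode only the short span from the cut point to the next keyframe, and stream-copy the rest.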


>without re-encoding

What do you think?

