Hacker News | fragmede's comments

> it just takes twice as long.

But time is also money. Personally, if I could pay more money to get answers faster, I'd pay double.


ActiveX has entered the chat.

The question is how reliable does it need to be? Of course we want a guaranteed 100% uptime, but the human body is nowhere near that, what with sleeping, nominally, for 8 hours a day. That's 66% uptime.

Anyway, it succeeds often enough that some people just wear steel-toed boots.


We should be absolutely terrified about the amount of access these things have to users' systems. Of course there is advice to use a sandbox, but there are stupid people out there (I'm one of them) who disregard this advice because it's too cumbersome, so Claude is being run in yolo mode on the same machine that has access to bank accounts, insurance, a password manager, and crypto private keys.

OCR on smartphones is a clear winner in this area. Stepping back, it's just mind blowing how easy it is to take a picture of text and then select it and copy and paste it into whatever. And I totally just take it for granted.

Do compilers know how to take advantage of that, or do programs need code that specifically takes advantage of that?

It’s more that you program a dataflow rather than a program with instructions, as on conventional or VLIW-type processors. They still have operations, but I don’t think Ethos, for example, has any branch operations.

There are specialized computation kernels compiled for NPUs. A high-level program (one that uses ONNX or CoreML, for example) can decide whether to run the computation using CPU code, a GPU kernel, or an NPU kernel, or maybe use multiple devices in parallel for different parts of the task, but the low-level code is compiled separately for each kind of hardware. So it's somewhat abstracted and automated by wrapper libraries, but ultimately still up to the program.
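To make the dispatch idea concrete, here's a toy shell sketch of the pattern (the `run_*` functions are made-up stand-ins for separately compiled kernels, and the probe result is hard-coded; nothing here is a real library API):

```shell
#!/usr/bin/env bash
# Toy sketch of backend dispatch, loosely modeled on how wrapper
# libraries walk an ordered backend list and fall back to the CPU.
run_cpu() { echo "matmul kernel on CPU"; }
run_gpu() { echo "matmul kernel on GPU"; }
run_npu() { echo "matmul kernel on NPU"; }

available="cpu"   # pretend a hardware probe found only a CPU
chosen=""
for backend in npu gpu cpu; do   # preference order: fastest first
  if [[ " $available " == *" $backend "* ]]; then
    chosen="$backend"
    break
  fi
done
"run_${chosen}"   # -> matmul kernel on CPU
```

The real decision is the same shape: try the most capable device first, fall back until something is available.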

Heads up that in most cases I bet this'll result in worse performance, as you're then not getting the benefit of the Anthropic-tuned system prompts they use in Claude Code, which make for materially different agent performance.

Their financial incentive is negative. They were hoping to force everyone to buy new speakers, driving sales. But if the community is able to get open-source firmware to run spotifyd on them, there is a non-zero (not everyone, but it's non-zero) number of people who will just not buy new speakers from them.

If they can make this open-source story go viral, then they stand to have a lot of customers defect from their competitors, even people who would never really care about open source.

Could easily be net positive.


It's not negative, though, or at least they don't think so. The fact that they are doing this OSS release means that they believe any loss of new sales would be dwarfed by a loss of goodwill if they'd bricked the old devices.

Certainly goodwill is harder to quantify.


This is why I said "direct". This is an indirect financial incentive, and there are other indirect financial incentives at play here (as others have noted).

> Their financial incentive is negative. They were hoping to force...

Maybe?

People stuck with Bose bricks might show a preference for non-Bose replacements.

People who thought Bose speakers would stay useful longer might prefer Bose, or be willing to pay for a more expensive Bose speaker model.

(Yes, I agree that some PHBs at Bose were almost certainly imagining that their customers would be forced to re-purchase Bose speakers. I'm questioning the validity of their initial assumptions.)


From talking to friends and family, so n=10-ish, non-computer people have not realized that sticking computers in things means they die on computer lifetimes, not appliance lifetimes. No more switches that last for the life of the house; no more speakers that your kids can do modest maintenance to and keep using.

And, if I, a non-Bose customer, hear that Bose open sourced a previous version of their speaker, which gives me some confidence that a present purchase might be somewhat future-proofed, then I am more likely to buy a new Bose product vs a competitor who does not provide sources.

I mean, if you take out the guard rails, here's codex in 46 lines of bash:

    #!/usr/bin/env bash
    set -euo pipefail
    
    # Fail fast if OPENAI_API_KEY is unset or empty
    : "${OPENAI_API_KEY:?set OPENAI_API_KEY}"
    MODEL="${MODEL:-gpt-5.2-chat-latest}"
    
    extract_text_joined() {
      # Collect all text fields from the Responses API output and join them
      jq -r '[.output[]?.content[]? | select(has("text")) | .text] | join("")'
    }
    
    apply_writes() {
      local plan="$1"
      echo "$plan" | jq -c '.files[]' | while read -r f; do
        local path content
        path="$(echo "$f" | jq -r '.path')"
        content="$(echo "$f" | jq -r '.content')"
        mkdir -p "$(dirname "$path")"
        printf "%s" "$content" > "$path"
        echo "wrote $path"
      done
    }
    while true; do
      printf "> "
      read -r USER_INPUT || exit 0
      [[ -z "$USER_INPUT" ]] && continue
      # File list relative to cwd
      TREE="$(find . -maxdepth 6 -type f -print | sed 's|^\./||')"
      USER_JSON="$(jq -n --arg task "$USER_INPUT" --arg tree "$TREE" \
        '{task:$task, workspace_tree:$tree,
          rules:[
            "Return ONLY JSON matching the schema.",
            "Write files wholesale: full final content for each file.",
            "If no file changes are needed, return files:[]"
          ] }')"
      RESP="$(
        curl -s https://api.openai.com/v1/responses \
          -H "Authorization: Bearer $OPENAI_API_KEY" \
          -H "Content-Type: application/json" \
          -d "$(jq -n --arg model "$MODEL" --argjson user "$USER_JSON" '{model:$model,input:[{role:"system",content:"You output only JSON file-write plans."},{role:"user",content:$user}],text:{format:{type:"json_schema",name:"file_writes",schema:{type:"object",additionalProperties:false,properties:{files:{type:"array",items:{type:"object",additionalProperties:false,properties:{path:{type:"string"},content:{type:"string"}},required:["path","content"]}}},required:["files"]}}}')"
      )"
      PLAN="$(printf "%s" "$RESP" | extract_text_joined)"
      apply_writes "$PLAN"
    done
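For what it's worth, the jq filter inside extract_text_joined can be sanity-checked standalone; the payload below is fabricated to mimic the Responses API output shape:

```shell
# Fabricated Responses-style payload with two text chunks.
RESP='{"output":[{"content":[{"type":"output_text","text":"hello "},{"type":"output_text","text":"world"}]}]}'
printf '%s' "$RESP" \
  | jq -r '[.output[]?.content[]? | select(has("text")) | .text] | join("")'
# -> hello world
```

The `?` operators make the filter return an empty string, rather than erroring, when the response has no text output.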

Impressive!

Here's an agent in 24 lines of PHP, written in 2023. But it relies on `llm` to do HTTP and JSON.

https://github.com/dave1010/hubcap


Have you tried talking to ChatGPT in your native tongue? I was blown away by my mother speaking her native tongue to ChatGPT and having it respond in that language. (It's ever so slightly not a mainstream one.)

Even in my own language I can't talk without any pauses.
