Hacker News | mandliya's comments

CUDA programming (writing CUDA kernels) might be a good direction too.

The GPU race is getting really hot, and a lot of work is being done to squeeze every ounce of performance out of these chips, especially for LLM training and inference.

One resource I would recommend is “Programming Massively Parallel Processors” [1].

I am also learning it as a hobby project and uploading my notes here [2].

[1] https://shop.elsevier.com/books/programming-massively-parall...

[2] https://github.com/mandliya/PMPP_notes


I am curious whether you have tried something like LangChain [1]; it could solve the problem of not remembering previous conversations. In one of the examples, people put an entire company Notion database on top of GPT-3 to answer questions specifically from that database.

PS: not tried myself.

[1] https://github.com/hwchase17/langchain
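The "database on top of GPT-3" pattern boils down to retrieval plus prompt stuffing. Here is a rough, hypothetical sketch (real pipelines use embeddings for relevance; this stand-in just uses word overlap, and the prompt format is made up):

```python
# Hypothetical sketch of retrieval-augmented question answering:
# pick the chunks most relevant to the question, then stuff them
# into the prompt sent to the model.

def score(chunk, question):
    """Crude relevance score: count of shared lowercase words."""
    q = set(question.lower().split())
    return len(q & set(chunk.lower().split()))

def answer_prompt(chunks, question, top_k=2):
    """Build a prompt from the top_k most relevant chunks."""
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQ: {question}"
```

In a real setup the chunks would come from the Notion export and the scoring from an embedding index, but the shape of the prompt is the same.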


If I understand correctly, it doesn't really remember all previous conversations; it summarizes them to pack more into the limited context window, right? That's kind of just kicking the can down the road rather than solving the problem of actual long-term memory.
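That summarize-to-fit idea can be sketched in a few lines. This is a toy illustration, not LangChain's actual implementation: recent turns are kept verbatim, older ones are collapsed by a stand-in "summarizer" that throws information away, which is exactly the lossiness being objected to.

```python
# Toy sketch of summarize-to-fit conversation memory. Older turns are
# collapsed into a crude summary so the prompt stays small; detail is
# lost in the process ("kicking the can down the road").

class SummarizingMemory:
    def __init__(self, keep_recent=2):
        self.turns = []            # full transcript, oldest first
        self.keep_recent = keep_recent

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def _summarize(self, turns):
        # Stand-in for an LLM summarizer: keep each turn's first 5 words.
        return " / ".join(f"{s}: {' '.join(t.split()[:5])}" for s, t in turns)

    def build_prompt(self):
        recent = self.turns[-self.keep_recent:]
        older = self.turns[:-self.keep_recent]
        parts = []
        if older:
            parts.append("Summary of earlier conversation: " + self._summarize(older))
        parts.extend(f"{s}: {t}" for s, t in recent)
        return "\n".join(parts)
```

Anything that falls out of the summary is simply gone, which is why this differs from genuine long-term memory.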


Isn’t this the same as replying to ChatGPT in the same conversation?


It's more that it lets the bot query a database (using a DSL you teach it) when it wants to.

So, for example, if it couldn't remember an anecdote from the client's past, I guess it could search the history?

E.g. ask it to search for “Sarah” and then use all stories about her as part of a prompt, iteratively rerunning itself.

I think humans would still beat it at synthesising patterns drawn from disparate sessions. But it sounds doable to code…
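The "search the history, then re-prompt" loop described above might look something like this. Everything here is hypothetical: the transcript format, `search_history`, and the prompt template are all made up for illustration.

```python
# Hypothetical sketch of keyword retrieval over past sessions,
# feeding the hits back into a new prompt for another model pass.

def search_history(transcript, keyword):
    """Return every past line that mentions the keyword."""
    return [line for line in transcript if keyword.lower() in line.lower()]

def build_followup_prompt(transcript, keyword, question):
    """Assemble a prompt from matching history plus the new question."""
    hits = search_history(transcript, keyword)
    context = "\n".join(hits) if hits else "(no matching history)"
    # The assembled prompt would be sent back to the model iteratively.
    return f"Relevant past sessions:\n{context}\n\nQuestion: {question}"
```

Synthesising patterns across disparate sessions is the hard part; the retrieval plumbing itself is the easy, codeable bit.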


TIL about London syndrome and Lima syndrome from the second link.


It's the top result for me too in the US.


I have been a premium member for a while, and it has helped me so, so much. Thank you! However, I don't see the new guide feature. Has it launched yet?


Thanks! We decided we’d give existing users the choice to upgrade because it’s quite different. If you want to migrate your account, you can do it from here: https://www.rescuetime.com/new_rescuetime


Very responsibly done! The issue was not that they had to share the logs with law enforcement; the issue was that their marketing message was incorrect. This is a responsible step.


Consumers relied upon ProtonMail's prominent and concise marketing claim. Consumers who signed up for the service are now in jeopardy, perhaps facing real [legal] injury based on a reasonable expectation of not having their IP logged.

The privacy-centric nature of this abuse is unlikely to result in a class-action-type of response, but Caveat Emptor abuses can be dealt with by the marketplace, too.


What do you suggest they do to rectify the situation?


Here are some ideas: Proactive compensation, an apology letter, hosting an AMA live stream, and a postmortem on how this misleading messaging made it out in the first place.


Pay damages.


I don't think you can get damages for what might happen in the future, only for real damages, and probably only the activist whose IP was disclosed can prove damages.

Probably the best you could hope for would be to get out of a long-term contract: “I paid for a 2-year term based on promises that weren't true.”

Otherwise, your best recourse to prevent your IP from being disclosed in the future is to find a provider that won't disclose it under any circumstances (probably not possible), or hide your IP yourself.


The fact that the messaging was there to begin with is the issue. People assume tech companies are immune from the law based on make-believe claims that their ideals allow them to circumvent it.


Love how you put this. Yes, the marketing message was incorrect.

Did they also fix the part about "no personal info to open an account?". They require a phone number to register through Tor >.<


Responsible to silently remove a lie?


I know what you mean. I loved Vsauce when it originally came out. I remember getting an “oh, wow!” reaction in that amazing first season whenever Michael would explain things. Later, when it moved, it was too much fluff and too little of the core material I originally loved, and I eventually lost touch completely.


I listen to this speech every two months. I have known a colleague for years, but he became a good friend because I was able to understand his perspective in a useless conflict. This speech is so powerful, and it calls attention to true human virtues that are easily forgotten (empathy, compassion). It is a good reminder of how we live our lives as the center of our own universe, completely unaware!


Dropping Michael Nielsen's excellent related piece on spaced repetition here:

Augmenting Long Term Memory http://augmentingcognition.com/ltm.html
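Nielsen's piece is built around Anki-style spaced repetition. The core scheduling idea can be sketched as review intervals that grow multiplicatively on success and reset on failure; this is a simplified illustration, not the full SM-2 algorithm (which also adapts the ease factor per card).

```python
# Simplified spaced-repetition scheduling sketch: each successful
# review multiplies the interval by an "ease" factor, a failure
# resets the card to be reviewed tomorrow.

def next_interval(current_days, remembered, ease=2.5):
    if not remembered:
        return 1                       # forgot: start over tomorrow
    return max(1, round(current_days * ease))

def schedule(reviews, start=1):
    """Given a list of pass/fail review outcomes, return the
    successive intervals (in days) until the next review."""
    intervals, days = [], start
    for ok in reviews:
        days = next_interval(days, ok)
        intervals.append(days)
    return intervals
```

The exponentially widening gaps are what make the review workload sustainable: a mature card might only come back every few months.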


This looks amazing. Are features like discoverability and creator recommendations on the list?

If I follow n of these people, then person x would be a good creator to follow, something like that. I realized I don't know many of the creators at all.


Yeah this is an awesome suggestion. It can be pretty easy to predict who someone would enjoy reading if they follow related people within their domain.
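A simple version of that "follows of similar followers" idea is plain co-occurrence counting. This is a hypothetical sketch with a made-up data shape (user name mapped to a set of followed creators), not anything from the actual product:

```python
# Hypothetical co-follow recommender: score creators you don't yet
# follow by how much the users who follow them overlap with your
# own follow list, weighting by the size of the overlap.

from collections import Counter

def recommend(follows, user, k=3):
    """Suggest up to k new creators for `user`.

    follows: dict mapping user -> set of creator names (made-up schema).
    """
    mine = follows[user]
    scores = Counter()
    for other, theirs in follows.items():
        if other == user:
            continue
        overlap = len(mine & theirs)
        if overlap == 0:
            continue                      # no shared taste signal
        for creator in theirs - mine:
            scores[creator] += overlap    # weight by shared follows
    return [c for c, _ in scores.most_common(k)]
```

With enough follow data this kind of neighborhood method is often a decent first cut before anything fancier.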

