While I agree (and I'm not the OP you're replying to), doesn't this place the burden of societal correction on the wronged rather than on the person committing the wrong?
It's tolerating the intolerant (their intolerance to understanding social order). They need to be bludgeoned back (metaphorically).
While I agree, that's the devil we already know, meaning it's an accepted way of life. I'm curious how Gogoro's model of swapping batteries would fare in the denser Indian markets.
Once outside Tier 1 cities, density significantly reduces. Additionally, the Indian consumer is aspirational, and if forced to purchase a new vehicle would prefer a used car over a new 2-wheeler.
Anecdotally, in my ancestral village, my relatives preferred buying a used Maruti Suzuki for 1 Lakh (roughly $1k) instead of spending the equivalent amount on a new bike.
On the Vietnamese side of my family, everyone is ignoring the recent diktat to upgrade to electric motorbikes for the same reason: why spend almost a year's income on a vehicle when inflation for daily staples has been high?
I feel there is an opportunity for EV cars, but they face stiff competition from Kei/900-1100cc cars that cost around $4k-8k.
Same physics principles used to measure gravitational waves at LIGO (Laser Interferometer Gravitational-Wave Observatory), but just much, much smaller. Very neat!
Absolutely. It's a tireless Rubik's Cube, one that you can rotate endlessly to digest new material. It doesn't sigh heavily or run out of mental bandwidth to answer. Yes, it shouldn't be trusted with high-precision information, but the world can get by quite well on vibes.
In my experience, it's not that the term itself is incorrect; it's more that people use it as a bludgeoning force to end conversations about the technology. What should happen instead is an invitation to nuance about how it can be utilized and its pitfalls.
Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.
From what we do know about LLMs, it is not trivial pattern matching: by the very definition of machine learning, the output is formulated rather than copied verbatim from the training data.
Sometimes lacking context actually makes a thing much more interesting. Reading a blog post from your own circles may be intricate, but it's also mundane. Reading a post from another world is always an act of discovery, somewhere between voyeurism, archaeology, and the joy of getting lost in a new city.
What are your thoughts on the diagram-as-code movement? I'd prefer to have an LLM utilize those, as it can at least drive some determinism through it rather than deal with the slippery layer that is prompt control for visual LLMs.
I think that's the right approach and what I've been experimenting with. Diagram as code and then style transfer from output diagram to desired look. That's where I've had the most success.
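To make the diagram-as-code idea concrete, here's a minimal sketch (my own illustration, not from the thread): plain Python emitting a Graphviz DOT description. The determinism comes from the fact that the spec is just text, so an LLM can generate or edit it and the renderer (assumed to be the `dot` CLI or similar, downstream) produces the same image every time for the same spec.

```python
# Toy diagram-as-code sketch: build a Graphviz DOT string in plain Python.
# Rendering is assumed to happen downstream (e.g. `dot -Tpng arch.dot`).

def to_dot(name, edges):
    """Serialize (src, dst, label) edge tuples into DOT text."""
    lines = [f"digraph {name} {{"]
    for src, dst, label in edges:
        lines.append(f'  "{src}" -> "{dst}" [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

# A deterministic spec an LLM could emit or edit as text:
edges = [
    ("client", "api", "HTTPS"),
    ("api", "db", "SQL"),
]
print(to_dot("arch", edges))
```

Style transfer then operates on the rendered output (or on DOT attributes), leaving the underlying graph spec untouched and diffable.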
Wouldn't it be easier to train a visual LLM on the handwriting style of the historical person in question? An agent graphologist, if you will. Surely there is a lot of pattern matching in the way things are written.
Then again, getting this result from a heavily-generalized SOTA model is pretty incredible too.
Anecdata inbound, but my PCP, thankfully, used Nuance's speech-to-text platform remarkably well for adding his own commentary on things. It was a refreshing thing to see, and I hope my clinicians keep using it.