Hacker News | energy123's comments

This is quite a distortion of the facts to push an agenda.

Desalination technology can solve their problems completely, but they armed proxies that attacked the two countries in the region (Saudi Arabia and Israel) that can help them.

They also imprisoned the one qualified guy in their country who blew the whistle on their water mismanagement (e.g. farming water-intensive crops in a desert), Dr. Madani.

You could read the Wikipedia page to learn the other man-made reasons behind this crisis. That's preferable to coming here to play defense for a corrupt theocracy. Not that I doubt that climate change is one of the causes.


It's hard enough to get a reliable variance-covariance estimate.

This is why Markowitz isn't used much in the industry, at least not in a plug-and-play fashion. Empirical volatility, and the variance-covariance matrix more generally, is a useful descriptive statistic, but the matrix has high sampling variance, which makes Markowitz garbage in, garbage out. Unlike in other fields, you can't just make or collect more data to reduce the sampling variance of the inputs. So you want to regularize the inputs or take some kind of hybrid approach with a discretionary overlay.
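To make "regularize the inputs" concrete, here is a minimal sketch (toy data, not anything a real desk runs as-is) of shrinking the sample covariance with Ledoit-Wolf before building a minimum-variance portfolio:

    import numpy as np
    from sklearn.covariance import LedoitWolf

    # Toy daily returns: 250 observations of 20 assets, so the sample
    # covariance (20*21/2 = 210 free parameters) is estimated from thin data.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=(250, 20))

    sample_cov = np.cov(returns, rowvar=False)          # noisy, high sampling variance
    shrunk_cov = LedoitWolf().fit(returns).covariance_  # shrunk toward a scaled identity

    def min_variance_weights(cov):
        # Unconstrained minimum-variance portfolio: solve C w = 1, then normalize.
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # Weights from the shrunk matrix are typically less extreme and more
    # stable across resamples than weights from the raw sample matrix.
    print(np.abs(min_variance_weights(sample_cov)).max())
    print(np.abs(min_variance_weights(shrunk_cov)).max())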

I have some familiarity with the Markowitz model, but certainly not as much as you do with its practical use. Could you share notes/articles/talks on that? I’m super interested to learn more.

The Black-Litterman model is an example of how to address the shortcoming of unreliable empirical inputs.

You'll also see more ad hoc approaches, such as simulating hypothetical scenarios to determine worst-case outcomes.

It's not math heavy. Math heavy is a smell. Expect to see fairly simple Monte Carlo simulations, but with significant thought put into the assumptions.
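For flavor, a minimal sketch of the kind of scenario-based Monte Carlo I mean - the scenario table and probabilities below are made up, and choosing them carefully is where all the real work goes:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hand-specified scenarios: (mean annual return, annual vol, probability).
    # These assumptions are the whole game; the simulation itself is trivial.
    scenarios = {
        "base case":  (0.06, 0.12, 0.70),
        "rate shock": (-0.10, 0.25, 0.20),
        "crisis":     (-0.30, 0.40, 0.10),
    }

    n_paths = 100_000
    names = list(scenarios)
    probs = [scenarios[k][2] for k in names]
    picks = rng.choice(len(names), size=n_paths, p=probs)

    pnl = np.empty(n_paths)
    for i, name in enumerate(names):
        mu, vol, _ = scenarios[name]
        mask = picks == i
        pnl[mask] = rng.normal(mu, vol, size=mask.sum())

    # Worst-case style summaries
    var_5 = np.percentile(pnl, 5)
    print("5% VaR:", var_5)
    print("expected shortfall:", pnl[pnl <= var_5].mean())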


> This is why Markowitz isn't used much in the industry

This may be one reason, but the return part is much more problematic than the risk part.


I know what you mean, but semantics is about relative positions of points in a given space. Comparing two points from two different spaces is apples and oranges. I feel like this analogy should be salvageable with a small tweak, however.

Predicting the next word requires understanding, they're not separate things. If you don't know what comes after the next word, then you don't know what the next word should be. So the task implicitly forces a more long-horizon understanding of the future sequence.

This is utterly wrong. Predicting the next word requires a large sample of data made into a statistical model. It has nothing to do with "understanding", which implies it knows why rather than what.

Ilya Sutskever was on a podcast, saying to imagine a mystery novel where at the end it says “and the killer is: (name)”. If it’s just a statistical model generating the next most likely word, how can it do that in this case if it doesn’t have some understanding of all the clues, etc.? A specific name is not statistically likely to appear.

I once was chatting with an author of books (very much an amateur) and he said he enjoyed writing because he liked discovering where the story goes. IE, he starts and builds characters and creates scenarios for them and at some point the story kind of takes over, there is only one way a character can act based on what was previously written, but it wasn't preordained. That's why he liked it, it was a discovery to him.

I'm not saying this is the right way to write a book but it is a way some people write at least! And one LLMs seem capable of doing. (though isn't a book outline pretty much the same as a coding plan and well within their wheelhouse?)


Can current LLMs actually do that, though? What Ilya posed was a thought experiment: if it could do that, then we would say that it has understanding. But AFAIK that is beyond current capabilities.

Someone should try it and create a new "mysterybench": find all mystery novels written after the LLM training cutoff, and see how many models unravel the mystery.
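Something like this would be enough to get started - a model-agnostic sketch where ask_model is a placeholder for whatever API you want to test, and the grading is deliberately crude:

    import re

    def make_prompt(novel_without_reveal: str) -> str:
        return (novel_without_reveal
                + "\n\nBased only on the story above, who is the killer? "
                  "Answer with a single name.")

    def solved(novel_without_reveal: str, culprit: str, ask_model) -> bool:
        # ask_model: any callable taking a prompt string and returning the
        # model's text response (placeholder, not a specific vendor API).
        answer = ask_model(make_prompt(novel_without_reveal))
        return re.search(re.escape(culprit), answer, re.IGNORECASE) is not None

    def mysterybench(cases, ask_model) -> float:
        # cases: list of (novel text with the reveal removed, culprit name),
        # ideally from mysteries published after the model's training cutoff.
        return sum(solved(text, name, ask_model) for text, name in cases) / len(cases)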

This implies understanding of preceding tokens, no? GP was saying they have understanding of future tokens.

It can't do that without the answer to who did it being in the training data. I think the reason people keep falling for this illusion is that they can't really imagine how vast the training dataset is. In all cases where it appears to answer a question like the one you posed, it's regurgitating the answer from its training data in a way that creates an illusion of using logic to answer it.

> It can't do that without the answer to who did it being in the training data.

Try it. Write a simple original mystery story, and then ask a good model to solve it.

This isn't your father's Chinese Room. It couldn't solve original brainteasers and puzzles if it were.


That’s not true, at all.

Please…go on.

You sound more like a stochastic parrot than an LLM does at this point.

"Understanding" is just a trap to get wrapped up in. A word with no definition and no test to prove it.

Whether or not the models are "understanding" is ultimately immaterial, as their ability to do things is all that matters.


If they can't do things that require understanding, it's material, bub.

And just because you have no understanding of what "understanding" means, doesn't mean nobody does.


> doesn't mean nobody does

If it's not a functional understanding that allows one to replicate the functionality of understanding, is it really understanding?


If you're claiming a transformer model is a Markov chain, this is easily disprovable by, e.g., asking the model why it isn't a Markov chain!

But here is a really big one of those if you want it: https://arxiv.org/abs/2401.17377


Modern LLMs are post trained for tasks other than next word prediction.

They still output words though (except for multi-modal LLMs), so that does involve next-word generation.


The line between understanding and “large sample of data made into a statistical model” is kind of fuzzy.

> Predicting the next word requires understanding

If we were talking about humans trying to predict next word, that would be true.

There is no reason to suppose that an LLM is doing anything other than deep pattern prediction pursuant to, and no better than needed for, next-word prediction.


There is plenty of reason. This article is just one example of many. People bring it up because LLMs routinely do things we call reasoning when we see them manifest in other humans. Brushing it off as 'deep pattern prediction' is genuinely meaningless. Nobody who uses that phrase in that way can actually explain what they are talking about in a way that can be falsified. It's just vibes. It's an unfalsifiable conversation-stopper, not a real explanation. You can replace "pattern matching" with "magic" and the argument is identical, because the phrase isn't actually doing any work.

A - A force is required to lift a ball

B - I see Human-N lifting a ball

C - Obviously, Human-N cannot produce forces

D - Forces are not required to lift a ball

Well sir, why are you so sure Human-N cannot produce forces? How is she lifting the ball? Well, of course, Human-N is just using s̶t̶a̶t̶i̶s̶t̶i̶c̶s̶ magic.


You seem to be ignoring two things...

First, the obvious one, is that LLMs are trained to auto-regressively predict human training samples (i.e. essentially to copy them, without overfitting), so OF COURSE they are going to sound like the training set - intelligent, reasoning, understanding, etc, etc. The mistake is to anthropomorphize the model because it sounds human, and to attribute understanding etc to the model itself rather than to the mental abilities of the humans who wrote the training data, which it is merely reflecting.

The second point is perhaps a bit more subtle, and is about the nature of understanding and the differences between what an LLM is predicting and what the human cortex - also a prediction machine - is predicting...

When humans predict, what we're predicting is something external to ourselves - the real world. We observe, over time we see regularities, and from this predict we'll continue to see those regularities. Our predictions include our own actions as an input - how the external world will react to our actions - and this is how we learn how to act.

Understanding something means being able to predict how it will behave, both left alone, and in interaction with other objects/agents, including ourselves. Being able to predict what something will do if you poke it is essentially what it means to understand it.

What an LLM is predicting is not the external world and how it reacts to the LLM's actions, since it is auto-regressively trained - it is only predicting a continuation of its own output (actions) based on its own immediately preceding output (actions)! The LLM therefore itself understands nothing, since it has no grounding for what it is "talking about", or for how the external world behaves in reaction to its own actions.

The LLM's appearance of "understanding" comes solely from the fact that it is mimicking the training data, which was generated by humans who do have agency in the world and understanding of it. But the LLM has no visibility into the generative process of the human mind - only into the artifacts (words) it produces - so the LLM is doomed to operate in a world of words, where all it might be considered to "understand" is its own auto-regressive generative process.


You’re restating two claims that sound intuitive but don’t actually hold up when examined:

1. “LLMs just mimic the training set, so sounding like they understand doesn’t imply understanding.”

This is the magic argument reskinned. Transformers aren’t copying strings, they’re constructing latent representations that capture relationships, abstractions, and causal structure because doing so reduces loss. We know this not by philosophy, but because mechanistic interpretability has repeatedly uncovered internal circuits representing world states, physics, game dynamics, logic operators, and agent modeling. “It’s just next-token prediction” does not prevent any of that from occurring. When an LLM performs multi-step reasoning, corrects its own mistakes, or solves novel problems not seen in training, calling the behavior “mimicry” explains nothing. It’s essentially saying “the model can do it, but not for the reasons we’d accept,” without specifying what evidence would ever convince you otherwise. Imaginary distinction.

2. “Humans predict the world, but LLMs only predict text, so humans understand but LLMs don’t.”

This is a distinction without the force you think it has. Humans also learn from sensory streams over which they have no privileged insight into the generative process. Humans do not know the “real world”; they learn patterns in their sensory data. The fact that the data stream for LLMs consists of text rather than photons doesn’t negate the emergence of internal models. An internal model of how text-described worlds behave is still a model of the world.

If your standard for “understanding” is “being able to successfully predict consequences within some domain,” then LLMs meet that standard, just in the domains they were trained on, and today's state of the art is trained on more than just text.

You conclude that “therefore the LLM understands nothing.” But that’s an all-or-nothing claim that doesn’t follow from your premises. A lack of sensorimotor grounding limits what kinds of understanding the system can acquire; it does not eliminate all possible forms of understanding.

Wouldn't birds that can navigate by the earth's magnetic field then say humans have no understanding of electromagnetism? They get trained on sensorimotor data humans will never be able to train on. If you think humans have access to the "real world", think again. They have a tiny, extremely filtered slice of it.

Saying “it understands nothing because autoregression” is just another unfalsifiable claim dressed as an explanation.


> This is the magic argument reskinned. Transformers aren’t copying strings, they’re constructing latent representations that capture relationships, abstractions, and causal structure because doing so reduces loss.

Sure (to the second part), but the latent representations aren't the same as a human's. The human's world, the one they have experience with and therefore representations of, is the real world. The LLM's world, the one it has experience with and therefore representations of, is the world of words.

Of course an LLM isn't literally copying - it has learnt a sequence of layer-wise next-token predictions/generations (copying of partial embeddings to next token via induction heads etc), with each layer having learnt what patterns in the layer below it needs to attend to, to minimize prediction error at that layer. You can characterize these patterns (latent representations) in various ways, but at the end of the day they are derived from the world of words it is trained on, and are only going to be as good/abstract as next token error minimization allows. These patterns/latent representations (the "world model" of the LLM if you like) are going to be language-based (incl language-based generalizations), not the same as the unseen world model of the humans who generated that language, whose world model describes something completely different - predictions of sensory inputs and causal responses.

So, yes, there is plenty of depth and nuance to the internal representations of an LLM, but there is no logical reason to think that the "world model" of an LLM is similar to the "world model" of a human, since they live in different worlds, and any "understanding" the LLM itself can be considered as having is going to be based on its own world model.

> Saying “it understands nothing because autoregression” is just another unfalsifiable claim dressed as an explanation.

I disagree. It comes down to how you define understanding. A human understands (correctly predicts) how the real world behaves, and the effect their own actions will have on the real world. This is what the human is predicting.

What an LLM is predicting is effectively "what will I say next" after "the cat sat on the". The human might see a cat and, based on circumstances and experience of cats, predict that the cat will sit on the mat. This is because the human understands cats. The LLM may predict the next word as "mat", but this does not reflect any understanding of cats - it is just a statistical word prediction based on the word sequences it was trained on, notwithstanding that this prediction is based on the LLM's world-of-words model.


>So, yes, there is plenty of depth and nuance to the internal representations of an LLM, but no logical reason to think that the "world model" of an LLM is similar to the "world model" of a human since they live in different worlds, and any "understanding" the LLM itself can be considered as having is going to be based on it's own world model.

So LLMs and humans are different and have different sensory inputs. So what? This is true of all animals. You think dolphins and orcas are not intelligent and don't understand things?

>What an LLM is predicting is effectively "what will I say next" after "the cat sat on the". The human might see a cat and based on circumstances and experience of cats predict that the cat will sit on the mat.

Genuinely don't understand how you can actually believe this. A human who predicts "mat" does so because of the popular phrase. That's it. There is no reason to predict it over the numerous things cats regularly sit on, often much more so than mats (if you even have one). It's not because of any super special understanding of cats. You are doing the same thing the LLM is doing here.


> You think dolphins and orcas are not intelligent and don't understand things ?

Not sure where you got that non sequitur from ...

I would expect most animal intelligence (incl. humans) to be very similar, since their brains are very similar.

Orcas are animals.

LLMs are not animals.


Orca and human brains are similar, in the sense we have a common ancestor if you look back far enough, but they are still very different and focus on entirely different slices of reality and input than humans will ever do. It's not something you can brush off if you really believe in input supremacy so much.

From the orca's perspective, many of the things we say we understand are similarly '2nd hand hearsay'.


I think you are arguing against yourself here.

To follow your hypothetical, if an Orca were to be exposed to human language, discussing human terrestrial affairs, and were able to at least learn some of the patterns, and maybe predict them, then it should indeed be considered not to have any understanding of what that stream of words meant - I wouldn't even elevate it to '2nd hand hearsay'.

Still, the Orca, unlike an LLM, does at least have a brain, does live in and interact with the real world, and could probably be said to "understand" things in its own watery habitat as well as we do.

Regarding "input supremacy" :

It's not the LLM's "world of words" that really sets it apart from animals/humans, since there are also multi-modal LLMs with audio and visual inputs more similar to a human's sensory inputs. The real difference is what they are doing with those inputs. The LLM is just a passive observer, whose training consisted of learning patterns in its inputs. A human/animal is an active agent, interacting with the world, and thereby causing changes in the input data it is then consuming. The human/animal is learning how to DO things, and gaining understanding of how the world reacts. The LLM is learning how to COPY things.

There are of course many other differences between LLMs/Transformers and animal brains, but even if we were to eliminate all these differences the active vs passive one would still be critical.


Regarding cats on mats ...

If you ask a human to complete the phrase "the cat sat on the", they will probably answer "mat". This is memorization, not understanding. The LLM can do this too.

If you just input "the cat sat on the" to an LLM, it will also likely just answer "mat" since this is what LLMs do - they are next-word input continuers.

If you said "the sat sat on the" to a human, they would probably respond "huh?" or "who the hell knows!", since the human understands that cats are fickle creatures and that partial sentences are not the conversational norm.

If you ask an LLM to explain its understanding of cats, it will happily reply, but the output will not be its own understanding of cats - it will be parroting some human opinion(s) it got from the training set. It has no first hand understanding, only 2nd hand hearsay.


>If you said "the sat sat on the" to a human, they would probably respond "huh?" or "who the hell knows!", since the human understands that cats are fickle creatures and that partial sentences are not the conversational norm.

I'm not sure what you're getting at here? You think LLMs don't similarly answer 'What are you trying to say?' Sometimes I wonder if the people who propose these gotcha questions ever bother to actually test them on said LLMs.

>If you ask an LLM to explain its understanding of cats, it will happily reply, but the output will not be its own understanding of cats - it will be parroting some human opinion(s) it got from the training set. It has no first hand understanding, only 2nd hand hearsay.

Again, you're not making the distinction you think you are. Understanding from '2nd hand hearsay' is still understanding. The vast majority of what humans learn in school is such.


> Sometimes I wonder if the people who propose these gotcha questions ever bother to actually test them on said LLMs

Since you asked, yes, Claude responds "mat", then asks if I want it to "continue the story".

Of course, if you know anything about LLMs you should realize that they are just input continuers, and any conversational skills come from post-training. To an LLM a question is just an input whose human-preferred (as well as statistically most likely) continuation is a corresponding answer.

I'm not sure why you regard this as a "gotcha" question. If you're expressing opinions on LLMs, then table stakes should be to have a basic understanding of LLMs - what they are internally, how they work, and how they are trained, etc. If you find a description of LLMs as input-continuers in the least bit contentious then I'm sorry to say you completely fail to understand them - this is literally what they are trained to do. The only thing they are trained to do.


Claude and GPT both ask for clarification

https://claude.ai/share/3e14f169-c35a-4eda-b933-e352661c92c2

https://chatgpt.com/share/6919021c-9ef0-800e-b127-a6c1aa8d9f...

>Of course if you know anything about LLMs you should realize that they are just input continuers, and any conversational skills comes from post training.

No, they don't. Post-training makes things easier, more accessible and consistent, but conversational skills are in pre-trained LLMs just fine. Append a small transcript to the start of the prompt and you would have the same effect.

>I'm not sure why you regard this as a "gotcha" question. If you're expressing opinions on LLMs, then table stakes should be to have a basic understanding of LLMs - what they are internally, how they work, and how they are trained, etc.

You proposed a distinction and explained a situation which would make that distinction falsifiable. And I simply told you LLMs don't respond the way you claim they would. Even when models respond "mat" (now I think your original point had a typo?), it is clearly not due to a lack of understanding of what normal sentences are like.

>If you find a description of LLMs as input-continuers in the least bit contentious then I'm sorry to say you completely fail to understand them - this is literally what they are trained to do. The only thing they are trained to do.

They are predictors. If the training data is solely text then the output will be more text, but that need not be the case. Words can go in while images or actions or audio may come out. In that sense, humans are also 'input continuers'.


> Claude and GPT both ask for clarification

Yeah - you might want to check what you actually typed there.

Not sure what you're trying to prove by doing it yourself though. Have you heard of random sampling? Never mind ...


>Yeah - you might want to check what you actually typed there.

That's what you typed in your comment. Go check. I just figured it was intentional since surprise is the first thing you expect humans to show in response to it.

>Not sure what you're trying to prove by doing it yourself though. Have you heard of random sampling? Never mind ...

I guess you fancy yourself a genius who knows all about LLMs now, but sampling wouldn't matter here. Your whole point was that it happens because of a fundamental limitation on the part of LLMs that renders them unable to do it. Even one contrary response, never mind multiple, would be enough. After all, some humans would simply say 'mat'.

Anyway, it doesn't really matter. Completing 'mat' doesn't have anything to do with a lack of understanding. It's just the default 'assumption' that it's a completion that is being sought.


Anything can be euphemized. Human intelligence is atoms moving around the brain. General relativity is writing on a piece of paper.

If you want to say human and LLM intelligence are both 'deep pattern prediction' then sure, but mostly, and certainly in the case I was replying to, people just use it as a means to draw an imaginary, unfalsifiable distinction between what LLMs do and what the super special humans do.

How'd you do at the International Math Olympiad this year?

How would you do multiplying 10000 pairs of 100 digit numbers in a limited amount of time? We don't anthropomorphize calculators though...

One problem for your argument is that transformer networks are not, and weren't meant to be, calculators. Their raw numerical calculating abilities are shaky when you don't let them use external tools, but they are also entirely emergent. It turns out that language doesn't just describe logic, it encodes it. Nobody expected that.

To see another problem with your argument, find someone with weak reasoning abilities who is willing to be a test subject. Give them a calculator -- hell, give them a copy of Mathematica -- and send them to IMO, and see how that works out for them.


I hear the LLM was able to parrot fragments of the stuff it was trained to memorize, and did very well

Yeah, that must be it.

Well, being able to extrapolate solutions to "novel" mathematical exercises based on a very large sample of similar tasks in your dataset seems like a reasonable explanation.

The question is how well it would do if it were trained without those samples.


Gee, I don't know. How would you do at a math competition if you weren't trained with math books? Sample problems and solutions are not sufficient unless you can genuinely apply human-level inductive and deductive reasoning to them. If you don't understand that and agree with it, I don't see a way forward here.

A more interesting question is, how would you do at a math competition if you were taught to read, then left alone in your room with a bunch of math books? You wouldn't get very far at a competition like IMO, calculator or no calculator, unless you happen to be some kind of prodigy at the level of von Neumann or Ramanujan.


> A more interesting question is, how would you do at a math competition if you were taught to read, then left alone in your room with a bunch of math books?

But that isn't how an LLM learnt to solve math olympiad problems. This isn't a base model just trained on a bunch of math books.

The way they get LLMs to be good at specialized things like math olympiad problems is to custom train them for this using reinforcement learning - they give the LLM lots of examples of similar math problems being solved, showing all the individual solution steps, and train on these, rewarding the model when (due to having selected an appropriate sequence of solution steps) it is able itself to correctly solve the problem.

So, it's not a matter of the LLM reading a bunch of math books and then being expert at math reasoning and problem solving, but more along the lines of "monkey see, monkey do". The LLM was explicitly shown how to solve these problems step by step, then trained extensively until it got it and was able to do it itself. It's probably a reflection of the self-contained and logical nature of math that this works - that the LLM can be trained on one group of problems and the generalizations it has learnt work on unseen problems.

The dream is to be able to teach LLMs to reason more generally, but the reasons this works for math don't generally apply, so it's not clear that this math success can be used to predict future LLM advances in general reasoning.


> The dream is to be able to teach LLMs to reason more generally, but the reasons this works for math don't generally apply

Why is that? Any suggestions for further reading that justifies this point?

Ultimately, reinforcement learning is still just a matter of shoveling in more text. Would RL work on humans? Why or why not? How similar is it to what kids are exposed to in school?


An important difference between reinforcement learning (RL) and pre-training is the error feedback that is given. For pre-training the error feedback is just next token prediction error. For RL you need to have a goal in mind (e.g. successfully solving math problems) and the training feedback that is given is the RL "reward" - a measure of how well the model output achieved the goal.

With RL used for LLMs, it's the whole LLM response that is being judged and rewarded (not just the next word), so you might give it a math problem and ask it to solve it, then when it was finished you take the generated answer and check if it is correct or not, and this reward feedback is what allows the RL algorithm to learn to do better.
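As a toy illustration of that reward signal (not any lab's actual pipeline - real verifiers parse answers much more carefully), the check can be as blunt as:

    import re

    def math_reward(model_output: str, reference_answer: str) -> float:
        # Verifiable reward: 1.0 if the last number in the model's full
        # response matches the known answer, else 0.0. Note it judges the
        # whole response, unlike the per-token loss used in pre-training.
        numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
        if not numbers:
            return 0.0
        return 1.0 if numbers[-1] == reference_answer else 0.0

    # e.g. math_reward("... so the total is 42", "42") -> 1.0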

There are at least two problems with trying to use RL as a way to improve LLM reasoning in the general case.

1) Unlike math (and also programming) it is not easy to automatically check the solution to most general reasoning problems. With a math problem asking for a numerical answer, you can just check against the known answer, or for a programming task you can just check if the program compiles and the output is correct. In contrast, how do you check the answer to more general problems such as "Should NATO expand to include Ukraine?"! If you can't define a reward then you can't use RL. People have tried using "LLM as judge" to provide rewards in cases like this (give the LLM response to another LLM, and ask it if it thinks the goal was met), but apparently this does not work very well.

2) Even if you could provide rewards for more general reasoning problems, and therefore were able to use RL to train the LLM to generate good solutions for those training examples, this is not very useful unless the reasoning it has learnt generalizes to other problems it was not trained on. In narrow logical domains like math and programming this evidently works very well, but it is far from clear how learning to reason about NATO will help with reasoning about cooking or cutting your cat's nails, and the general solution to reasoning can't be "we'll just train it on every possible question anyone might ever ask"!

I don't have any particular reading suggestions, but these are widely accepted limiting factors to using RL for LLM reasoning.

I don't think RL for humans would work too well, and it's not generally the way we learn, or kids are mostly taught in school. We mostly learn or are taught individual skills and when they can be used, then practice and learn how to combine and apply them. The closest to using RL in school would be if the only feedback an English teacher gave you on your writing assignments was a letter grade, without any commentary, and you had to figure out what you needed to improve!


Goes to show the "frontier" is not really one frontier. It's a social/mathematical construct that's useful for a broad comparison, but if you have a niche task, there's no substitute for trying the different models.

If they can identify which one is correct, then it's the same as always being correct, just with an expensive compute budget.

I sort of assumed they cached like 30 inferences and just repeat them, but maybe I'm being too cynical.

The hope was for this understanding to emerge as the most efficient solution to the next-token prediction problem.

Put another way, it was hoped that once the dataset got rich enough, developing this understanding would actually be more efficient for the neural network than memorizing the training data.

The useful question to ask, if you believe the hope is not bearing fruit, is why. Point specifically to the absent data or the flawed assumption being made.

Or more realistically, put in the creative and difficult research work required to discover the answer to that question.


I've heard it described that being poor is expensive. The poorer you are the more expensive it is. Being poor in a poor country is the most expensive. You can't just buy coffee, you can only afford a sachet of coffee. So per gram you're paying double. You can't afford medical care, so the condition gets worse and thus more expensive to do something about. You're in debt most of the time, which is expensive. You have to travel for work, again expensive. You rent, expensive. It must be awful.

The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money. Take boots, for example. ... A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. ... But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that'd still be keeping his feet dry in ten years' time, while a poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet. This was the Captain Samuel Vimes 'Boots' theory of socio-economic unfairness.

https://en.wikipedia.org/wiki/Special:BookSources/0-575-0550...


When I think of so many people who can least afford it buying everyday items at dollar stores, CVS, gas stations, and other convenience stores with such high unit prices, it bums me out.

> You can't just buy coffee, you can only afford a sachet of coffee.

Imagine if they tried to do without coffee until they saved a few dollars for a can. It could take years!


I think it's some sort of decline that's happening. Some of it is dumb environmental policy: showerheads that don't spray enough water; dishwashers that don't wash properly, so you need to wash dishes before you put them in and after you take them out; time-of-use pricing that means you need to cook at inconvenient times, and even then most of the bill is fixed charges. It just goes on. The decline in Canada seems like it's mostly targeted towards poor people. I know a family friend with a broken leg who has been waiting months for a specialist, when at any time he could get an infection and die from it. Totally preventable even in a third world country, yet it is what it is. My mom also knows someone who's waiting for a procedure, and they've asked him multiple times if he wants to do MAID. It's almost cynical.
