The human recognized that they had made a mistake and fixed it. As mentioned in the original comment, GPT failed to recognize its mistake even after being told. That's the key point here, and it's what suggests it can't "reason."
There are open questions about whether or not it really needs to reason given sufficient training, but that seems to be the gap here between the human and the machine.
Rewrite the problem in a way that doesn't bias it toward common priors and it reasons it out just fine:
"Suppose I have an object of type C, an object of type G and an object of type L, and I need to get them across a river. I have a boat that can only carry myself and a single other object. I am not allowed to leave the objects of type C and L alone together, and I am not allowed to leave the objects of type L and G alone together. How can I safely get all three across?"
"This is a classic puzzle known as the "river crossing problem". Here's how you can safely get all three objects across the river:
First, take the object of type L across the river. Return alone to the original side of the river. Take the object of type C across the river. Return with the object of type L to the original side of the river. Take the object of type G across the river. Return alone to the original side of the river. Finally, take the object of type L across the river again. By following these steps, you can safely transport all three objects across the river without violating the restrictions given in the problem. Note that you have to start with the object of type L because it cannot be left alone with either of the other two objects. Once L is transported to the other side, you can use it to ensure that C and G are never left alone together."
Or with Bing, you don't even need to tell it what it assumed wrong. I just told it that it's not quite the same as the classic puzzle, and it responded by correctly identifying the difference and asking me if that's what I meant, though it forgot that the lion still eats the goat. When I pointed that out, it solved the puzzle correctly.