I asked it to compute the simplicial homology of RP^2, and not only was it spot on with the result, it gave me a detailed and essentially correct computation. This definitely appears in its training set, but you should still have some humility =P
How do you know it's correct? The only simplicial triangulation I know of is obtained by triangulating the sphere as an icosahedron and then identifying opposite faces to get the quotient by the antipodal action.
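(If I'm counting right, that quotient has 6 vertices, 15 edges, and 10 triangles, half of the icosahedron's 12, 30, and 20, so its Euler characteristic is 6 - 15 + 10 = 1, which matches chi(RP^2) = 1. Even that "small" triangulation means row-reducing fairly large boundary matrices, so it's not a back-of-the-envelope computation.)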
I'm not interested in engaging with you further on this topic after you devolved into ad hominems against me in the other thread. I'm here to argue in good faith. Have a good day.
You made an incorrect assessment of a basic calculation in algebraic topology and claimed that it was correct. You didn't even look at what it was computing; you simply looked at the final answer, which lined up with the answer on Wikipedia. Simplicial calculations for projective planes are not simple. The usual calculations are done with a cellular decomposition, and that's why the LLM gives the wrong answer: the actual simplicial computation is not in the dataset and requires reasoning.
Are you confusing me with someone else? When I asked, GPT computed the homology from the CW decomposition of RP^2 with three cells, which is a very simple exercise.
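(For reference, a sketch of that standard computation, not necessarily word for word what GPT wrote: one cell in each of dimensions 0, 1, and 2 gives the cellular chain complex 0 -> Z -> Z -> Z -> 0, the 2-cell is attached along a degree-2 map so d_2 is multiplication by 2, and d_1 = 0 since both ends of the 1-cell hit the single 0-cell. That gives H_0 = Z, H_1 = Z/2, and H_2 = 0.)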
That's ok. It seems like LLMs know all about simplicial complexes and homology, so I'll spend my time on more fruitful endeavors, but thanks for the advice.
To be fair, a CW decomposition isn't a simplicial complex, but simplicial and cellular homology coincide on triangulable spaces like RP^2, so I gave it the benefit of the doubt =) algebraic topology is a pretty fun field regardless of how much a language model knows about it, IMO.
lmao. you're totally right. RP^2 can be triangulated with a single triangle with all of its vertices identified. that's totally how you do a simplicial homology computation for RP^2. (hint: a simplex in a simplicial complex needs distinct vertices, so that isn't a triangulation at all.)
I asked you to explain why it's wrong, and all you said was "that's incorrect". Saying "no it isn't" got you to explain your answer far better than asking directly did in the first place.
But I would wager that its answer was wrong at the very least, and perhaps total nonsense.
That's the real hazard of using ChatGPT as a learning tool. You are in no position to evaluate whether the output makes any sense.