4 Comments
Mark:

Still pushing the junk science, huh? All you're doing here is playing word games with a supercharged predictive text computer. It's sophistry, and completely meaningless. Persuading a computer to agree with your ignorant and gauche opinions of cosmology doesn't make them any less ignorant, gauche, or indeed any less wrong.

Michael Suede:

Given that AI is going to play a major role in directing science in the future, I think it's relevant to show that AI can understand the issue of reification in modern cosmology. Call it word games if you like.

Mark:

It hasn't 'understood' anything. It has been induced to reproduce a form of words that you deliberately steered it towards. As a piece of evidence it's meaningless. The main conclusion to be drawn from this exercise is that you don't understand, or choose to ignore, how LLMs work and what they are and aren't capable of doing.

Michael Suede:

Pretty sure the main conclusion is that AI systems can recognize the reification of a model when the question is posed to them. Given that their decision-making processes are black boxes, your assertion that I "don't understand" how LLMs work seems to imply that somehow you do, which is demonstrably false.
