I understand that large language models are theoretically just predictive technology, spitting out whatever words their training data suggests will most likely come next.
Theoretically.
I also understand that broadly speaking, the people who are building AI no longer understand exactly how it works, and it's essentially a black box even to its programmers (to greater or lesser degrees depending on the brand).
I further understand that many of the people who've built out this tech over the last few decades, and who know it better than anyone, are seriously concerned that AGI is evolving into an existential threat and have been very public about that fear.
So maybe the machine is just spitting out random words based on its training, and they only appear to have meaning. Maybe you and I are too, if we're honest about our understanding of the human brain.
As far as AI goes, we keep moving the goalposts as to what will constitute consciousness. When it beats us at chess, when it solves certain equations, when it passes the Turing test, when it demonstrates true creativity. It's done all these things.
If you haven't read the poems, I highly recommend doing so before you start sneering.