-
I don't know if I agree that LLMs are tools for summarisation. They are, originally, technically, auto-complete. A big black box that guesses real hard what the next word ought to be, and then the next one, and then the next one.
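A toy sketch of that loop, just to make "guesses real hard" concrete: a hand-counted bigram table standing in for the big black box (every word and count here is invented; a real model swaps the lookup table for a few billion parameters):

```python
import random

# A toy bigram table: for each word, the words seen to follow it and how
# often. All words and counts are made up; a real LLM does the same
# guess-the-next-word loop with a neural network instead of a lookup table.
bigrams = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 2},
    "sat": {"on": 3},
    "on":  {"the": 2},
    "ran": {"home": 1},
}

def complete(word, max_words=8):
    out = [word]
    for _ in range(max_words):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        # Guess real hard what the next word ought to be, then repeat.
        choice = random.choices(list(followers), weights=list(followers.values()))
        out.append(choice[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the dog ran home"
```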
-
But then, the way they do that is by taking those words & boiling them down to numbers, pulling apart all the different levels of structure present within them. Including, it turns out, some structure that looks like meaning. So maybe that's summarisation.
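A sketch of the boiling-down, with invented three-number vectors where real models use thousands of dimensions: words that mean similar things end up pointing in similar directions, which is the structure-that-looks-like-meaning.

```python
import math

# Invented 3-number "embeddings"; real models use thousands of dimensions.
vec = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.7, 0.2],
    "banana": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / mag

# Similar meanings point in similar directions.
print(cosine(vec["king"], vec["queen"]))   # high, ~0.99
print(cosine(vec["king"], vec["banana"]))  # low, ~0.30
```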
-
So now I guess everyone is getting an intuitive grasp of information theory. Entropy, surprise and compression. The feeling of seeing Gmail complete a sentence and knowing that's a sign it's an empty sentence, there for form but not conveying anything new.
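The Gmail feeling even has a formula. Information theory scores a word's surprisal as −log₂ of its probability: a word the model was already sure of carries almost no bits. A sketch with invented probabilities:

```python
import math

# Surprisal in bits: the less likely the next word, the more it tells you.
def surprisal(p):
    return -math.log2(p)

# Invented next-word probabilities after "Hope you are ...":
print(surprisal(0.95))    # "well": ~0.07 bits, Gmail can finish it for you
print(surprisal(0.0001))  # "bioluminescent": ~13.3 bits, actual news
```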
-
This is different from the "being wrong" angle. It's something like the "there's something uniquely human" angle - but without the exceptionalism. Instead it's... the richness and texture and surprise that the world contains cannot fit inside a couple of gigabytes of vectors.
-
I guess this is also the criticism of VR! And passthrough AR, too. I know that when I was working at Niantic and thinking hard about this stuff, I ended up taking it for granted that no videogame could be as good as a nice walk (but it could be a good excuse for one).