If you are trying to contextualize someone’s opinion on current AI, I suggest asking three basic questions about their perspective. In particular, you should know which of the following three realities they are aware of. Here goes:
1. How good are the best models today?
Most people do not know this, even if you are speaking with someone at a top university.
2. How rapidly are the best current models able to self-improve?
In my view, their output is good enough that synthetic data works, they can grade themselves, and they are on a steady glide toward ongoing self-improvement. You can debate the rate, but “compound interest” gets you there sooner or later.
It is fine if someone disagrees with that, but at the very least you want them to have considered this possibility. Most observers really have not.
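To put a rough number on the "compound interest" intuition, here is a minimal back-of-the-envelope sketch in Python. The 1% per-cycle gain and the cycle counts are hypothetical, chosen only to show how even a modest improvement rate compounds into something large:

```python
# Back-of-the-envelope illustration of compounding self-improvement.
# The 1% per-cycle gain and the cycle counts below are hypothetical,
# chosen only to show how a modest rate compounds over many cycles.

def compounded_capability(initial: float, rate_per_cycle: float, cycles: int) -> float:
    """Capability level after repeated self-improvement cycles at a fixed rate."""
    return initial * (1 + rate_per_cycle) ** cycles

# A 1% gain per cycle is roughly a 2.7x gain after 100 cycles,
# and roughly a 145x gain after 500 cycles.
print(compounded_capability(1.0, 0.01, 100))  # ~2.70
print(compounded_capability(1.0, 0.01, 500))  # ~144.8
```

The exact rate is debatable; the point of the arithmetic is only that any positive rate, sustained, eventually dominates.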
3. How will the best current models be knit together in stacked, decentralized networks of self-improvement, broadly akin to “the republic of science” for human beings?
This one is far more speculative, as it is not possible to observe much directly in the public sphere. And most of what will happen does not exist yet, not even as plans on the drawing board. Still, you want a person to have given this question some thought. I believe, for one, that this will move the AIs into the realm of being truly innovative. Stack them, have some of them generate a few billion new ideas a week, have the others grade those ideas…etc.
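For concreteness, here is a minimal sketch of that generate-and-grade loop, assuming nothing about how real systems will be built. The function names, the stubbed scoring, and the pass-rate threshold are all hypothetical placeholders standing in for calls to different models in the stack:

```python
# Hypothetical sketch of a stacked generate-and-grade loop.
# generate_ideas() and grade_idea() stand in for calls to different models;
# the names, stub logic, and threshold are illustrative assumptions only.
import random
from typing import List, Tuple

def generate_ideas(n: int) -> List[str]:
    """One layer of models proposes candidate ideas (stubbed with placeholders)."""
    return [f"idea-{random.randrange(10**6)}" for _ in range(n)]

def grade_idea(idea: str) -> float:
    """Another layer of models scores each candidate (stubbed with a random score)."""
    return random.random()

def one_cycle(n_ideas: int, threshold: float) -> List[Tuple[str, float]]:
    """Generate many ideas, keep only those the grading layer scores above threshold."""
    graded = [(idea, grade_idea(idea)) for idea in generate_ideas(n_ideas)]
    return [(idea, score) for idea, score in graded if score >= threshold]

# Even a harsh filter leaves plenty to work with at high volume:
# here 100,000 candidates at a 1-in-10,000 pass rate leave ~10 survivors,
# which could then be refined, recombined, or fed back as training data.
survivors = one_cycle(n_ideas=100_000, threshold=0.9999)
print(len(survivors))
```

The design point is the division of labor: generation can be cheap and prolific because grading, done by a separate layer, bears the burden of quality control.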
I find it is difficult to have good AI conversations with people who are not conversant with at least the first two realities on this list. The very best conversations are with people who have also spent time thinking about number three.