
Modeling errors in AI doom circles


There is a new and excellent post by titotal, here is one excerpt:

The AI 2027 [team] have picked one very narrow slice of the possibility space, and have built up their model based on that. There’s nothing wrong with doing that, as long as you’re very clear that’s what you’re doing. But if you want other people to take you seriously, you need to have the evidence to back up that your narrow slice is the right one. And while they do try and argue for it, I think they have failed, and not managed to prove anything at all.

And:

So, to summarise a few of the problems:

For method 1: […]

There is much more detail (and additional scenarios) at the link. For years now, I have been pushing the line of “AI doom talk needs traditional peer review and formal modeling,” and I view this episode as vindication of that view.
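
To make that point concrete, here is a minimal sketch (in Python) of the kind of sensitivity check a formal write-up would invite. Every number in it is a hypothetical placeholder, not a parameter from AI 2027 or from titotal’s analysis: an illustrative current task horizon H0, an initial doubling time, a shrink factor for the superexponential case, and a target threshold. The only point is that a single curve-shape assumption moves a forecast date by years.

```python
# Hypothetical placeholders -- none of these values come from AI 2027.
H0 = 1.0               # current task horizon, in hours
DOUBLING_MONTHS = 6.0  # initial doubling time, in months
TARGET = 2000.0        # "transformative" horizon, in hours

def exponential(t_months):
    """Horizon doubles every DOUBLING_MONTHS, indefinitely."""
    return H0 * 2.0 ** (t_months / DOUBLING_MONTHS)

def superexponential(t_months, shrink=0.9):
    """Each successive doubling takes `shrink` times as long as the last."""
    horizon, elapsed, step = H0, 0.0, DOUBLING_MONTHS
    while elapsed + step <= t_months:
        elapsed += step
        horizon *= 2.0
        step *= shrink
    return horizon

def years_to_target(curve):
    """Step forward 0.1 months at a time until the curve crosses TARGET."""
    t = 0.0
    while curve(t) < TARGET:
        t += 0.1
    return t / 12.0

for name, curve in [("exponential", exponential),
                    ("superexponential", superexponential)]:
    print(f"{name:>17s}: crosses {TARGET:.0f}h after ~{years_to_target(curve):.1f} years")
```

With these made-up numbers the superexponential curve crosses the threshold roughly two years earlier than the exponential one; sweeping the shrink parameter over a plausible range would turn that gap into a distribution a referee could actually interrogate.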

Addendum: Here is a not very good non-response from (some of) the authors.
