Comment from https://tunisbayclub.com/index.php?threads/ai-2027-the-polycule-has-a-new-nuclear-threat-for-us.2999/
It’s a surprisingly dated projection. “Brute scaling will lead to AGI” was the party line two years ago because it was a great way to pull in investors, but no one really believes it anymore. Without a winner-take-all effect ensuring that AGI can only be created by a quasi-national lab, containment becomes infeasible, because the big breakthrough could come from anywhere.
Scott Alexander and friends are still hung up on the AI box experiment and have a hard time conceiving of AGI as anything other than an Eliezer-like intelligence trying to reason its way out of jail.
It’s also a surprisingly “unagentic” view of AI. […]
I quite agree as far as the specific narrative proffered. The authors know that a good campfire story reels ’em in, and that’s fine. There *is* a danger to humankind there, though, just not in the exact way they interpret it. If you replace ‘artificial intelligences’ with ‘devils’, and ‘alignment’ with podvig for the Greater Jihad, you will see that the article and even the narrative can be read with insight and profit. But I digress…
The models themselves and their training are not the problem. It’s not the models that need to be put in a box, but the people who use them.