I found this a frustrating book.
It's about artificial intelligence, whether or not we'll achieve it soon, and whether or not it will be good for mere human beings if we do. And while I suspect Bostrom doesn't think so, I found it, overall, depressing.
First, he wants us to understand that strong AI is coming, and maybe very soon, despite repeated failed predictions of imminent true AI, despite the fact that computers still mostly do a small subset of what human brains do (only much faster), and despite our not even knowing how consciousness emerges from the biological brain. Moreover, as soon as we have human-level artificial intelligence, we will almost immediately be completely outstripped by artificial superintelligence. The only hope for us is to start right now working out how to teach the right set of human values to machines, and to keep some degree of control over them. If we wait until it happens, it will be much too late.
And as he works through the philosophical, technological, and human motivation issues involved, he mostly lays out lots and lots of ways that this is just not going to work out. But, he would say, also ways it could work!
Except--no. In each of these scenarios, as he lays them out, the possibilities for success sound like a very narrow chance in a sea of possible disaster, or like "because it could work, really!", or like the unmotivated free-will choice of the AI itself.
If he's right about AI being upon us in the next century or so, or possibly even sooner, and about the issues he describes, we're doomed.
And there's nothing an aging, retired librarian can do to affect the likelihood of that.
I can't recommend this glimpse of likely additional disaster in the midst of this pandemic, with American democracy possibly teetering to its death, but, hey, you decide.
I bought this audiobook.