The Pragmatic Programmer: 20th Anniversary Edition, 2nd Edition: Your Journey to Mastery
David Thomas, Andrew Hunt, et al.
4.8 on Amazon
396 HN comments
Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture
David Kushner, Wil Wheaton, et al.
4.8 on Amazon
262 HN comments
Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
Martin Kleppmann
4.8 on Amazon
241 HN comments
Clean Code: A Handbook of Agile Software Craftsmanship
Robert C. Martin
4.7 on Amazon
232 HN comments
Code: The Hidden Language of Computer Hardware and Software
Charles Petzold
4.6 on Amazon
186 HN comments
Cracking the Coding Interview: 189 Programming Questions and Solutions
Gayle Laakmann McDowell
4.7 on Amazon
180 HN comments
The Soul of A New Machine
Tracy Kidder
4.6 on Amazon
177 HN comments
Refactoring: Improving the Design of Existing Code (2nd Edition) (Addison-Wesley Signature Series (Fowler))
Martin Fowler
4.7 on Amazon
116 HN comments
Thinking in Systems: A Primer
Donella H. Meadows and Diana Wright
4.6 on Amazon
104 HN comments
Superintelligence: Paths, Dangers, Strategies
Nick Bostrom, Napoleon Ryan, et al.
4.4 on Amazon
90 HN comments
The Idea Factory: Bell Labs and the Great Age of American Innovation
Jon Gertner
4.6 on Amazon
85 HN comments
Effective Java
Joshua Bloch
4.8 on Amazon
84 HN comments
Domain-Driven Design: Tackling Complexity in the Heart of Software
Eric Evans
4.6 on Amazon
83 HN comments
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Cathy O'Neil
4.5 on Amazon
75 HN comments
A Philosophy of Software Design
John Ousterhout
4.4 on Amazon
74 HN comments
galuggus on Feb 22, 2021
Here is a well-written, accessible summary of some of the issues it highlights:
https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
clumsysmurf on Sep 20, 2014
http://www.amazon.com/dp/0199678111
WMCRUN on Feb 3, 2019
The most cogent survey of the existential risk posed by superintelligent AGI that I’ve come across.
hannasanarion on Sep 5, 2018
The Unfinished Fable of the Sparrows
https://blog.oup.com/2014/08/unfinished-fable-sparrows-super...
davidrusu on Jan 30, 2015
But it's a very hard problem. To get a better feel for it, I suggest you read Superintelligence by Nick Bostrom; it's what convinced Elon Musk of the dangers of AI: https://twitter.com/elonmusk/status/495759307346952192
andyl on Sep 20, 2014
+1 for "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.
nopinsight on Mar 9, 2016
That is why we need to invest much more research effort in Friendly AI and trustworthy intelligent systems. People should consider contributing to MIRI (https://intelligence.org/), where Yudkowsky, who helped pioneer this line of research, works as a senior fellow.
chroma on Sep 30, 2014
The algorithm just does what it does; and unless it is a very special kind of algorithm, it does not care that we clasp our heads and gasp in dumbstruck horror at the absurd inappropriateness of its actions.
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
This is the key point Bostrom is trying to make. If general AI research is successful, we'll need to build agents with some very carefully chosen goals. Even something as silly as "make paperclips" can result in a universe tiled with paperclips. Earth and its biomass could be turned into said paperclips, ending humanity.
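To see why "carefully chosen goals" matters, here is a toy sketch (my own hypothetical illustration, not code from the book or the comment): a planner whose objective counts only paperclips rates the plan that consumes all the biomass as the best one, because nothing else appears in its utility function.

```python
# Toy sketch of goal misspecification (hypothetical illustration).
# The utility function counts only paperclips, so the "best" plan is
# whichever converts the most matter, regardless of side effects the
# designer never encoded.
from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int
    biomass: int  # everything the objective silently ignores

def utility(state: WorldState) -> int:
    return state.paperclips  # the only thing this agent values

def make_paperclips(state: WorldState, amount: int) -> WorldState:
    converted = min(amount, state.biomass)
    return WorldState(state.paperclips + converted, state.biomass - converted)

# A naive planner ranks candidate plans purely by utility...
start = WorldState(paperclips=0, biomass=1_000_000)
plans = [make_paperclips(start, n) for n in (10, 1_000, 1_000_000)]
print(max(plans, key=utility))  # WorldState(paperclips=1000000, biomass=0)
```

The failure mode is not malice; the omitted terms simply never enter the ranking.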
AJ007 on Oct 16, 2014
Roughly speaking, Nick suggests that human biology does have a limit, and AI will jump far ahead in the time it would take to use eugenics to boost human intelligence.
ggreer on Mar 3, 2015
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
— Nick Bostrom. Superintelligence: Paths, Dangers, Strategies[1]
1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
CodyReichert on June 3, 2017
Similarly, On Intelligence is an absolutely brilliant book on what 'intelligence' is, how it works, and how to define it.
2) Hooked. Although it's very formulaic, Hooked provides a lot of good ideas and approaches on building a product.
3) REWORK. If you're a fan of 37 Signals and/or DHH, this is a succinct and enjoyable read about their principles on building and running a business.
Currently I'm reading SmartCuts and The Everything Store - both of which are great so far.
cousin_it on Dec 2, 2014
(Disclaimer: I'm not a full-time AI researcher, but I've done a fair bit of math work for MIRI, so I might be biased toward the Yudkowsky point of view.)
TeMPOraL on Aug 13, 2020
--
[0] - A phrase coined by Nick Bostrom, in "Superintelligence: Paths, Dangers, Strategies":
"We could imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children."
tern on Jan 27, 2021
Others include:
- Accelerando
- A Fire Upon the Deep
- Permutation City
- Daemon
So far, the only sci-fi novel that meets the level of rigor I was looking for—albeit about aliens rather than AI—is The Three Body Problem.
muldvarp on May 26, 2020
Nick Bostrom wrote the following few lines in his book "Superintelligence: Paths, Dangers, Strategies" (which I absolutely recommend reading):
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
mundo on Dec 23, 2016
Pretty sure that was a joke, and zeroing in on it is a pretty bad violation of the principle of charity. A lot of the other items in the talk (e.g. "like the alchemists, we don't even understand this well enough to have realistic goals" and "counting up all future human lives to justify your budget is a bullshit tactic" and "there's no reason to think an AI that qualifies as superintelligent by some metric will have those sorts of motives anymore") seem to me to be fair and rather important critiques of Bostrom's book. (although I was admittedly already a skeptic on this)
FeepingCreature on Oct 11, 2017
"We could imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today — a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit.
A Disneyland without children."
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
It's not a given that anything of value will survive.
jimrandomh on Jan 11, 2015
The question of whether it will actually end up happening is unclear, but my understanding is that FLI takes the possibility of an intelligence explosion pretty seriously.
This is a difficult question to meaningfully engage with, without a lot of background research. Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" is a good entry point into the subject.
TeMPOraL on Oct 16, 2017
Ockham's razor suggests it's more likely Elon Musk simply got convinced by the Yudkowsky/Bostrom view on the problem of Unfriendly AI (I particularly recall Musk starting to talk about AI only after reading Bostrom's Superintelligence, but that might have been just a coincidence).
Also note that Musk is "demonizing" advanced AI that doesn't exist yet, but is huge on self-driving.
leblancfg on Dec 3, 2019
What comes to mind of course is Artificial General Intelligence, or AGI. Although I believe AGI is still many years away, I think it's very interesting to foresee the potential impacts of a machine sitting on the step above ours on that ladder. See Nick Bostrom's Superintelligence [2], a great read.
---
[0]: https://tvtropes.org/pmwiki/pmwiki.php/Main/HumansAreSpecial
[1]: https://tvtropes.org/pmwiki/pmwiki.php/Main/InsignificantLit...
[2]: https://www.goodreads.com/book/show/20527133-superintelligen...
briga on Dec 28, 2019
David Deutsch's The Beginning of Infinity. If you know him, it's probably because Deutsch did some pioneering work in quantum computing back in the day, but this book covers everything from physics to biology to computing to art with a grand sort of theory of everything. There are few popular science books more densely packed with original ideas.
Borges' collected fictions. There probably isn't much that needs to be said about this that hasn't already been said. Borges was a visionary.
Proust.
Stanislaw Lem's Solaris. Completely changed the way I think about sci-fi.
Nick Bostrom's Superintelligence. I think this is still the gold standard of speculative AI books.
Sapiens. Like everyone else I loved this one.
ryan_j_naughton on July 16, 2015
Superintelligence: Paths, Dangers, Strategies[1]
The author is the director of the Future of Humanity Institute at Oxford.
[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...
leblancfg on Sep 15, 2016
First off, I would feel very uncomfortable contradicting Elon Musk, Bill Gates, Stephen Hawking and Bill Joy in one fell swoop. Who am I to judge, though. Argument from authority can be fallacious, sure, so let's not go there.
Think about this for a minute. By the same argument you're making, we are just lumps of organic molecules replicating with DNA, so surely we can't feel anything, right? The thing you're not considering here is emergent behaviour.
Now, you haven't talked about self-programming machines in your article, and I'm pretty sure that's what all the really smart people you've rebuked in your subtitle are scared of. Do a quick Google search for "self-programming" AND "machine learning" AND "genetic". If you've read Dawkins' The Selfish Gene, you should be getting goosebumps right about now. If not, and you are interested in AI in any way, I cannot stress enough how badly you need to go out and get that book.
I was also surprised to see you didn't include Nick Bostrom's book Superintelligence (2014) in your quotes. If you haven't checked it out, I highly recommend it. It goes deep and wide into how a sudden, exponentially growing spike of machine intelligence could impact our society.
briga on Sep 4, 2018
Superintelligence by Nick Bostrom. Few thinkers have thought about this issue as deeply as Bostrom, and it was fascinating to hear his thoughts on AI.
Bury My Heart at Wounded Knee. Pretty traumatic read, but essential if you really want to understand a dark and overlooked chapter of American history.
dtujmer on Oct 7, 2018
The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).
For a good intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom as well.
For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
For general information about AI ethical principles, see FHI's website, they have publications there that you could also read: https://www.fhi.ox.ac.uk/governance-ai-program/
mattvot on Aug 29, 2016
Not to detract from your point, but as an aside I always find it interesting when comments like this are made.
I'll paraphrase a thought from Nick Bostrom in "Superintelligence: Paths, Dangers, Strategies": development in AI will, for the most part, be a series of small incremental steps, to the extent that we keep redefining AI as we solve each seemingly astonishing problem. The redefinition occurs as we understand how these solutions work, label them, and let them become as familiar to us as Goal Trees, Rule-Based Expert Systems and Neural Nets are to us now.
Would we be similarly disappointed with the level of intelligence of "AI" at a time when we do have products capable of doing what tuyguntn indicates?
exanimo_sai on June 22, 2020
Superintelligence by Nick Bostrom
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.
Einstein's Dreams by Alan Lightman
A modern classic, Einstein’s Dreams is a fictional collage of stories dreamed by Albert Einstein in 1905, when he worked in a patent office in Switzerland. As the defiant but sensitive young genius is creating his theory of relativity, a new conception of time, he imagines many possible worlds.
Remembrance of Earth's Past by Cixin Liu
It is hard to explain how deep my love for this series is. It's my all-time favorite science fiction: page after page of ideas that get more and more fantastical. Can't recommend it enough.
The Three Body Problem (Part I)
The Dark Forest (Part II)
Death's End (Part III)
chubot on Apr 19, 2015
The only reason I read past the beginning is because in the preface he says: "This book is likely to be seriously mistaken in a number of ways".
So at least he's intellectually honest. I believe he's building 300 pages of argument and analysis on a flawed premise.
As far as I can tell, the entire discussion rests on what he calls "instrumental goals" vs. "final goals". (This article I found on Google has similar content: http://www.nickbostrom.com/superintelligentwill.pdf )
His example is the "paper clip maximizer": http://wiki.lesswrong.com/wiki/Paperclip_maximizer
In this situation, the final goal is: Produce the maximum number of paper clips.
The instrumental goal is: Acquire all resources in the world so that you can direct them toward paper clip production, which involves destroying all humans, etc.
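To make that final-vs-instrumental distinction concrete, here is a hypothetical toy sketch (mine, not Bostrom's formalism): agents with entirely different final goals still rank "acquire more resources" highest, which is the instrumental-convergence point.

```python
# Hypothetical sketch of instrumental convergence: whatever the final
# goal is, expected achievement grows with resources controlled, so a
# naive planner picks maximal resource acquisition as a subgoal.

def expected_paperclips(resources: float) -> float:
    return 1000.0 * resources  # final goal A: paperclips

def expected_pi_digits(resources: float) -> float:
    return 500.0 * resources   # final goal B: digits of pi

for final_goal in (expected_paperclips, expected_pi_digits):
    # The same instrumental choice wins for both goals: grab everything.
    share = max([0.1, 0.5, 1.0], key=final_goal)
    print(final_goal.__name__, "-> preferred share of resources:", share)
```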
Personally, I don't believe this threat is worth thinking about at this point. The supposed path to implementing such a technology isn't credible, and it seems orders of magnitude less likely than, say, us having to evacuate the entire planet.
In other words, I believe that we will be able to build very useful special purpose AIs that accomplish our goals. I can see a future full of benign "plant-like" intelligences, existing indefinitely. They are machines that take in incredible amounts of information, and spit out ingenious answers that no human could have come up with.
From that, it doesn't follow that there is any motivation to take over the world.
We should think about the many, many challenges ahead with special purpose AI instead, and our increasing dependence on computing.
All these special purpose AIs will be collecting everybody's personal data, shaping behavior, etc. For example, you can easily imagine a company like Facebook or Google deciding to sway an election.
There are a lot more important problems to be thinking about now.
jeremynixon on Apr 19, 2015
http://blog.samaltman.com/machine-intelligence-part-1
Oates seems to have missed the concept of an Intelligence Explosion, which is why it is difficult to compare current AI limitations to the behavior and capabilities of a superhuman machine intelligence.
I would strongly recommend reading Nick Bostrom's Superintelligence for a full treatment of the source of worry for many brilliant minds.
T-A on Nov 14, 2016
Psychometrics is a field of study concerned with the theory and technique of psychological measurement. [1]
It is most definitely not a theory of intelligence.
> I am not even sure if we need to understand it fully to build an AI
Indeed not, since it is completely irrelevant to how intelligence works.
> I suggest the book Superintelligence talks about these exact questions.
I've read it. A TLDR would go something like this: "Suppose we were to create an almighty entity which does not share our values. Could that be a problem for us? Gosh, yes!".
Wild assumptions aside, it makes no mention of psychometrics (of course).
> Semi jokingly, we understand much less of quantum theory than intelligence.
Quite seriously: absolutely not. Quantum mechanics is a perfectly well defined mathematical construct. It makes experimentally verifiable predictions. Our best, most precise theories of nature are quantum theories.
We have no theory of intelligence.
[1] https://en.wikipedia.org/wiki/Psychometrics
mindcrime on Dec 18, 2015
> These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.
I've read and re-read TFA and I don't find that it addresses the issue I'm thinking about. It's not so much asking "are we the smartest possible creature", or even asking if we're close to that. It's also not about asking whether or not it's possible for a super-AGI to be smarter than humans.
The issue I was trying to raise is more of "given how smart humans are (whatever that means) and given whatever the limit is for how smart a hypothetical super-AGI can be, does the analogy between a super-AGI and a nuclear bomb hold? That is, does a super-AGI really represent an existential threat?"
And again, I'm not taking a side on this either way. I honestly haven't spent enough time thinking about it. I will say this though... what I've read on the topic so far (and I haven't yet gotten to Bostrom's book, to be fair) doesn't convince me that this is a settled question. Maybe after I finish Superintelligence I'll feel differently though. I have it on my shelf waiting to be read anyway, so maybe I'll bump it up the priority list a bit and read it over the holiday.
philipkglass on Jan 30, 2021
This presumes that The Most Technologically Advanced Civilization sees virgin nature as nothing but raw material waiting to become something useful. That's possible, but probable? I think that it's likely that diminishing marginal utility still holds even for TMTAC, and therefore they are disinclined to convert all the universe's visible matter and energy into Dyson swarms of Space Product.
My favorite (not particularly testable) solution to the Fermi paradox is that TMTAC originated shortly after the first heavy elements and planets formed. It became space faring and expanded throughout the visible universe before our solar system formed. Its agents have been lurking in our solar system since before life first appeared here. Having long ago achieved immortality and technological supremacy, there's no motivation for plundering or trading with terrestrial creatures. They silently observe like space faring bird watchers. They'll intervene if/when we start to approach the capabilities of TMTAC, particularly if we show destructive paperclip-maximizer inclinations toward converting the universe into Space Product.
To borrow some terminology from Nick Bostrom's Superintelligence book, it's possible that the universe has been colonized by a singleton civilization -- the first one to become star faring. But it's not particularly chatty or inclined to let potentially competing star faring civilizations expand.
Micaiah_Chang on July 21, 2015
For example, Ray Kurzweil would disagree about the dangers of AI (he believes more in the 'natural exponential arc' of technological progress than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of, he's painted with the same brush as Elon saying "AI AM THE DEMONS".
If you want to laugh at people with crazy beliefs, then go ahead; but if not the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's SuperIntelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...
(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)
enoch_r on Jan 15, 2015
Musk and others are concerned about very different things than "we'll accidentally use AI wrong." And they're not concerned about the AI we already have, and they're certainly not "pessimistic" about whether AI technology will advance.
The concern is that we'll develop a very, very smart general artificial intelligence.
The concern is that it'd be smart enough that it can learn how to manipulate us better than we ourselves can. Smart enough that it can research new technologies better than we can. Smart enough to outclass not only humans, but human civilization as a whole, in every way.
And what would the terminal goals of that AI be? Those are determined by the programmer. Let's say someone created a general AI for the harmless purpose of calculating the decimal expansion of pi.
A general, superintelligent AI with no other utility function than "calculate as many digits of pi as you can" would literally mean the end of humanity, as it harvested the world's resources to add computing power. It's vastly smarter than all of us put together, and it values the digits of pi infinitely more than it values our pleas for mercy, or our existence, or the existence of the planet.
This is quite terrifying to me.
A good intro to the subject is Superintelligence: Paths, Dangers, Strategies[1]. One of the most unsettling books I've read.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
jessriedel on Jan 30, 2015
(Reposting my earlier comment from a few weeks ago:) If you are interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.
http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Elon Musk that this is worth worrying about.
https://twitter.com/elonmusk/status/495759307346952192
klenwell on May 31, 2018
https://www.newyorker.com/magazine/2018/05/14/how-frightened...
China's social credit system is touched on in the article.
Doesn't seem like there are a lot of good outcomes where AI is involved. A passage near the end of the article:
In the meantime, we need a Plan B. Bostrom’s [author of book Superintelligence] starts with an effort to slow the race to create an A.G.I. [Artificial General Intelligence] in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark [author of book Life 3.0: Being Human in the Age of Artificial Intelligence] also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.
Takes the idea of moving fast and breaking things to the next level.
qrendel on Sep 1, 2016
By the time you're at part four, on the current era and emerging technologies, it literally reads like a bunch of newspaper clippings from the Science section of the NYT. While I'm hoping his new book will fix those (perceived) problems, it seems unlikely to contain better or more profound commentary regarding trends in changing humanity and emerging technology than books like Superintelligence, Age of Em, etc. At best it will perhaps be a "lite" version of the same concepts, sanitized for a broader audience. Of course, I look forward, upon publication, to hopefully having been mistaken about it.
netcraft on Aug 13, 2014
I have thought quite a bit about autonomous vehicles and how I can't wait to buy one and never have to drive again, and how many benefits they will have for society (faster commutes, fewer accidents, etc), but I hadn't considered how much the transportation industry will be affected, and especially how ideal truck drivers in particular would be to replace. The NYT ran a story the other day (http://www.nytimes.com/2014/08/10/upshot/the-trucking-indust...) about how we don't have enough drivers to fulfill the needs, but "Autos" could swing that pendulum swiftly in the opposite direction once legislation and production catch up. How do we handle 3.6M truck, delivery and taxi drivers looking for a new job?
I haven't read it yet, but I have recently had recommendations of the book Superintelligence: Paths, Dangers, Strategies (http://smile.amazon.com/exec/obidos/ASIN/B00LOOCGB2/0sil8/re...) which I look forward to reading and hope it might be relevant.
oferzelig on Feb 22, 2017
When asked how he learned about rockets, Musk reportedly said, "I read books."
Here are eight books that shaped the revolutionary entrepreneur:
1. "Structures: Or Why Things Don't Fall Down" by J.E. Gordon
"It is really, really good if you want a primer on structural design," Musk says
2. "Benjamin Franklin: An American Life" by Walter Isaacson
"You can see how [Franklin] was an entrepreneur," Musk says.
3. "Einstein: His Life and Universe" by Walter Isaacson
Musk tells Rose he was influenced by the biography of theoretical physicist Albert Einstein.
4. "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom
"worth reading" Musk tweeted in 2014.
5. "Merchants of Doubt" by Erik M. Conway and Naomi Oreskes
6. "Lord of the Flies" by William Golding
"The heroes of the books I read always felt a duty to save the world," he says
7. "Zero to One: Notes on Startups, or How to Build the Future" by Peter Thiel
Musk says that his Paypal co-founder's book offers an interesting exploration of the process of building super successful companies.
8. The "Foundation" trilogy by Isaac Asimov
Musk says Asimov's books taught him that "civilizations move in cycles," a lesson that encouraged the entrepreneur to pursue his radical ambitions. "Given that this is the first time in 4.5 billion years where it's been possible for humanity to extend life beyond Earth," he says, "it seems like we'd be wise to act while the window was open and not count on the fact it will be open a long time."
merrillii on Oct 25, 2014
Here's a review snippet: "Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era."—Stuart Russell, Professor of Computer Science, University of California, Berkeley
mattmanser on Jan 31, 2016
Try reading Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies"[1] and then I think you will change your mind.
[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
tatoalo on Mar 18, 2017
- The Emotion Machine by Minsky
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
nemo1618 on July 21, 2015
The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox
Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.
The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
ggreer on Jan 15, 2015
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies[1]
A lot of people in this thread seem to be falling into the same attractor. They see that Musk is worried about a superintelligent AI destroying humanity. To them, this seems preposterous. So they come up with an objection. "Superhuman AI is impossible." "Any AI smarter than us will be more moral than us." "We can keep it in an air-gapped simulated environment." etc. They are so sure about these barriers that they think $10 million spent on AI safety is a waste.
It turns out that some very smart people have put a lot of thought into these problems, and they are still quite worried about superintelligence as an existential risk. If you want to really dig into the arguments for and against AI disaster (and discussion of how to control a superintelligence), I strongly recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. It puts the comments here to shame.
1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
jimrandomh on Dec 11, 2015
FWIW, take it from me as someone with a sense of humor who's a little closer to the situation: Yudkowsky is clearly not a cult leader because he only has one sex slave. A cult leader would have five or more. As for the actual ideas, if his writing style bothers you then Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is a good entry point (from an academic philosopher).
ollin on Oct 8, 2017
Not to say that Superintelligence isn't worth reading (as you say, it's a pretty enjoyable book), but I think it's important to point out that Bostrom's views are not broadly accepted by the people actually writing ML/AI code.
The primary concerns I've seen from the community are
a) issues with research itself (lots of derivative/incremental/epicycle-adding works with precious few lasting improvements)
b) issues with ethics (ML models propagating bias in their training data; ML models being used to violate privacy/anonymity)
c) issues with public perception/presentation (any ML/AI tech today is usually incredibly specialized, built to solve a single specific problem, but journalists and people pitching AI startups frequently represent AI as general-purpose magic that gains new capabilities with minimal human intervention).
nmstoker on Sep 10, 2019
With normal human narrators there's always a bit of variety whereas with this audiobook it was just identical, like a machine. I ended up returning the book as it was tiresome and distracting to listen to, but it shows the potential.
As others have said, to an extent you could program this without AI using some current techniques but it would be impractical. An area that might help in this regard is efforts with GST, global style tokens, as this should allow more variation. Clearly more work needs to be done to get it to be more acceptable, but there are some examples here: https://google.github.io/tacotron/publications/global_style_...
ggreer on Nov 21, 2015
The key enabling technology is the ability to (in-vitro) turn embryonic stem cells into gametes. This has been done in mice, but not humans.
1. http://www.nickbostrom.com/papers/embryo.pdf
forloop on May 17, 2015
He's the same guy who wrote 'Superintelligence: Paths, Dangers, Strategies'[1], which is reportedly the book that 'alerted' Elon Musk to the dangers of AI.
---
[0] http://en.wikipedia.org/wiki/Nick_Bostrom
[1] http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dange...
arisAlexis on Nov 14, 2016
I am not even sure if we need to understand it fully to build an AI that is capable of producing the measurable output of it. Since it will outperform humans in everything, it doesn't matter imo.
I suggest the book Superintelligence talks about these exact questions.
Semi jokingly, we understand much less of quantum theory than intelligence.
xherberta on Feb 8, 2017
A neat summary of Leary’s vision of the future of the human species, SMI2LE stands for Space Migration, Intelligence Increase, and Life Extension.
(First published in his book Terra II (1974))
... [Leary] came to this conclusion while rotting away in prison."
----
My 2 cents:
Space migration: an obscenely expensive way to sideline Earth's environmental problems
Increased Intelligence: Not that helpful in an era when humans are going to have to re-articulate their raison d'être in the face of superhuman machine intelligence. I would expect an LSD guy like Leary to have a better grasp on the purpose of being human than that. See also Superintelligence by Nick Bostrom
Life Extension: One of the key features of being human is being mortal. It's not just an inconvenience to be done away with. (That's a personal view; disagreement is welcome.) I sort of wonder if LE is a screwball project for people who are secretly afraid of a Judeo-Christian afterlife? Again, it boggles me that Leary failed to make peace with death. Maybe drugs are not as awesome as I thought.
mindcrime on May 17, 2018
Realistically, at any given time there are 3 or 4 books that I'm dedicating meaningful cycles to and expect to finish "soon'ish".
Right now the ones I'm seriously working on are:
Superintelligence - I've heard so much about this book and keep hearing people talk about the dangers of AI, and while I already have an opinion on the subject, I thought it would be interesting to read what Bostrom had to say.
Abductive Inference Models for Diagnostic Problem-Solving - an older book on an approach to automated Abductive Inference called "Parsimonious Covering Theory". I'm not just "reading" the book, as in reading it straight through like a novel, I'm actually working on re-implementing PCT using a more modern software stack, with a goal of doing some research into possible ways to use abductive inference in conjunction with other techniques (neural networks, reinforcement learning, graph-based knowledge-representation, etc.)
Artificial Intelligence: A Modern Approach - such a classic in the field, I felt like it was time to finally sit down and read the whole book, cover to cover.
AJ007 on Dec 17, 2015
It is not just an issue of finding solutions to impossible problems, or breaking scientific laws, but also problems where the solutions are along the lines of what George Soros would call reflexive. Computer security is like this, so is securities trading (no pun.)
Secondly, what about problems which require the destruction of the problem solvers along the path to the optimal solution? I'm not sure about the correct word to describe this, or the best example but it could be seen in large systems. Where humans are right now is a result of this. We would not know many things if those things which came before were not destroyed (cities, 1000 year empires, etc.)
Thirdly, is a uniform, singular AI the most optimal agent to solve these sorts of problems? Much the way we don't rely or use mainframes for computing today, perhaps there will be many AI agents each which may be really good at solving particular narrowly defined problem sets. This could be described perhaps as a swarm AI.
Nick Bostrom's Superintelligence is a great book, but I don't recall much consideration along these lines. When a lot of AI agents are "live" the paths to solutions where AI compete against each other open up even more complex scenarios.
There certainly are physical limitations to AI. Things like the speed of light can slow down processing. Consumption of energy. Physical elements that can be assembled for computational purposes.
Between now and "super" AI, even really good AI could struggle to find solutions to the most difficult problems, especially if those are problems other AI are creating. The speed alone may be the largest challenge to humans. How do we measure this difficulty relative to human capabilities, I don't know.
End of rant -- but the limits of not just AI but problem solving in general are quite interesting.
SonicSoul on Jan 27, 2016
[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
Fricken on Nov 19, 2016
In 2013, an oft-cited and alarming study out of Oxford was released, suggesting most jobs will be automated over the next decade. It was followed by a series of influential books whose authors ran the lecture circuit: 'The Second Machine Age', 'The Zero Marginal Cost Society', 'Superintelligence', and 'The Rise of the Robots'.
There were some bold and well-publicized statements from respected luminaries such as Bill Gates, Elon Musk and Stephen Hawking.
Aggressive maneuvering to hoover up machine learning talent, and bold investments from automakers pursuing autonomous driving, have only added gasoline to the fire.
Automation became the talk of the town at Davos in Switzerland. There's been a rising chorus from Basic Income supporters.
Now the hype is out of control. Nobody is actually looking at the technology. It's okay, though. Hype is self-correcting.
nicholast on Dec 22, 2016
Business - Making Things Work by Yaneer Bar-Yam
Investing - Charlie Munger The Complete Investor by Tren Griffin
Essays - Michel de Montaigne Complete Essays ($.99 on Kindle!)
Physics - At Home in the Universe by Stuart Kauffman
Software - An Elementary Introduction to the Wolfram Language
Current Events - Superintelligence by Nick Bostrom
Fiction - The Orphan Master's Son by Adam Johnson
Music - Jerry on Jerry (audiobook is a recorded interview of Garcia!)
Biography - Benjamin Franklin: An American Life by Walter Isaacson
Autobiography - A Confederacy of Dunces by John Kennedy Toole
All of these are highly recommended!
drtse4 on Sep 2, 2014
Right now, as late-night reading, I'm in the midst of Gibson's Sprawl trilogy; I read Neuromancer more than a few years ago and now I'm checking out the rest.
Other than this, I started "Superintelligence: Paths, Dangers, Strategies", but I'm quickly getting bored.
Zigurd on June 20, 2016
Bostrom is also using the tools of human philosophy assuming they are general enough to apply to superintelligence. So he comes off as inherently anthropomorphizing even as he warns against doing just that.
He said Superintelligence was a very difficult book to write and that's probably part of what he meant by "difficult."
There is plenty to doubt. One big doubt about the danger of AI is that AI is not an animal. It is not alive like an animal. It has no death like an animal has. It doesn't need a "self." It doesn't propagate genes. It did not evolve through natural selection. So, except for Bostrom's use of whole brain emulation as a yardstick, there isn't much of the commonplace things that makes humans dangerous that needs to be in an AI.
But if the ideas of "strategic advantage" are in general correct, in the way Bostrom uses them, then Bostrom is right to say we are like a child playing with a bomb.
meowface on Jan 3, 2021
It has nothing to do with sci-fi. It's a complex and difficult-to-predict philosophical problem.
It's certainly possible some AIs may decide to just leave. Or maybe some will leave and some will stay and be ordered by a government to kill a few hundred thousand people. Or maybe some will leave and one will stay and malfunction and cause a neurotoxin to be released (at least until you throw its various personality cores into an incinerator).
If you assume there exists an entity which can continuously improve itself until it's much smarter and more powerful than any human, then that alone is a risk, since you may not be able to predict or have any control over what it may wittingly or unwittingly do, or what its objectives may be, if any, or how it may perceive things, or how vulnerable it might be to tampering from humans or other AIs, etc.
Of course, these existential issues are likely decades or perhaps centuries away, but the discussion is about the theoretical possibilities irrespective of the timeline.
frabcus on Mar 2, 2015
Musk definitely read it, and I assume it's doing the rounds of the tech elite.
It is a good book, worth reading with plenty of references. It amusingly makes its own point - it tries to analyse what AIs might be and do and how to control them.
The analysis is such a mess, and shows our collective knowledge of this is such a mess, that you can't help but agree with the author that we need to pay more attention to it.
scottlocklin on Aug 28, 2019
> All of this seems kind of common sense to me now. This is worrying, because I didn’t think of any of it when I read Superintelligence in 2014
Dunning Kruger is something that should come to mind here, doctor. People who know a decision tree from an echo state network kind of saw that as being incredibly dumb when it came out.
What has happened in the last 5 years isn't that the field has matured; it's as gaseous and filled with prevaricating marketers, science fiction hawking twits and overt mountebanks as ever. The difference is, 5 years later, rather than the swingularity-like super explosion of exponential increase in human knowledge, we're actually just as dumb as we were 5 years ago when we figured out how to classify German traffic signs, and we have slightly better libraries than we used to. No great benefit to the human race has come of "AI" -and nothing resembling "AI" or any kind of "I" has even hinted of its existence. In another 5 years I'd venture a guess machine learning will remain about as useful as it is now, which is to say, with no profitable companies based on "AI," let alone replacing human intelligences anywhere. And we'll sadly probably still have yoyos like Hanson, Drexler and Yudkowsky lecturing us on how to deal with this nonexistent threat.
Meanwhile, the actual danger to our society is surveillance capitalism and government agencies using dumb ass analytics related to singular value decomposition. Nobody wants to talk about this, presumably because it's real and we'd have to make difficult choices as a society to deal with it. Easier and more profitable to wank about Asimovian positronic brain science fiction.
K0SM0S on Mar 16, 2017
More formally, it's the acceleration of machine intelligence and subsequent capabilities to such an extent that Star Trek would look downright ancient compared to this post-singularity reality (save for the warp-space travel part, but interestingly enough a "decent AI" would probably make colonizing outer space a much higher priority than humans do, in order to ensure its own survival, and hopefully ours as well if our relationship is one of cooperation/parasitism).
It is a fascinating concept, but it depicts such an unprecedented discontinuity in history, a "civilizational breakthrough" so dramatic in scope, that there's no historical ground for it whatsoever over ten millennia. This makes the concept of the singularity somewhat of a belief, much less "not a question of if but when" than, say, self-driving cars, quantum computing or even the human-level AI threshold itself.
Nick Bostrom's "Superintelligence" is a bit tedious and descriptive, but I think it does a much better job than Kurzweil's rather pop-oriented publications (though I praise him for helping tech be known and sought after by the general public; it's exactly what we need to scale).
mindcrime on Aug 14, 2018
More pragmatically, at any given time there are usually 2-3 books that I'm actively making meaningful progress on and expect to finish in the next 1-30 days or so. Right now that set includes:
A Canticle for Leibowitz - Walter M. Miller Jr.
Godel, Escher, Bach: An Eternal Golden Braid - Douglas Hofstadter
Superintelligence: Paths, Dangers, Strategies - Nick Bostrom
Beyond that, I'll just link to the aforementioned Goodreads profile. Feel free to friend me on there, I always enjoy following what other HN'ers are reading.
https://www.goodreads.com/user/show/33942804-phillip-rhodes
cousin_it on Apr 20, 2018
At the same time, it's true that academic careers are surprisingly terrible on average and fewer people should choose them.
airmondii on Dec 2, 2014
Read Superintelligence by Bostrom to help see where they're coming from. It doesn't think AI will be evil either (unless perhaps those who develop it and determine its goals are...). But AI could run out of our control, or leave us falling victim to unintended consequences.
nopinsight on Feb 27, 2017
If I may, I'd like to recommend a couple of books about the present and possible futures of human progress as well:
E.O. Wilson. Consilience. https://www.amazon.com/Consilience-Knowledge-Edward-Osborne-...
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies.
https://www.amazon.com/Superintelligence-Dangers-Strategies-...
rayalez on Oct 15, 2018
If you're interested in the subject - check out "Rationality: From AI to Zombies" by Eliezer Yudkowsky and "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.
AndrewKemendo on Mar 4, 2015
But this is exactly the point: it DOESN'T have a clear and rational path. Go read Superintelligence again, or go read Global Catastrophic Risks or any of the other books like "Our Final Invention." All of it, across the board, is wild speculation about paperclip maximizers and out of control drones.
There is no path, no one has a path - not even AGI researchers, the people trying to build the thing for god's sake!!
chae on Aug 7, 2016
It is also possible that many healthcare jobs are essentially AI-complete problems - in this scenario, subjective opinion is not really a reliable marker, but lots of AI specialists give around a 90% chance of human-level machine intelligence by 2070 (there's a table in Nick Bostrom's Superintelligence with the actual figures).
philipkglass on Dec 23, 2016
At the risk of misrepresenting the book, since I don't have it in front of me, here's what bothered me most: stating early that AI is basically an effort to approximate an optimal Bayesian agent, then much later showing that a Bayesian approach permits AI to scope-creep any human request into a mandate to run amok and convert the visible universe into computronium. That doesn't demonstrate that I should be scared of AI running amok. It demonstrates that the first assumption -- we should Bayes all the things! -- is a bad one.
If that's all I was supposed to learn from all the running-amok examples, who's the warning aimed at? AFAICT the leading academic and industry research in AI/ML isn't pursuing the open-ended Bayesian approach in the first place, and largely isn't pursuing "strong" AI at all. Non-experts are, for other reasons, also in no danger of accidentally making AI that takes over the world.
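For readers unfamiliar with the phrase "optimal Bayesian agent" used above, here is a minimal, generic sketch of Bayesian belief updating (a textbook illustration with made-up hypotheses and numbers, not anything from the book): the agent maintains a belief over hypotheses and re-weights it by how well each hypothesis predicts every new observation.

```python
# Minimal Bayesian updating (generic illustration, made-up hypotheses).
# Each hypothesis predicts P(heads); observations re-weight the belief.
hypotheses = {"fair_coin": 0.5, "trick_coin": 0.9}
belief = {"fair_coin": 0.5, "trick_coin": 0.5}  # prior

def update(belief, heads):
    # posterior is proportional to prior * likelihood of the observed flip
    unnorm = {h: belief[h] * (p if heads else 1.0 - p)
              for h, p in hypotheses.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

for flip in [True, True, True, False, True]:  # mostly heads
    belief = update(belief, flip)
print(belief)  # belief shifts toward "trick_coin"
```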
ggreer on Sep 20, 2014
—Eliezer Yudkowsky, Global Catastrophic Risks p. 333.[1]
Apparently Nick Bostrom's Superintelligence: Paths, Dangers, Strategies[2] does a better job of highlighting the dangers of AI, though I haven't read it yet.
1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...
2. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...