HackerNews Readings
40,000 HackerNews book recommendations identified using NLP and deep learning


The Pragmatic Programmer: 20th Anniversary Edition, 2nd Edition: Your Journey to Mastery

David Thomas, Andrew Hunt, et al.

4.8 on Amazon

396 HN comments

Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture

David Kushner, Wil Wheaton, et al.

4.8 on Amazon

262 HN comments

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems

Martin Kleppmann

4.8 on Amazon

241 HN comments

Clean Code: A Handbook of Agile Software Craftsmanship

Robert C. Martin

4.7 on Amazon

232 HN comments

Code: The Hidden Language of Computer Hardware and Software

Charles Petzold

4.6 on Amazon

186 HN comments

Cracking the Coding Interview: 189 Programming Questions and Solutions

Gayle Laakmann McDowell

4.7 on Amazon

180 HN comments

The Soul of a New Machine

Tracy Kidder

4.6 on Amazon

177 HN comments

Refactoring: Improving the Design of Existing Code (2nd Edition) (Addison-Wesley Signature Series (Fowler))

Martin Fowler

4.7 on Amazon

116 HN comments

Thinking in Systems: A Primer

Donella H. Meadows and Diana Wright

4.6 on Amazon

104 HN comments

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom, Napoleon Ryan, et al.

4.4 on Amazon

90 HN comments

The Idea Factory: Bell Labs and the Great Age of American Innovation

Jon Gertner

4.6 on Amazon

85 HN comments

Effective Java

Joshua Bloch

4.8 on Amazon

84 HN comments

Domain-Driven Design: Tackling Complexity in the Heart of Software

Eric Evans

4.6 on Amazon

83 HN comments

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Cathy O'Neil

4.5 on Amazon

75 HN comments

A Philosophy of Software Design

John Ousterhout

4.4 on Amazon

74 HN comments


leoreeves on July 30, 2018

Indeed, I recommend Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, which explores this topic in great depth, including the control problem: https://en.wikipedia.org/wiki/AI_control_problem

johnconner on June 18, 2015

You are not alone in your fears. Others have been ringing the alarm for some time. Nick Bostrom's Superintelligence is a good reference.

arkx on May 8, 2015

There is a new wave of fear of strong AI, based on writings by Nick Bostrom, especially his book 'Superintelligence'. Elon Musk has helped to spread the word.

galuggus on Feb 22, 2021

Superintelligence is a very interesting book.

Here is a well-written, accessible summary of some of the issues it highlights:

https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

clumsysmurf on Sep 20, 2014

The most interesting book I've been able to find about this topic is "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. At the moment it's a #1 bestseller in AI.

http://www.amazon.com/dp/0199678111

WMCRUN on Feb 3, 2019

Superintelligence - Nick Bostrom

The most cogent survey of the existential risk posed by superintelligent AGI that I’ve come across.

fossuser on Oct 6, 2017

I'm about halfway through his book and it's pretty good - there's this story in the beginning, but the rest of the book is pretty grounded (as opposed to Superintelligence, where I found the arguments in the first couple of chapters pretty weak and disappointing).

samblr on Sep 11, 2016

Superintelligence and The Singularity Is Near are good books to read on this.

hannasanarion on Sep 5, 2018

The prologue to Superintelligence was written just for you. It is available here:

The Unfinished Fable of the Sparrows
https://blog.oup.com/2014/08/unfinished-fable-sparrows-super...

ace_of_spades on Feb 10, 2018

The difference between all the alternatives and FB is... that FB is intelligent and learning about you through your interactions, which at scale can make FB a lot more intelligent and powerful than the rest of us. Everything that raises fears of superintelligence also applies, to a lesser degree, to a company like FB. I recommend having a look at Nick Bostrom's book Superintelligence for a good overview of why you should care about that.

rictic on July 27, 2020

Nick Bostrom's 2014 book Superintelligence is the reason that many big name figures started to take the threat seriously.

davidrusu on Jan 30, 2015

There are smart people working on it; see MIRI: https://intelligence.org/

But it's a very hard problem. To get a better feel for the problem I suggest you read Superintelligence by Nick Bostrom, it's what convinced Elon Musk of the dangers of AI: https://twitter.com/elonmusk/status/495759307346952192

andyl on Sep 20, 2014

NY Times concludes by asserting: "We (humans and AI) are going to have a lot of the same problems, and any company is preferable to going it alone." Utterly childish wishful thinking. AI is not going to be your surrogate mommy.

+1 for "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

nfg on Apr 21, 2015

I really recommend that (thoughtful) people interested in this subject read ‘Superintelligence: Paths, Dangers, Strategies’ by Nick Bostrom. http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dange...

nopinsight on Mar 9, 2016

I agree that superintelligence could bring enormous benefits to humanity but the risks are very high as well. They are in fact existential risks, as detailed in the book Superintelligence by Bostrom.

That is why we need to invest much more research effort in Friendly AI and trustworthy intelligent systems. People should consider contributing to MIRI (https://intelligence.org/), where Yudkowsky, who helped pioneer this line of research, works as a senior fellow.

chroma on Sep 30, 2014

Far too often, people anthropomorphize AI or draw from fiction. Bostrom does his best to dispel these notions.

The algorithm just does what it does; and unless it is a very special kind of algorithm, it does not care that we clasp our heads and gasp in dumbstruck horror at the absurd inappropriateness of its actions.

— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

This is the key point Bostrom is trying to make. If general AI research is successful, we'll need to build agents with some very carefully chosen goals. Even something as silly as "make paperclips" can result in a universe tiled with paperclips. Earth and its biomass could be turned into said paperclips, ending humanity.

mychaelangelo on Feb 25, 2015

More people need to read Nick Bostrom's Superintelligence book. I'm not involved in computer science academic circles but I wonder how seriously everyone else takes this topic?

krisoft on Jan 6, 2016

If it's not a snark question, and you are truly interested in the dangers intelligent people see in AI, then I recommend the book "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. I had the same question you had here, and that book showed me well-reasoned arguments from the AI worry camp.

AJ007 on Oct 16, 2014

"Superintelligence" by Nick Bostrom is well worth the read on this topic. Thanks to Elon Musk for recommending it.

Roughly speaking, Nick suggests that human biology does have a limit, and AI will jump far ahead in the time it would take to use eugenics to boost human intelligence.

ggreer on Mar 3, 2015

Unlike your warp drive or teleporter examples, we're pretty sure human-level AI is possible because human-level natural intelligence exists. The brain isn't magic. Eventually, people will figure out the algorithms running on it, then improve them. After that, there's nothing to stop the algorithms from improving themselves. And they can be greatly improved. Current brains are nowhere near the pinnacle of possible intelligences.

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies[1]

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

shoshin23 on Mar 18, 2017

Superintelligence by Nick Bostrom. It's a book that explores how superintelligence could emerge, the different ways it can take off, and what it means to us as humans. More importantly, the book takes on the difficult task of figuring out ways to make sure the AI is safe and doesn't end up in the wrong hands. A pretty interesting read that doesn't really require technical know-how.

CodyReichert on June 3, 2017

1) Superintelligence. This is a really great read about the implications of AI, or general intelligence. It's really intriguing and brings up so many scenarios I've never thought about. Anyone interested in AI should definitely read this.

Similarly, On Intelligence is an absolutely brilliant book on what 'intelligence' is, how it works, and how to define it.

2) Hooked. Although it's very formulaic, Hooked provides a lot of good ideas and approaches on building a product.

3) REWORK. If you're a fan of 37 Signals and/or DHH, this is a succinct and enjoyable read about their principles on building and running a business.

Currently I'm reading SmartCuts and The Everything Store - both of which are great so far.

cousin_it on Dec 2, 2014

As far as I can tell, that guy is not an AI researcher, and his claims are very misinformed. Many knowledgeable people in the industry take existential risk from AI seriously, e.g. Google's acquisition of DeepMind was conditional on Google creating an AI ethics board specifically against such risks. For a more thorough treatment of the topic, see Bostrom's book "Superintelligence: paths, dangers, strategies".

(Disclaimer: I'm not a full-time AI researcher, but I've done a fair bit of math work for MIRI, so I might be biased toward the Yudkowsky point of view.)

TeMPOraL on Aug 13, 2020

But what will happen when we get to that point? What world will these algorithms create for us? Will it be the utopia of Earth in Star Trek? Or will it be a "Disneyland with no children"[0], once economy becomes fully self-contained and eliminates its dependency on humans, leaving Earth to be inherited by machines endlessly working and trading, with no sentient being to see it or enjoy the fruits of that labor?

--

[0] - A phrase coined by Nick Bostrom, in "Superintelligence: Paths, Dangers, Strategies":

"We could imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children."

tern on Jan 27, 2021

I once went on a bit of a journey to find "realistic" novels about AI and outside of Superintelligence (which I enjoyed reading as if it were a novel) and The Age of Em, Excession is one of the best I found. Still, nowhere close—I eventually concluded that truly imagining artificial intelligence in a way that would make for a compelling narrative is mostly impossible.

Others include:

- Accelerando

- A Fire Upon the Deep

- Permutation City

- Daemon

So far, the only sci-fi novel that meets the level of rigor I was looking for—albeit about aliens rather than AI—is The Three Body Problem.

muldvarp on May 26, 2020

That effect is explained by the fact that the problems in AI research are thought to be solvable only by intelligent agents. This often turns out to be false: many problems that seemingly require intelligence don't actually require it.

Nick Bostrom wrote the following few lines in his book "Superintelligence: Paths, Dangers, Strategies" (which I absolutely recommend reading):

> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.

mundo on Dec 23, 2016

> I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face

Pretty sure that was a joke, and zeroing in on it is a pretty bad violation of the principle of charity. A lot of the other items in the talk (e.g. "like the alchemists, we don't even understand this well enough to have realistic goals" and "counting up all future human lives to justify your budget is a bullshit tactic" and "there's no reason to think an AI that qualifies as superintelligent by some metric will have those sorts of motives anymore") seem to me to be fair and rather important critiques of Bostrom's book. (although I was admittedly already a skeptic on this)

FeepingCreature on Oct 11, 2017

"We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and
intelligent than anything that exists on the planet today — a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit.
A Disneyland without children."

--Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

It's not a given that anything of value will survive.

jimrandomh on Jan 11, 2015

> I suspect the parent comment meant "of all things, an Intelligence Explosion by general purpose AI", which to be fair even the group set up here to study do not seriously believe in.

The question of whether it will actually end up happening is unclear, but my understanding is that FLI takes the possibility of an intelligence explosion pretty seriously.

This is a difficult question to meaningfully engage with, without a lot of background research. Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" is a good entry point into the subject.

TeMPOraL on Oct 16, 2017

> No wonder Musk tries to demonize AI.

Ockham's razor suggests it's more likely Elon Musk simply got convinced by the Yudkowsky/Bostrom view on the problem of Unfriendly AI (I particularly recall Musk starting to talk about AI only after reading Bostrom's Superintelligence, but that might have been just a coincidence).

Also note that Musk is "demonizing" advanced AI that doesn't exist yet, but is huge on self-driving.

leblancfg on Dec 3, 2019

Humanity is periodically faced with the fact that it is not special [0] and that it lives on an insignificant little blue planet [1]. I don't see that trend changing anytime soon.

What comes to mind of course is Artificial General Intelligence, or AGI. Although I believe AGI is still many years away, I think it's very interesting to foresee the potential impacts of a machine sitting on the step above us on that ladder. See Nick Bostrom's Superintelligence [2], a great read.

---

[0]: https://tvtropes.org/pmwiki/pmwiki.php/Main/HumansAreSpecial

[1]: https://tvtropes.org/pmwiki/pmwiki.php/Main/InsignificantLit...

[2]: https://www.goodreads.com/book/show/20527133-superintelligen...

briga on Dec 28, 2019

Out of Control by former Wired editor Kevin Kelly. This book was a labor of love and it shows--every chapter explores some fascinating new topic on the intersection of biology and technology. Even though the book was written 25 years ago it feels completely fresh. I'm sure anyone who reads this site would enjoy it.

David Deutsch's The Beginning of Infinity. If you know him, it's probably because Deutsch did some pioneering work in quantum computing back in the day, but this book covers everything from physics to biology to computing to art with a grand sort of theory of everything. There are few popular science books more densely packed with original ideas.

Borges' collected fictions. There probably isn't much that needs to be said about this that hasn't already been said. Borges was a visionary.

Proust.

Stanislaw Lem's Solaris. Completely changed the way I think about sci-fi.

Nick Bostrom's Superintelligence. I think this is still the gold standard of speculative AI books.

Sapiens. Like everyone else I loved this one.

ryan_j_naughton on July 16, 2015

This book by Nick Bostrom will help you find answers:
Superintelligence: Paths, Dangers, Strategies[1]

The author is the director of the Future of Humanity Institute at Oxford.

[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...

leblancfg on Sep 15, 2016

Dear Input Coffee,

First off, I would feel very uncomfortable contradicting Elon Musk, Bill Gates, Stephen Hawking and Bill Joy in one fell swoop. Who am I to judge, though. Argument from authority can be fallacious, sure, so let's not go there.

Think about this for a minute. By the same argument you're making, we are just lumps of organic molecules replicating with DNA, so surely we can't feel anything, right? The thing you're not considering here is emergent behaviour.

Now, you haven't talked about self-programming machines in your article, and I'm pretty sure that's what all the really smart people you've rebuked in your subtitle are scared of. Do a quick Google search for "self-programming" AND "machine learning" AND "genetic". If you've read Dawkins' The Selfish Gene, you should be getting goosebumps right about now. If not, and you are interested in AI in any way, I cannot stress hard enough how badly you need to go out and get that book.

I was also surprised to see you didn't include Nick Bostrom's book Superintelligence (2014) in your quotes. If you haven't checked it out, I would highly recommend it. It goes deep and wide into how a sudden, exponentially growing spike of machine intelligence could impact our society.

briga on Sep 4, 2018

The Beginning of Infinity by David Deutsch. The sheer breadth of the ideas covered in this book is breathtaking, and some of them are truly mind-bending. If you're looking for a good general science book I highly recommend this one.

Superintelligence by Nick Bostrom. Few thinkers have thought about this issue as deeply as Bostrom, and it was fascinating to hear his thoughts on AI.

Bury My Heart at Wounded Knee. Pretty traumatic read, but essential if you really want to understand a dark and overlooked chapter of American history.

willbank on May 15, 2016

Superintelligence by Nick Bostrom, one of the leading minds analysing and developing AI as it relates to human civilisation.

dtujmer on Oct 7, 2018

As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical usage of narrow AI is like talking about the unethical usage of any other tool - there is no significant difference.

The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).

For a good intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom as well.

For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

For general information about AI ethical principles, see FHI's website, they have publications there that you could also read: https://www.fhi.ox.ac.uk/governance-ai-program/

mattvot on Aug 29, 2016

> In my opinion AI-powered Sales Assistant, should check old/new customers, get information about their needs, check whether they need to buy product again, find new leads, tell Sales Manager about them, for further decision making and many more things it should capable of doing.

Not to detract from your point, but as an aside I always find it interesting when comments like this are made.

I'll paraphrase a thought from Nick Bostrom in "Superintelligence: Paths, Dangers, Strategies": development in AI will, for the most part, be a series of small incremental steps, to the extent that we redefine our definition of AI as we solve each seemingly astonishing problem. The redefinition occurs as we understand how these solutions work, label them, and let them become as familiar to us as Goal Trees, Rule-Based Expert Systems, and Neural Nets are to us now.

Would we be similarly disappointed at the level of intelligence of "AI" at a time when we do have products capable of doing what tuyguntn indicates?

exanimo_sai on June 22, 2020

The books I always fall back on giving as a gift:

Superintelligence by Nick Bostrom
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.

Einstein's Dreams by Alan Lightman
A modern classic, Einstein’s Dreams is a fictional collage of stories dreamed by Albert Einstein in 1905, when he worked in a patent office in Switzerland. As the defiant but sensitive young genius is creating his theory of relativity, a new conception of time, he imagines many possible worlds.

Remembrance of Earth's Past by Cixin Liu
It is hard to explain how deep my love for this series is. It's my all-time favorite science fiction: just page after page of ideas that get more and more fantastical. Can't recommend it enough.

The Three Body Problem (Part I)
The Dark Forest (Part II)
Death's End (Part III)

chubot on Apr 19, 2015

I read "Superintelligence" by Nick Bostrom, essentially on the recommendation of Elon Musk (he tweeted about it). It talks about the dangers of strong AI and possible paths to it, and how humans can mitigate its effects.

The only reason I read past the beginning is because in the preface he says: "This book is likely to be seriously mistaken in a number of ways".

So at least he's intellectually honest. I believe he's building 300 pages of argument and analysis on a flawed premise.

As far as I can tell, the entire discussion rests on what he calls "instrumental goals" vs. "final goals". (This article I found on Google has similar content: http://www.nickbostrom.com/superintelligentwill.pdf )

His example is the "paper clip maximizer": http://wiki.lesswrong.com/wiki/Paperclip_maximizer

In this situation, the final goal is: Produce the maximum number of paper clips.

The instrumental goal is: Acquire all resources in the world so that you can direct them toward paper clip production, which involves destroying all humans, etc.

Personally, I don't believe this threat is worth thinking about at this point. The supposed path to implementing such a technology isn't credible, and it seems orders of magnitude less likely than, say, us having to evacuate the entire planet.

In other words, I believe that we will be able to build very useful special purpose AIs that accomplish our goals. I can see a future full of benign "plant-like" intelligences, existing indefinitely. They are machines that take in incredible amounts of information, and spit out ingenious answers that no human could have come up with.

From that, it doesn't follow that there is any motivation to take over the world.

We should think about the many, many challenges ahead with special purpose AI instead, and our increasing dependence on computing.

All these special purpose AIs will collect everybody's personal data, shape people's behavior, etc. For example, you can easily imagine a company like Facebook or Google deciding to sway an election.

There are a lot more important problems to be thinking about now.

jeremynixon on Apr 19, 2015

Superhuman Machine Intelligence does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. - Sam Altman

http://blog.samaltman.com/machine-intelligence-part-1

Oates seems to have missed the concept of an Intelligence Explosion, which is why it is difficult to compare current AI limitations to the behavior and capabilities of a superhuman machine intelligence.

I would strongly recommend reading Nick Bostrom's Superintelligence for a full treatment of the source of worry for many brilliant minds.

T-A on Nov 14, 2016

> There is a very well-established theory of intelligence and a scientific branch called psychometrics

Psychometrics is a field of study concerned with the theory and technique of psychological measurement. [1]

It is most definitely not a theory of intelligence.

> I am not even sure if we need to understand it fully to build an AI

Indeed not, since it is completely irrelevant to how intelligence works.

> I suggest the book Superintelligence; it talks about these exact questions.

I've read it. A TLDR would go something like this: "Suppose we were to create an almighty entity which does not share our values. Could that be a problem for us? Gosh, yes!".

Wild assumptions aside, it makes no mention of psychometrics (of course).

> Semi-jokingly, we understand much less of quantum theory than intelligence.

Quite seriously: absolutely not. Quantum mechanics is a perfectly well defined mathematical construct. It makes experimentally verifiable predictions. Our best, most precise theories of nature are quantum theories.

We have no theory of intelligence.

[1] https://en.wikipedia.org/wiki/Psychometrics

mindcrime on Dec 18, 2015

When I say I'm "thinking out loud" what I mean is, the exact words I used may not reflect the underlying point I was getting at, because it was fuzzy in my head when I first started thinking about it. Reading all of these responses, it's clear that most people are responding to something different than the issue I really meant to raise. Fair enough, that's my fault for not being clearer. But that's the value in a discussion, so this whole exercise has been productive (for me at least).

> These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.

I've read and re-read TFA and I don't find that it addresses the issue I'm thinking about. It's not so much asking "are we the smartest possible creature", or even asking if we're close to that. It's also not about asking whether or not it's possible for a super-AGI to be smarter than humans.

The issue I was trying to raise is more of "given how smart humans are (whatever that means) and given whatever the limit is for how smart a hypothetical super-AGI can be, does the analogy between a super-AGI and a nuclear bomb hold? That is, does a super-AGI really represent an existential threat?"

And again, I'm not taking a side on this either way. I honestly haven't spent enough time thinking about it. I will say this though... what I've read on the topic so far (and I haven't yet gotten to Bostrom's book, to be fair) doesn't convince me that this is a settled question. Maybe after I finish Superintelligence I'll feel differently though. I have it on my shelf waiting to be read anyway, so maybe I'll bump it up the priority list a bit and read it over the holiday.

philipkglass on Jan 30, 2021

> Notice that, in Robin’s scenario, the present epoch of the universe is extremely special: it’s when civilizations are just forming, when perhaps a few of them will achieve technological liftoff, but before one or more of the civilizations has remade the whole of creation for its own purposes. Now is the time when the early intelligent beings like us can still look out and see quadrillions of stars shining to no apparent purpose, just wasting all that nuclear fuel in a near-empty cosmos, waiting for someone to come along and put the energy to good use.

This presumes that The Most Technologically Advanced Civilization sees virgin nature as nothing but raw material waiting to become something useful. That's possible, but probable? I think that it's likely that diminishing marginal utility still holds even for TMTAC, and therefore they are disinclined to convert all the universe's visible matter and energy into Dyson swarms of Space Product.

My favorite (not particularly testable) solution to the Fermi paradox is that TMTAC originated shortly after the first heavy elements and planets formed. It became space faring and expanded throughout the visible universe before our solar system formed. Its agents have been lurking in our solar system since before life first appeared here. Having long ago achieved immortality and technological supremacy, there's no motivation for plundering or trading with terrestrial creatures. They silently observe like space faring bird watchers. They'll intervene if/when we start to approach the capabilities of TMTAC, particularly if we show destructive paperclip-maximizer inclinations toward converting the universe into Space Product.

To borrow some terminology from Nick Bostrom's Superintelligence book, it's possible that the universe has been colonized by a singleton civilization -- the first one to become star faring. But it's not particularly chatty or inclined to let potentially competing star faring civilizations expand.

Micaiah_Chang on July 21, 2015

Maybe being shocked means that the person talking about the subject is misrepresenting it, because they themselves don't understand the arguments and are inadvertently projecting.

For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of, he's painted with the same stroke as Elon saying "AI AM THE DEMONS".

If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...

(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)

enoch_r on Jan 15, 2015

> using AI the same way we use all tools -- for our benefit

Musk and others are concerned about very different things than "we'll accidentally use AI wrong." And they're not concerned about the AI we already have, and they're certainly not "pessimistic" about whether AI technology will advance.

The concern is that we'll develop a very, very smart general artificial intelligence.

The concern is that it'd be smart enough that it can learn how to manipulate us better than we ourselves can. Smart enough that it can research new technologies better than we can. Smart enough to outclass not only humans, but human civilization as a whole, in every way.

And what would the terminal goals of that AI be? Those are determined by the programmer. Let's say someone created a general AI for the harmless purpose of calculating the decimal expansion of pi.

A general, superintelligent AI with no other utility function than "calculate as many digits of pi as you can" would literally mean the end of humanity, as it harvested the world's resources to add computing power. It's vastly smarter than all of us put together, and it values the digits of pi infinitely more than it values our pleas for mercy, or our existence, or the existence of the planet.

This is quite terrifying to me.

A good intro to the subject is Superintelligence: Paths, Dangers, Strategies[1]. One of the most unsettling books I've read.

[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

jessriedel on Jan 30, 2015

You really should think of AGI as an amoral, extremely powerful technology, like nuclear explosions. One could easily have objected that "no one would be so stupid as to design a doomsday device", but this is really relying too much on your intuition about people's motivations and not giving enough respect to the large uncertainty about how things will develop when powerful new technologies are introduced.

(Reposting my earlier comment from a few weeks ago:) If you are interested in understanding the arguments for worrying about AI safety, consider reading "Superintelligence" by Bostrom.

http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

It's the closest approximation to a consensus statement / catalog of arguments by folks who take this position (although of course there is a whole spectrum of opinions). It also appears to be the book that convinced Elon Musk that this is worth worrying about.

https://twitter.com/elonmusk/status/495759307346952192

klenwell on May 31, 2018

I just got done reading this New Yorker article yesterday:

https://www.newyorker.com/magazine/2018/05/14/how-frightened...

China's social credit system is glossed in the article.

Doesn't seem like there are a lot of good outcomes where AI is involved. A passage near the end of the article:

In the meantime, we need a Plan B. Bostrom’s [author of book Superintelligence] starts with an effort to slow the race to create an A.G.I. [Artificial General Intelligence] in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark [author of book Life 3.0: Being Human in the Age of Artificial Intelligence] also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.

Takes the idea of moving fast and breaking things to the next level.

lodgeda on Dec 21, 2019

Superintelligence by Nick Bostrom. Don't read this before you go to bed though. lolz

qrendel on Sep 1, 2016

It's not a bad book, but imo it starts off very strong and then quickly goes downhill throughout. This was the general (and unsolicited) criticism from most everyone I've shared it with. The stuff from prehistory, up to the agricultural revolution, seems to cover a lot of recent discoveries and is both fascinating and informative. The rest is, as the parent comment states, a very simplified summary of the author's favorite topics, a few paragraphs spent on each one, and clearly showing certain cultural biases (it honestly felt optimized for appeal to a TED audience). A good assigned read for early high schoolers, less useful to many beyond that point.

By the time you're at part four, on the current era and emerging technologies, it literally reads like a bunch of newspaper clippings from the Science section of the NYT. While I'm hoping his new book will fix those (perceived) problems, it seems unlikely to contain better or more profound commentary regarding trends in changing humanity and emerging technology than books like Superintelligence, Age of Em, etc. At best perhaps a "lite" version of the same concepts sanitized for a broader audience. Of course I look forward to, upon publication, hopefully having been mistaken about it.

netcraft on Aug 13, 2014

We already have an issue in the United States with not enough jobs to go around. If this dystopian outlook is truly inevitable, what are our options for mitigating it, or at least coping with it?

I have thought quite a bit about autonomous vehicles and how I can't wait to buy one and never have to drive again, how many benefits it will have on society (faster commutes, fewer accidents, etc), but I hadn't considered how much the transportation industry will be affected and especially how much truck drivers in particular would be ideal to replace. The NYT ran a story the other day (http://www.nytimes.com/2014/08/10/upshot/the-trucking-indust...) about how we don't have enough drivers to fulfill the needs, but "Autos" could swing that pendulum swiftly in the opposite direction once legislation and production catch up. How do we handle 3.6M truck, delivery and taxi drivers looking for a new job?

I haven't read it yet, but I have recently had recommendations of the book Superintelligence: Paths, Dangers, Strategies (http://smile.amazon.com/exec/obidos/ASIN/B00LOOCGB2/0sil8/re...) which I look forward to reading and hope it might be relevant.

oferzelig on Feb 22, 2017

tl;dr:

When asked how he learned about rockets, Musk reportedly said, "I read books."

Here are eight books that shaped the revolutionary entrepreneur:

1. "Structures: Or Why Things Don't Fall Down" by J.E. Gordon
"It is really, really good if you want a primer on structural design," Musk says

2. "Benjamin Franklin: An American Life" by Walter Isaacson
"You can see how [Franklin] was an entrepreneur," Musk says.

3. "Einstein: His Life and Universe" by Walter Isaacson
Musk tells Rose he was influenced by the biography of theoretical physicist Albert Einstein.

4. "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom
"worth reading" Musk tweeted in 2014.

5. "Merchants of Doubt" by Erik M. Conway and Naomi Oreskes

6. "Lord of the Flies" by William Golding
"The heroes of the books I read always felt a duty to save the world," he says

7. "Zero to One: Notes on Startups, or How to Build the Future" by Peter Thiel
Musk says that his PayPal co-founder's book offers an interesting exploration of the process of building super successful companies.

8. The "Foundation" trilogy by Isaac Asimov
Musk says Asimov's books taught him that "civilizations move in cycles," a lesson that encouraged the entrepreneur to pursue his radical ambitions. "Given that this is the first time in 4.5 billion years where it's been possible for humanity to extend life beyond Earth," he says, "it seems like we'd be wise to act while the window was open and not count on the fact it will be open a long time."

merrillii on Oct 25, 2014

For those interested in this topic I would recommend checking out the researcher Nick Bostrom and his book "Superintelligence".

Here's a review snippet: "Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era."—Stuart Russell, Professor of Computer Science, University of California, Berkeley

mattmanser on Jan 31, 2016

I think you haven't done even cursory research into A.I. superintelligence before dismissing it.

Try reading Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies"[1] and then I think you will change your mind.

[1]http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

hackermailman on Aug 10, 2017

There's a book called Superintelligence that answers this question https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

tatoalo on Mar 18, 2017

I'd spend my money on these two:

- The Emotion Machine by Minsky

- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

nemo1618 on July 21, 2015

The threat of a superintelligent AI taking over the world is certainly real -- assuming you have a superintelligent AI. If you accept that it is possible to build such an AI, then you should, at the very least, educate yourself on the existential risks it would pose to humanity. I recommend "Superintelligence: Paths, Dangers, Strategies," by Nick Bostrom (it's almost certainly where Musk got his ideas; he's quoted on the back cover).

The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox

Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.

The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.

ggreer on Jan 15, 2015

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

-- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies[1]

A lot of people in this thread seem to be falling into the same attractor. They see that Musk is worried about a superintelligent AI destroying humanity. To them, this seems preposterous. So they come up with an objection. "Superhuman AI is impossible." "Any AI smarter than us will be more moral than us." "We can keep it in an air-gapped simulated environment." etc. They are so sure about these barriers that they think $10 million spent on AI safety is a waste.

It turns out that some very smart people have put a lot of thought into these problems, and they are still quite worried about superintelligence as an existential risk. If you want to really dig into the arguments for and against AI disaster (and discussion of how to control a superintelligence), I strongly recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. It puts the comments here to shame.

1. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

greenrd on Mar 13, 2016

It's probably not a patch on what a dedicated AGI could achieve. Read Nick Bostrom's book Superintelligence for some in-depth arguments on this.

jimrandomh on Dec 11, 2015

This comes across as an instance of motte and bailey ( http://slatestarcodex.com/2014/11/03/all-in-all-another-bric... ). It would be better to either avoid the word cult, or stand behind the full connotations of the word including the implied accusations.

FWIW, take it from me as someone with a sense of humor who's a little closer to the situation: Yudkowsky is clearly not a cult leader because he only has one sex slave. A cult leader would have five or more. As for the actual ideas, if his writing style bothers you then Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is a good entry point (from an academic philosopher).

ollin on Oct 8, 2017

Not to be rude, but of the active AI researchers I've seen state an opinion, almost all of them are critical of the Bostrom book, with the main critique being (iirc) that rapid/exponential self improvement is presented as an inevitability when there is very little reason to think that this is the case.

Not to say that Superintelligence isn't worth reading (as you say, it's a pretty enjoyable book), but I think it's important to point out that Bostrom's views are not broadly accepted by the people actually writing ML/AI code.

The primary concerns I've seen from the community are

a) issues with research itself (lots of derivative/incremental/epicycle-adding works with precious few lasting improvements)

b) issues with ethics (ML models propagating bias in their training data; ML models being used to violate privacy/anonymity)

c) issues with public perception/presentation (any ML/AI tech today is usually incredibly specialized, built to solve a single specific problem, but journalists and people pitching AI startups frequently represent AI as general-purpose magic that gains new capabilities with minimal human intervention).

FeepingCreature on Jan 23, 2017

Can I recommend Bostrom's Superintelligence: Paths, Dangers, Strategies?

nmstoker on Sep 10, 2019

I believe that the "narrator" on Superintelligence by Nick Bostrom is done with text-to-speech. It's ironic given the subject matter, but the intonation and pronunciation is just too consistent and repetitive. A lot of reviews comment on the speaker (the underlying voice is an upper class English accent with a rather actorly demeanor), but I think they're being thrown off, or maybe find it convincing enough not to consider this possibility. With normal human narrators there's always a bit of variety, whereas with this audiobook it was just identical, like a machine. I ended up returning the book as it was tiresome and distracting to listen to, but it shows the potential.

As others have said, to an extent you could program this without AI using some current techniques but it would be impractical. An area that might help in this regard is efforts with GST, global style tokens, as this should allow more variation. Clearly more work needs to be done to get it to be more acceptable, but there are some examples here: https://google.github.io/tacotron/publications/global_style_...

ggreer on Nov 21, 2015

When it comes to people, you don't need much writing of DNA to get really interesting stuff. Nick Bostrom (author of Superintelligence: Paths, Dangers, Strategies) has fleshed out the idea of iterated embryo selection.[1] IES lets you do the equivalent of a millennia-long human breeding experiment in a couple of months in a lab. The result would be phenotypes that have never existed in history. These people would be smarter and healthier than anyone who lived before. It would utterly change the human condition.

The key enabling technology is the ability to (in-vitro) turn embryonic stem cells into gametes. This has been done in mice, but not humans.

1. http://www.nickbostrom.com/papers/embryo.pdf

forloop on May 17, 2015

It's by Nick Bostrom[0].

He's the same guy that wrote 'Superintelligence: Paths, Dangers, Strategies'[1], which is reportedly the book that 'alerted' Elon Musk to the dangers of AI.

---

[0] http://en.wikipedia.org/wiki/Nick_Bostrom

[1] http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dange...

arisAlexis on Nov 14, 2016

There is a very well-established theory of intelligence and a scientific branch called psychometrics, dating from the start of the century. I see you live in the US; it's the country with the most widespread use of IQ tests, in the SAT, the army, law schools, etc.

I am not even sure if we need to understand it fully to build an AI that is capable of producing the measurable output of it. Since it will outperform humans in everything, it doesn't matter, imo.

I suggest the book Superintelligence; it talks about these exact questions.

Semi-jokingly, we understand much less of quantum theory than intelligence.

xherberta on Feb 8, 2017

"... In the 1980s, [Leary's] vision could be summed up in one word: SMI2LE.

A neat summary of Leary’s vision of the future of the human species, SMI2LE stands for Space Migration, Intelligence Increase, and Life Extension.
(First published in his book Terra II (1974))

... [Leary] came to this conclusion while rotting away in prison."

----

My 2 cents:

Space migration: an obscenely expensive way to sideline Earth's environmental problems

Increased Intelligence: Not that helpful in an era when humans are going to have to re-articulate their raison d'etre in the face of superhuman machine intelligence. I would expect an LSD guy like Leary to have a better grasp on the purpose of being human than that. See also Superintelligence by Nick Bostrom

Life Extension: One of the key features of being human is being mortal. It's not just an inconvenience to be done away with. (That's a personal view; disagreement is welcome.) I sort of wonder if LE is a screwball project for people who secretly are afraid of a Judeo-Christian afterlife? Again, it boggles me that Leary failed to make peace with death? Maybe drugs are not as awesome as I thought.

mindcrime on May 17, 2018

At any given time, I have about 20-30 books tagged as "currently reading" on Goodreads[0]. So, something I've started, read at least a bit of and then stuck a bookmark in.

Realistically, at any given time there are 3 or 4 books that I'm dedicating meaningful cycles to and expect to finish "soon'ish".

Right now the ones I'm seriously working on are:

Superintelligence - I've heard so much about this book and keep hearing people talk about the dangers of AI, and while I already have an opinion on the subject, I thought it would be interesting to read what Bostrom had to say.

Abductive Inference Models for Diagnostic Problem-Solving - an older book on an approach to automated Abductive Inference called "Parsimonious Covering Theory". I'm not just "reading" the book, as in reading it straight through like a novel, I'm actually working on re-implementing PCT using a more modern software stack, with a goal of doing some research into possible ways to use abductive inference in conjunction with other techniques (neural networks, reinforcement learning, graph-based knowledge-representation, etc.)

Artificial Intelligence: A Modern Approach - such a classic in the field, I felt like it was time to finally sit down and read the whole book, cover to cover.

[0]: https://www.goodreads.com/user/show/33942804-phillip-rhodes

AJ007 on Dec 17, 2015

That was the way I have been looking at it.

It is not just an issue of finding solutions to impossible problems, or breaking scientific laws, but also problems where the solutions are along the lines of what George Soros would call reflexive. Computer security is like this; so is securities trading (no pun intended).

Secondly, what about problems which require the destruction of the problem solvers along the path to the optimal solution? I'm not sure about the correct word to describe this, or the best example, but it can be seen in large systems. Where humans are right now is a result of this. We would not know many things if those things which came before were not destroyed (cities, 1000 year empires, etc.)

Thirdly, is a uniform, singular AI the most optimal agent to solve these sorts of problems? Much the way we don't rely on or use mainframes for computing today, perhaps there will be many AI agents, each of which may be really good at solving particular narrowly defined problem sets. This could be described perhaps as a swarm AI.

Nick Bostrom's Superintelligence is a great book, but I don't recall much consideration along these lines. When a lot of AI agents are "live" the paths to solutions where AI compete against each other open up even more complex scenarios.

There certainly are physical limitations to AI. Things like the speed of light can slow down processing. Consumption of energy. Physical elements that can be assembled for computational purposes.

Between now and "super" AI, even really good AI could struggle to find solutions to the most difficult problems, especially if those are problems other AI are creating. The speed alone may be the largest challenge to humans. How we measure this difficulty relative to human capabilities, I don't know.

End of rant -- but the limits of not just AI but problem solving in general are quite interesting.

SonicSoul on Jan 27, 2016

I recommend Superintelligence [0]. It explores different plausible paths that AI could take to 1. come up to / surpass human intelligence, and 2. take over control. For example, if human-level intelligence is achieved in a computer, it can be compounded by spawning 100x or 1000x the size of Earth's population, which could statistically produce 100 Einsteins living simultaneously. Another way is shared consciousness, which would make collaboration instantaneous between virtual beings. Some of the outcomes are not so rosy for humans, and it's not due to lack of jobs! Great read.

[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

Fricken on Nov 19, 2016

It's interesting, though, to see "rise of the robots" talk break out of tech circles and enter mainstream economic and political discussions. Between the arrival of ImageNet in 2011 and now, the drums have been steadily getting louder.

In 2013, an oft-cited and alarming study out of Oxford was released, suggesting most jobs will be automated over the next decade. It was followed by a series of influential books whose authors ran the lecture circuit: 'The Second Machine Age', 'The Zero Marginal Cost Society', 'Superintelligence', and 'The Rise of the Robots'.

There were some bold and well-publicized statements from respected luminaries such as Bill Gates, Elon Musk, and Stephen Hawking.

Aggressive maneuvering to hoover up machine learning talent, and bold investments from automakers pursuing autonomous driving, have only added gasoline to the fire.

Automation became the talk of the town at Davos in Switzerland. There's been a rising chorus from Basic Income supporters.

Now the hype is out of control. Nobody is actually looking at the technology. It's okay, though. Hype is self-correcting.

hxnjxn on Nov 6, 2016

Superintelligence by Nick Bostrom

kobayashi on Dec 22, 2016

I can't disagree enough. Having recently read Superintelligence, I can say that most of the quotes taken from Bostrom's work were disingenuously cherry-picked to suit this author's argument. S/he did not write in good faith. To build a straw man out of Bostrom's theses completely undercuts the purpose of this counterpoint. If you haven't yet read Superintelligence or this article, turn back now. Read Superintelligence, then this article. It'll quickly become clear to you how wrongheaded this article is.

nicholast on Dec 22, 2016

Here are a few I enjoyed in 2016 by genre:

Business - Making Things Work by Yaneer Bar-Yam

Investing - Charlie Munger The Complete Investor by Tren Griffin

Essays - Michel de Montaigne Complete Essays ($.99 on Kindle!)

Physics - At Home in the Universe by Stuart Kauffman

Software - An Elementary Introduction to the Wolfram Language

Current Events - Superintelligence by Nick Bostrom

Fiction - The Orphan Master's Son by Adam Johnson

Music - Jerry on Jerry (audiobook is a recorded interview of Garcia!)

Biography - Benjamin Franklin An American Life by Walter Isaacson

Autobiography - A Confederacy of Dunces by John Kennedy Toole

All of these are highly recommended!

drtse4 on Sep 2, 2014

_dt47, if you like it, read everything else from Hesse (continue with Steppenwolf, Narcissus and Goldmund, The Glass Bead Game).

Right now, as late night reading, I'm in the midst of Gibson's Sprawl trilogy; I read Neuromancer more than a few years ago and now I'm checking out the rest.

Other than this, I started "Superintelligence: Paths, Dangers, Strategies", but I'm quickly getting bored.

Zigurd on June 20, 2016

Bostrom has to stack assumption on tenuous assumption because that's the nature of the problem. But some things get more certain the closer to a genuine superintelligence one gets, such as the assertion that it is the last invention humans need devise.

Bostrom is also using the tools of human philosophy assuming they are general enough to apply to superintelligence. So he comes off as inherently anthropomorphizing even as he warns against doing just that.

He said Superintelligence was a very difficult book to write and that's probably part of what he meant by "difficult."

There is plenty to doubt. One big doubt about the danger of AI is that AI is not an animal. It is not alive like an animal. It has no death like an animal has. It doesn't need a "self." It doesn't propagate genes. It did not evolve through natural selection. So, except for Bostrom's use of whole brain emulation as a yardstick, there aren't many of the commonplace things that make humans dangerous that need to be in an AI.

But if the ideas of "strategic advantage" are in general correct, in the way Bostrom uses them, then Bostrom is right to say we are like a child playing with a bomb.

meowface on Jan 3, 2021

I'd recommend reading Bostrom's and Yudkowsky's writing on this (for example: "Superintelligence: Paths, Dangers, Strategies"). Note these are philosophers trying to build AI, so they're definitely not luddites or anything. Same with Elon Musk; whatever one might think of him, he's definitely not a luddite trying to convince people to stop developing technology, yet he's also very concerned about superintelligent AI.

It has nothing to do with sci-fi. It's a complex and difficult-to-predict philosophical problem.

It's certainly possible some AIs may decide to just leave. Or maybe some will leave and some will stay and be ordered by a government to kill a few hundred thousand people. Or maybe some will leave and one will stay and malfunction and cause a neurotoxin to be released (at least until you throw its various personality cores into an incinerator).

If you assume there exists an entity which can continuously improve itself until it's much smarter and more powerful than any human, then that alone is a risk, since you may not be able to predict or have any control over what it may wittingly or unwittingly do, or what its objectives may be, if any, or how it may perceive things, or how vulnerable it might be to tampering from humans or other AIs, etc.

Of course, these existential issues are likely decades or perhaps centuries away, but the discussion is about the theoretical possibilities irrespective of the timeline.

frabcusonMar 2, 2015

I think it's because Nick Bostrom's book Superintelligence is the first general-reader overview of the subject, and it came out in the middle of last year.

Musk definitely read it, and I assume it's doing the rounds of the tech elite.

It is a good book, worth reading, with plenty of references. It amusingly makes its own point: it tries to analyse what AIs might be and do and how to control them.

The analysis is such a mess, and shows our collective knowledge of this is such a mess, that you can't help but agree with the author that we need to pay more attention to it.

scottlocklinonAug 28, 2019

A review of a book by a serial fabulist (Drexler) compared to that of a bozo moonlighting as a science fiction writer (Bostrom) done by a psychologist on a subject none of them have the slightest whit of a clue about.

> All of this seems kind of common sense to me now. This is worrying, because I didn’t think of any of it when I read Superintelligence in 2014

Dunning-Kruger is something that should come to mind here, doctor. People who know a decision tree from an echo state network kind of saw that as being incredibly dumb when it came out.

What has happened in the last 5 years isn't that the field has matured; it's as gaseous and filled with prevaricating marketers, science fiction hawking twits and overt mountebanks as ever. The difference is, 5 years later, rather than the swingularity-like super explosion of exponential increase in human knowledge, we're actually just as dumb as we were 5 years ago when we figured out how to classify German traffic signs, and we have slightly better libraries than we used to. No great benefit to the human race has come of "AI", and nothing resembling "AI" or any kind of "I" has even hinted of its existence. In another 5 years, I'd venture a guess that machine learning will remain about as useful as it is now, which is to say, there will still be no profitable companies based on "AI," let alone "AI" replacing human intelligences anywhere. And we'll sadly probably still have yoyos like Hanson, Drexler and Yudkowsky lecturing us on how to deal with this nonexistent threat.

Meanwhile, the actual danger to our society is surveillance capitalism and government agencies using dumb ass analytics related to singular value decomposition. Nobody wants to talk about this, presumably because it's real and we'd have to make difficult choices as a society to deal with it. Easier and more profitable to wank about Asimovian positronic brain science fiction.

K0SM0SonMar 16, 2017

The concept of "singularity" in this context is so far out there that, basically, should it happen, it will be obvious. Think the Matrix, deus ex machina (literal, proper meaning), Skynet (whether benevolent or not), Wall-E, etc.

More formally, it's the acceleration of machine intelligence and subsequent capabilities to such an extent that Star Trek would look downright ancient compared to this post-singularity reality (save for the warp-space travel part, though interestingly enough a "decent AI" would probably give colonizing outer space a much higher priority than humans do, in order to ensure its own survival, and hopefully ours as well if our relationship is one of cooperation/parasitism).

It is a fascinating concept, but it depicts such an unprecedented discontinuity in history, a "civilizational breakthrough" so dramatic in scope, that there's no historical ground for it whatsoever over ten millennia. Which makes the concept of singularity somewhat of a belief, much less a "not a question of if but when" than, say, self-driving cars, quantum computing or even the human-level AI threshold itself.

Nick Bostrom's "Superintelligence" is a bit tedious and descriptive, but I think it does a much better job than Kurzweil's rather pop-oriented publications (though I praise him for helping tech become known and sought after by the general public; it's exactly what we need to scale).

mindcrimeonAug 14, 2018

At any given time I'm "reading" about 30 books, as in, I have read at least some portion of it, put a bookmark in it, and added it to my "currently reading" queue on Goodreads.

More pragmatically, at any given time there are usually 2-3 books that I'm actively making meaningful progress on and expect to finish in the next 1-30 days or so. Right now that set includes:

A Canticle for Leibowitz - Walter M. Miller Jr.

Gödel, Escher, Bach: An Eternal Golden Braid - Douglas Hofstadter

Superintelligence: Paths, Dangers, Strategies - Nick Bostrom

Beyond that, I'll just link to the aforementioned Goodreads profile. Feel free to friend me on there, I always enjoy following what other HN'ers are reading.

https://www.goodreads.com/user/show/33942804-phillip-rhodes

cousin_itonApr 20, 2018

I admire the work of academic philosophers like Nick Bostrom, Peter Singer or David Chalmers (which, yes, includes the "metametaphysics" mocked in the OP). And the few philosophers I've talked to in person, like Huw Price, left a very good impression on me. And their work is important to the future of humanity: I've talked to many people who changed their whole careers after reading Singer's The Life You Can Save or Bostrom's Superintelligence.

At the same time, it's true that academic careers are surprisingly terrible on average and fewer people should choose them.

airmondiionDec 2, 2014

Fear mongering? This isn't Sarah Palin and her death panels. I'm sure Hawking and Musk are smarter and more knowledgeable on the subject than either of us, so maybe it's worth listening to them rather than dismissing out of hand. I don't see what they would have to personally gain by raising the alarm here.

Read Superintelligence by Bostrom to help see where they're coming from. It doesn't claim AI will be evil either (unless perhaps those who develop it and determine its goals are...). But AI could slip out of our control, or we could fall victim to unintended consequences.

nopinsightonFeb 27, 2017

You appear erudite and very confident in your interpretation of history. So could you explain to us why you assign greater historical importance to energy than to information technologies such as paper and the printing press, which amplified and spread the crucial cultural shift towards scientific methods and experimentation? I favor the latter, since energy has always been available; we simply lacked the knowledge to harness it efficiently.

If I may, I'd like to recommend a couple of books about the present and possible futures of human progress as well:

E.O. Wilson. Consilience. https://www.amazon.com/Consilience-Knowledge-Edward-Osborne-...

Nick Bostrom. Superintelligence: Paths, Dangers, Strategies.
https://www.amazon.com/Superintelligence-Dangers-Strategies-...

mariusz79onSep 30, 2014

I haven't read Bostrom's Superintelligence, so I may be wrong here, but this whole Evil Paper-clip AI seems like a really silly thing. First of all, making paper clips does not require superintelligence, just a simple automaton. If, however, we do employ an AI to make paper clips, it should know how many we need on average and what they are used for. If the paper clips are for our (human) use, the AI would have to be really dumb not to take into consideration that destroying humans to make paper clips does not make sense.

rayalezonOct 15, 2018

Dude, every sci-fi story about robots and the Three Laws is about robots defying those laws. It's a narrative trope; it has nothing to do with actual AI design.

If you're interested in the subject - check out "Rationality: From AI to Zombies" by Eliezer Yudkowsky and "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

AndrewKemendoonMar 4, 2015

> something which has a clear and rational path to becoming dangerous

But this is exactly the point: it DOESN'T have a clear and rational path. Go read Superintelligence again, or go read Global Catastrophic Risks or any of the other books like "Our Final Invention." All of it, across the board, is wild speculation about paperclip maximizers and out-of-control drones.

There is no path, no one has a path, not even AGI researchers, the people trying to build the thing, for god's sake!!

chaeonAug 7, 2016

Also a medical student with an interest in AI. Healthcare jobs that rely on visual recognition (dermatology, radiology, some pathology) are probably the most likely to benefit in the short term (see Enlitic). Presumably a lot of other jobs require advances in Natural Language Processing/Understanding, as one of the big problems in health is the mostly unstructured nature of the data.

It is also possible that many healthcare jobs are essentially AI-complete problems; in this scenario, subjective opinion is not really a reliable marker, but lots of AI specialists give around a 90% chance of human-level machine intelligence by 2070 (there's a table in Nick Bostrom's Superintelligence with the actual figures).

philipkglassonDec 23, 2016

I read Superintelligence and found it "watery" -- weak arguments mixed with sort of interesting ones, plus very wordy.

At the risk of misrepresenting the book, since I don't have it in front of me, here's what bothered me most: stating early that AI is basically an effort to approximate an optimal Bayesian agent, then much later showing that a Bayesian approach permits AI to scope-creep any human request into a mandate to run amok and convert the visible universe into computronium. That doesn't demonstrate that I should be scared of AI running amok. It demonstrates that the first assumption -- we should Bayes all the things! -- is a bad one.

If that's all I was supposed to learn from all the running-amok examples, who's the warning aimed at? AFAICT the leading academic and industry research in AI/ML isn't pursuing the open-ended Bayesian approach in the first place, and largely isn't pursuing "strong" AI at all. Non-experts are, for other reasons, also in no danger of accidentally making AI that takes over the world.

ggreeronSep 20, 2014

"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."

—Eliezer Yudkowsky, Global Catastrophic Risks p. 333.[1]

Apparently Nick Bostrom's Superintelligence: Paths, Dangers, Strategies[2] does a better job of highlighting the dangers of AI, though I haven't read it yet.

1. http://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostrom...

2. http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
