HackerNews Readings
40,000 HackerNews book recommendations identified using NLP and deep learning


Life 3.0: Being Human in the Age of Artificial Intelligence

Max Tegmark, Rob Shapiro, et al.

4.5 on Amazon

12 HN comments

Quantum Computing: An Applied Approach

Jack D. Hidary

4.5 on Amazon

11 HN comments

UNIX and Linux System Administration Handbook

Evi Nemeth, Garth Snyder, et al.

4.7 on Amazon

11 HN comments

Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software

Michael Sikorski and Andrew Honig

4.7 on Amazon

11 HN comments

Trust Me, I'm Lying: Confessions of a Media Manipulator

Ryan Holiday

4.4 on Amazon

11 HN comments

Building Microservices: Designing Fine-Grained Systems

Sam Newman

4.5 on Amazon

10 HN comments

C++ Concurrency in Action

Anthony Williams

4.7 on Amazon

10 HN comments

Serious Cryptography: A Practical Introduction to Modern Encryption

Jean-Philippe Aumasson

4.7 on Amazon

10 HN comments

Theory of Fun for Game Design

Raph Koster

4.3 on Amazon

10 HN comments

The Model Thinker: What You Need to Know to Make Data Work for You

Scott E. Page, Jamie Renell, et al.

4.5 on Amazon

10 HN comments

Making Things Happen: Mastering Project Management (Theory in Practice)

Scott Berkun

4.4 on Amazon

10 HN comments

Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers

Andy Greenberg, Mark Bramhall, et al.

4.7 on Amazon

10 HN comments

Designing Distributed Systems: Patterns and Paradigms for Scalable, Reliable Services

Brendan Burns

4.3 on Amazon

9 HN comments

High Performance Python: Practical Performant Programming for Humans

Micha Gorelick and Ian Ozsvald

4.8 on Amazon

9 HN comments

JavaScript: The Definitive Guide: Master the World's Most-Used Programming Language

David Flanagan

4.7 on Amazon

9 HN comments


agi_prometheus on Apr 22, 2021

I love to read. I have been doing it since my school days, so it has always been a part of my life. I would recommend these books:

If you have a science background, read:
1) Good to Great
2) Life 3.0

If you don't have a science background, read:
1) The Future of Capitalism
2) Good to Great

"Good to great" is a fantastic book for every entrepreneur out there.

Go read them right now!

_sy_ on Dec 7, 2017

A good, recent, and comprehensive primer on intelligence explosion and its theoretical implication: "Life 3.0: Being Human in the Age of Artificial Intelligence" by MIT physicist Max Tegmark.

arkano on Sep 22, 2019

There's at least one book that explores this. Life 3.0: Being Human in the Age of Artificial Intelligence https://g.co/kgs/xr1eHw

__sy__ on May 12, 2020

For those interested in the intelligence-to-consciousness topic, MIT physicist Max Tegmark's Life 3.0 might still be the best primer. He argues that consciousness is an emergent property of intelligent systems (the ability to store, compute, and learn). From a purely physical-computation standpoint, there doesn't seem to be a hard rule that says consciousness can't be based on inorganic materials.

dtujmer on Oct 7, 2018

As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical usage of narrow AI is like talking about the unethical usage of any other tool - there is no significant difference.

The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).

For a good intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom as well.

For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

For general information about AI ethical principles, see FHI's website, they have publications there that you could also read: https://www.fhi.ox.ac.uk/governance-ai-program/

gfodor on May 10, 2018

I'm not sure who you're arguing with here?

My point was that a suboptimal outcome is one where no careful, methodical thought went into the design of AI systems. If AI designers and researchers blindly react to public outcry within 24 hours of a demo (which, in general, is something I would expect Google to do), the kind of thinking you mention above is just as unlikely to happen as if they trudged forward without any consideration of these things at all. In both cases, this is a suboptimal outcome for society.

In one, we get a fairly random, undesigned world of AI systems that don't serve anyone well and generally are underutilized because nobody is willing to push the boundaries. In the other, we get dystopian AI hell. It's important that researchers be given the space to think you describe. The right way to allow that is to foster an informed and open-minded public about the emergence of AI and the decisions we need to make about it as a society, and not have prominent voices writing about knee-jerk reactions making authoritative demands to a specific public demo after just a day. (See Max Tegmark's book Life 3.0 for the kinds of stuff we need more of being put into the world imho.)

klenwell on May 31, 2018

I just got done reading this New Yorker article yesterday:

https://www.newyorker.com/magazine/2018/05/14/how-frightened...

China's social credit system is glossed in the article.

Doesn't seem like there are a lot of good outcomes where AI is involved. A passage near the end of the article:

In the meantime, we need a Plan B. Bostrom’s [author of book Superintelligence] starts with an effort to slow the race to create an A.G.I. [Artificial General Intelligence] in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark [author of book Life 3.0: Being Human in the Age of Artificial Intelligence] also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.

Takes the idea of moving fast and breaking things to the next level.

spaceknarf on Nov 12, 2018

Yes, in the book Life 3.0 by Max Tegmark! (An AGI is implemented that makes money on Amazon Mechanical Turk.)

matthalvorson on Mar 16, 2021

I'd recommend the book Life 3.0. The author surveys a large number of AI researchers on this timing question (I think 95% said AGI is guaranteed within the next 50 years, IIRC). He also discusses why this time is different from past hype cycles, like in the '60s when a group of researchers thought they would make significant progress toward AGI over the course of a summer.

milansm on July 2, 2019

Life 3.0: Being Human in the Age of Artificial Intelligence [by Max Tegmark]

> When they launched, Prometheus was slightly worse than them at programming AI systems, but made up for this by being vastly faster, spending the equivalent of thousands of person-years chugging away at the problem while they chugged a Red Bull. By 10 a.m., it had completed the first redesign of itself, v2.0, which was slightly better but still subhuman. By the time Prometheus 5.0 launched at 2 p.m., however, the Omegas were awestruck: it had blown their performance benchmarks out of the water, and the rate of progress seemed to be accelerating. By nightfall, they decided to deploy Prometheus 10.0 to start phase 2 of their plan: making money.

marrowgari on Dec 12, 2018

Life 3.0 by Max Tegmark - great glimpse into the current and potential future of AI

Leonardo da Vinci by Walter Isaacson - fascinating look into the real life of Leonardo, demystifying the genius

Excession by Iain M Banks - a bit of a letdown

Bluets by Maggie Nelson - lyrical and philosophical and explicit ruminations on the color blue

How to Change Your Mind by Michael Pollan - a lot of already known and rehashed info on psychedelics

Lost City of the Incas by Hiram Bingham - the Yale professor who discovered Machu Picchu. Good history of the Incas and the region

Fahrenheit 451 by Ray Bradbury - Classic!

New York 2140 by Kim Stanley Robinson - NYC underwater in the future. A bit of a letdown compared to his Mars series

Shiver by Junji Ito - short stories from the king of Japanese horror manga

Lenin: The Man, the Dictator, and the Master of Terror by Victor Sebestyen - great bio of Vladimir Lenin. I knew very little about him before reading this. Fantastic!

Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville - the definitive textbook on deep learning

The Curse of Bigness by Tim Wu - interesting read into the history of Antitrust and the Sherman Act and how they relate to modern tech giants like Amazon, Google, Facebook

Connecting the Dots by John Chambers - a bit dry. Lessons Chambers learned while CEO of Cisco

amasad on Oct 6, 2017

A couple of weeks ago I went to see Max Tegmark (the author of this piece) speak about his new book "Life 3.0: Being Human in the Age of Artificial Intelligence" in San Francisco, and I saw the same speculative AI intelligence-explosion crap we're seeing all over the place. I was disappointed because I'm a fan of Max's work as a scientist; his book "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality" was a great read, and I enjoy watching his lectures on physics, math, and sometimes the nature of consciousness.

When he got involved in the AI Risk community, I thought it might be a good thing that an actual scientist was involved, perhaps to ground the community's heavy speculation in scientific thinking. However, exactly the opposite happened: Max turned into a fiction author (ergo this piece). Now, of course there is a role for fiction in expanding our understanding of the future, but the AI Risk community is already heavily fictionalized. The singularity, intelligence explosion, mind uploads, simulations, etc. are nothing but idle prophecies.

Karl Popper, the famous philosopher of science, drew a distinction between scientific predictions, which usually take the form "If X then Y will happen," and scientific prophecies, which usually take the form "Y will happen" and are exactly what Max and the rest of the AI Risk community are engaged in.

Now, back to Max's San Francisco talk. I actually asked him this question: "Who is doing the hard scientific work around AI Risk?" After a long pause he said (abridged): "I don't think there is hard scientific work to be done, but that doesn't mean we shouldn't think about it. We're trying to predict the future, and if you told me that my house will burn down, then of course I'll go look into it."

This doesn't inspire much confidence in the AI Risk community, where scientists need to leave their tools at the door to enter The Fantastic World of AI Risk and where fact and fiction interweave liberally -- or as Douglas Hofstadter put it when describing the singularitarians: "a lot of very good food and some dog excrements".

Built by tracyhenry