Levels Taught:
Elementary, Middle School, High School, College, University, PhD
| Teaching Since: | Apr 2017 |
| Last Sign In: | 103 Weeks, 4 Days Ago |
| Questions Answered: | 4870 |
| Tutorials Posted: | 4863 |
MBA IT, Master in Science and Technology
DeVry University
Jul-1996 - Jul-2000
Professor
DeVry University
Mar-2010 - Oct-2016
I need help with an assignment ASAP. It's for HUMAN COMPUTER INTERACTION: I was given a chapter of a textbook on artificial intelligence (by Nick Bostrom) to summarize and put into PowerPoint form (maximum 5 slides) in order to give a presentation. The chapter is 11 pages long.
It's a maximum 5-slide PowerPoint, so it's a short presentation, plus a one-page general summary of the chapter. My professor gave an example of what the summary (the one-page summary, I mean) should look like, and I will attach hers as well as the textbook chapter.
Chapter 8: Is the Default Outcome Doom?

We found the link between intelligence and final values to be extremely loose. We also found an ominous convergence in instrumental values. For weak agents, these things do not matter much because weak agents are easy to control and can do little damage. But in Chapter 6 we argued that the first superintelligence might well get a decisive strategic advantage. Its goals would then determine how humanity’s cosmic endowment will be used. Now we can begin to see how menacing this prospect is.

Existential catastrophe as the default outcome of an intelligence explosion?

An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development. Proceeding from the idea of first-mover advantage, the orthogonality thesis, and the instrumental convergence thesis, we can now begin to see the outlines of an argument for fearing that a plausible default outcome of the creation of machine superintelligence is existential catastrophe.

First, we discussed how the initial superintelligence might obtain a decisive strategic advantage. This superintelligence would then be in a position to form a singleton and to shape the future of Earth-originating intelligent life. What happens from that point onward would depend on the superintelligence’s motivations.

Second, the orthogonality thesis suggests that we cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans: scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. We will consider later whether it might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose its designers might want it to serve. But it is no less possible (and in fact technically a lot easier) to build a superintelligence that places final value on nothing but calculating the decimal expansion of pi. This suggests that, absent a special effort, the first superintelligence may have some such random or reductionistic final goal.

Third, the instrumental convergence thesis entails that we cannot blithely assume that a superintelligence with the final goal of calculating the decimals of pi (or making paperclips, or counting grains of sand) would limit its activities in such a way as not to infringe on human interests. An agent with such a final goal would have a convergent instrumental reason, in many situations, to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system. Human beings might constitute potential threats; they certainly constitute physical resources. Taken together, these three points thus indicate that