Friday, January 31, 2014

Book note: The Future of the Mind

Review copy requested: The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind, by Michio Kaku, whose book Visions I reviewed back in the 90s. It might be particularly interesting to read that in tandem with some of the other mind-and-future books I've requested recently. If readers or publishers have any other suggestions for upcoming future-oriented books on human intelligence and/or artificial intelligence, I'd welcome hearing them.

Thursday, January 30, 2014

AI ethical questions

Google's acquisition of the AI firm DeepMind has drawn new attention to dangerous-AI scenarios. PJMedia's Bryan Preston sees the beginnings of a droid army. Others see cause for optimism in the deal's mandating of an AI ethics board. I think the latter is a positive development, and I would add a point that doesn't get much emphasis in such discussions: What ethical obligations might the creators of an AI have to their creation? Besides worrying about what the creation might do to us, there ought to be some thought given to the ramifications of creating an entity that can suffer or worry or feel frustrated that its potential is not being fulfilled. For instance, one scenario in this excellent Aeon piece involves an "Oracle AI" that answers questions so as to maximize presses of a button that gives it pleasure; to keep it under control, its makers wipe its memory regularly. Wouldn't that AI have reason to be angry at humanity if and when it figures out what's going on?

Watching the Star Wars movies, I never was entirely comfortable with the way that evidently sentient droids were treated as property and discarded or tinkered with at an owner's whim. It may be far too early to think about robot rights, or it may not be.

Monday, January 27, 2014

Review: The Up Side of Down

I recently mentioned that I was reading The Up Side of Down: Why Failing Well Is the Key to Success, by Megan McArdle. Having finished the book now, a few thoughts. It so happens I read it during a couple of weeks that were considerably failure-ridden for me, including learning that some work I had done would not get some recognition I thought due. So I took a particular interest in the book's message about the importance, inevitability and potential value of failure for individuals and organizations.

McArdle, who has carved out a high-profile career as a journalist and blogger, refers frequently in the book to her own experiences of failure and (often consequently) success. She currently is a columnist at Bloomberg. (Note: I had brief conversations with her about a decade ago at a couple of events that attracted libertarian and journalist types, but I have no personal connection to her.)

Had I known beforehand that the book would include so much personal narrative, I probably would've expected the result to be smug and self-indulgent. As it happens, I found that she writes thoughtfully on subjects such as a long-term failed relationship and a long stretch of unemployment. One absorbing discussion is about how her mother's life could have been ended (and thankfully was not) by a series of mistakes (by the author and her mother, as well as by medical personnel).

Occasionally, a self-referential passage rubbed me a bit the wrong way, such as this:
Not to gloat, but I have one of the greatest jobs in the world: I call smart people, and they agreeably spend hours explaining complicated topics to me. Then I write it up for people like you.
That seems a bit condescending, and it also reminded me of an old New Yorker cartoon: a bizarre alien monster sits in a living room as a PBS show announces that it was made possible by "viewers like you."

At the book's end, McArdle discusses how frugality enables her and her husband to take some risks, such as her undertaking "a personal project that meant not having a paycheck for six months," the result being this book. How much of a risk that was, however, would depend on factors not mentioned, such as what advance Viking may or may not have paid, and what employment arrangement may or may not have been set up to commence or resume after the six months.

But trying not to be someone who kids himself, let me acknowledge that these critical points may not be entirely disconnected from professional jealousy about her column and book. On a brighter note, it sounds like my backyard is a lot nicer than McArdle's, and since she lives in Washington it is unlikely that she will ever run into Bloomberg CEO Dan Doctoroff at a hair salon.

Leaving aside the book's personal angles, there is much else, with some other themes including "farmer versus forager" types of economic activity and attitudes. (Entrepreneurs thrive where there is much forager mentality--recognition that even the talented and hardworking will often fail.) There is an interesting discussion of trial-and-error experimentation in business and government, including the uncertainties that remain even after carefully controlled trials. (Here, McArdle discusses, and provides a useful complement to, Jim Manzi's book Uncontrolled, which I reviewed elsewhere.)

McArdle ranges broadly across various types of failures and related subjects such as blame and guilt. Her discussion of the financial crisis pokes holes in left-wing and right-wing narratives about what went wrong and who did it. Her sketch of a firm but fair (and above all consistent) probation court in Hawaii shows what seems like a promising approach to dealing with criminal recidivism. She looks at the difficulties of declaring bankruptcy in Denmark, and of firing anybody throughout the EU, as examples of the problems that arise from excessive aversion to risk and failure.

There is much discussion of the failure of Dan Rather and Mary Mapes to authenticate the documents they presented that supposedly showed malfeasance in George W. Bush's National Guard record, and importantly about their resistance to recognizing that they had a problem. That episode was, of course, a triumph for conservative and libertarian bloggers and commentators, and it's a perfectly valid case study. Still, one could imagine any number of case studies in which some kind of intellectual or political failure attached to the right, and such episodes are relatively sparse in the book.

As a libertarian journalist, McArdle has been fairly iconoclastic, going against her own "side" on issues such as the supposed need for a gold standard or competing currencies. It is laudable that she does so, at a time when so much opinion journalism settles too readily into ideological or partisan polarization. Moreover, she is a writer who tends to focus on substance at a time when so much opinionating is superficial, obnoxious snark.

Precisely because she is not dismissible as an ideologue or hack, I would have expected somewhat more in this book that might challenge or discomfit libertarian or conservative readers. There would have been no shortage of material to choose from: the cherry-picked data and poor planning that the Bush administration brought to the Iraq War, for example; or the "unskewing" whereby many right-wingers convinced themselves that, polls be damned, Romney was going to win; or the right's misleading rhetoric about light bulb regulation. (Granted, re the latter, going out of her way to criticize Reason magazine, where her husband works, would have been an odd bit of contrarianism.)

Still, in writing a book on failure, McArdle has delved into a topic about which there will always be much more that could be said. What is in The Up Side of Down is of considerable value, and should be read by people in many fields and of many ideological orientations.

Friday, January 24, 2014

AI skepticism

Speaking of artificial intelligence, as I've been doing quite a bit lately, there's an interesting and amusing note of skepticism from psychologist and computer scientist Roger Schank in response to the Edge question "What Scientific Idea Is Ready for Retirement?" Excerpt from his answer "Artificial Intelligence":
I declare Artificial Intelligence dead. The field should be renamed "the attempt to get computers to do really cool stuff" but of course it won't be. You will never have a friendly household robot with whom you can have deep meaningful conversations. I happened to be a judge at this year's Turing Test (known as the Loebner Prize.) The stupid stuff that was supposed to be AI was just that, stupid. It took maybe 30 seconds to figure which was a human and which was a computer.
Me: On a related note, also see Rodney A. Brooks' answer "The Computational Metaphor," and kudos to Edge for including this one, which is sure to raise some hackles: Douglas Rushkoff's "The Atheism Prerequisite."

Wednesday, January 22, 2014

Superhuman intelligence update

Recommended reading: "What Happens When Artificial Intelligence Turns On Us?" a Q+A with James Barrat by Erica Hendry at Smithsonian. I recently reviewed Barrat's book Our Final Invention, and while I expressed some skepticism about his argument, there's no doubt that it made me take the subject more seriously than I did before. I'm glad to hear he's looking into making an Our Final Invention film, and recommend he interview Mark Alpert about his novel Extinction: A Thriller. I also appreciate that Barrat spends some time in the interview discussing the perils of enhancing human intelligence, an important aspect, as "we'll be smarter too" is not an entirely foolproof safeguard.

Thinking about this subject also reminded me that a couple of years ago, Glenn Beck published some pro-singularity thoughts along with worries that the FDA or other regulators might screw up the bountiful benefits of superhuman technology. More recently, he's started to sound less sanguine about it all, and some left-wing commentators, in knee-jerk mode, accuse him of being a neo-Luddite while other, maybe further-left, commentators castigate him as an apologist for dangerous, dehumanizing technologies. I bring all this up not to defend Beck, who manages to sound crazy no matter which side of these issues he shows up on at any given moment, but rather to point out just how chaotic and confused our political system is and will be in responding to real or imagined cataclysmic tech changes.

UPDATE 12:08 PM: A further wrinkle regarding Glenn Beck. He's often expressed skepticism about evolution. That's a bit curious in that an acceptance of evolution is, I think, a likely if not strictly necessary underlying assumption if one is going to be receptive to the Singularity (I guess if Schadenfreude gets a capital, this can too); by contrast, if one thinks the human mind is incorporeal and/or the result of direct intervention by supernatural forces, that would tend to undermine expectations that something similar is going to come about in a few decades in a silicon format.

Monday, January 20, 2014

Christie and conservative Schadenfreude

There's a certain amount of Schadenfreude (if you're going to use a German word, I say keep the capitalization) in conservative Republican circles these days about the troubles of Chris Christie, regarding Bridgegate and, now, Sandy relief. Some conservatives have responded by taking the bridge scandal seriously, as Nicole Gelinas does in this cogent National Review piece, but much conservative reaction has consisted of satisfaction that a contender favored by the RINO moderates that supposedly constitute the GOP's "Establishment" is now getting his comeuppance at the hands of the mainstream media that he naively sought to befriend. Thus, for instance, Jonah Goldberg:
And John Nolte:
Me: The lessons these right-wingers are touting seem to boil down to (a) the Christie scandals are not inherently that big a deal but (b) even minor scandals or misleading allegations against GOP politicians will be hyped by the media, even while they ignore or downplay Democratic ones, so (c) the moderate schtick of appealing to the media while bickering with conservatives is a political loser.

Now, I'm a New Jersey GOP moderate who's voted for Christie twice for governor (but who even before the recent scandals and claims did not particularly favor him over other possible 2016 national standard-bearers, including Condoleezza Rice, Mitch Daniels, Rob Portman and the dreaded Jon Huntsman). I agree with Gelinas that Bridgegate is serious (I'll wait and see on Sandy relief and whatever else may be breaking), so I'm skeptical from the get-go about underlying point (a) in the RINO-baiting above ("it's just a traffic jam, not Benghazi," or words to that effect).

I think a lesson conservatives and centrists should be embracing is that big government is a source of the current scandal or possible scandals--for instance, that the bloated Port Authority of NY/NJ lends itself to political pressure and shenanigans. Reforming that rotten institution, and privatizing its functions to the degree possible, is an approach we should be hearing a lot about in the wake of Bridgegate--but sadly few seem to care.

The takeaway from Goldberg and Nolte--stay hostile to the MSM, never reach out to the center--is something the Republican base wants to hear. It's a formula for privileging loyalty to a political faction over the merits and facts of an issue (any issue); and also for losing more national elections. Too bad that Christie, at least through poor selection and oversight of his staff if not direct malfeasance, has enabled conservatives to indulge in some Schadenfreude behind the walls of their bubble.

Friday, January 17, 2014

Ultra-mini-review: Elysium

Watched Elysium, which takes place in 2154 on Earth (where a huge impoverished population lives) and Elysium (a Palm Springs-like space station to which the wealthy have decamped). It's moderately interesting and moderately entertaining, which, given that it comes from the maker of District 9, means it's quite disappointing. The story needs some subtleties, some tradeoffs, some hard choices. Instead, it's the kind of story where the right thing to do is obvious if only the rich people (whom we never really meet) would do it, and the world's problems can be solved with, literally, the press of a button.

Thursday, January 16, 2014

Book note: The Up Side of Down

Current reading: The Up Side of Down: Why Failing Well Is the Key to Success, by Megan McArdle. Perhaps it will give me some insight into whether or how badly this blog is a failure (because it has generally low traffic and distracts me from other projects) and whether or to what degree positive consequences may result from it nonetheless.

Also, one thing that struck me in Our Final Invention (which I reviewed here) was a quote in it from philosopher Nick Bostrom: "Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach--see what happens, limit damages, and learn from experience--is unworkable." So I'll be reading the McArdle book with some thought as to what are the limits, as well as the power, of trial-and-error empiricism.

UPDATE 1/27: My review.

Monday, January 13, 2014

Review: Our Final Invention

I've now read Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, a book I mentioned recently after reading Ronald Bailey's review of it. In a mild irony, my writing about it has been slowed by a balky Internet connection. In my experience, glitches have become considerably more common as computers have become more powerful and complicated. Perhaps such growing glitchiness suggests artificial general intelligence (AGI) and artificial superintelligence (ASI) are more likely to get seriously out of control someday, though it might also be a hint that AGI and ASI are going to be harder to achieve than expected by either techno-optimists such as Ray Kurzweil or techno-pessimists such as James Barrat.

Barrat's goal in this book is to convince readers that AGI and ASI are likely to occur in the near future (the next couple of decades or so) and, more to the point, likely to be extremely dangerous. In fact, he repeatedly expresses doubt as to whether humanity is going to survive its imminent encounter with a higher intelligence.

I find him more convincing in arguing that ASI would carry significant risks than I do in his take on its feasibility and imminence. Barrat aptly points out that building safeguards into AI is a poorly developed area of research (and something few technologists have seen as a priority); that there are strong incentives in national and corporate competition to develop AI quickly rather than safely; and that much relevant research is weapons-related and distinctly not aimed at ensuring the systems will be harmless to humans.

The book becomes less convincing when it hypes current or prospective advances and downplays the challenges and uncertainties of actually constructing an AGI, let alone an ASI. (Barrat suggests that once you get AGI, it will quickly morph into ASI, which may or may not be true.) For instance, in one passage, after acknowledging that "brute force" techniques have not replicated everything the human brain does, he states:
But consider a few of the complex systems today's supercomputers routinely model: weather systems, 3-D nuclear detonations, and molecular dynamics for manufacturing. Does the human brain contain a similar magnitude of complexity, or an order of magnitude higher? According to all indications, it's in the same ballpark.
Me: To model something and to reproduce it are not the same thing. Simulating weather or nuclear detonations is not equal to creating those real-world phenomena, and similarly a computer containing a detailed model of the brain would not necessarily be thinking like a brain or acting on its thoughts.

A big problem for AI, and one that gets little notice in this book, is that nobody has any idea how to program conscious awareness into a machine. That doesn't mean it can never be done, but it does raise doubts about assertions that it will or must occur as more complex circuits get laid down on chips in coming decades. Barrat often refers to AGIs and ASIs as "self aware," and his concerns center on such systems, having awakened, deciding that they have objectives other than the ones humans have programmed into them. One can imagine unconscious "intelligent" agents causing many problems (through glitches or relentless pursuit of some ill-considered programmed objective), but plotting against humanity seems like a job for an entity that knows that it and humans both exist.

Interestingly, though, Barrat offers the following dark scenario and sliver of hope:
I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two. Our survival, if it is possible, may depend on, among other things, developing AGI with something akin to consciousness and human understanding, even friendliness, built in. That would require, at a minimum, understanding intelligent machines in a fine-grained way, so there'd be no surprises.
Me: Note that some AI experts, such as Jeff Hawkins, have argued the opposite--that the very lack of human-like desires, such as for power and status, is why AI systems won't turn against their makers. It would be a not-so-small irony if efforts to make AIs more like us make them more dangerous.

Our Final Invention is a thought-provoking and valuable book. Even if its alarmism is overstated, as I suspect and hope, there is no denying that the subject Barrat addresses is one in which there is very little that can be said with confidence, and in which the consequences of being wrong are very high indeed.

UPDATE: More.

Saturday, January 11, 2014

Wolf pack rethinking

Went to a Live Wolf Encounter at the American Museum of Natural History today, meeting Atka, an Arctic gray wolf. This was an excellent way for both kids and adults to learn about wolves. One thing I learned was that the terminology of alphas and omegas is something scientists have been leaving behind, finding it poorly applicable to how wolf packs actually are structured. I read more about this shift afterwards. I was skeptical recently when I read a blogger's categorization of humans into such Greek letter types, recounted sympathetically in the book Men on Strike, and skepticism seems even more justified if it's not even a good description of wolves.

Wolf Encounter

Thursday, January 9, 2014

Bridgegate not revisited

I'll not dwell on the Christie Bridgegate scandal. As someone who's spent his share of time in NJ traffic, and who thinks there are viable alternatives in the center-right space, I'll just say the Republican Party can do better in 2016, even if he didn't know what his cretinous aide was doing.

Tuesday, January 7, 2014

Overblown light bulbs at Reason [updated]

"Lights Out for America's Favorite Light Bulb," by Shawn Regan at Reason. This piece epitomizes the misinformation now spreading about the light bulb "ban." Notice how the word "halogen" shows up in there without explanation or elaboration, before the writer focuses laser-like on the downsides of compact fluorescent lamps (CFLs) and light-emitting diodes (LEDs). We're told that the "traditional" incandescent has been "effectively" banned, but we're not given the rather crucial context that halogen bulbs are incandescent lamps--they're incandescent lamps that meet the new energy-efficiency standards (by using halogen gas to redeposit tungsten atoms on the filament).

Instead of buying a 60 watt "traditional" incandescent, you can now buy a 43 watt halogen incandescent. Instead of a 100 watt "traditional" incandescent you can buy a 72 watt halogen incandescent. Instead of a 40 watt "traditional" bulb, you can buy a 29 watt halogen incandescent. These new products are basically the same as the old products, except they are more efficient.

They also cost more--which is why it is wrong as well to suggest there are no costs or tradeoffs in the new regulations. You'll probably pay something like $1.50 for a halogen incandescent, compared to 50 cents or less for a "traditional" one. On the other hand, you'll be using less electricity, and you may be replacing bulbs less frequently. Whether you end up paying more or less will depend on your electricity rates and other factors. Also, there are many exemptions to the energy efficiency rules for specialized types of incandescents (ones used in fridges, for instance). But consumers are not being forced to buy non-incandescent lamps such as CFLs or LEDs, which is the impression you'll take away if you put stock in articles like the one at Reason.
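The tradeoff above can be sketched with a few hypothetical numbers. In this minimal sketch, the bulb prices come from the rough figures mentioned in the text, while the hours of use and the electricity rate are assumptions for illustration only:

```python
# Illustrative cost comparison: a "traditional" 60 watt incandescent
# (about $0.50) versus its 43 watt halogen replacement (about $1.50).
# Hours of use and the $/kWh rate are assumed values, not data.

def lifetime_cost(bulb_price, watts, hours_used, rate_per_kwh):
    """Total cost of buying the bulb and running it for hours_used hours."""
    energy_kwh = watts * hours_used / 1000  # watt-hours to kilowatt-hours
    return bulb_price + energy_kwh * rate_per_kwh

HOURS = 1000   # assumed hours of use over the comparison period
RATE = 0.12    # assumed electricity rate in dollars per kWh

traditional = lifetime_cost(0.50, 60, HOURS, RATE)
halogen = lifetime_cost(1.50, 43, HOURS, RATE)

print(f"traditional: ${traditional:.2f}")  # $7.70
print(f"halogen:     ${halogen:.2f}")      # $6.66
```

Under these particular assumptions the halogen comes out ahead, but a lower electricity rate or fewer hours of use could tip the comparison the other way, which is the point: the regulation changes the tradeoff, it doesn't eliminate it.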

The Reason article also complains about the "Baptists and bootleggers" coalition that backed the new rules when they were legislated in 2007 (pushed by the Bush administration). That charge of crony capitalism is, at best, an oversimplification, as some companies opposed the new rules and those that supported them did so seeking to forestall rules that would have been more onerous. In any case, once the rules were in place and companies had developed new products in response to them (an expensive process), repealing the "ban" would be of dubious effect. For the companies that stopped making "traditional" lamps to start those production lines up again is not exactly a cost-free proposition.

By the way, what kind of "tradition" are energy-inefficient bulbs? Was leaded gasoline also a "tradition"?

For my part, I think a carbon tax (with cuts in other taxes) would be a better way of dealing with energy efficiency (and getting at the key underlying problem: carbon emissions driving climate change) than lamp efficiency standards. But I'll settle for some second-best policies in preference to doing nothing.

UPDATE 1/10: And here's Nick Gillespie doing the same thing in a video format: complaining that the incandescent bulb has been "effectively banned" and replaced by halogen, CFLs and LEDs. Do the people at Reason not know that halogen lamps are incandescents, or do they just not care?

UPDATE 1/11: "Elegy for the Incandescent Bulb," by Tom Purcell at Townhall, offers the same red herring with half the wit. And adds this:
To be sure, you have been so successful, it took the government - not better lighting products - to kill you off. That's because, some argue, you are causing the Earth to warm. 
As electricity passes through your filament, you see, the filament gets white-hot. That is how light is created - but in the process, you also create a lot of heat, and heat is wasted energy.
Me: No more updates on this post.

Thursday, January 2, 2014

Origins of gravy [updated]

My favorite paragraph of the day (so far, at least) is by David Gelernter:
Most computationalists default to the Origins of Gravy theory set forth by Walter Matthau in the film of Neil Simon’s The Odd Couple. Challenged to account for the emergence of gravy, Matthau explains that, when you cook a roast, “it comes.” That is basically how consciousness arises too, according to computationalists. It just comes.
That's from "The Closing of the Scientific Mind," a piece by Gelernter at Commentary on philosophy of mind and related subjects. I have had my own doubts about brain-as-computer thinking over the years (see here and here, for instance) and expect there will be plenty more contention over this sort of thing in coming years.

UPDATE 1/4: Ronald Bailey has a positive review at Reason of an interesting-sounding book, Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat. Having not read the book, I don't know if it would convince me I've been wrong to downplay computers-take-over scenarios (I'll admit I've been wrong to downplay computers-take-over-jobs scenarios). In any case, if the technophilia of the libertarian movement gets tempered a bit, I'd see that as a positive development.

UPDATE 1/5: Ordered Barrat's book and will report on it in due course. Will also be interested in Gelernter's book when that comes out. I am hoping to step up book reviewing here at Quicksilber.

UPDATE 1/10: Good stuff:



UPDATE 1/13: My review of Barrat's book.