Stephen Baker

The Boost

Our cyborg future
September 20, 2011 | Science

If you haven't read it yet, check out the New York Times Magazine story, The Cyborg in Us. It traces the research that connects the brain to computers. Early applications should help people with physical handicaps, whether it's using a computer or moving a prosthetic limb with their thoughts. But why should it stop there? My prediction: It won't.

I was especially interested to read the comments. Matt, from Almeria, Spain, captured some of my thinking on the matter:

I think that technology that can help the disabled to be 'normal' again will inevitably come to pass given the substantial motivation and economic incentive. The injured and disabled will be the pioneering cyborgs. The rest of us will eventually be forced to come around when it turns out that the previously disabled don't have to be limited to normal levels of performance, but could instead have enhanced performance. They will rightly ask: Given the extra potential of my cybernetic equipment, why should I only run as fast or think as well as the average person?

The same logic applies to drugs, gene therapies, and other vectors of technological assist that may be developed mainly to counteract medically diagnosed deficiencies, but also happen to work for healthy people. You can be sure that healthy people will avail themselves of these advantages.

Eventually the general public will have to get enhanced to keep up with a growing elite of enhanced people, or fall by the wayside.


A few other interesting questions:
1) How will a mind-computer interface deal with stray thoughts?
2) Would the constitutional guarantee against unreasonable search and seizure apply to mind probes? (And I would add, even if one country guarantees it, how about elsewhere?)
3) What happens when the batteries run out?

And I thought MD, from NYC, made a good and succinct point: "When will we be able to direct computers with our thoughts? Isn't it going to be the other way around?"


Robotic folder: mastery in a domain
April 9, 2010 | Science


If you haven't already seen this neat freak of a robot folding towels in a Berkeley lab, it's worth a look for comic appeal as well as technical bravado. (ex Andrew Sullivan) This is an example of a domain expert nonpareil. It's great at folding towels, just the way other robots are experts at riveting car chassis or replacing hips. But how can these one-trick robots join efforts?

I was talking a while back with people at Microsoft about robotics. The way they see it, the field is still where computers were in the 1970s. There's loads of different software for different types of applications, but as yet no broad platform for industrial-scale development. Of course, Microsoft would like to provide that, just as it did for PCs.

In the artificial intelligence project I'm exploring for my book, IBM's development of a Jeopardy-playing computer, researchers are working on open-source platforms, such as UIMA. It's a system for analyzing enormous volumes of unstructured data, using dozens or hundreds of different methods. One effort might analyze the structure of sentences while another is busy with "entity" recognition, making sure that "Athens" is a city and that "Bob" is (most likely) a masculine person.
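UIMA itself is a Java framework, and nothing below is its real API. But the pipeline idea is simple enough to sketch: independent annotators each add a layer of analysis to a shared store of annotations. A toy Python illustration, with invented class names:

```python
# A toy sketch of a UIMA-style analysis pipeline. UIMA itself is Java;
# the names here are illustrative, not the real API.

class Annotation:
    def __init__(self, span, label):
        self.span = span    # (start, end) character offsets into the text
        self.label = label  # e.g. "SENTENCE", "CITY", "PERSON"

class SentenceAnnotator:
    """One effort: mark sentence boundaries."""
    def process(self, text, annotations):
        start = 0
        for i, ch in enumerate(text):
            if ch in ".!?":
                annotations.append(Annotation((start, i + 1), "SENTENCE"))
                start = i + 2

class EntityAnnotator:
    """Another effort: tag known entities, e.g. Athens -> CITY."""
    GAZETTEER = {"Athens": "CITY", "Bob": "PERSON"}
    def process(self, text, annotations):
        for name, label in self.GAZETTEER.items():
            pos = text.find(name)
            if pos >= 0:
                annotations.append(Annotation((pos, pos + len(name)), label))

# Dozens or hundreds of such annotators can run over the same text,
# each contributing its own layer of structure.
text = "Bob flew to Athens. He loved it."
annotations = []
for annotator in [SentenceAnnotator(), EntityAnnotator()]:
    annotator.process(text, annotations)
for a in annotations:
    print(a.label, text[a.span[0]:a.span[1]])
```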

While the towel-folding robot is a domain expert, the Jeopardy-playing computer, Watson, is a generalist. It has to know quite a bit about lots of things. If UIMA becomes a broadly accepted platform, others can build on this effort. This is important, because Jeopardy alone, while impressive, in the end is not much more useful than folding towels.

***

The Wall Street Journal runs a story today confirming something we've been following here for a long time: Tech firms are eager to hire Numerati.

Rather than looking for just plain-vanilla computer scientists, who typically don't have as deep a study of math and statistics, companies from Facebook Inc. to online advertising company AdMob Inc. say they need more workers with stronger backgrounds in statistics and a related field called machine learning, which involves writing algorithms that get smarter over time by looking for patterns in large data sets.
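To make that definition concrete, here's a bare-bones Python example of an algorithm that "gets smarter over time": a perceptron that adjusts its weights each time it misclassifies an example. The data is invented; real systems do the same thing at vastly larger scale.

```python
# A minimal "learning from data" example: a perceptron that updates
# its weights on every mistake. Data and features are invented.
def train(examples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:  # label is +1 or -1
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:      # each error nudges the weights
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b

# Toy pattern to discover: points with x0 > x1 are labeled +1.
data = [((2.0, 1.0), 1), ((1.0, 2.0), -1),
        ((3.0, 0.5), 1), ((0.5, 3.0), -1)]
print(train(data))
```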


Internet: Refuge for those with psychotic leanings?
February 14, 2010 | Science

Warning: If you're answering a questionnaire and are asked whether you experience big mood swings and enjoy movie scenes of "violence and torture," think twice before answering yes. It points to high rankings on "psychoticism." (And considering that millions enjoy watching 24, it might lead to worrisome conclusions about our society.)

In any case, a study of Internet behavior carried out by Turkish researchers appears to show that people with higher levels of psychoticism are more likely to take refuge in online dealings, and to use the Internet as a substitute for face-to-face contact. (ex Murketing) Could that explain Facebook's soaring popularity? The social behavior of mere neurotics, I should note, seems to be unswayed by the Net.

These groups are defined by Eysenck's personality test. I looked for it online, and found numerous links for "free" personality tests. This is a booming field online. People want to learn about themselves, and as I've written more than once, companies just love scooping up gigabytes of intimate, self-reported data from millions of us.


Jack Bauer plying his craft in "24"


Blue Brain: Henry Markram's thinking machine
February 4, 2010 | Science

Bluebrain | Year One from Couple 3 Films on Vimeo.


Here's a movie I want to see. It's about Henry Markram's venture to build a computer model, neuron by neuron, of the brain. (ex Frontal Cortex)



Baseball catches and hurricanes
January 28, 2010 | Science


I remember reading as a kid about Willie Mays' legendary catch in the 1954 World Series. He turned around at the sound of the hit and dashed straight back from center field. (Given the slow speed of sound and the distances in the Polo Grounds, I'm thinking, he may have started turning before the crack reached his ears.) In any case, Mays carried a full catalog of line drives and towering flies in his head. He knew the diving movements of topspin and the effects of various winds. He no doubt carried an audio library of every conceivable crack, clack, thwack and pop of the bat, and how each one affected the flight of the ball.
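A quick back-of-the-envelope check on that parenthetical, with approximate numbers:

```python
# Rough figures, assumed for illustration: the Polo Grounds' center
# field was famously deep, around 480 ft; sound travels ~1,125 ft/s.
distance_ft = 480.0
speed_of_sound_ft_s = 1125.0
delay_s = distance_ft / speed_of_sound_ft_s
print(f"The crack took about {delay_s:.2f} seconds to reach Mays")
# -> about 0.43 s. If he truly waited for the sound, he gave up nearly
#    half a second; more likely his eyes cued him first.
```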

The point is, according to Jeff Hawkins' 2004 book On Intelligence, the human brain carries this trove of memories. And when something new happens, we sift through our memories, find something comparable, and then make adjustments to it to figure out how to respond. (Willie probably had to add speed and distance to catch Vic Wertz's prodigious fly.)

Now, try teaching a robot to catch a towering fly. It will take a roomful of PhDs modeling acceleration, wind, and the weight of the air on a fall day in New York, not to mention the exquisitely orchestrated movements, at the receiving end, of the human hand. Engineering Willie Mays' catch, while possible, may be the technical equivalent of a mission to Mars.

Hawkins' Silicon Valley company, Numenta (which I wrote about in 2008 at BusinessWeek), builds software modeled on our neocortex. At the end of his book, which I just read, he writes about how brain-like computers could create breakthroughs.

Powerful pattern-processing machines based on our brain architecture would not have to rely on the same senses that we have for data. That would be silly. (We already have 7 billion of those specimens up and running.) Instead, he writes, they could capture data from far-flung sensors, synthesize the patterns, and predict from them, much as we do. So, just to pick one example, one of these wonder machines could have the same sort of feel for budding hurricanes that Willie Mays had for fly balls. It wouldn't be based on trillions of calculations, which is how we predict weather today. Instead it would use memories. If Numenta succeeds, powerful computers will start developing "instinctive" hunches about all sorts of things.
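Numenta's actual technology is a hierarchical temporal memory, which I won't try to reproduce here. But the "find a comparable memory and predict from it" idea can be sketched as a nearest-neighbor lookup. The storm readings and outcomes below are invented:

```python
import math

# A loose sketch of memory-based prediction: recall the most similar
# stored pattern and reuse its outcome. Not Numenta's algorithm; the
# feature vectors (sea-surface temp C, wind shear kt, pressure mb)
# and outcomes are invented.
memories = [
    ((29.5, 5.0, 960.0), "intensified into a hurricane"),
    ((26.0, 25.0, 1005.0), "dissipated"),
    ((28.0, 10.0, 990.0), "became a tropical storm"),
]

def predict(reading):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, outcome = min(memories, key=lambda m: dist(m[0], reading))
    return outcome

print(predict((29.0, 7.0, 972.0)))  # -> "intensified into a hurricane"
```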


Game Theory: IBM's machine on Jeopardy
April 28, 2009 | Science

Can't wait to see (or, more likely, read about) how IBM's Watson computer fares on Jeopardy. Lots of other efforts are out there to create knowledgeable bots, from Stephen Wolfram to Doug Lenat's Cycorp. A good showing on Jeopardy would give IBM its biggest machine-vs.-human boost since winning the chess championship.

But in Jeopardy, more than chess, the machine will have to rack its "mind" not only to come up with answers, but also to anticipate what its competitors know, and what they'll do. This requires game theory. Should the machine be betting on questions it can answer with 63% confidence? That depends on how it's doing in the game, and what the others might do. It'll be interesting to see if IBM attempts to give its machine this type of tactical smarts.
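The simplest version of that betting question is an expected-value calculation. A minimal sketch with an illustrative clue value (not IBM's strategy code), deliberately ignoring the game-state subtleties that make real tactics interesting:

```python
# A minimal expected-value sketch of the "should it answer?" decision.
# Jeopardy deducts the clue's value for a wrong response.
def should_answer(confidence, clue_value):
    expected_gain = confidence * clue_value - (1 - confidence) * clue_value
    return expected_gain > 0

print(should_answer(0.63, 800))  # True: 0.63*800 - 0.37*800 = +208
print(should_answer(0.45, 800))  # False: an expected loss of 80

# With symmetric payoffs the break-even point is 50% confidence, but
# the right threshold shifts with the score and with what opponents
# are likely to do, which is exactly where game theory comes in.
```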


Wolfram's new search engine
April 15, 2009 | Science

If you're curious about what comes after Google, take a look at this article about Stephen Wolfram's new search engine (though he resists the term), Wolfram|Alpha. Wolfram, who developed Mathematica, is trying to encapsulate the world of knowledge in one system. His words:

“My idea is to make the world computable. Mathematica was about finding the simplest primitive computations, and designing a system where humans could hook these computations together to create patterns of scientific interest. NKS was about the notion that we can start with primitive computations and not bring in humans at all. If you do a brute search over the space of all possible computations, you can find ones that are rich enough to produce the natural-looking kinds of patterns that you want. And Wolfram|Alpha is about how we might build the edifice of human knowledge from simple primitive computational rules.”

Instead of finding Web pages, his system is designed simply to answer questions, even those that require contextual knowledge to understand, and synthesis or calculation to answer. He's releasing it in May. If it turns out to be even half as good as it sounds in this article, he's going to need one massive data center to handle all the traffic.
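The contrast with a search engine is easy to cartoon: instead of returning documents, the system computes an answer from curated facts. A toy illustration of the idea, my invention rather than Wolfram|Alpha's actual machinery:

```python
import math

# A cartoon of "computed answers": the result below appears on no web
# page; it's synthesized from a stored fact plus a calculation. The
# mini knowledge base is invented.
CURATED_FACTS = {"earth radius km": 6371.0}

def answer(question):
    if question == "earth circumference km":
        return 2 * math.pi * CURATED_FACTS["earth radius km"]
    raise KeyError("not computable from this toy knowledge base")

print(round(answer("earth circumference km")))  # -> 40030
```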


The limits of simulation (or why we experiment on animals)
April 6, 2009 | Science

An excellent post by Mark Chu-Carroll, of the Good Math, Bad Math blog, on why we cannot simulate the inner workings of a cell, much less an entire animal--and hence must carry out medical research on live creatures. He discusses in clear detail the range of simulations. One that caught my attention is the power of simulations to discover emergent phenomena, "things where some thing behaves one way at one scale, but changes dramatically when you put together huge numbers of those things and look at them at a different scale."

The best example of emergent phenomena is our macro-scale universe. When we look at the world, things seem concrete and predictable. When you watch a baseball game, you can see the baseball fly from the hand of the pitcher to the bat, and it's obvious that you can precisely describe both the position and the velocity of the baseball when it's in flight. But the baseball is made up of a huge number of particles which do not behave in such well-mannered ways. They're unpredictable, erratic. Their behavior can't be described precisely, only probabilistically. And yet, when we put together quadrillions of quadrillions of unpredictable, probabilistic particles, we get something concrete, comprehensible, and extremely predictable.
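You can watch that happen in a few lines of code: each "particle" below takes erratic random steps, yet the average across many of them is all but deterministic.

```python
import random

# A tiny emergence demo: random micro-behavior, predictable
# macro-behavior.
random.seed(1)

def average_position(n_particles, n_steps=100):
    total = 0.0
    for _ in range(n_particles):
        pos = 0.0
        for _ in range(n_steps):
            pos += random.choice((-1.0, 1.0))  # erratic individual step
        total += pos
    return total / n_particles

print(average_position(10))      # noisy, unpredictable
print(average_position(10000))   # hugs 0.0: the predictable aggregate
```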

Sadly, we don't understand the quadrillions of relationships among the tiny actors within our bodies. So despite the genius of the Numerati (including Mark, who works at Google), we can build only the most primitive predictive models of ourselves. They work to a degree for shoppers, voters and consumers of advertising--but not for medicine.


Marketshare Partners: math model of the marketing world
March 27, 2009 | Science


Times Square

Let's imagine that I'm walking through Times Square (which I'll be doing on my way home pretty soon). I see a huge billboard for Samsung. Maybe that gets me to thinking about a new TV, and when I get home I do a Google search for Samsung, scout around on the Samsung site, or maybe on Gizmodo. And maybe tomorrow I go to Best Buy and pick up a TV.

In that scenario, that billboard actually accomplished something. But unlike a clickable search ad on Google, it's hard to measure. I had a meeting this morning with an entrepreneur whose business is based on measuring what advertisers and media buyers have long viewed as unmeasurable. His name is Wes Nichols, and he runs LA-based Marketshare Partners.

This is a Numerati business if there ever was one. To measure and predict the impact of the whole gamut of advertising and marketing, Nichols and his team model much of the advertising and consumer economy. The complexity is staggering. One model for a car company features 300 fluctuating variables. Marketshare has 50 employees, including a stable of PhDs, many of them in economics. I don't have the details on how the modeling works, but would like to find out.
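One well-known starting point for this kind of measurement, though, is a regression that attributes sales to spending across channels. A toy sketch with invented numbers, surely far cruder than Marketshare's 300-variable models:

```python
import numpy as np

# A toy marketing-mix regression fit by least squares. The numbers
# are invented; this is not Marketshare's model.
# Columns: weekly spend on billboards, search ads, TV ($ thousands).
spend = np.array([
    [10.0, 20.0, 30.0],
    [12.0, 18.0, 25.0],
    [ 8.0, 25.0, 35.0],
    [15.0, 15.0, 28.0],
    [11.0, 22.0, 32.0],
])
sales = np.array([150.0, 140.0, 160.0, 145.0, 155.0])  # weekly sales

# Fit: sales ~ intercept + coefficients . spend
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("intercept:", coef[0])
print("estimated per-unit impact of each channel:", coef[1:])
```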

But what he told me gives me a bit of cheer for the battered traditional media. In the last 10 years, advertisers have migrated toward media like Google, which offer countable clicks, trackable customers, and even deliver a quantified return on advertising investment. Magazines and newspapers and billboards are hard-pressed to produce such numbers, and have suffered as a result. (craigslist hurt papers, too, by taking away much of their once-lucrative classified market.)

Nichols predicts that mainstream media will bounce back, at least a little, once tools like his can put numbers on the value their ads deliver. Their impact is less direct than Google's, he says, but still valuable.


Simplifying machine learning for BW article
March 1, 2009 | Science

It wasn't until the article was laid out and ready to go to press that I learned about a mistake in my BusinessWeek story, The Next Net. In the end, I didn't bother fixing it because it would have involved a couple of paragraphs about machine learning. And except for a handful of people at Sense Networks, it didn't make any difference.

But still, for me at least, those paragraphs would have been an interesting addition to the story. So here they are:

As it tracks our movements with cell phones, Sense Networks has two different ways of interpreting us. One is based on what humans want to learn. The other leaves it up to machines.

Sense has one "tribe" called "Young and Edgy." As I was writing the story, I understood that the computer looked at people's movements, and that it placed late-night clubbers and bar-hoppers in this group. It did. But it was following human instructions. Advertisers want to locate this group. So Sense tells the computer to look for people who stay out after midnight at least three times a week, along with a few other behaviors. The humans come up with the specs, and the machine simply follows orders. This is the kind of work computers have been able to do forever. (Though the new ones crunch lots more data and work faster.)
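That human-spec version is essentially a filter. A hypothetical sketch (the rule and the data layout are my inventions, not Sense's actual spec):

```python
# A hypothetical rule-based "tribe" filter: humans write the spec,
# the machine just applies it. The data layout is invented.
def is_young_and_edgy(pings):
    """pings: (day_of_week, hour_of_day, venue_type) location records."""
    late_nights = {day for day, hour, venue in pings
                   if hour < 4 and venue in ("bar", "club")}
    return len(late_nights) >= 3  # out after midnight 3+ nights a week

week = [(0, 1, "bar"), (2, 2, "club"), (4, 3, "club"), (5, 14, "cafe")]
print(is_young_and_edgy(week))  # True
```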

My mistake in the article was that I described Young and Edgy as a machine-generated tribe. Machine-generated tribes are the ones I find more interesting. In these cases, the machine goes through immense piles of mobile data and looks for common behaviors. What are those common behaviors? We can surmise that they have to do with things like where people come from and where they go, commuting patterns, random movements at different times of the day, common neighborhoods, etc. Those are written into the algorithms that set the computer on its course. But it is the machine that ultimately makes the distinctions and creates clusters. It then draws a map--in our case of San Francisco--for every hour of the week. And it colors different neighborhoods by the presence of different behaviors.


Fisherman's Wharf

Again, those behaviors are defined by the machine. But looking at the map, it's pretty easy to see that there's a tan behavior associated with Fisherman's Wharf on weekdays. That looks like tourist behavior. And on weekends, that same behavior spreads through different parts of the city, as more San Franciscans behave like tourists.

The next step for the computer is to track the dots as they move across time and place, through the 168 hourly maps and all of their colors. And the dots that follow similar color patterns would be similar to each other--and in similar tribes. Would one of them be Young and Edgy? Well, this is up to the computer. Very likely, one of them would have strong Young and Edgy characteristics. But it would be blended with other colors. In that sense, the computer picks up more of our complexity. And perhaps it groups us with people we wouldn't recognize as similar. (The algorithms have to be smart, I must add, to draw distinctions between behaviors. It's important, for example, to distinguish between "tourist behavior" and "unemployed behavior." But looking at it from a machine's point of view, how are they different? I find this stuff fascinating.)
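Sense hasn't shown me its algorithm, but the standard way to sketch machine-drawn clusters is k-means: treat each person as a 168-slot vector, one slot per hour of the week, and let the machine group similar vectors. A rough, hypothetical illustration:

```python
import random

# A rough k-means sketch: each person is a 168-dimensional vector
# (one slot per hour of the week) recording a coded "behavior color."
# My simplification, not Sense Networks' actual method.
random.seed(7)

def kmeans(people, k, iters=20):
    centers = random.sample(people, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in people:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [[sum(col) / len(c) for col in zip(*c)] if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Invented people: hourly behavior codes over one week.
people = [[random.choice((0.0, 1.0, 2.0)) for _ in range(168)]
          for _ in range(30)]
tribes = kmeans(people, k=3)
print([len(t) for t in tribes])  # cluster sizes the machine chose
```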

Would members of different machine-generated tribes be interested in the same brand of beer or vacation spot? The advertiser would have to do more testing to determine how our tribes correlate to preferences. And since advertisers, like editors, often like to keep things simple, they just say: Hey computer. Leave the thinking to me. Go out and find Young and Edgy according to my definition. That approach makes use of the computer's computational skills. But the real breakthroughs in understanding human behavior are much more likely to come when we let the machines draw their own clusters.




