Sunday 29 January 2012

How Young Americans Think About Innovation: 3 Takeaways

Every year the Lemelson-MIT program conducts its Invention Index, a survey of Americans ages 16 to 25 to see how they feel about science, technology, engineering, and mathematics (STEM) and whether they see themselves as inventive people. PM talked to foundation executive director Joshua Schuler to see what jumped out from this year’s results. 

Sorry, Steve. Thomas Edison won top innovator honors in this poll.


1. Thomas Edison Is Still the Man


Although millions of young adults have iPhones running Facebook apps, neither Apple founder Steve Jobs nor Facebook czar Mark Zuckerberg topped the list when the Lemelson-MIT survey asked who was the greatest innovator of all time. From a pool that includes both of those modern tech titans, 52 percent of respondents picked Thomas Alva Edison. (Sorry, Tesla lovers; he wasn’t one of the choices.) 

The survey’s respondents surely appreciate interior illumination and recorded music. But Schuler says there’s probably another explanation for the Wizard of Menlo Park’s strong showing—one that can be explained in a quote from inventor and FIRST Robotics founder Dean Kamen: "You get what you celebrate." Young Americans’ lives are filled with products created by modern innovators like Jobs and Zuckerberg, but the textbooks that codify history’s greatest inventors tend to celebrate, well, dead people like Edison. 

With his enormous catalog of patents and inventions, Edison may indeed be the greatest innovator ever. But Schuler says students and young adults might be more inclined to appreciate and go into science and tech careers if they recognize the brilliance of more modern minds. His suggestion: "Why not do current events as part of science class?" As much as 16- to 25-year-old Americans rely on Facebook, he says, they probably don’t think about the algorithms that make it possible, all of which are based upon STEM. And the news is filled with hotly contested issues, from stem cells to the Stop Online Piracy Act, that are rooted in science and tech. 

2. We Just Got These Tablets, And Now They’re Going Away


One question the Lemelson-MIT survey asks every year is what technologies young people think will become obsolete in the near future (this year’s survey asked them to consider the world 15 years from now). If you need any indicator of the accelerating pace of technology, Schuler says, you need look no further than the poll results. Ten years ago, he says, respondents tended to pick items like cars and TVs—products that had been around for many decades. This year’s top picks to disappear by the mid-2020s include many items that came into the American mainstream only recently. 

MP3 players at 36 percent and DVR at 31 percent topped the list. Indeed, those seem like logical choices to disappear, as smartphones have taken over for stand-alone music players and smart TVs and other trends may do away with DVR. But the other top contenders include text messages (25 percent), social media (24), tablets (16), and smartphones themselves (12). "You’re now used to this rapid replacement of technology," Schuler says. 

3. Too Many Roadblocks


Two big points in this year’s survey stopped Schuler cold when he read them. First, 60 percent of respondents could name a reason not to go into a science and tech field. "They’re daunted by something," he says, whether it’s that the path through school seems too hard, they don’t know anybody in those fields to look up to, or another reason. Second, Schuler says, nearly a third said they had little to no experience building anything hands-on, whether it’s a digital product like a website or a physical project like piecing together a circuit. "These two are connected pretty strongly," he says. Building cultivates DIY skills and kick-starts a person’s interest in making things. 

Those numbers would probably alarm President Obama, who spent a chunk of last night’s State of the Union address hammering the need to enhance American STEM education as a means to boost the economy. Schuler says he was grateful that Obama made such a high-profile argument. "STEM is the foundation of technology, invention, and innovation," he says. 

But, Schuler says, it’s critical to remember that strengthening American STEM education isn’t just about churning out more Ph.D.s. Vocational-technology schools, junior colleges, and other institutions must help students reach their inventive potential, he says. "We need more of the bulk of the U.S. population appreciating STEM and thinking in creative ways." 

Monday 23 January 2012


Ball Beats Sequencer Adds A Physical Element To Music Creation, Rather Than A Touchscreen


Over the last few weeks we have featured a number of state-of-the-art sequencers, mixers, and amplifiers here on Geeky Gadgets in the run-up to the NAMM show, many of which included a dock for Apple’s iPad tablet, enabling you to do an array of different things with your musical instruments and audio.
Looking to bring a more physical edge to sequencing rather than using a touchscreen, a new sequencer called “Ball Beats” uses steel ball bearings to create and compose beats. The Ball Beats sequencer has been designed to be used with Mac, Linux, or PC computers and provides a more original way to create beats. Watch it in action in the videos after the jump to see how it works.
Ball Beats
The Ball Beats sequencer is a simple-looking acrylic sheet with holes cut into it. Once a ball bearing is placed into one, a note is played within the sequence. Users are able to run the loop in the opposite direction or ping-pong style, and you can also set the note duration and shuffle the steps by activating random modes if required.
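To make those playback modes concrete, here is a minimal Python sketch of how an 8-step sequencer might walk its steps forward, in reverse, ping-pong style, or at random. Everything here is an illustrative assumption; it is not Alkex's actual firmware.

```python
import random

STEPS = 8

def step_order(mode):
    """Return one pass of step indices for the given playback mode."""
    forward = list(range(STEPS))            # 0, 1, ..., 7
    if mode == "forward":
        return forward
    if mode == "reverse":
        return forward[::-1]                # 7, 6, ..., 0
    if mode == "pingpong":
        return forward + forward[-2:0:-1]   # 0..7, then back down to 1
    if mode == "random":
        return [random.randrange(STEPS) for _ in range(STEPS)]
    raise ValueError(mode)

# Dropping a ball bearing into a hole arms a note at that (step, channel) slot.
pattern = {(0, "kick"), (4, "kick"), (2, "snare"), (6, "snare")}

for step in step_order("pingpong"):
    hits = [channel for (s, channel) in pattern if s == step]
    print(step, hits or "-")
```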
The Ball Beats sequencer measures 6 x 8 x 1 inches and comes supplied with 50 steel ball bearings. It’s also fitted with an 8-step sequencer, 6 multimode channels, a built-in USB I/O interface, and sync to MIDI timecode. For more information, and to purchase a Ball Beats sequencer for $199, jump over to the Alkex website.

Monday 16 January 2012


Are We Finally Ready for Self-Driving Cars?

Google isn't the only company working on truly automated automobiles. Robot cars will reduce accidents, ease congestion, and keep others from interfering with my excellent driving—especially if we can get the bad drivers into them.
by David Freedman
Last year a woman living down the street from me backed out of her driveway as if she were Danica Patrick, without so much as glancing behind her to see if there were any Spandex-encased middle-aged men on vintage racing bikes tooling down the road just then. It turned out there was one, and let me just say that an SUV hood makes a surprisingly cushy landing strip. My brief flight through space and time inspired me to think about this sort of alarmingly frequent stupid driving trick—specifically, how things like this should by now have been rendered obsolete by automation.
I am plenty familiar with the arguments against giving up direct control of our cars. I personally eschew power windows and locks, never mind automatic transmissions, and I proudly raise my Alfa Romeo’s convertible top manually. (For a great upper-body workout, try raising a top at 20 mph. For a great YouTube video, try it at 30 mph.) In fact, I wish my car had more things for me to do manually. I would happily set flaps, trim sails, position heat shields, and load torpedo tubes if only those features were available for my model year. No, I want everyone else’s cars to be highly automated, so they will stay out of my way when they ought to.
You may feel the same way. People tend to overestimate their skills behind the wheel and underestimate the skills of the boobs and psychopaths driving around them, a phenomenon that psychologists call “optimism bias” and the rest of us simply call delusional overconfidence. Statistics bear it out. The Centers for Disease Control and Prevention estimates that car crashes killed nearly 40,000 people and cost more than $70 billion in the United States last year. To make matters worse, the Virginia Tech Transportation Institute reports that nearly 80 percent of car crashes result from drivers’ lack of attention to the road.
Automakers, well aware of these statistics, have introduced some impressive driver de-idiotizing systems over the years. Traction control helps prevent skidding; crash-avoidance systems fling radar waves in every direction, looking for approaching vehicles; the Lexus LS series even features a windshield-mounted camera that monitors the lane markings in front of you and gently nudges you back into your lane, via electric motors that assist your steering, if you drift.
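As a rough illustration of that lane-keeping nudge, here is a minimal sketch of a proportional steering correction driven by a camera's estimate of lane offset. The gains, sign conventions, and torque cap are assumptions for illustration, not Lexus's actual tuning.

```python
# Sketch of lane-keeping assist: the camera reports how far the car has
# drifted from the lane center and at what angle; the assist applies a
# small, capped corrective torque through the power-steering motor.
def lane_keep_torque(lateral_offset_m, heading_error_rad,
                     k_offset=0.8, k_heading=2.0, max_torque=1.5):
    """Gentle corrective steering torque (N*m), capped so the driver
    always remains stronger than the assist."""
    torque = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    return max(-max_torque, min(max_torque, torque))

# Drifting 0.4 m right of center while angled about 2 degrees further right:
print(lane_keep_torque(0.4, 0.035))   # small negative torque steers back left
```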
But none of these intelligent systems is intelligent enough to cut dumb drivers entirely out of the loop. “The smartest computer in the car is the human brain,” says Lexus’s Paul Williamsen, who trains Lexus dealers on these systems. “Our primary mission is to provide better computer inputs to allow it to make better judgments.” Translation: The car tries to get numbskulls to wake up before all hell breaks loose.
This brings us to what should be the state of the art in keeping your car out of my way: autonomous driving. With all these sensors and trajectory calculators, programming a car to auto-stop or auto-swerve in the face of an impending crack-up should be a no-brainer. But it turns out carmakers may be unwilling to let your car save your butt—and more important, mine—that way. Perhaps they know that if any damage occurs, you have a team of lawyers standing by, ready to argue in court that if the car hadn’t taken over, you would have gracefully swerved around that cement mixer you failed to see barreling down on you after you ran the stop sign while text-messaging.
So the most that cars will do, at least for the next few years, is apply light braking and steering and prepare the vehicle for impact by, depending on make and model, tightening up seat belts, unlocking doors, and turning on the hazard lights. (In Japan the Lexus LS can also slam on the brakes while on cruise control, in deference to that nation’s spectacular highway congestion and relative non-litigiousness.)
This unwillingness to let computers override the terrible decisions of terrible drivers is rather ironic, says Brad Templeton, an influential Internet entrepreneur and expert on civil rights in the digital age, who argues that accident avoidance is one of the most appropriate ways to have computers intrude into our lives. So Templeton is pushing for self-driving vehicle systems. Sure, they may occasionally do worse than a human driver would, he concedes, and their imperfections will inevitably kill people. But, he adds, when you consider those hard statistics on dopey drivers and the trails of destruction (not to mention hyperextended middle fingers) they leave behind, it is hard to argue that automatic systems, once proved safe, will take more lives than they will save. One study conducted by the Insurance Institute for Highway Safety estimates that crash-avoidance technologies could reduce fatal car accidents by one-third, potentially saving many thousands of lives a year. “Human drivers set the bar pretty low,” Templeton says. Tell me about it.
Even as the automotive world has studiously avoided introducing fully auto autos, one chunk of the industry has been working hard to perfect the technologies that make them possible. The U.S. government has been quietly backing an “intelligent transportation systems” scheme that would support automated driving. It would use a 5.9-GHz band (similar to Wi-Fi but faster and more secure) set aside by the FCC to let cars “talk” to each other and to traffic lights to avoid crashes and congestion. Cars beaming short-range radio signals in 360 degrees and broadcasting their exact position on the road at every moment could auto-drive together, bumper to bumper, at high speeds. Add GPS coordinates and such cars could even predict crashes before they happen. Picture a car that calmly informs you of a homicidal maniac approaching an intersection at 30 miles per hour and then calmly recommends that you hang back when the light turns green to avoid imminent death. Or perhaps it would simply make the decision for you. On the flip side, picture a car that threatens to rat you out every time you inch past the speed limit.
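To see how position-broadcasting cars could predict a crash before it happens, here is a minimal sketch under simplifying assumptions: flat local coordinates, constant velocities, and invented function names and thresholds. Real V2V messages and threat algorithms are standardized separately; this only shows the geometry.

```python
import numpy as np

def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity vehicles are nearest."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position (m)
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity (m/s)
    denom = dv.dot(dv)
    if denom == 0:
        return 0.0                   # same velocity: the gap never changes
    return max(0.0, -dp.dot(dv) / denom)

def collision_warning(p1, v1, p2, v2, radius=3.0, horizon=5.0):
    """Warn if the predicted minimum gap within `horizon` seconds is small."""
    t = time_to_closest_approach(p1, v1, p2, v2)
    gap = np.linalg.norm((np.asarray(p2) + t * np.asarray(v2)) -
                         (np.asarray(p1) + t * np.asarray(v1)))
    return t < horizon and gap < radius

# You creep into the intersection; another car approaches it at ~13 m/s.
print(collision_warning(p1=(0, 0), v1=(0, 0.5), p2=(40, 0), v2=(-13, 0)))
```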
Peter Appel, administrator for the U.S. Department of Transportation’s Research and Innovative Technology Administration, says cars that talk to each other and to roadway infrastructure have the potential to eliminate 81 percent of traffic accidents. (Caveat: That impressive figure, pulled from a study commissioned by the National Highway Traffic Safety Administration, excludes intoxicated drivers and assumes that everyone on the road is driving a talking car, that every intersection and stop sign in America can join in the conversation, and that the auto industry agrees on standardized equipment for all this chatter.)
To help inch things along, the DOT recently awarded a consortium of automakers, including Ford, GM, Toyota, and others, $7.4 million to outfit cars with compatible radio transmitters and test them on closed courses. Two years from now, if things go well, Congress could mandate that all new cars be 5.9 GHz compatible by 2018.
The European Union, meanwhile, is taking a more collective approach to autonomous driving. As part of its Safe Road Trains for the Environment project, known as SARTRE, automakers are designing vehicle systems that would let cars safely tailgate on designated highways in a trainlike procession led by a professional driver. Each car in the convoy would measure the distance, speed, and direction of the car in front of it, allowing the “driver” to nap, text, or read the paper without killing anyone. The catch is that they would be available only on certain highways and accessible only to vehicles equipped with the right communications gear. No 35-year-old Alfa Romeo convertibles. Fine with me—you take the high-tech road, I’ll take the low-tech road, the latter being blessedly clear of, well, you.

Sunday 15 January 2012


Forget 3D Screens—We Need 3D Audio, Like in Real Life

A backward march of audio quality has left us listening to tinny, stripped-down MP3s. It’s time to show the kids what they are missing.
by David H. Freedman

Some decades ago, a salesguy in a high-end audio shop badly misjudged my socioeconomic status and treated me to an ultrahigh-quality recording of an obscure jazz ensemble, played on a $10,000 audio system in an acoustically perfect room. I staggered out goose-bumped and hair-raised, a newly minted audiophile wannabe. I was sure that this was just the beginning of a journey into ever-more-amazing sound experiences. The equipment in that room consisted of glowing tubes in big metal cases, vibrating domes in massive wood cabinets, and spinning platters of plastic. No doubt technological innovation would one day shrink this clunky system into something small enough to carry around and cheap enough to avoid triggering the reckless-behavior clause in my prenup. More important, I was sure that even grander realms of audio quality lay ahead. By 2011, who could imagine what sort of incredible sonic delights would await?
Technology certainly has come through in some ways. Today’s iPod Shuffle is so small that it is little more than audio-enabled jewelry. No complaints on the pricing, either; you can get a pretty good MP3 player for the cost of a newly released CD. There’s just one little snag: Today’s sound quality is miserable, worse than what I was listening to on my budget stereo 30 years ago.
The biggest culprit in our sonic backsliding is the ubiquity of low-quality digital music files. “If you’re not going to listen to a high-quality recording, you don’t need a high-quality system,” says John Meyer, founder of the audiophile speaker company Newform Research in Ontario. Hey, tell my kids. They are all too happy to semipermanently install wads of plastic in their ears for the privilege of listening to near-terabytic playlists rendered in mediocre-at-best fidelity.
The music and electronics industries have eagerly catered to our growing obsession with convenience, blithely sacrificing sound in the process. All the way back in the 1980s, audiophiles were pointing out that those newfangled digital CDs lacked the subtlety and warmth of the best vinyl recordings. And the most popular versions of today’s standard, the MP3 file, have just a fraction of the potential fidelity of a CD recording.
The problem with MP3s is that they are “lossy,” which means they literally are missing some of the sound. When your brain hears sounds made up of multiple frequencies (as almost all music is), it tends to pay attention to whichever frequencies are the most readily perceived at any moment and largely ignores the rest. Most MP3 files simply leave out the subtler components of the music altogether—as much as 85 percent of what is actually recorded—in order to shrink the file size.
In theory we should not much notice what’s missing, but in practice a careful listener will find the diminished quality hard to ignore, especially when playing MP3s on a high-fidelity home stereo. To my kids this blandness has just become the standard of what recorded music sounds like: They have learned to like their music uniformly loud and stripped-down to an in-your-face artificial clarity that does away with all the warm, rounded audio undercurrents.
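A toy version of that "leave out the subtler components" step can be sketched in a few lines: transform the audio to the frequency domain, discard the weakest bins, and transform back. A real MP3 encoder uses a psychoacoustic masking model per subband; the fixed 15 percent kept here is purely an illustrative assumption.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate
# One loud tone plus two quiet ones: the quiet components are "masked".
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.05 * np.sin(2 * np.pi * 443 * t)
          + 0.02 * np.sin(2 * np.pi * 880 * t))

spectrum = np.fft.rfft(signal)
keep = int(0.15 * spectrum.size)                  # retain the loudest 15% of bins
threshold = np.sort(np.abs(spectrum))[-keep]
pruned = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

lossy = np.fft.irfft(pruned, n=signal.size)       # "decoded" signal, detail gone
print("bins kept:", np.count_nonzero(pruned), "of", spectrum.size)
print("relative error:", np.linalg.norm(signal - lossy) / np.linalg.norm(signal))
```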
The good news is that the lab of Louis Thibault, director of Canada’s Communications Research Centre’s Advanced Audio Systems Group, is developing a superior way to encode music files. The technique involves plotting out how the music varies over time in frequency and amplitude, which results in a graph that depicts the music as a sort of rugged 3-D mountainscape. Visualizing a recording this way lets you describe the music in terms of geometric shapes instead of as a bunch of frequencies. That approach turns out to save a lot of file space, in the same way that describing a circle as a center point and a radius is more efficient than describing every little segment of the circle. “It looks as if we can reduce file size by about 50 percent compared with MP3s, with the same audio quality,” Thibault says.
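Thibault's circle analogy is easy to make literal: the same shape stored as sampled points versus as parameters. The actual object-based coder is far more sophisticated; this sketch only shows why geometric descriptions are compact.

```python
import numpy as np

# "Raw" representation: 1,000 points on a circle, two floats each.
theta = np.linspace(0, 2 * np.pi, 1000)
points = np.column_stack([3 + 2 * np.cos(theta), 4 + 2 * np.sin(theta)])

# "Object" representation: three numbers in total.
center, radius = (3.0, 4.0), 2.0

print("sampled floats:", points.size)     # 2000
print("parametric floats:", 3)            # cx, cy, r
# Any point is recoverable from the parameters, so nothing is lost:
recon = np.column_stack([center[0] + radius * np.cos(theta),
                         center[1] + radius * np.sin(theta)])
print("max reconstruction error:", np.abs(points - recon).max())
```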
Turned around, this “object-based compression,” as it’s called, could provide much higher fidelity than that of a typical 16-bit MP3 in an equal-size file. Apple, meanwhile, is reportedly developing a new digital music player that can handle higher-resolution, 24-bit recordings, but who wants pricier, slower downloads that will make your existing music player obsolete? If Thibault’s compression scheme becomes standard, as he hopes it will, we could keep our 16-bit music players, and headphones could easily catch up; a decent pair of $50 earbuds already well exceeds the potential of the music that gets poured into them. My kids may go into audio shock when they find out what they’ve been missing.

But pumping up the fidelity of digital recordings only gets me back to where I started all those many years ago in the audiophile shop. What I really want is an improvement that will rock my sonic world, the way that sadistic salesperson did so long ago. So I got in touch with Karlheinz Brandenburg, who, in addition to being director of the Fraunhofer Institute for Digital Media Technology in Ilmenau, Germany, is also the audio technology legend who largely developed the MP3 file. Sure enough, Brandenburg has moved way, way beyond simply trying to get higher-fidelity recordings packed into smaller files. What he’s chasing now is spatially realistic sound.
In real life, everything we hear is highly dependent on spatial orientation, for three reasons. First, your ear is shaped funny; second, the environment around you messes with all the sound waves that bounce around before they reach you; and third, your other ear is also shaped funny. Sound may reach your right ear first, then your left ear; a portion of the sound may be reflected by the wall behind you while another portion is partly absorbed by the coffee table in front of you. Every sound is uniquely filtered by that odd maze of flaps in your ears, and every little twist or nod of your head alters the whole aural picture.
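One of those spatial cues is simple enough to compute: the interaural time difference, the tiny lag between sound arriving at the near ear and the far ear. The sketch below uses the common spherical-head (Woodworth) approximation; the head radius is a textbook average, not a measured value.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, roughly an average adult head

def itd(azimuth_deg):
    """Approximate arrival-time difference between the ears, in seconds."""
    az = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))

for angle in (0, 30, 60, 90):   # 0 = straight ahead, 90 = directly to one side
    print(f"{angle:>2} deg -> {itd(angle) * 1e6:6.0f} microseconds")
```

Even the maximum lag, around 650 microseconds, is enough for the brain to place a source left or right; head turns then refine the picture.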
“Your brain takes all this information and extracts from it not just what the sounds are and where the sources are, but a sense of what the environment is around you,” says Agnieszka Roginska, associate director of New York University’s music technology program, who is also working on spatially realistic sound.

Saturday 14 January 2012


"I See," Said the Blind Man With an Artificial Retina

New ocular implants are already illuminating colors and shapes, and promise to become far better.
by Carl Zimmer
For 100 million people around the globe who suffer from macular degeneration and other diseases of the retina, life is a steady march from light into darkness. The intricate layers of neurons at the backs of their eyes gradually degrade and lose the ability to snatch photons and translate them into electric signals that are sent to the brain. Vision steadily blurs or narrows, and for some, the world fades to black. Until recently some types of retinal degeneration seemed as inevitable as the wrinkling of skin or the graying of hair—only far more terrifying and debilitating. But recent studies offer hope that eventually the darkness may be lifted. Some scientists are trying to inject signaling molecules into the eye to stimulate light-collecting photoreceptor cells to regrow. Others want to deliver working copies of broken genes into retinal cells, restoring their function. And a number of researchers are taking a fundamentally different, technology-driven approach to fighting blindness. They seek not to fix biology but to replace it, by plugging cameras into people’s eyes.
Scientists have been trying to build visual prostheses since the 1970s. This past spring the effort reached a crucial milestone, when European regulators approved the first commercially available bionic eye. The Argus II, a device made by Second Sight, a company in California, includes a video camera housed in a special pair of glasses. It wirelessly transmits signals from the camera to a 6 pixel by 10 pixel grid of electrodes attached to the back of a subject’s eye. The electrodes stimulate the neurons in the retina, which send secondary signals down the optic nerve to the brain.
A 60-pixel picture is a far cry from HDTV, but any measure of restored vision can make a huge difference. In clinical human trials, patients wearing the Argus II implant were able to make out doorways, distinguish eight different colors, or read short sentences written in large letters. And if the recent history of technology is any guide, the current $100,000 price tag for the device should fall quickly even as its resolution rises. Already researchers are testing artificial retinas that do not require an external camera; instead, the photons will strike light-sensitive arrays inside the eye itself. The Illinois-based company Optobionics has built experimental designs containing 5,000 light sensors.
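To get a feel for what a 6 x 10 electrode grid conveys, here is a minimal sketch that block-averages a scene down to 60 "pixels." The Argus II's actual camera-to-electrode processing is not public; this only illustrates the resolution.

```python
import numpy as np

def to_argus_grid(image, rows=6, cols=10):
    """Downsample a 2-D grayscale array to rows x cols by block averaging."""
    h, w = image.shape
    rs, cs = h // rows, w // cols
    trimmed = image[:rows * rs, :cols * cs]
    return trimmed.reshape(rows, rs, cols, cs).mean(axis=(1, 3))

# A bright vertical "doorway" on a dark background, 60 x 100 pixels.
scene = np.zeros((60, 100))
scene[:, 40:50] = 1.0

grid = to_argus_grid(scene)
print((grid > 0.5).astype(int))   # the doorway survives as one bright column
```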
Commercial digital cameras hint at how much more improvement might lie just ahead. Our retinas contain 127 million photoreceptors spread over 1,100 square millimeters. State-of-the-art consumer camera detectors, by comparison, carry 16.6 million light sensors spread over 1,600 square millimeters, and their numbers have improved rapidly in recent years. But simply piling on the pixels will not be enough to match the rich visual experience of human eyes. To create a true artificial retina, says University of Oregon physicist and vision researcher Richard Taylor, engineers and neuroscientists will have to come up with something much more sophisticated than an implanted camera.
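Working out the paragraph's numbers makes the gap concrete:

```python
# Photoreceptor density in the retina versus light-sensor density in a
# state-of-the-art (2011) consumer camera detector, from the figures above.
retina = 127e6 / 1100      # photoreceptors per square millimeter
camera = 16.6e6 / 1600     # photodiodes per square millimeter
print(f"retina: {retina:,.0f}/mm^2, camera: {camera:,.0f}/mm^2, "
      f"ratio: {retina / camera:.0f}x")
# retina: ~115,455/mm^2, camera: ~10,375/mm^2 -> roughly 11x denser
```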
It is easy to think of eyes as biological cameras—and in some ways, they are. When the light from an image passes through our pupil, it ends up producing a flipped image on our retina. The light that enters a camera does the same thing. Eyes and cameras both have lenses that adjust the path of the incoming light to bring an image into sharper focus. The digital revolution has made cameras even more eye-like. Instead of catching light on film, digital cameras use an array of light-sensitive photodiodes that function much like the photoreceptors in an eye.
But once you get up close, the similarities break down. Cameras are boringly Euclidean. Typically engineers build photodiodes as tiny square elements and spread them out in regularly spaced grids. Most existing artificial retinas have the same design, with impulses conveyed from the photodiodes to neurons through a rectangular grid of electrodes. The network of neurons in the retina, on the other hand, looks less like a grid than a set of psychedelic snowflakes, with branches upon branches filling the retina in swirling patterns. This mismatch means that when surgeons position the grid on the retina, many of the wires fail to contact a neuron. As a result, their signals never make it to the brain.
Some engineers have suggested making bigger electrodes that are more tightly spaced, creating a larger area for contact, but that approach faces a fundamental obstacle. In the human eye, neurons sit in front of the photoreceptors, but due to the snowflake-like geometry, there is still lots of space for light to slip through. An artificial retina with big electrodes, by contrast, would block out the very light it was trying to detect.
Natural photoreceptors are quirky in another way, too: They are bunched up. Much of what we see comes through a pinhead-size patch in the center of the retina known as the fovea. The fovea is densely packed with photoreceptors. The sharp view of the world that we simply think of as “vision” comes from light landing there; light that falls beyond the fovea produces blurry peripheral images. A camera, by contrast, has light-trapping photodiodes spread evenly across its entire image field.
The reason we don’t feel as if we are looking at the world through a periscope is that our eyes are in constant motion; our focus jumps around so that our foveas can capture different parts of our field of view. The distances of the jumps our eyes make have a hidden mathematical order: The frequency of a jump goes up as distance gets shorter. In other words, we make big jumps from time to time, but we make more smaller jumps, and far more even smaller jumps. This rough, fragmented pattern, known as a fractal, creates an effective means of sampling a large space. It bears a striking resemblance to the path of an insect flying around in search of food. Our eyes, in effect, forage for visual information.
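That statistical signature, many small jumps and progressively fewer large ones, is a power law, and it can be sketched in a few lines. The exponent below is an illustrative assumption, not a measured saccade statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_jumps(n, alpha=2.0, min_len=1.0):
    """Inverse-transform sampling from p(l) ~ l^-alpha for l >= min_len."""
    u = rng.random(n)
    return min_len * (1 - u) ** (-1 / (alpha - 1))

jumps = levy_jumps(10_000)
for lo, hi in [(1, 2), (2, 4), (4, 8), (8, 16)]:
    frac = np.mean((jumps >= lo) & (jumps < hi))
    print(f"jumps of length {lo:>2}-{hi:<2}: {frac:5.1%}")
# Each doubling of jump length is roughly half as frequent: many small
# jumps, a few big ones -- the foraging pattern described above.
```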
Once our eyes capture light, the neurons in the retina do not relay information directly to the brain. Instead, they process visual information before it leaves the eye, inhibiting or enhancing neighboring neurons to adjust the way we see. They sharpen the contrast between regions of light and dark, a bit like photoshopping an image in real time. This image processing most likely evolved because it allowed animals to perceive objects more quickly, especially against murky backgrounds. A monkey in a forest squinting at a leopard at twilight, struggling to figure out exactly what it is, will probably never see another leopard. Unlike a camera that passively takes in a picture, our eyes are honed to actively extract the most important information we need to make fast decisions.
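The contrast-sharpening step has a classic minimal model, lateral inhibition: each cell's output is its own input minus a fraction of its neighbors'. A quick sketch shows how this exaggerates edges; the weights are illustrative, not physiological measurements.

```python
import numpy as np

def lateral_inhibition(row, surround=0.5):
    """1-D center-surround response: center minus averaged neighbors."""
    padded = np.pad(row, 1, mode="edge")
    return row - surround * 0.5 * (padded[:-2] + padded[2:])

# A dim region next to a bright region.
row = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
print(lateral_inhibition(row))
# The output overshoots at the boundary: the dark side dips and the bright
# side spikes -- the classic "Mach band" effect of retinal processing.
```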
Right now scientists can only speculate what it might be like to wear an artificial retina with millions of photoreceptors in a regular grid, but such a device would not restore the experience of vision—no matter how many electrodes it contains. Without the retina’s sophisticated image processing, it might just supply a rapid, confusing stream of information to the brain.
Taylor, the Oregon vision researcher, argues that simplistic artificial eyes could also cause stress. He reached this conclusion after asking subjects to look at various patterns, some simple and some fractal, then describe how the images made them feel. He also measured physiological signs of stress, like electrical activity in the skin. Unlike simple images, fractal images lowered stress levels by up to 60 percent. Taylor suspects the calming effect has to do with the fact that our eye movements are fractal too. It is interesting to note that natural images—such as forests and clouds—are often fractal as well. Trees have large limbs off which sprout branches, off which grow leaves. Our vision is matched to the natural world.
An artificial retina that simply mirrors the detector in a digital camera would presumably allow people to see every part of their field of view with equal clarity. There would be no need to move their eyes around in fractal patterns to pick up information, Taylor notes, so there would be no antistress effect.
The solution, Taylor thinks, involves artificial retinas that are more like real eyes. Light sensors could be programmed with built-in feedbacks to sharpen the edges on objects or clumped together to provide more detail at the center. It may be possible to overcome the mismatch between regular electrodes and irregular neurons. Taylor is developing new kinds of circuits that he hopes to incorporate into next-generation artificial eyes. His team builds these circuits so that they spontaneously branch, creating structures that Taylor dubs nanoflowers. Although nanoflowers do not exactly match the eye’s neurons, their geometry would similarly admit light and allow circuits to contact far more neurons than can a simple grid.
Taylor’s work is an important reminder of how much progress scientists are making toward restoring lost vision, but also of how far they still have to go. The secret to success will be remembering not to take the camera metaphor too seriously: There is a lot more to the eye than meets the eye.


Thursday 12 January 2012


Mind Over Motor: Controlling Robots With Your Thoughts

A clever new system helps paralyzed patients and computers work together to control a robot, helping to connect locked-in people with the world.
by Jason Daley
Over recent months, in José del R. Millán’s computer science lab in Switzerland, a little round robot, similar to a Roomba with a laptop mounted on it, bumped its way through an office space filled with furniture and people. Nothing special, except the robot was being controlled from a clinic more than 60 miles away—and not with a joystick or keyboard, but with the brain waves of a paralyzed patient.
The robot’s journey was an experiment in shared control, a type of brain-machine interface that merges conscious thought and algorithms to give disabled patients finer mental control over devices that help them communicate or retrieve objects. If the user experiences a mental misfire, Millán’s software can step in to help. Instead of crashing down the stairs, for instance, the robot would recalculate to find the door.
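In outline, shared control is a simple arbitration between decoded intent and onboard sensing. Here is a deliberately minimal sketch; all names are hypothetical, and this is not Millán's actual controller.

```python
# Sketch of shared control: follow the command decoded from brain waves
# unless the robot's own sensors say that command is unsafe, in which case
# the software overrides toward a safe alternative.
def shared_control(user_cmd, obstacle_ahead, safe_cmd):
    """Blend user intent with machine safety: override only on conflict."""
    if user_cmd == "forward" and obstacle_ahead:
        return safe_cmd          # e.g., steer toward the detected doorway
    return user_cmd              # otherwise trust the decoded intention

# Decoded from EEG: "forward". Rangefinder: stairs directly ahead.
print(shared_control("forward", obstacle_ahead=True, safe_cmd="left"))
# -> "left": the robot recalculates instead of crashing down the stairs.
```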
Such technology is a potential life changer for the tens of thousands of people suffering from locked-in syndrome, a type of paralysis that leaves patients with only the ability to blink. The condition is usually incurable, but Millán’s research could make it more bearable, allowing patients to engage the world through a robotic proxy. “The last 10 years have been like a proof of concept,” says Justin Sanchez, director of the Neuroprosthetics Research Group at the University of Miami, who is also studying shared control. “But the research is moving fast. Now there is a big push to get these devices to people who need them for everyday life.”
Millán’s system, announced in September at Switzerland’s École Polytechnique Fédérale de Lausanne, is a big step in making brain-machine interfaces more useful by splitting the cognitive workload between the patient and the machine. Previously, users had to fully concentrate on one of three commands—turn left, turn right, or do nothing—creating specific brain wave patterns detected by an electrode-studded cap. That system exhausted users by forcing them to think of the command constantly. With shared control, a robot quickly interprets the user’s intention, allowing him to relax mentally. Millán is now developing software that is even better at weeding out unrelated thoughts and determining what the user really wants from the machine.
Although the disabled will probably be the first beneficiaries of Millán’s technology, we may all eventually end up under the scanner. Millán and auto manufacturer Nissan recently announced they are collaborating on a shared-control car that will scan the driver’s brain waves and eyes and step in if the mind—and the Altima—begin to wander.

Wednesday 11 January 2012


Big Idea: How Pot, Cocaine, and Hunger Intersect in the Brain

Researchers are studying the role that the endocannabinoid system plays in cravings, and using their findings to try to control our excesses.
by Gregory Mone
In June 2006 pharmaceutical giant Sanofi-Aventis began selling a new weight-loss drug called rimonabant in Europe. Rimonabant worked in part by reducing appetite, and the company claimed it could also treat addiction, harmful cholesterol, and diabetes. Lab tests even suggested the drug produced healthier sperm. But within six months, the company had received more than 900 reports of nausea, depression, and other side effects.
By the following summer, the U.S. Food and Drug Administration had rejected rimonabant, noting that relative to a placebo, patients taking it were twice as likely to contemplate, plan, or attempt suicide. The European Medicines Agency soon asked Sanofi-Aventis to address the safety concerns, and on December 5, 2008, the company pulled the drug off the European market.
Rimonabant was a spectacular flop, and yet its lure today is stronger than ever. Researchers worldwide are pursuing novel drugs aimed at the exact same target: the endocannabinoid system, an elaborate network of receptors and proteins that operate within the brain, heart, gut, liver, and throughout the central nervous system. For drug designers, the system’s powerful role in regulating cravings, mood, pain, and memory makes it a tantalizing target. The challenge now is finding sharper, more refined ways to manipulate it without causing the sort of debilitating side effects that derailed rimonabant. “The system is very, very widespread and very effective at a variety of levels,” says neuroscientist Keith Sharkey, who studies the role of endocannabinoids in the gut at the Hotchkiss Brain Institute at the University of Calgary. “It seems to be very important in the body, which is a concern when you develop drugs for it because you will get a range of effects.”
Zheng-Xiong Xi, a pharmacologist at the National Institute on Drug Abuse in Baltimore, explains that the main receptor in the endocannabinoid system, CB1, interferes with brain levels of dopamine, a chemical associated with reward-seeking behavior, pleasure, and motivation. Activating CB1 jump-starts a chain reaction that culminates in an excess of dopamine floating around between neurons. “The dopamine produces a good feeling, a rush, euphoric effects,” Xi says.
A few years ago, Xi was studying this phenomenon in mice, hoping to find a pill to treat addiction to dopamine-boosting drugs such as cocaine. Scientists suspected that rimonabant, which decreases CB1 activity, dampens appetite by decreasing dopamine levels and taking the rush out of eating. Xi was looking for a compound that would have the same effect on cocaine users. Without the high, he theorized, cocaine might lose its appeal.
In studying the endocannabinoid system, Xi tested THC, the active compound in marijuana. THC is thought to produce euphoric feelings by increasing CB1 activity and causing dopamine levels to rise. Instead he saw the opposite effect. “We found that at higher doses it produced a decrease,” Xi says. “So how did this happen?”
When Xi tried THC on mice lacking CB1 receptors, he found the same response: Dopamine dropped. Could THC be acting on the other receptor in the endocannabinoid system, CB2? It was an odd question, since CB2 receptors were not thought to reside in the brain. “For years people didn’t believe that they really existed there,” Sharkey says. But when Xi tested THC in mice without the CB2 receptor, it had no effect at all. CB2 was clearly involved.
Testing the idea further, Xi then switched to JWH-133, a compound designed to latch onto CB2 receptors and decrease dopamine levels. If CB2 receptors were truly absent in the brain, the drug should have no effect on dopamine levels there. Instead, Xi found, they were plunging. In cocaine-addicted mice with unmodified endocannabinoid systems, JWH-133 dramatically reduced the number of times the mice would press a lever for more cocaine. “The drug-taking behavior or drug intake is tremendously decreased,” Xi says. Meanwhile, when he and his team tested JWH-133 on mice without CB2 receptors, the rodents kept going for the cocaine, and their dopamine levels were higher.
This effect on dopamine may explain why rimonabant was dangerous: It blocked the body’s ability to produce a natural high. But Xi’s work, published in Nature Neuroscience, suggests that targeting a different receptor, CB2, makes all the difference. While early tests in rats hinted that rimonabant might have depressive effects, Xi’s group found no evidence of malaise in their mice.
While Xi turned to a new receptor, another group of scientists, led by Sharkey and Northeastern University medicinal chemist Alexandros Makriyannis, developed a compound that targets the same spot as rimonabant, CB1, but features certain key differences that could overcome its flaws. Appetite isn’t regulated only by the brain, Sharkey explains. The gut is also loaded with cannabinoid receptors that can act as brakes on the urge to eat. So Sharkey and Makriyannis developed a compound called AM6545 that stays out of the brain. This way, they figured, it might not have adverse psychiatric effects. In a study reported late last year in the British Journal of Pharmacology, AM6545 enabled mice and rats to lose weight without inducing signs of depression or nausea. “In animals it does exactly the same thing as rimonabant,” Makriyannis says, only without the drawbacks.
Whether either of these new drugs—or several others currently in testing—will be as effective in humans is an open question. Xi has shown that cannabinoid receptor location and concentration varies from species to species, so mouse results might not transfer to people. And he says it still isn’t clear why JWH-133 reduces dopamine but does not lead to hints of depression. Xi plans to study the question further and test JWH-133 and other compounds as tools to fight addiction.
Makriyannis, who has been involved in the field from its inception, foresees a whole new class of drugs that combat pain, inflammation, metabolic disorders, and more. “I’m convinced that in the next five years we shall have more drugs for the endocannabinoid system,” he says.