QUANTA

Friday, May 27, 2011

 CH. 1: PROLOGUE
This book can’t begin with the tale of the telekinetic monkey.
That certainly comes as a surprise. After all, how often does someone writing nonfiction get to lead with a monkey who can move objects with her thoughts?
If you lunge at this opportunity, however, the story comes out all wrong. It sounds like science fiction, for one thing, even though the monkey—a cute little critter named Belle—is completely real and scampering at Duke University.
This gulf between what engineers are actually creating today and what ordinary readers might find believable is significant. It is the first challenge to making sense of this world unfolding before us, in which we face the biggest change in tens of thousands of years in what it means to be human.
This book aims at letting a general audience in on the vast changes that right now are reshaping our selves, our children and our relationships. - Source: http://www.garreau.com/

 LOVE IS BETTER THAN DRUGS IN REDUCING PAIN
Say scientists. They apparently have not studied its ability also to cause it. - Source: http://www.garreau.com/

 A SPY BOT THAT CAN HIDE
"Lockheed Martin's approach does include a sort of basic theory of mind, in the sense that the robot makes assumptions about how to act covertly in the presence of humans," says Alan Wagner of the Georgia Institute of Technology in Atlanta. - Source: http://www.garreau.com/

 SEAGULL BOT
Check out the video of the amazingly lifelike, flapping wing, gliding, indoor/outdoor seagull bot. No word on whether it makes rocks white.
- Source: http://www.garreau.com/

EYES GROWN FROM STEM CELLS
In a test tube, mouse embryonic stem cells self-organized into the most complex part of an eye. If it works in humans, it holds the promise of regenerating damaged or lost eyes. This emergence of complexity from no pattern “truly is stunning,” says a scientist not involved in the breakthrough. “I never thought that I'd ever see a retina grown in a dish.” - Source: http://www.garreau.com/


Thursday, May 26, 2011

Lockheed Martin buys first D-Wave quantum computing system

May 26, 2011

Lockheed Martin Corporation has agreed to purchase the first D-Wave One quantum computing system from D-Wave Systems Inc., according to D-Wave spokesperson Ann Gibbon.

Lockheed Martin plans to use this “quantum annealing processor” for some of Lockheed Martin’s “most challenging computation problems,” according to a D-Wave statement.

D-Wave computing systems address combinatorial optimization problems that are “hard for traditional methods to solve in a cost-effective amount of time.”

These include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing, and bioinformatics.
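
For a concrete feel for what combinatorial optimization means here, the sketch below runs classical simulated annealing on a toy QUBO (quadratic unconstrained binary optimization) instance, the problem class quantum annealers target. It is an illustrative classical analogy, not D-Wave's API or hardware method.

import math, random

random.seed(0)
n = 12
# Random QUBO couplings; the goal is to minimize
# E(x) = sum over i,j of Q[i][j] * x[i] * x[j] over binary vectors x.
Q = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

x = [random.randint(0, 1) for _ in range(n)]
e = energy(x)
t = 2.0                                # "temperature", lowered as annealing proceeds
for step in range(5000):
    i = random.randrange(n)
    x[i] ^= 1                          # propose flipping one bit
    e_new = energy(x)
    # Always accept downhill moves; accept uphill moves with Boltzmann probability.
    if e_new <= e or random.random() < math.exp((e - e_new) / t):
        e = e_new
    else:
        x[i] ^= 1                      # revert the flip
    t *= 0.999                         # cooling schedule

print("final bitstring:", "".join(map(str, x)), "energy:", round(e, 3))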

Source: http://goo.gl/swNe8



Tuesday, May 24, 2011

New studies reveal evidence that cell phone radiation damages DNA, brain, and sperm

May 24, 2011

New independent studies offer proof that confirms findings from the Council of Europe: pulsed digital signals from cell phones disrupt DNA, impair brain function, and lower sperm count, according to a statement by the Environmental Health Trust (EHT).

On May 23, a think-tank of experts organized by Gazi University and EHT convened at a workshop in Istanbul, Turkey, “Science Update: Cell Phones and Health,” to present the findings.

Prof. Nesrin Seyhan, WHO and NATO advisor, founder and head of the Biophysics Department and Bioelectromagnetics Laboratory at Gazi University in Ankara, and founder of the Non-Ionizing Radiation Protection (GNRP) Center, found that just four hours of exposure to RF-EMF disrupts the ability of human brain cells to repair damaged genes.

Other new work from Australia shows damage to human sperm.

“This work provides a warning signal to all of us. The evidence justifies precautionary measures to reduce the risks for every one of us,” says Prof. Wilhelm Mosgoeller from the Medical University of Vienna, who has led European research teams that found that RF-EMF induces DNA breaks.

Two years after scientists who described DNA breaks faced false accusations, recent results finally show that exposure-induced DNA breaks are real, according to these scientists.

Impact on reproductive health and cell death

Insect studies have demonstrated that acute exposure to GSM (Global System for Mobile) signals brings about DNA fragmentation in insects’ ovarian cells, and consequently a large reduction in the reproductive capacity of the insects. Further studies demonstrated that long exposures induced cell death to the insects in the study.

Dr. Adamantia Fragopoulou, leader of a team at the University of Athens, found effects on embryonic development taking place in the presence of a mild electromagnetic field. Throughout the gestation period, exposure to radiation for just six minutes a day affects the bone formation of fetuses. The team suggests that this is possibly caused by the interaction of cell phone radiation with crucial molecules and ions involved in embryogenesis.

Impacts on the young brain

Dr. Seyhan found that the increasing use of cell phones — and the increasing number of associated base stations — is becoming a widespread source of non-ionizing electromagnetic radiation. This work suggests that some biological effects are likely to occur even with low-level electromagnetic fields. The team concluded that 900 and 1,800 MHz radiation is related to an increase in the permeability of the blood-brain barrier in young adult male rats, whose brains serve as a model for the brains of human teenagers.

Children are increasingly heavy users of cell phones; at higher frequencies, children absorb more energy from external radio frequency radiation than adults, because their tissue normally contains a larger number of ions and so has a higher conductivity. The researchers conclude that cell phone and cordless phone use by young children and teenagers should be limited to the lowest possible level, and urge a ban on telecom companies’ marketing to them.

In addition, research from a team at the University of Athens found that rats exposed to cell phone radiation were unable to remember the location of places previously familiar to them. This finding is of potentially critical importance for people who heavily rely on spatial memory for recording information about their environment and spatial orientation.

Source: http://goo.gl/1rPJk

Metamaterials could improve wireless power transmission
 
May 24, 2011

Electrical engineers at Duke University have determined that it is theoretically possible to improve the efficiency of recharging devices without wires, using metamaterials.

Normally, as power passes from a transmitting device to a receiving device, most (if not all) of it scatters and dissipates unless the two devices are extremely close together. The metamaterial postulated by the researchers, which would be situated between the energy source and the “recipient” device, greatly refocuses the transmitted energy for minimal loss of power.

The metamaterial used in the wireless power transmission would likely be made of hundreds to thousands — depending on the application — of individual thin conducting loops arranged into an array. Each piece is made from the same copper-on-fiberglass substrate used in printed circuit boards, with excess copper etched away. These pieces can then be arranged in an almost infinite variety of configurations.

“The system would need to be tailored to the specific recipient device; in essence, the source and target would need to be ‘tuned’ to each other,” says Yaroslav Urzhumov. “This new understanding of how metamaterials can be fabricated and arranged should help make the design of wireless power transmission systems more focused.”

Source: http://goo.gl/4ppUA

 Earthquake? Terrorist bomb? Call in the AI

In the chaos of large-scale emergencies, artificially intelligent software could help direct first responders

9.47 am, Tavistock Square, London, 7 July 2005. Almost an hour has passed since the suicide bombs on board three underground trains exploded. Thirty-nine commuters are now dead or dying, and many more are badly injured.

Hasib Hussain, aged 18, now detonates his own device on the number 30 bus - murdering a further 13 and leaving behind one of the most striking images of the day: a bus ripped open like a tin of sardines.

In the aftermath of the bus bomb, questions were raised about how emergency services had reacted to the blast. Citizens and police called emergency services within 5 minutes, but ambulance teams did not arrive on the scene for nearly an hour.

As the events of that day show, the anatomy of a disaster - whether a terrorist attack or an earthquake - can change in a flash, and lives often depend on how police, paramedics and firefighters respond to the changing conditions. To help train for and navigate such chaos, new research is employing computer-simulation techniques to help first responders adapt to emergencies as they unfold.

Most emergency services prepare for the worst with a limited number of incident plans - sometimes fewer than 10 - that tell them how to react in specific scenarios, says Graham Coates of Durham University, UK. It is not enough, he says. "They need something that is flexible, that actually presents them with a dynamic, tailor-made response."

A government inquest, concluded last month, found that no additional lives were lost because of the delay in responding to the Tavistock Square bomb, but that "communication difficulties" on the day were worrying.

So Coates and colleagues are developing a training simulation that will help emergency services adapt more readily. The "Rescue" system comprises up to 4000 individual software agents that represent the public and members of emergency services. Each is equipped with a rudimentary level of programmed behaviours, such as "help an injured person".

In the simulation, agents are given a set of orders that adhere to standard operating procedure for emergency services - such as "resuscitate injured victims before moving them". When the situation changes - a fire in a building threatens the victims, for example - agents can deviate from their orders if it helps them achieve a better outcome.

Meanwhile, a decision-support system takes a big-picture view of the unfolding situation. By analysing information fed back by the agents on the ground, it can issue updated orders to help make sure resources like paramedics, ambulances and firefighters are distributed optimally.
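
A minimal, hypothetical sketch of the order-versus-deviation rule described above (the class names, orders, and benefit scores are invented for illustration; this is not the Rescue system's actual code):

from dataclasses import dataclass

@dataclass
class Order:
    name: str
    expected_benefit: float  # estimated lives helped by following the order

@dataclass
class Agent:
    role: str
    order: Order

    def choose_action(self, emergent_options: dict) -> str:
        """Follow standing orders unless a change in the situation makes
        deviating (e.g. evacuating before resuscitating) score better."""
        best_name, best_score = self.order.name, self.order.expected_benefit
        for action, benefit in emergent_options.items():
            if benefit > best_score:
                best_name, best_score = action, benefit
        return best_name

paramedic = Agent("paramedic", Order("resuscitate victims before moving them", 5.0))
# A fire breaks out: moving victims immediately now scores higher than the order.
print(paramedic.choose_action({"evacuate victims from burning building": 8.0}))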

Humans that train with the system can accept, reject or modify its recommendations, and unfolding event scenarios are recorded and replayed to see how different approaches yield different results. Coates presented his team's work at the International Conference on Information Systems for Crisis Response and Management in Lisbon, Portugal, last week.

That still leaves the problem of predicting how a panicked public might react to a crisis - will fleeing crowds hamper a rescue effort, or will bystanders comply with any instructions they receive?

To explore this, researchers at the University of Notre Dame in South Bend, Indiana, have built a detailed simulation of how crowds respond to disaster. The Dynamic Adaptive Disaster Simulation (DADS) also uses basic software agents representing humans, only here they are programmed to simply flee from danger and move towards safety.

When used in a real emergency situation, DADS will utilise location data from thousands of cellphones, triangulated and streamed from masts in the region of the emergency. It can make predictions of how crowds will move by advancing the simulation faster than real-time events. This would give emergency services a valuable head start, says Greg Madey, who is overseeing the project.

A similar study led by Mehdi Moussaïd of Paul Sabatier University in Toulouse, France, sought to address what happens when such crowds are packed into tight spaces.

In his simulation, he presumed that pedestrians choose the most direct route to their destination if there is nothing in their way, and always try to keep their distance from those around them. Running a simulation based on these two rules, Moussaïd and his colleagues found that as they increased the crowd's density, the model produced crushes and waves of people just like those seen in real-life events such as stampedes or crushes at football stadiums (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1016507108). The team hope to use their model to help plan emergency evacuations.
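
Those two rules translate almost directly into a simulation loop. The sketch below is a rough illustration under assumed parameters (comfort radius, step size, weights); it is not the published model's code.

import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # crowd size; raise this to increase density
pos = rng.uniform(0, 10, size=(n, 2))    # agents scattered in a 10 m x 10 m space
goal = np.array([10.0, 5.0])             # a single exit

for step in range(100):
    to_goal = goal - pos                 # rule 1: head straight for the destination
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    repulse = np.zeros_like(pos)
    for i in range(n):                   # rule 2: keep distance from neighbours
        d = pos[i] - pos
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < 1.0) # assumed 1 m comfort radius
        if near.any():
            repulse[i] = (d[near] / dist[near, None] ** 2).sum(axis=0)
    pos += 0.1 * (to_goal + 0.5 * repulse)   # small step along the combined heading

# Crude crowd-pressure measure: mean number of neighbours inside the comfort radius.
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
print("mean close neighbours:", ((dist > 0) & (dist < 1.0)).sum() / n)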

Jenny Cole, head of emergency services at London-based independent think tank The Royal United Services Institute, wrote a report on how the different emergency services worked together in the wake of the London bombings. She remains "sceptical" about these kinds of simulations. "No matter how practical or useful they would be, there's usually no money left in the end to implement them," she says.

For his part, Coates says he plans to release his system to local authorities for free as soon as it is ready.

Source: http://goo.gl/75hT7


World record in data transmission: 26 terabits per second on a single laser beam
 
May 24, 2011

Scientists at Karlsruhe Institute of Technology (KIT) have succeeded in encoding data at a rate of 26 terabits per second on a single laser beam, transmitting over a distance of 50 km.

The team beat its own data rate record of 10 terabits per second (10,000 billion bits per second) set in 2010.

The encoding uses orthogonal frequency division multiplexing (OFDM) based on fast Fourier transform processing; a new optoelectronic decoding method then breaks the high data rate back down into smaller bit rates. At 26 terabits per second, it is possible to transmit up to 400 million telephone calls simultaneously, or the contents of 700 DVDs in one second, the researchers said.
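
For the curious, here is the core FFT trick behind OFDM in miniature; the subcarrier count and QPSK mapping are illustrative, and the real system performs the equivalent operations optically at vastly higher speeds.

import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 64
# QPSK symbols, one per subcarrier: the "many slow parallel streams" view of the data.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(symbols)   # encoder: inverse FFT merges all subcarriers into one waveform
rx = np.fft.fft(tx)         # decoder: forward FFT splits the waveform back into streams

recovered = np.stack([(rx.real > 0).astype(int), (rx.imag > 0).astype(int)], axis=1)
assert np.array_equal(bits, recovered)   # every subcarrier's bits come back intact
print("recovered", bits.size, "bits across", n_subcarriers, "subcarriers")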

The scientists said their results show that physical limits have not yet been reached, even at extremely high data rates.

Source: http://goo.gl/K2cbm

 
Discovery opens the door to generating electricity from microbes
 
May 24, 2011

The molecular structure of the proteins that enable bacterial cells to transfer electrical charge has been discovered by scientists at the University of East Anglia.

The discovery means scientists can design electrodes with better contacts to pick up the charges generated by the microbes, creating efficient microbial fuel cells or “bio-batteries.” The advance could also help development of microbe-based agents that can clean up oil or uranium pollution, and fuel cells powered by human or animal waste.

The scientists used x-ray crystallography to reveal the molecular structure of the proteins attached to the surface of a Shewanella oneidensis bacterial cell.


These anaerobic bacteria live in oxygen-free environments and can take in and degrade wastes such as oil to generate energy, so they might also be used to clean up oil spills or uranium pollution, the researchers said.

Source: http://goo.gl/ddkAm


Sunday, May 22, 2011

World’s smallest 3-D printer

May 20, 2011

A very small and light 3-D printer prototype has been developed by researchers at the Vienna University of Technology.

The prototype is designed for use at home, using Internet blueprints to print custom 3-D objects. The desired object is printed in a small tub filled with synthetic resin.

The resin hardens under the illumination of intense beams of light. The synthetic resin is irradiated layer-by-layer at exactly the right spots. When one layer hardens, the next layer (a 20th of a millimeter thick) can be attached to it, until the object is completed — a method called “additive manufacturing technology.”

This allows the printer to produce complicated geometrical objects with an intricate inner structure not possible using casting techniques, the researchers said.

The printer can be used for applications that require extraordinary precision — such as construction parts for hearing aids. The printer uses light emitting diodes to create the high intensity light.

The prototype is the size of a carton of milk, weighs 1.5 kilograms, and is currently priced at €1,200 ($1,717).

Source: http://goo.gl/mXz8d




Thursday, May 19, 2011

 3-D cloaking achieved for visible light
 
May 19, 2011

Karlsruhe Institute of Technology Center for Functional Nanostructures (CFN) researchers have created the first 3-D invisibility cloak for ordinary (non-polarized) visible light, limited at this time to the 700 nm (red) range.

Cloaking was first developed in 2006 for the microwave range, and has more recently been extended to the infrared (IR) range. In 2010, CFN researchers presented the first 3-D invisibility cloak in the journal Science, limited to infrared wavelengths. Previous devices were able to hide objects from light traveling in only one direction; viewed from any other angle, the object would remain visible.

How it works

An invisibility cloak is basically a simple magic trick, using a waveguide that guides incident light waves out of the cloak to make it appear that the light waves never actually came in contact with the object. The waveguide is made of metamaterials — materials with special optical properties created by using a photolithographic method called “direct laser writing,” in the case of CFN.

But the particles of the metamaterial have to be smaller than the wavelength of the light that is to be deflected — that’s the challenge. For the cloak to be invisible at wavelengths visible to human beings, the metamaterial particle size needs to be in the range of 700 nanometers (for red light) or less.

If CFN can get the metamaterial particle dimensions down below 400 nm, invisibility in white light can be achieved. At that point, we are in the realm of Harry Potter’s invisibility cloak.

Practical applications of the research include flat, aberration-free lenses for use in integrated optical chips and optical “black holes” for concentrating and absorbing light in solar cells.

Source: http://goo.gl/j5me9


 Sharing Information Corrupts Wisdom of Crowds

By Brandon Keim

When people can learn what others think, the wisdom of crowds may veer towards ignorance.

In a new study of crowd wisdom — the statistical phenomenon by which individual biases cancel each other out, distilling hundreds or thousands of individual guesses into uncannily accurate average answers — researchers told test participants about their peers’ guesses. As a result, their group insight went awry.

“Although groups are initially ‘wise,’ knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines” collective wisdom, wrote researchers led by mathematician Jan Lorenz and sociologist Heiko Rauhut of Switzerland’s ETH Zurich, in Proceedings of the National Academy of Sciences on May 16. “Even mild social influence can undermine the wisdom of crowd effect.”

The effect — perhaps better described as the accuracy of crowds, since it best applies to questions involving quantifiable estimates — has been described for decades, beginning with Francis Galton’s 1907 account of fairgoers guessing an ox’s weight. It reached mainstream prominence with economist James Surowiecki’s 2004 bestseller, The Wisdom of Crowds.

As Surowiecki explained, certain conditions must be met for crowd wisdom to emerge. Members of the crowd ought to have a variety of opinions, and to arrive at those opinions independently.

Take those away, and crowd intelligence fails, as evidenced in some market bubbles. Computer modeling of crowd behavior also hints at dynamics underlying crowd breakdowns, with the balance between information flow and diverse opinions becoming skewed.

Lorenz and Rauhut’s experiment fits between large-scale, real-world messiness and theoretical investigation. They recruited 144 students from ETH Zurich, sitting them in isolated cubicles and asking them to guess Switzerland’s population density, the length of its border with Italy, the number of new immigrants to Zurich and how many crimes were committed in 2006.

After answering, test subjects were given a small monetary reward based on their answer’s accuracy, then asked again. This proceeded for four more rounds. Some students never learned what their peers guessed; others were told.

As testing progressed, the average answers of independent test subjects became more accurate, in keeping with the wisdom-of-crowds phenomenon. Socially influenced test subjects, however, actually became less accurate.

The researchers attributed this to three effects. The first they called “social influence”: Opinions became less diverse. The second effect was “range reduction”: In mathematical terms, correct answers became clustered at the group’s edges. Exacerbating it all was the “confidence effect,” in which students became more certain about their guesses.
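
The diversity-narrowing effect is easy to reproduce in a toy simulation. The sketch below uses an invented guess distribution and an invented update rule, so it mimics the qualitative finding rather than the study's data.

import numpy as np

rng = np.random.default_rng(42)
truth = 100.0
# Noisy, skewed individual guesses: the median lands near the truth.
guesses = truth * rng.lognormal(0.0, 0.5, size=1000)
print("independent crowd median:", round(np.median(guesses), 1))

# Social influence: each round, everyone moves part-way toward the group mean.
influenced = guesses.copy()
for round_ in range(5):
    influenced += 0.5 * (influenced.mean() - influenced)

# Opinions converge on the (biased) mean: diversity collapses and the
# median drifts away from the truth, so "the truth becomes less central".
print("influenced crowd median: ", round(np.median(influenced), 1))
print("opinion spread before/after:", round(guesses.std(), 1), "/",
      round(influenced.std(), 1))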

“The truth becomes less central if social influence is allowed,” wrote Lorenz and Rauhut, who think this problem could be intensified in markets and politics — systems that rely on collective assessment.

“Opinion polls and the mass media largely promote information feedback and therefore trigger convergence of how we judge the facts,” they wrote. The wisdom of crowds is valuable, but used improperly it “creates overconfidence in possibly false beliefs.”

Source: http://goo.gl/iYSsv


 Treating chronic low back pain can reverse abnormal brain activity

May 19, 2011

Treatment of chronic low back pain can reverse abnormal brain activity and function, researchers at McGill University and the McGill University Health Centre (MUHC) have determined.

Low back pain is the most common form of chronic pain among adults. Individuals with chronic pain also experience cognitive impairments, reduced gray matter, depression, and anxiety.

The researchers asked the question: If you can alleviate chronic low back pain, can you reverse these changes in the brain?

The team recruited patients who had low back pain for more than six months and who planned on undergoing treatment for spinal injections or spinal surgery. MRI scans were conducted on each subject before and six months after their procedures.

The scans measured the cortical thickness of the brain, as well as brain activity while the subjects performed a simple cognitive task. The researchers found increased cortical thickness in specific areas of the brain related to pain reduction and physical disability.

After treatment, the abnormal brain activity that was observed initially during an attention-demanding cognitive task was found to have normalized.

Source: http://goo.gl/Fnev8


 Bionic hand for 'elective amputation' patient

By Neil Bowdler

An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic limb.

The patient, called "Milo", aged 26, lost the use of his right hand in a motorcycle accident a decade ago.

After his stump heals in several weeks' time, he will be fitted with a bionic hand which will be controlled by nerve signals in his own arm.

The surgery is the second such elective amputation to be performed by Viennese surgeon Professor Oskar Aszmann.

The patient, a Serbian national who has lived in Austria since childhood, suffered injuries to a leg and shoulder when he skidded off his motorcycle and smashed into a lamppost in 2001 while on holiday in Serbia.

While the leg healed, what is called a "brachial plexus" injury to his right shoulder left his right arm paralysed. Nerve tissue transplanted from his leg by Professor Aszmann restored movement to his arm but not to his hand.

A further operation involving the transplantation of muscle and nerve tissue into his forearm also failed to restore movement to the hand, but it did at least boost the electric signals being delivered from his brain to his forearm, signals that could be used to drive a bionic hand.

Then three years ago, Milo was asked whether he wanted to consider elective amputation.

"The operation will change my life. I live 10 years with this hand and it cannot be (made) better. The only way is to cut this down and I get a new arm," Milo told BBC News prior to his surgery at Vienna's General Hospital.

Milo took the decision after using a hybrid hand fitted parallel to his dysfunctional hand with which he could experience controlling a prosthesis.

Such bionic hands, manufactured by the German prosthetics company Otto Bock, can pinch and grasp in response to signals from the brain that are picked up by two sensors placed over the skin above nerves in the forearm.

In effect, the patient controls the hand using the same brain signals that would have once powered similar movements in the real hand.

The wrist of the prosthesis can be rotated manually using the patient's other functioning hand (if the patient has one).

Source: http://goo.gl/r8V6S


 A Blood Test Offers Clues to Longevity

By ANDREW POLLACK

Want to know how long you will live?

Blood tests that seek to tell people their biological age — possibly offering a clue to their longevity or how healthy they will remain — are now going on sale.

But contrary to various recent media reports, the tests cannot specify how many months or years someone can expect to live. Some experts say the tests will not provide any useful information.

The tests measure telomeres, which are structures on the tips of chromosomes that shorten as people age. Various studies have shown that people with shorter telomeres in their white blood cells are more likely to develop illnesses like cancer, heart disease and Alzheimer’s disease, or even to die earlier. Studies in mice have suggested that extending telomeres lengthens lives.

Seizing on that, laboratories are beginning to offer tests of telomere length, setting off a new debate over what genetic tests should be offered to the public and what would be the ethical implications if the results were used by employers or others.

Some of the laboratories offering the tests emphasize that the results are merely intended to raise a warning flag.

“We see it as a kind of wake-up call for the patient and the clinician to say, ‘You know, you’re on a rapidly aging path,’ ” said Otto Schaefer, vice president for sales and marketing at SpectraCell Laboratories in Houston, which offers a test for $290.

A company in Spain, provocatively named Life Length, has begun selling a test for 500 euros ($712) that it says can tell people their biological age, which may not correspond to their chronological age.

Another company, Telome Health of Menlo Park, Calif., plans to begin offering a test later this year for about $200. It was co-founded by Elizabeth H. Blackburn of the University of California, San Francisco, who shared a Nobel Prize in 2009 for discoveries related to telomeres.

Calvin B. Harley, the chief scientific officer at Telome Health, said the test would be akin to a car’s dashboard signal, a “check engine light.” He compared it with a cholesterol test, but more versatile since it can predict a risk of various illnesses, not just heart attacks.

But among the critics of such tests is Carol Greider, a molecular biologist at Johns Hopkins University, who was a co-winner of the Nobel Prize with Dr. Blackburn.

Dr. Greider acknowledged that solid evidence showed that the 1 percent of people with the shortest telomeres were at an increased risk of certain diseases, particularly bone marrow failure and pulmonary fibrosis, a fatal scarring of the lungs. But outside of that 1 percent, she said, “The science really isn’t there to tell us what the consequences are of your telomere length.”

Dr. Greider said that there was great variability in telomere length. “A given telomere length can be from a 20-year-old or a 70-year-old,” she said. “You could send me a DNA sample and I couldn’t tell you how old that person is.”

Dr. Peter Lansdorp, a telomere expert at the British Columbia Cancer Agency, also had doubts. “If telomeres are short for you or me, what does it mean?” he said. Dr. Lansdorp started a company, Repeat Diagnostics, which conducts telomere testing for medical researchers only.

Recent media reports speculated on the tests and their possible implications, including ethical problems.

“You could imagine insurance companies wanting this knowledge to set rates or deny coverage,” said Jerry W. Shay, a professor of cell biology at the University of Texas Southwestern Medical Center in Dallas, who is an adviser to Life Length.

Test vendors say the speculation is running wild.

“It doesn’t mean we will tell anyone how long they will live,” said María Blasco, a co-founder of Life Length and a molecular biologist at the Spanish National Cancer Research Center in Madrid. Even if a 50-year-old has the telomere length more typical of a 70-year-old, she said, “This doesn’t mean your whole body is like a 70-year-old person’s body.”

Still, she said, “We think it can be helpful to people who are especially keen on knowing how healthy they are.”

Generally, tests offered by a single laboratory do not have to be approved by the Food and Drug Administration. But the F.D.A. has been cracking down recently on some tests offered to the public, saying they may need approval. The F.D.A. said in a statement Wednesday that it was aware of the tests and had not come to any conclusions.

Executives at both Telome Health and Life Length say they will require a doctor to be involved in ordering the test, though SpectraCell said it allowed individuals to order the test.

Telomeres are stretches of DNA linked to certain proteins that are at the ends of chromosomes. They are often likened to the caps at the end of shoelaces. Each time a cell divides, the telomeres get shorter. Eventually, the telomeres get so short that the cell can no longer divide. It enters a state of senescence or dies.
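
As a back-of-the-envelope illustration of that mechanism, the toy loop below counts divisions until a telomere crosses a critical length. The numbers are illustrative placeholders, not clinical values.

telomere_bp = 10_000        # starting telomere length in base pairs
loss_per_division = 70      # base pairs lost each time the cell divides
critical_bp = 4_000         # below this, the cell stops dividing (senescence)

divisions = 0
while telomere_bp > critical_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(f"cell senesced after {divisions} divisions")   # ~86 divisions with these numbers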

One study in Utah, using blood samples from 143 elderly people collected in the 1980s, found that those with shorter telomeres were almost twice as likely to die in the ensuing years as those with longer ones.

Another study, published in The Journal of the American Medical Association last July, followed 787 people in Italy, all initially free of cancer. Those with the shortest telomeres had three times the risk of developing cancers in the next 10 years as those with the longest telomeres.

Still, not all studies have found such strong correlations. In any case, correlations do not prove that the shorter telomeres are causing the problems, although experts say some animal and cell studies do suggest causality.

Some say that the telomere test might not tell people much that cannot be learned in other ways.

“You can pretty much look at people and determine their biological age,” said Michael West, who founded Geron, the biotechnology company that sponsored and conducted some important research on telomeres. He now runs BioTime, another biotechnology company.

It is also unclear what to do about short telomeres. At the moment, there is no drug that can lengthen telomeres, though researchers are working on drugs and stem cell therapies.

There is some evidence, however, that stress is associated with shorter telomeres and that stress relief, exercise or certain nutrients such as omega-3 fatty acids might at least slow the decline in telomere length. But healthy lifestyles are already recommended for people without having to know their telomere length.

There are also disputes about how to measure telomeres. Life Length says its technique, while more expensive, can detect not only average telomere length but the shortest telomeres in cells. The shortest telomeres cause the health problems, said Dr. Shay, the adviser to Life Length.

Telome Health and SpectraCell use a DNA amplification technique called polymerase chain reaction, or P.C.R., which is cheaper but provides only an average length. And there are some questions about the accuracy.

Dr. Harley of Telome said the P.C.R. test was more relevant because virtually all the studies correlating telomere length with disease had used that test.

For those wanting to know how long they might live, there are already some indexes that are used by geriatricians to estimate the chances of a patient dying in anywhere from six months to nine years. A patient with a short expected lifespan, for instance, might no longer need to undergo annual screening for cancer.

These estimates rely on factors such as a person’s age, gender, smoking history, whether they have certain diseases, and whether they can perform certain functions, like walking several blocks, pushing an armchair or managing their finances.

Dr. Sei Lee, an assistant professor at the University of California, San Francisco who developed a test that estimates the probability of dying within four years, said he was not sure how much telomere length testing would add. “The chance of any single factor being a great predictor is probably low,” he said.

Source: http://goo.gl/ukFgt


 The search for ET continues — in West Virginia

Now that NASA’s Kepler space telescope has identified 1,235 possible planets around stars in our galaxy, astronomers at the University of California, Berkeley, are aiming a radio telescope — the 100 meter Robert C. Byrd Green Bank Telescope, the largest steerable radio telescope in the world — at the most Earth-like of these worlds to see if they can detect signals from an advanced civilization.

“The SETI Institute is also checking out the most attractive of the new worlds discovered by Kepler, in hopes of discovering that some might shelter — not just life — but technically sophisticated life,” SETI Institute senior astronomer Seth Shostak told KurzweilAI. “We do this over a very wide range of radio frequencies, and with the ability to immediately check out any interesting signals. The real excitement, of course, is that SETI practitioners now have some very compelling directions in which to aim their antennas. Really, it’s like trying to discover Antarctica two centuries ago: the chances improve when everyone aims their ships toward the south!”

The search began a week ago, on May 8, when astronomers dedicated an hour to eight stars with candidate planets in each star’s habitable zone (planets with a surface temperature between zero and 100 degrees Celsius, so liquid water could be maintained, making them likely to harbor life).

They plan to acquire 24 hours of data on a total of 86 Earth-like planets, do a coarse analysis, and then, in about two months, ask an estimated 1 million users of SETI@home (the world’s largest distributed computer) to conduct a more detailed analysis on their home computers.

The Kepler Mission uses a satellite to watch for the small dip in light received from a star when an orbiting planet passes between our line of sight with the star.
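
The dip is tiny because its depth is roughly the square of the planet-to-star radius ratio, which is why Earth-size candidates demand such precise photometry. A quick check:

# Approximate transit depth for an Earth-size planet crossing a Sun-like star:
# fractional dip in starlight ~ (planet radius / star radius) squared.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"transit depth: {depth:.2e}")   # ~8.4e-05, i.e. a dip of less than 0.01%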

So far, around 1300 planet candidates have been found by the Kepler Satellite.

The 86 stars were chosen from the 1,235 candidate planetary systems, called Kepler Objects of Interest, or KOIs. The targets include the 54 KOIs identified by the Kepler team as being in the habitable temperature range and with sizes ranging from Earth-size to larger than Jupiter.

There are also 10 KOIs not on the Kepler team’s habitable list but with orbits less than three times Earth’s orbit and orbital periods greater than 50 days, and systems with four or more possible planets.

After the Green Bank telescope has targeted each star, it will scan the entire Kepler field for signals from planets other than the 86 targets. “If you extrapolate from the Kepler data, there could be 50 billion planets in the galaxy,” physicist Dan Werthimer, chief scientist for SETI@home, said. “It’s really exciting to be able to look at this first batch of Earth-like planets.”

Why the Green Bank Telescope?

Werthimer conducted a brief SETI project using the Allen Telescope Array (ATA), which hosted a broader search for intelligent signals from space run by the SETI Institute. That search ended last month after the institute and UC Berkeley ran out of money to operate it. Now the action has moved to West Virginia.

So why not use Arecibo? Because the Arecibo dish can’t view the area of the northern sky on which Kepler focuses. Arecibo also has limited frequency coverage (it centers on the 21 centimeter or 1420 MHz “water hole,” a natural window in which water-based life forms would signal their existence, since its wavelengths easily pass through the dust clouds that obscure much of the galaxy).

“With a new data recorder on the Green Bank telescope, we can scan an 800 megahertz range of frequencies simultaneously, which is 300 times the range we can get at Arecibo,” said Werthimer. One day on the Green Bank telescope provides as much data as one year’s worth of observations at Arecibo: about 60 terabytes.

Source: http://goo.gl/sfEFM


 THE SINGULARITY INSTITUTE FOR ARTIFICIAL INTELLIGENCE

Reducing long-term catastrophic risks from artificial intelligence

In 1965, the eminent statistician I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind [3]. Good called this process an "intelligence explosion," while later authors have used the terms "technological singularity" or simply "the Singularity" [10] [21].

        The Singularity Institute aims to reduce the risk of a catastrophe, should such an event eventually occur. Our activities include research, education, and conferences. In this document, we provide a whirlwind introduction to the case for taking AI risks seriously, and suggest some strategies to reduce those risks.


What We're (Not) About

The Singularity Institute is interested in the advent of smart, cross-domain, human-plus-equivalent, self-improving Artificial Intelligence. We do not forecast any particular time when such AI will be developed. We are interested in analyzing points of leverage for increasing the probability that the advent of AI turns out positive. We do not see ourselves as having the job of foretelling that it will go well or poorly - if the outcome were predetermined there would be no point in trying to intervene. We suspect that AI is primarily a software problem which will require new insight, not a hardware problem which will fall to Moore's Law. We are interested in rational analyses which try to support each point of claimed detail, as opposed to storytelling in which many interesting details are invented but not separately supported.


Indifference, not malice

 

Anthropomorphic ideas of a "robot rebellion," in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses [13] [14]. Superintelligent AIs with real-world traction, such as access to pervasive data networks and autonomous robotics, could radically alter their environment, e.g., by harnessing all available solar, chemical, and nuclear energy. If such AIs found uses for free energy that better furthered their goals than supporting human life, human survival would become unlikely.

        Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal [13]. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans [1]. Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.

 

An intelligence explosion may be sudden

 

The pace of an intelligence explosion depends on two conflicting pressures: each improvement in AI technology increases the ability of AIs to research more improvements, while the depletion of low-hanging fruit makes subsequent improvements more difficult. The rate of improvement is hard to estimate, but several factors suggest it would be high. The predominant view in the AI field is that the bottleneck for powerful AI is software, rather than hardware, and continued rapid hardware progress is expected in coming decades [4]. If and when the software is developed, there may thus be a glut of hardware to run many copies of AIs, and to run them at high speeds, amplifying the effects of AI improvements [8]. As we have little reason to expect that human minds are ideally optimized for intelligence, as opposed to being the first intelligences sophisticated enough to produce technological civilization, there is likely to be further low-hanging fruit to pluck (after all, the AI would have been successfully created by a slower and smaller human research community). Given strong enough feedback, or sufficiently abundant hardware, the first AI with humanlike AI research abilities might be able to reach superintelligence rapidly—in particular, more rapidly than researchers and policy-makers can develop adequate safety measures.

 

Is concern premature?

 

The absence of a clear picture of how to build AI means that we cannot assign high confidence to the development of AI in the next several decades. It also makes it difficult to rule out unforeseen advances. Past underestimates of the AI challenge (perhaps most infamously, those made for the 1956 Dartmouth Conference) [12] do not guarantee that AI will never succeed, and we need to take into account both repeated discoveries that the problem is more difficult than expected, and incremental progress in the field. Advances in AI and machine learning algorithms [17], increasing R&D expenditures by the technology industry, hardware advances that make computation-hungry algorithms feasible [4], enormous datasets [5], and insights from neuroscience give advantages that past researchers lacked. Given the size of the stakes and the uncertainty about AI timelines, it seems best to allow for the possibility of medium-term AI development in our safety strategies.

Friendly AI

 

Concern about the risks of future AI technology has led some commentators, such as Sun co-founder Bill Joy, to suggest the global regulation and restriction of such technologies [9]. However, appropriately designed AI could offer similarly enormous benefits. More specifically, human ingenuity is currently a bottleneck in making progress on many key challenges affecting our collective welfare: eradicating diseases, averting long-term nuclear risks, and living richer, more meaningful lives. Safe AI could help enormously in meeting each of these challenges. Further, the prospect of those benefits along with the competitive advantages from AI would make a restrictive global treaty very difficult to enforce.

          SIAI's primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations which are reliably stable under self-improvement, what we call “Friendly AI” [22].

          To very quickly summarize some of the key ideas in Friendly AI:

 

1. We can't make guarantees about the final outcome of an agent's interaction with the environment, but we may be able to make guarantees about what the agent is trying to do, given its knowledge—we can't determine that Deep Blue will win against Kasparov just by inspecting Deep Blue, but an inspection might reveal that Deep Blue searches the game tree for winning positions rather than losing ones.


2. Since code executes on the almost perfectly deterministic environment of a computer chip, we may be able to make very strong guarantees about an agent's motivations (including how that agent rewrites itself), even though we can't logically prove the outcomes of environmental strategies. This is important, because if the agent fails on an environmental strategy, it can update its model of the world and try again; but during self-modification, the AI may need to implement a million code changes, one after the other, without any of them being catastrophic.

 

3. If Gandhi doesn't want to kill people, and someone offers Gandhi a pill that will alter his brain to make him want to kill people, and Gandhi knows this is what the pill does, then Gandhi will very likely refuse to take the pill. Most utility functions should be trivially stable under reflection—provided that the AI can correctly project the result of its own self-modifications. Thus, the problem of Friendly AI is not creating an extra conscience module that constrains the AI despite its preferences, but reaching into the enormous design space of possible minds and selecting an AI that prefers to be Friendly.
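
That point can be rendered as a few lines of toy code. In the sketch below, the utility function and outcome table are invented for illustration; the point is only that the agent scores proposed self-modifications with its current values.

# Toy rendering of point 3: an agent evaluates a proposed self-modification
# with its *current* utility function, so a "pill" that rewrites its values
# is rejected. Purely illustrative; not any real AI design.
def current_utility(outcome: str) -> float:
    return {"people helped": 1.0, "people harmed": -1.0}.get(outcome, 0.0)

def predicted_outcome_after(modification: str) -> str:
    # The agent's (assumed accurate) projection of what each change leads to.
    return {"take pill that makes me want to harm": "people harmed",
            "improve my planning code": "people helped"}[modification]

for mod in ["take pill that makes me want to harm", "improve my planning code"]:
    verdict = "accept" if current_utility(predicted_outcome_after(mod)) > 0 else "reject"
    print(f"{verdict}: {mod}")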

 

4. Human terminal values are extremely complicated, although this complexity is not introspectively visible at a glance, for much the same reason that major progress in computer vision was once thought to be a summer's work. Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.

 

5. The explicit moral values of human civilization have changed over time, and we regard this change as progress, and extrapolate that progress may continue in the future. An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery. Static moral values are clearly undesirable, but most random changes to values will be even less desirable—every improvement is a change, but not every change is an improvement. Possible bootstrapping algorithms include "do what we would have told you to do if we knew everything you knew," "do what we would've told you to do if we thought as fast as you did and could consider many more possible lines of moral argument," and "do what we would tell you to do if we had your ability to reflect on and modify ourselves."  In moral philosophy, this notion of moral progress is known as reflective equilibrium [15].

 

Seeding research programs

 

As we get closer to advanced AI, it will be easier to learn how to reduce risks effectively. The interventions to focus on today are those whose benefits will compound over time: lines of research that can guide other choices or that entail much incremental work. Some possibilities include:

 

Friendly AI: Theoretical computer scientists can investigate AI architectures that self-modify while retaining stable goals. Theoretical toy systems exist now: Gödel machines make provably optimal self-improvements given certain assumptions [19]. Decision theories are being proposed that aim to be stable under self-modification [2]. These models can be extended incrementally into less idealized contexts.

 

Stable brain emulations: One conjectured route to safe AI starts with human brain emulation. Neuroscientists can investigate the possibility of emulating the brains of individual humans with known motivations, while evolutionary theorists can investigate methods to prevent dangerous evolutionary dynamics and social scientists can investigate social or legal frameworks to channel the impact of emulations in positive directions [18].

 

Models of AI risks:  Researchers can build models of AI risks and of AI growth trajectories, using tools from game theory, evolutionary analysis, computer security, or economics [1] [6] [8] [14] [22]. If such analysis is done rigorously it can help to channel the efforts of scientists, graduate students, and funding agencies to the areas with the greatest potential benefits.


Institutional improvements: Major technological risks are ultimately navigated by society as a whole: success requires that society understand and respond to scientific evidence. Knowledge of the biases that distort human thinking around catastrophic risks [23], improved methods for probabilistic forecasting [16] or risk analysis [11], and methods for identifying and aggregating expert opinions [7] can all improve our collective odds. So can methods for international cooperation around AI development, and for avoiding an AI “arms race” that might be won by the competitor most willing to trade off safety measures for speed [20].

Our aims

We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling.
        We have laid groundwork already: (a) seed research about catastrophic AI risks and AI safety technologies; (b) human capital; and (c) programs that engage outside research talent, including our annual Singularity Summits and our Visiting Fellows program.
        Going forward, we plan to continue our recent growth by scaling up our visiting fellows program, extending the Singularity Summits and similar academic networking, and writing further papers to seed the above research programs, in-house or with the best outside talent we can find. We welcome potential co-authors, Visiting Fellows, and other collaborators, as well as any suggestions or cost-benefit analyses on how to reduce catastrophic AI risk.


The upside and downside of artificial intelligence

 

Human intelligence is the most powerful known biological technology, with a discontinuous impact upon the planet compared to past organisms. But our place in history probably rests, not on our being the smartest intelligences that could exist, but rather, on being the first intelligences that did exist. We probably are to intelligence what the first replicator was to biology. The first single-stranded RNA capable of copying itself was nowhere near being an ultra-sophisticated replicator—but it still had an important place in history, due to being first.

            The future of intelligence is—hopefully—very much greater than its past. The origin and shape of human intelligence may end up playing a critical role in the origin and shape of future civilizations on a much larger scale than one planet. And the origin and shape of the first self-improving Artificial Intelligences humanity builds, may have a similarly strong impact, for similar reasons. It is the values of future intelligence that will shape future civilization. What stands to be won or lost is the values of future intelligences, and thus the value of future civilization.   

Source: http://goo.gl/J2lGc



Tuesday, May 17, 2011

 Control Desk for the Neural Switchboard

By CARL E. SCHOONOVER and ABBY RABINOWITZ

Treating anxiety no longer requires years of pills or psychotherapy. At least, not for a certain set of bioengineered mice.

In a study recently published in the journal Nature, a team of neuroscientists turned these high-strung prey into bold explorers with the flip of a switch.

The group, led by Dr. Karl Deisseroth, a psychiatrist and researcher at Stanford, employed an emerging technology called optogenetics to control electrical activity in a few carefully selected neurons.

First they engineered these neurons to be sensitive to light. Then, using implanted optical fibers, they flashed blue light on a specific neural pathway in the amygdala, a brain region involved in processing emotions.

And the mice, which had been keeping to the sides of their enclosure, scampered freely across an open space.

While such tools are very far from being used or even tested in humans, scientists say optogenetics research is exciting because it gives them extraordinary control over specific brain circuits — and with it, new insights into an array of disorders, among them anxiety and Parkinson’s disease.

Mice are very different from humans, as Dr. Deisseroth (pronounced DICE-er-roth) acknowledged. But he added that because “the mammalian brain has striking commonalities across species,” the findings might lead to a better understanding of the neural mechanisms of human anxiety.

David Barlow, founder of the Center for Anxiety and Related Disorders at Boston University, cautions against pushing the analogy too far: “I am sure the investigators would agree that these complex syndromes can’t be reduced to the firing of a single small neural circuit without considering other important brain circuits, including those involved in thinking and appraisal.”

But a deeper insight is suggested by a follow-up experiment in which Dr. Deisseroth’s team directed their light beam just a little more broadly, activating more pathways in the amygdala. This erased the effect entirely, leaving the mouse as skittish as ever.

This implies that current drug treatments, which are far less specific and often cause side effects, could also in part be working against themselves.
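
One way to picture why broader activation could erase the benefit, and why an unselective drug might partly work against itself, is to treat opposing pathways as effects that sum. This is a purely illustrative model; the article reports only the observation, and the pathway names and numbers below are invented for the sketch:

```python
# Illustrative toy model (not from the study): the net behavioral effect
# is the sum of the effects of whichever amygdala pathways a stimulus
# reaches. All names and values are made up.

PATHWAY_EFFECT = {
    "anxiolytic_pathway": -1.0,   # activating it reduces anxiety
    "opposing_pathway":  +1.0,    # activating it increases anxiety
}

def net_effect(activated):
    """Sum the effects of all activated pathways."""
    return sum(PATHWAY_EFFECT[p] for p in activated)

# A narrow light beam reaches only the anxiolytic pathway:
print(net_effect(["anxiolytic_pathway"]))                      # -1.0: bolder mice
# A broader beam (or an unselective drug) reaches both pathways,
# and the effects cancel, as in the follow-up experiment:
print(net_effect(["anxiolytic_pathway", "opposing_pathway"]))  #  0.0: still skittish
```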

David Anderson, a professor of biology at the California Institute of Technology who also does research using optogenetics, compares the drugs’ effects to a sloppy oil change. If you dump a gallon of oil over your car’s engine, some of it will dribble into the right place, but a lot of it will end up doing more harm than good.

“Psychiatric disorders are probably not due only to chemical imbalances in the brain,” Dr. Anderson said. “It’s more than just a giant bag of serotonin or dopamine whose concentrations sometimes are too low or too high. Rather, they likely involve disorders of specific circuits within specific brain regions.”

So optogenetics, which can focus on individual circuits with exceptional precision, may hold promise for psychiatric treatment. But Dr. Deisseroth and others caution that it will be years before these tools are used on humans, if ever.

For one, the procedure involves bioengineering that most people would think twice about. First, biologists identify an “opsin,” a protein found in photosensitive organisms like pond scum that allows them to detect light. Next, they fish out the opsin’s gene and insert it into a neuron within the brain, using viruses that have been engineered to be harmless —“disposable molecular syringes,” as Dr. Anderson calls them.

There, the opsin DNA becomes part of the cell’s genetic material, and the resulting opsin proteins conduct electric currents — the language of the brain — when they are exposed to light. (Some opsins, like channelrhodopsin, which responds to blue light, activate neurons; others, like halorhodopsin, activated by yellow light, silence them.)

Finally, researchers delicately thread thin optical fibers down through layers of nervous tissue and deliver light to just the right spot.
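
To make the activate-or-silence logic concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron under light pulses. The opsin names come from the article; every parameter value, function, and number below is an illustrative assumption, not taken from the study:

```python
# Toy sketch of the opsin logic described above: a leaky integrate-and-fire
# neuron whose input current depends on which opsin it expresses and which
# color of light is on. All values are illustrative.

DT = 1.0                                            # time step, ms
V_REST, V_THRESH, V_RESET = -70.0, -50.0, -70.0     # membrane potentials, mV
TAU = 20.0                                          # membrane time constant, ms

# Photocurrents (arbitrary units): channelrhodopsin depolarizes under blue
# light; halorhodopsin hyperpolarizes under yellow light.
PHOTOCURRENT = {
    ("channelrhodopsin", "blue"): +3.0,
    ("halorhodopsin", "yellow"): -3.0,
}

def simulate(opsin, light_at, baseline_drive=0.0, steps=300):
    """Return spike times (ms) for a neuron expressing `opsin`.
    `light_at(t)` returns the light color shining at time t, or None."""
    v, spikes = V_REST, []
    for step in range(steps):
        t = step * DT
        i_photo = PHOTOCURRENT.get((opsin, light_at(t)), 0.0)
        # Leak toward rest, plus baseline input and the photocurrent.
        v += DT / TAU * (V_REST - v) + baseline_drive + i_photo
        if v >= V_THRESH:
            spikes.append(t)
            v = V_RESET
    return spikes

pulse_blue = lambda t: "blue" if 100 <= t < 200 else None
pulse_yellow = lambda t: "yellow" if 100 <= t < 200 else None

# A quiet neuron fires only while blue light opens channelrhodopsin:
print("activated:", simulate("channelrhodopsin", pulse_blue))
# A tonically active neuron falls silent while yellow light drives
# halorhodopsin:
print("silenced: ", simulate("halorhodopsin", pulse_yellow, baseline_drive=1.2))
```

In this toy picture, the light is the cause of the spikes rather than a way of watching them, which is the distinction the researchers draw below. The sketch also omits everything circuit-level, which is exactly what the follow-up experiment above showed matters: reaching additional pathways erased the behavioral effect.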

Thanks to optogenetics, neuroscientists can go beyond observing correlations between the activity of neurons and an animal’s behavior; by turning particular neurons on or off at will, they can prove that those neurons actually govern the behavior.

“Sometimes before I give talks, people will ask me about my ‘imaging’ tools,” said Dr. Deisseroth, 39, a practicing psychiatrist whose dissatisfaction with current treatments led him to form a research laboratory in 2004 to develop and apply optogenetic technology.

“I say: ‘Interestingly, it’s the complete opposite of imaging, which is observational. We’re not using light to observe events. We’re sending light in to cause events.’ ”

In early experiments, scientists showed that they could make worms stop wiggling and drive mice around in manic circles as if by remote control.

Now that the technique has earned its stripes, laboratories around the world are using it to better understand how the nervous system works, and to study problems including chronic pain, Parkinson’s disease and retinal degeneration.

Some of the insights gained from these experiments in the lab are already inching their way to the clinic.

Dr. Amit Etkin, a Stanford psychiatrist and researcher who collaborates with Dr. Deisseroth, is trying to translate the findings about anxiety in rodents to improve human therapy with existing tools. Using transcranial magnetic stimulation, a technique that is far less specific than optogenetics but has the advantage of being noninvasive, Dr. Etkin seeks to activate the human analog of the amygdala circuitry that reduced anxiety in Dr. Deisseroth’s mice.

Dr. Jaimie Henderson, their colleague in the neurosurgery department, has treated more than 600 Parkinson’s patients using a standard procedure called deep brain stimulation. The treatment, which requires implanting metal electrodes in a brain region called the subthalamic nucleus, improves coordination and fine motor control. But it also causes side effects, like involuntary muscle contractions and dizziness, perhaps because turning on electrodes deep inside the brain also activates extraneous circuits.

“If we could find a way to just activate the circuits that provide therapeutic benefit without the ones that cause side effects, that would obviously be very helpful,” Dr. Henderson said.

Moreover, as with any invasive brain surgery, implanting electrodes carries the risk of infection and life-threatening hemorrhage. What if you could stimulate the brain’s surface instead? A new theory of how deep brain stimulation affects Parkinson’s symptoms, based on optogenetics work in rodents, suggests that this might succeed.

Dr. Henderson has recently begun clinical tests in human patients, and hopes that this approach may also treat other problems associated with Parkinson’s, like speech disorders.

In the building next door, Krishna V. Shenoy, a neuroscience researcher, is bringing optogenetics to work on primates. Extending the success of a similar effort by an M.I.T. group led by Robert Desimone and Edward S. Boyden, he recently inserted opsins into the brains of rhesus monkeys. The monkeys experienced no ill effects from the viruses or the optical fibers, and the team was able to control selected neurons using light.

Dr. Shenoy, who is part of an international effort financed by the Defense Advanced Research Projects Agency, says optogenetics has promise for new devices that could eventually help treat traumatic brain injury and equip wounded veterans with neural prostheses.

“Current systems can move a prosthetic arm to a cup, but without an artificial sense of touch it’s very difficult to pick it up without either dropping or crushing it,” he said. “By feeding information from sensors on the prosthetic fingertips directly back into the brain using optogenetics, one could in principle provide a high-fidelity artificial sense of touch.”

Some researchers are already imagining how optogenetics-based treatments could be used directly on people if the biomedical challenge of safely delivering novel genes to patients can be overcome.

Dr. Boyden, who participated in the early development of optogenetics, runs a laboratory dedicated to creating and disseminating ever more powerful tools. He pointed out that light, unlike drugs and electrodes, can switch neurons off — or as he put it, “shut an entire circuit down.” And shutting down overexcitable circuits is just what you’d want to do to an epileptic brain.

“If you want to turn off a brain circuit and the alternative is surgical removal of a brain region, optical fiber implants might seem preferable,” Dr. Boyden said. Several labs are working on the problem, even if actual applications still seem far off.

For Dr. Deisseroth, who treats patients with autism and depression, optogenetics offers a more immediate promise: easing the stigma faced by people with mental illness, whose appearance of physical health can cause incomprehension from family members, friends and doctors.

“Just understanding for us, as a society, that someone who has anxiety has a known or knowable circuitry difference is incredibly valuable,” he said.

Source: http://goo.gl/UbbTU
