QUANTA

Thursday, September 29, 2011

Einstein bounces back: doubt cast on one theory, another fundamental one confirmed

Poor old Einstein has had a rough few days.

It all began with an experiment last week that, bizarrely, found that sub-atomic particles called neutrinos appear to move faster than the speed of light.

The finding was a shock.

The speed of light was enshrined in 1905 by Einstein as the Universe's speed limit. Today, physicists almost everywhere accept it as such. Could the great man have got it terribly wrong?

But soon after this shadow fell across Einstein's reputation, another experiment came along which has validated - magnificently and on a cosmological scale - another of his landmark ideas.

According to Einstein's general theory of relativity, light emitted from stars and galaxies is slightly tugged by gravity from celestial bodies.

Danish astronomers have put the theory to the test by measuring light emitted by galactic "clusters".

These are regions of deep space packed with thousands of galaxies, held together by their own gravity. Their density and mass should thus have a perceptible gravitational effect on the light they emit.

University of Copenhagen cosmologist Radek Wojtak and colleagues analysed light from around 8000 of these clusters.

They were looking for variations in "redshift", a measure of how much light's wavelength has been stretched. As the Universe expands, light from a star becomes slightly redder as its wavelength lengthens, indicating a widening distance between the star and Earth.

Wojtak's team measured the wavelength of light from galaxies lying in the middle of the galactic clusters, where the gravitational pull is strongest, and from those lying on the more sparsely-populated periphery.

"We could measure small differences in the redshift of the galaxies and see that the light from galaxies in the middle of a cluster had to 'crawl' out through the gravitational field, while it was easier for the light from the outlying galaxies," said Wojtak.

They then measured the galaxy cluster's total mass to get a fix on its gravitational potential.

"The redshift of light is proportionately offset in relation to the gravitational influence from the galaxy cluster's gravity," said Wojtak.

"In that way, our observations confirm the theory of relativity."

The findings do not negate popular theories about dark matter and dark energy, the enigmatic phenomena that account for almost all of the matter and energy in the Universe.

Until now, Einstein's theory of the impact of gravity on light had only been tested from within the Solar System itself - essentially by measuring light from the Sun that was "redshifted" by the Sun's own gravitational pull.

On September 22, physicists reported that neutrinos can travel faster than light, a finding that - if verified - would blast a hole in Einstein's theory of special relativity.

In experiments conducted between the European Centre for Nuclear Research (CERN) in Switzerland and a laboratory in Italy, the particles arrived about 60 nanoseconds sooner than light would have over the roughly 730-kilometre journey, a speed excess of about 0.0025 percent, the researchers said.

The physicists themselves admitted they were quite flummoxed by the findings and other experts are skeptical, suggesting a problem in measurement techniques or equipment.

Wojtak's research is released on Wednesday by Nature, the British scientific journal.

Read more: http://goo.gl/kxM69


Wednesday, September 28, 2011

New ‘FeTRAM’ memory uses 99 percent less energy than flash memory

A new type of nonvolatile computer memory that could be faster than the existing commercial memory and use far less power than flash memory devices is being developed at Purdue University’s Birck Nanotechnology Center.

The FeTRAM (ferroelectric transistor random access memory) technology combines silicon nanowires with a ferroelectric polymer, a material that switches polarity when electric fields are applied, making possible a new type of ferroelectric transistor. It has the potential to use 99 percent less energy than flash memory.

The FeTRAM technology fulfills the three basic functions of computer memory: to write information, read the information and hold it for a long period of time.

The new technology is compatible with standard CMOS  chip technology and has the potential to replace conventional memory systems.

The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use, but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion. But unlike FeRAMS, the new technology allows for nondestructive readout, meaning information can be read without losing it, made possible by storing information using a ferroelectric transistor instead of the capacitor used in conventional FeRAMs.
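The readout difference is easiest to see in code. The toy Python classes below are purely conceptual, an assumption-laden caricature of the two cell types rather than a model of Purdue's device.

    # Conceptual toy model of destructive vs. nondestructive readout, as described above.
    # Class behaviour is an illustrative assumption, not device physics.

    class FeRAMCell:
        """Capacitor-based cell: reading destroys the stored polarization state."""
        def __init__(self, bit=0):
            self.bit = bit

        def read(self):
            value = self.bit
            self.bit = None          # destructive read: the state is lost...
            return value

        def restore(self, value):
            self.bit = value         # ...so every read must be followed by a write-back

    class FeTRAMCell:
        """Ferroelectric-transistor cell: the polarization gates the channel,
        so the state can be sensed without disturbing it."""
        def __init__(self, bit=0):
            self.bit = bit

        def read(self):
            return self.bit          # nondestructive read: no write-back needed

    cell_old, cell_new = FeRAMCell(1), FeTRAMCell(1)
    value = cell_old.read()
    cell_old.restore(value)          # the extra write-back costs time and energy
    print(value, cell_new.read())    # 1 1 -- the FeTRAM read needed no restore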

Read more: http://goo.gl/pHhwc

Intel Labs announces science and technology center focused on next generation of pervasive computing

The new Intel Science and Technology Center for Pervasive Computing will focus on developing pervasive computing applications for low-power sensing and communication, understanding human state and activities, and personalization and adaptation.

The center, located at the University of Washington, will explore task spaces that interact seamlessly with users by combining multiple cues such as a person’s context, gestures and voice, and that provide assistance through multiple output modes such as audio and projected imagery.

Read more: http://goo.gl/qzCIp

Plasmonic nanotweezers trap tightly without overheating

Harvard engineers have created “plasmonic nanotweezers” that use laser light to trap and study nanoparticles such as viruses more efficiently than optical tweezers, without overheating.

With conventional optical tweezers, a lens cannot focus the beam any smaller than about half the wavelength of the light; that focal-size limit caps the gradient force that can be generated, and the devices tend to overheat. The new design uses plasmonics for tighter focusing, along with silicon coated in copper and then gold, with raised gold pillars, acting as a heat sink.
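For a sense of scale, the half-wavelength limit mentioned above can be checked in a couple of lines of Python; the 1064 nm trapping wavelength is an assumed, typical value, not taken from the Harvard paper.

    # Diffraction-limited spot size for a conventional optical tweezer,
    # illustrating the "half the wavelength" limit quoted above.
    wavelength_nm = 1064                 # assumed: a common near-infrared trapping laser
    min_spot_nm = wavelength_nm / 2
    print(f"smallest focal spot ~ {min_spot_nm:.0f} nm "
          f"-- far larger than a ~100 nm virus, hence the plasmonic approach")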

Read more: http://goo.gl/q7ZZk

Is Quantum Computing real?
 
Researchers have been working on quantum systems for more than a decade, in the hopes of developing super-tiny, super-powerful computers. And while there is still plenty of excitement surrounding quantum computing, significant roadblocks are causing some to question whether quantum computing will ever make it out of the lab.

First, what is quantum computing? One simple definition is that quantum computers use qubits (or quantum bits) to encode information. However, unlike silicon-based computers that use bits which are zeroes or ones, qubits can exist in multiple states simultaneously. In other words, a qubit is a bit of information that has not yet decided whether it wants to be a zero or a one.

In theory, that means quantum systems can carry out many calculations simultaneously: in essence, true parallelism.
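A toy Python sketch makes the "both zero and one" idea concrete. It simulates a single qubit as a pair of amplitudes and samples measurements from it; this is an illustration of the concept, not a quantum computer.

    # Minimal sketch of the qubit idea described above: a state that is
    # "both 0 and 1" until measured. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
    alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # equal superposition
    state = np.array([alpha, beta])

    probs = np.abs(state) ** 2                      # Born rule: measurement probabilities
    samples = rng.choice([0, 1], size=1000, p=probs)
    print("P(0), P(1) =", probs)                    # [0.5 0.5]
    print("measured zeros/ones:", np.bincount(samples))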

Olivier Pfister, professor of experimental atomic, molecular and optical physics at the University of Virginia, says quantum algorithms could deliver exponential advances in compute speed, which would be useful for database searching, pattern recognition, solving complex mathematical problems and cracking encryption protocols.

"But the roadblocks to complete success are numerous," Pfister adds. The first is scalability - how do you build systems with large numbers of qubits. The second is even more vexing - how do you overcome "decoherence," the random changes in quantum states that occur when qubits interact with the environment.

The first roadblock is an obvious one: quantum systems are microscopic. The challenge is to gain exquisite levels of control at the atomic scale, over thousands of atoms. To date, this has only been achieved on the order of 10 atoms.

"My work with optical fields has demonstrated good preliminary control over 60 qubit equivalents, which we call 'Qmodes' and has the potential to scale to thousands of Qmodes," Pfister says. "Each Qmode is a distinctly specified color of the electromagnetic field, but to develop a quantum computer, nearly hundreds to thousands of Qmodes are required."

Decoherence is an even more vexing problem. "All the algorithms or patents in the world are not going to produce a quantum computer until we learn how to control decoherence," says Professor Philip Stamp, director of the Pacific Institute for Theoretical Physics at the University of British Columbia.

In the early days of quantum research, computer scientists used classical error correction methods to try to mitigate the effects of decoherence, but Stamp says those methods are turning out to be not applicable to the quantum world. "The strong claims for error correction as a panacea to deal with decoherence need to be re-evaluated."

According to Stamp, there are many experiments going on around the world in which researchers are claiming that they have built quantum information processing devices, but many of these claims dissolve when the hard questions about decoherence for multi-qubit systems are asked.

So far, the most sophisticated quantum computations have been performed in 'ion trap' systems, with up to eight entangled qubits. But physicists believe that the long-term future of this field lies with solid-state computations; that is, in processors made from solid state electronics (or all-electronic devices that look and feel more like regular microprocessors), as opposed to atomic particles. This has not been possible using solid-state qubits until now because the qubits only lasted about a nanosecond. Now these qubits can last a microsecond (a thousand times longer), which is enough to run simple algorithms.

Quantum controversy
The most recent results showing very low decoherence in its superconducting qubits were published in the journal Nature by a team of researchers from the Vancouver-based company D-Wave Systems. D-Wave's machines perform a technique called quantum annealing, which could provide the computational model for a quantum processor.

Suzanne Gildert, an experimental physicist and quantum computer programmer who earned her PhD at the University of Birmingham and now works at D-Wave Systems, says that with quantum annealing, decoherence is not a problem.

According to Gildert, D-Wave uses Natural Quantum Computing (NQC) to build its quantum computers, which is very different from the traditionally proposed schemes. "Some quantum computing schemes try to take ideas from regular computing — such as logic operations — and make 'quantum' versions of them, which is extremely difficult. Making 'quantum' versions of computing operations is a very delicate process. It's like trying to keep a pencil standing on its end by placing it on a block of wood, and then moving the block around to try to balance the whole thing. It's almost impossible. You have to constantly work hard to keep the pencil (i.e., the qubits) in the upright state. Decoherence is what happens when the pencil falls over," Gildert says.

"In our NQC approach, which is more scalable and robust, we let the pencil lie flat on the wood instead, and then move it around. We're computing by allowing the pencil to roll however it wants to, rather than asking it to stay in an unusual state. So we don't have this same problem of bits of information 'decohering' because the state we are trying to put the system into is what nature wants it to be in (that's why we call it Natural QC)."

But Jim Tully, vice president and chief of research, semiconductors and electronics at Gartner Research, says that what D-Wave is doing is not really quantum computing.

Tully says, "A sub-class of quantum computing has been demonstrated by D-Wave Systems that is referred to as quantum annealing, which involves superposition, but does not involve entanglement and is not, therefore, 'true' quantum computing. Quantum annealing is potentially useful for optimization purposes, specifically for finding a mathematical minimum in a dataset very quickly."

There may be some dispute over whether D-Wave's approach is pure quantum computing, but Lockheed Martin is a believer. Lockheed Martin owns a quantum computing system called the D-Wave One, a 128-qubit processor and surrounding system (cooling apparatus, shielded rooms etc.). Lockheed is working on a problem known as verification and validation to develop tools that can help predict how a complex system will behave; for example, to detect if there are bugs in the system, which may cause equipment to behave in a faulty way.

Keith Mordoff, director of communications, Information Systems & Global Solutions at Lockheed Martin, says, "Yes, we have a fully functioning quantum computer with 56 qubits, which is different from the classical methods. D-Wave uses an adiabatic or quantum annealing approach, which defines a complex system whose ground state (lowest energy state) represents the solution to the problem posed. It constructs a simple system and initializes it in its ground state (relatively straightforward for simple systems), then changes the simple system slowly until it becomes the complex system. As the system evolves, it remains in the ground state; the state of the final system is then measured, and this is the answer to the problem posed. The change from simple system to complex system is induced by turning on a background magnetic field."
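Mordoff's description maps onto a simple numerical picture: interpolate from an easy Hamiltonian to the problem Hamiltonian and follow the ground state. The numpy sketch below does exactly that for two toy qubits; the matrices are invented for illustration and have nothing to do with D-Wave's actual hardware.

    # Toy numerical sketch of the adiabatic idea described above:
    # H(s) = (1 - s) * H_simple + s * H_problem, swept slowly from s=0 to s=1.
    # The Hamiltonians are illustrative assumptions, not D-Wave's hardware model.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=float)
    I = np.eye(2)

    # Simple "driver": a transverse field on two qubits (easy, known ground state).
    H_simple = -(np.kron(X, I) + np.kron(I, X))

    # "Problem": a diagonal energy landscape whose minimum encodes the answer (state |10>).
    H_problem = np.diag([0.0, 1.5, -2.0, 1.0])

    for s in np.linspace(0.0, 1.0, 6):
        H = (1 - s) * H_simple + s * H_problem
        energies, vectors = np.linalg.eigh(H)
        ground = vectors[:, 0]
        dominant = int(np.argmax(np.abs(ground) ** 2))
        print(f"s={s:.1f}  ground-state energy={energies[0]:+.3f}  "
              f"dominant basis state=|{dominant:02b}>")

    # At s=1 the dominant basis state is |10>, the minimum of H_problem: the "answer".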

Future shock
Some scientists are extremely skeptical about quantum computing and doubt that it will ever amount to anything tangible.

Artur Ekert, professor of Quantum Physics, Mathematical Institute at the University of Oxford, says physicists today can only control a handful of quantum bits, which is adequate for quantum communication and quantum cryptography, but nothing more. He notes that it will take a few more domesticated qubits to produce quantum repeaters and quantum memories, and even more to protect and correct quantum data.

"Add still a few more qubits, and we should be able to run quantum simulations of some quantum phenomena and so forth. But when this process arrives to 'a practical quantum computer' is very much a question of defining what 'a practical quantum computer' really is. The best outcome of our research in this field would be to discover that we cannot build a quantum computer for some very fundamental reason, then maybe we would learn something new and something profound about the laws of nature," Ekert says.

Gildert adds that the key area for quantum computing will be machine learning, which is strongly linked to the field of artificial intelligence (AI). This discipline is about constructing software programs that can learn from experience, as opposed to current software, which is static.

"This is radically different from how we use computing for most tasks today," Gildert says. "The reason that learning software is not ubiquitous is that there are some very difficult and core mathematical problems known as optimization problems under the hood when you look closely at machine learning software. D-Wave is building a hardware engine that is designed to tackle those hard problems, opening the door to an entirely new way of programming and creating useful pieces of code."

According to Gildert, one very important real-world application is in the field of medical diagnosis. It's possible to write a program that applies hand-coded rules to X-ray or MRI images to try and detect whether there is a tumor in the image. But current software can only perform as well as the expert doctors' knowledge regarding what to look for in those images. With learning software, the program is shown examples of X-rays or MRI scans with and without tumors, then it learns the differences itself without having to be told. With this technology, the computer can even detect anomalies that a doctor cannot see or might not even notice. And the more examples you show it, the better it gets at this task.
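The "learn from labeled examples" idea Gildert describes is, at its core, ordinary supervised learning. The short scikit-learn sketch below uses synthetic data in place of real scans, so every number in it is an assumption made purely for illustration.

    # Minimal sketch of learning from labeled examples, as described above.
    # Synthetic data stands in for image features; this is not a diagnostic tool.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Pretend each row is a vector of features extracted from a scan,
    # and the label says whether a tumour was present.
    X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # learns the differences itself
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")    # improves with more examples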

"It is unlikely that QCs will replace desktop machines any time soon," Gildert says. "In terms of years, it depends on the effort invested, available funding, and the people working on the problem. The logical assumption is that these machines will be cloud-based co-processors for existing data centers used by companies that have very difficult problems to solve. Quantum systems are very good at solving a specific class of hard problems in the fields of AI and machine learning, so we are concentrating on building tools that help introduce the potentials of quantum computing to the people who work in these areas."

Addison Snell, CEO of Intersect360 Research, an analyst firm specializing in high performance computing, says, "Quantum computing is still of interest primarily among government and defense research labs. And, while the principles of quantum computing have been described for years, it is a wholly new paradigm, and the number of applications it will work for, even theoretically at this point, is small. However, some of these applications could be relevant to national security, so a high degree of interest remains."

"Quantum computing is certainly 'on the radar' of IBM, HP, and other supercomputing vendors, but it is difficult to say how many engineers they have working on this technology. At this point, it is uncertain whether quantum computing will ever have any role beyond a small handful of boutique supercomputing installations; but if or when it does, it is not likely we'll see commercially available working systems within the next five years."

"That depends what you mean by working systems," Stamp adds. "If you believe D-Wave, we already have one system commercially available now. I think that for a genuine quantum computer, we may be talking about 10 years for something that a very big company can buy and 25 to 30 years for the ordinary consumer."

"I'd put quantum computing, even if it proves competitive and valid, 20 years out because of the very complex infrastructure that has to go with it," says Michael Peterson, analyst and CEO at Strategic Research Corporation. "Developing a new technology like this requires 'breaking the laws of physics' more than once.' However, we did it with disk technology many times over during the past 25 years, and we'll do it many times more."

Mordoff adds that there are other commercial companies evaluating quantum computers, but no one is actually 'using' them, thus far, except Lockheed and D-Wave, of course. "Whether we want this or not, we have to eventually venture into a quantum domain," Ekert says.

"Some researchers believe that general purpose quantum computers will never be developed. Instead, they will be dedicated to a narrow class of use such as the optimization engine of D-Wave Systems. This suggests architectures where traditional computers offload specific calculations to dedicated quantum acceleration engines. It's still likely to be around 10 years before the acceleration engine approach is ready for general adoption by the classes of user that can make use of them; however, they will likely be attractive offerings for cloud service providers, adds Tully."

Read more: http://goo.gl/Ai04A


On Wall Street, The Race to Zero Continues

To address today's need for speedier transaction processing and to handle the associated surge in message traffic, financial services firms are examining every aspect of their infrastructures to squeeze any delays out of their end-to-end computational workflows. This quest to lower latencies in each step of processing trades, and to perform other chores, was a common theme at the High Performance Computing Financial Markets Conference held in New York, last week.

In session after session, panelists (representing the vendor and user communities) discussed how they were addressing the latency issue in the so-called “race to zero.”

Lee Fisher, Worldwide Business Development manager for HP’s scalable, high performance solutions in Financial Services, kicked off the conference with a roundtable discussion that put the latency issue into perspective.

“Latency equals risk,” said Fisher. “The issue is not about latency for latency’s sake, it’s about managing latency to reduce risk.” He and others at the conference noted, for example, that if data is out of sync, an organization is making decisions based on older information than its competitors.

Throughout the day, speakers discussed how their organizations were working to reduce latency in financial services trading systems. And while reducing latency takes a coordinated systems approach, the bulk of the day’s discussions highlighted how every aspect of the end-to-end operation is being closely examined. This includes looking at CPU performance, data movement, timing systems, and the applications themselves.

On the HPC hardware side, a number of companies noted the technologies that are now being employed to help reduce latency. For example, Joseph Curley, Director, Technical Computing Marketing at Intel, talked about efforts to accelerate data analysis-driven decisions in financial services firms.

“Intel was asked to produce a processor that focused on reducing latency,” said Curley. He pointed to the Xeon class of processors, noting in particular the use of the X5698. The Xeon X5698 is built on the Westmere microarchitecture and shares many features with other Xeon 5600 chips. However, there is one major difference: it offers an extremely high core frequency of up to 4.4 GHz. (HP, for example, offers a special DL server with this processor for financial services firms.)

Looking a bit to the future, Curley said he expects newer Intel Sandy Bridge processors will find favor in financial services firms. Also looking ahead, Don Newell, CTO, Server Product Group at AMD, in a different session, mentioned the recent release of its Bulldozer microprocessor architecture, which offers improved performance per watt. Bulldozer will be implemented in AMD’s Interlagos (Opteron 6200) CPUs and, according to Newell, these chips will offer substantially higher integer performance than the previous generation Istanbul processors.

Several of the conference panelists throughout the day noted other efforts to reduce latency are aimed at increasing the performance of code running on multi-core processors. Work here is focused on threading and increasing parallelism.

End-user organizations are definitely keeping an eye on this type of work. Jens von der Heide, Director at Barclays Capital, chimed into the discussion noting that there is an interest in newer processors. He noted that from a pricing perspective, newer processors “continue to be very attractive, making it easy to migrate.” As has been the case for years, newer, top performance processors now cost the same as the top-level processors of old.

Relating back to Curley’s comment about the work on increased parallelism, von der Heide agreed that there is certainly more discussion today about the role of single-thread versus multi-thread. However, from his perspective, “the big issue is that when many threads are running, you want all of them to go faster, so what it comes down to is [we] want more cores.”

As advances are made in these areas, other aspects of trading processing workflows, such as I/O, naturally need attention. “If you have a faster server, you then need to move data into it faster,” said Doron Arad, Client Solutions Director at Mellanox. “For that you need a low-latency network.”

Arad noted the issue comes down to how you push data into memory. To accomplish this, technologies that are of interest include non-blocking fabrics, kernel bypass approaches, message acceleration, and remote direct memory access (RDMA).

Additionally, organizations are broadening their focus on the network. In the past, companies would look at the switch and cabling; now they also include the NIC in the network latency discussion. To that end, he noted that there is wide-scale use of InfiniBand in the financial services market. Echoing that point, Fisher noted he was seeing growing use of InfiniBand, as well as 10G Ethernet NICs.

Arad added that he was also seeing demand for a NIC that supports both Ethernet and InfiniBand. Rob Cornish, IT Strategy and Infrastructure Officer at the International Securities Exchange (ISE), agreed, noting that he’d like to be able to choose InfiniBand or Ethernet based on a particular application’s needs.

Adding to the discussion, several conference participants said that in addition to 10G Ethernet NICs for their servers, they were also looking for 40G Ethernet support in their interconnect fabric switches.

Application Acceleration and Data Feeds

As financial services firms cut latency with improvements in server and networking hardware, the next place to look for performance improvements is in the applications. That was the topic of discussion of an afternoon session looking at ways to architect the best solutions for what was called “Wall Street Optimization.”

In particular, David Rubio, Senior Consultant at Forsythe Solutions Group, discussed the need to profile applications. He noted that financial services organizations need to use appropriate tools to get insight into the bottlenecks that may be happening with each critical application.

One challenge is that assumptions are sometimes made that prove wrong. For instance, an organization may think an application is optimized because it uses the latest compiler, but it might still suffer from poor performance due to library routines that are 20 years old. Or an application might be using a garbage collection algorithm that is not appropriate for a specific process.

“You need visibility into the applications and how they are interacting with the system,” said Rubio. He noted the basic problem comes down to this: “You have software running as a thread on a core. What is the thread doing? Is it executing code or waiting for something? If a thread is blocked, what is it waiting for?” He noted the need to use common OS tools like truss, strace, snoop, and tcpdump, or DTrace on Solaris systems.            
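The OS tools Rubio names are the real instruments for this work; as a purely illustrative stand-in, the Python sketch below shows the underlying idea of attributing wall-clock time to each stage of a message-handling path. The stage names and delays are invented.

    # Toy illustration of the profiling idea above: attribute time to each stage of a
    # message-handling path to see where the thread is actually spending its time.
    # Stages and sleeps are invented; real work uses strace/DTrace/tcpdump and friends.
    import time
    from collections import defaultdict

    def receive():   time.sleep(0.002)    # pretend network read
    def decode():    time.sleep(0.0005)   # pretend message parse
    def decide():    time.sleep(0.0001)   # pretend trading logic
    def send():      time.sleep(0.001)    # pretend order write

    stages = {"receive": receive, "decode": decode, "decide": decide, "send": send}
    totals = defaultdict(float)

    for _ in range(50):                   # 50 simulated messages
        for name, stage in stages.items():
            start = time.perf_counter()
            stage()
            totals[name] += time.perf_counter() - start

    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:8s} {total * 1e3:7.2f} ms total")   # the biggest bucket is the bottleneck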

There was also talk of using LANZ, Arista Networks’ Latency Analyzer. Within the financial services market, LANZ is used to get more visibility into the network to see whether microbursts of trading activity are happening. It offers sub-millisecond reporting intervals so that congestion can be detected and application-layer messages sent faster than some products can forward a packet.

Complementing these approaches, some panelists talked about the need for hardware acceleration, including the use of FPGAs and GPUs to improve application performance. However, one conference attendee raised the point that in some applications, such as those found in the futures market, there are lots of changes in the data feeds, so it is hard to implement that onto exotic hardware.

That triggered some discussion about the issue of data feeds. Certainly, optimizing a firm’s hardware and software can only cut latencies so far if the external data needed in the calculations and operations (and supplied by the major exchanges) is not delivered in a timely manner. The exchanges that provide the data are turning to technologies like co-location and hardware acceleration to reduce any delays on their end of the operation.

As the discussion evolved throughout the day at the conference, it became increasingly clear that with all of these aspects involved in the race for zero, a key element in latency reduction is the role of the systems integrator.

The consensus at the conference was that the way to reduce latency was to take a solutions approach, perhaps managed by a systems integrator. The approach would need to examine ways to optimize systems to reduce delays in CPU performance, host performance, networking including the NIC, and the data feeds from the major exchanges.

Read more: http://goo.gl/WZd7L



Learning in your sleep

People may be learning while sleeping, using an unconscious form of memory, a study by Michigan State University researchers has found.

“We speculate that we may be investigating a … new, previously undefined form of memory … distinct from traditional memory systems,” said Kimberly Fenn, assistant professor of psychology. “There is substantial evidence that during sleep, your brain is processing information without your awareness and this ability may contribute to memory in a waking state.”

The study of more than 250 people  suggests that people derive vastly different effects from this “sleep memory” ability, with some memories improving dramatically and others not at all. Fenn said she believes this potential separate memory ability is not being captured by traditional intelligence tests and aptitude tests such as the SAT and ACT.  “This is the first step to investigate whether or not this potential new memory construct is related to outcomes such as classroom learning,” she said.

“Simply improving your sleep could potentially improve your performance in the classroom,” Fenn suggested.

Read more: http://goo.gl/abkED

Welcome to the genomic revolution

In this talk from TEDxBoston, GenomeQuest CEO Richard Resnick shows how cheap and fast genome sequencing is about to turn health care (and insurance, and politics) upside down.

“The price to sequence a base has fallen 100 million times,” Resnick said. “The world-wide capacity to sequence human genomes is something between 50,000 and 100,000 genomes this year, and this is expected to double, triple or maybe quadruple year over year in the foreseeable future….

“One lab in particular represents 20% of that capacity, the Beijing Genomics Institute. The Chinese are absolutely winning this race to the new Moon.”

Read more: http://goo.gl/kRCMd

Spies could hide messages in gene-modified microbes

A new encryption method, “steganography by printed arrays of microbes (SPAM),” uses a collection of Escherichia coli strains modified with fluorescent proteins that glow in a range of seven colors.

“You can think of all sorts of secret spy applications,” says David Walt, a chemist at Tufts University, who led the research.
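The published scheme reportedly pairs up colors to encode characters; with seven colors there are 49 ordered pairs, enough for letters and digits. The Python toy below uses an invented mapping to show the idea; it is not the actual SPAM cipher.

    # Toy version of the colour-pair idea: with 7 fluorescent colours, ordered pairs give
    # 7 * 7 = 49 symbols, enough for letters and digits. The mapping below is invented
    # for illustration; it is not the real SPAM encoding.
    from itertools import product

    COLORS = ["red", "orange", "yellow", "green", "cyan", "blue", "violet"]
    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,!"      # 40 symbols <= 49 pairs

    pairs = list(product(COLORS, repeat=2))
    encode_map = dict(zip(ALPHABET, pairs))
    decode_map = {pair: ch for ch, pair in encode_map.items()}

    def encode(message):
        return [encode_map[ch] for ch in message.lower()]

    def decode(colour_pairs):
        return "".join(decode_map[pair] for pair in colour_pairs)

    glowing_colonies = encode("meet at dawn")
    print(glowing_colonies[:3])            # first three colour pairs
    print(decode(glowing_colonies))        # meet at dawn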

Read more: http://goo.gl/b1150



Sunday, September 25, 2011

Searching for New Ideas

Google's head of research explains why artificial intelligence is crucial to the search company's future.

If anyone can preview the future of computing, it should be Alfred Spector, Google's director of research. Spector's team focuses on the most challenging areas of computer science research with the intention of shaping Google's future technology. During a break from a National Academy of Engineering meeting on emerging technologies hosted by his company, Spector told Technology Review's computing editor Tom Simonite about these efforts, and explained how Google funnels its users' knowledge into artificial intelligence.

TR: Google often releases products based on novel ideas and technologies. How is the research conducted by your team different from the work carried out by other groups?

Spector: We also work on things that benefit Google and its users, but we have a longer time horizon and we try to advance the state of the art. That means areas like natural language processing [understanding human language], machine learning, speech recognition, translation, and image recognition. These are mostly problems that have traditionally been called artificial intelligence.

We have the significant advantage of being able to work in vivo on the large systems that Google operates, so we have large amounts of data and large numbers of users.

Can you give an example of some AI that has come out of this research effort?

Our translation tools can now use parsing—understanding the grammatical parts of a sentence. We used to train our translation just statistically, by comparing texts in different languages. Parsing now goes along with that, so we can assign parts of speech to sentences. Take the sentence "The dog crossed the road": "the dog" is the subject, "crossed" is a verb, "the road" is the object. This makes our translations better, and it's particularly useful in Japanese.
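Here is a toy version of that parsing step, restricted to the example sentence and a hand-written four-word lexicon; both are invented for illustration, and Google's parser is vastly more sophisticated.

    # Toy illustration of the parsing step described above, for the example sentence only.
    # The tiny lexicon and rules are invented for illustration.
    LEXICON = {"the": "DET", "dog": "NOUN", "crossed": "VERB", "road": "NOUN"}

    def tag(sentence):
        return [(word, LEXICON.get(word.lower(), "UNK")) for word in sentence.split()]

    def subject_verb_object(tagged):
        nouns = [w for w, t in tagged if t == "NOUN"]
        verb = next(w for w, t in tagged if t == "VERB")
        return {"subject": nouns[0], "verb": verb, "object": nouns[-1]}

    tagged = tag("The dog crossed the road")
    print(tagged)
    print(subject_verb_object(tagged))   # {'subject': 'dog', 'verb': 'crossed', 'object': 'road'}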

Another example is Fusion Tables, which is now part of Google Docs [the company's online office suite]. You can create a database that is shared with others and visualize and publish that data. A lot of media organizations are using it to display information on Google Maps or Google Earth to explain situations to the public. [During the recent hurricane Irene, New York public radio station WNYC used Fusion Tables to create an interactive guide to evacuation zones in the city.]

Does Google have a particular approach to AI?

In general, we have been using hybrid artificial intelligence, which means that we learn from our user community. When they label something as having a certain meaning or implication, we learn from that. With voice search, for example, if we correctly recognize an utterance, we will see that it led to something that someone clicked on. The system self-trains based on that, so the more it's used, the better it gets.

Spelling correction for Web search uses the same approach. When Barack Obama ran for president, people might not have been sure how to spell his name and tried different ways. Eventually they came across something that worked, then they clicked on the result. We learned then which of the spellings was the one that got the results, which allowed us to automatically correct them.
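The click-driven correction Spector describes boils down to counting which variant eventually worked. A minimal Python sketch follows, with an invented click log standing in for real query data; it is an illustration of the idea, not Google's actual spelling system.

    # Toy sketch of "learning the correct spelling from what users eventually clicked".
    # The click log below is invented for illustration.
    from collections import Counter

    click_log = [
        ("barack obama", True), ("barak obama", False), ("barrack obama", False),
        ("barack obama", True), ("barak obama", True), ("barack obama", True),
    ]

    successful = Counter(query for query, clicked in click_log if clicked)
    best_spelling = successful.most_common(1)[0][0]
    print("suggested correction:", best_spelling)   # the variant that most often led to clicks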

We think Fusion Tables will also help our systems learn. If there are thousands of tables that say there are 50 states in the Union, there are probably 50 states in the Union. And the Union probably has states. Don't underestimate that. It sounds trivial, but computers can induce lots of information from many examples.

What new directions is the research group exploring at the moment?

We're looking at projects in security, because it's an increasingly important topic across computing. One area we're looking at is whether you can constrain the programs that you use to work on the most minimal amount of information possible. If they went wrong, they would be limited in what harm they could do.

Imagine you're using a word processor. In principle, it could delete all of your files; it's acting as you. But what if, when you started your word processor, you gave it only a single file to edit? The worst it could do would be to corrupt that file; the damage it could do would be very limited. We're looking at whether we could tightly constrain the damage that could be done by faulty programs. That's an old line of thought. People have thought about this for years. We think it might be practical now.
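A minimal Python sketch of the "hand the program one file" idea: the editor below receives a single open file object rather than access to the filesystem. It illustrates the principle only; a real sandbox would enforce this at the OS level.

    # Minimal sketch of the least-privilege idea above: instead of letting the editor
    # touch the whole filesystem, hand it a single already-opened file object.
    import tempfile

    def word_processor(document):            # receives one file handle, not filesystem access
        document.seek(0)
        text = document.read()
        document.seek(0)
        document.write(text.upper())         # the worst it can do is corrupt this one file
        document.truncate()

    with tempfile.TemporaryFile(mode="w+") as doc:
        doc.write("hello, constrained world")
        word_processor(doc)
        doc.seek(0)
        print(doc.read())                    # HELLO, CONSTRAINED WORLD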

Google is working hard on its social networking project, Google+. Do you expect your research to contribute to that effort?

Being useful in the social realm is very strong for many of the things that we do. Google+ is a communication mechanism, and we do research on AI problems that could aid communication—for example, how to recommend content, or how to communicate across languages. Ideas like those could help people communicate across their social network.

Google+ also provides us lots more opportunity to learn from our users. Take the "+1" button, for example. That's a very important signal that could be quite relevant to improving how we understand what matters to you. If your 10 friends think something is great, it's very likely you would like to see it.

Read more: http://goo.gl/Fd1OC


Friday, September 23, 2011

Tiny Neutrinos May Have Broken Cosmic Speed Limit

By DENNIS OVERBYE

Roll over, Einstein?

The physics world is abuzz with news that a group of European physicists plans to announce Friday that it has clocked a burst of subatomic particles known as neutrinos breaking the cosmic speed limit — the speed of light — that was set by Albert Einstein in 1905.

If true, it is a result that would change the world. But that “if” is enormous.

Even before the European physicists had presented their results — in a paper that appeared on the physics Web site arXiv.org on Thursday night and in a seminar at CERN, the European Center for Nuclear Research, on Friday — a chorus of physicists had risen up on blogs and elsewhere arguing that it was way too soon to give up on Einstein and that there was probably some experimental error. Incredible claims require incredible evidence.

“These guys have done their level best, but before throwing Einstein on the bonfire, you would like to see an independent experiment,” said John Ellis, a CERN theorist who has published work on the speeds of the ghostly particles known as neutrinos.

According to scientists familiar with the paper, the neutrinos raced from a particle accelerator at CERN outside Geneva, where they were created, to a cavern underneath Gran Sasso in Italy, a distance of about 450 miles, about 60 nanoseconds faster than it would take a light beam. That amounts to a speed greater than light by about 0.0025 percent (2.5 parts in a hundred thousand).
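Those figures are easy to check with a few lines of Python, using the roughly 730-kilometre baseline and the reported 60-nanosecond early arrival.

    # Quick check of the figures quoted above: a 60 ns early arrival over the
    # ~730 km CERN to Gran Sasso baseline.
    c_km_s = 299_792.458            # speed of light
    baseline_km = 730               # approximate CERN -> Gran Sasso distance (~450 miles)
    early_s = 60e-9                 # reported early arrival

    light_time_s = baseline_km / c_km_s
    fractional_excess = early_s / light_time_s
    print(f"light travel time ~ {light_time_s * 1e3:.2f} ms")           # ~2.44 ms
    print(f"speed excess ~ {fractional_excess:.1e} "
          f"({fractional_excess * 100:.4f} percent)")                   # ~0.0025 percent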

Even this small deviation would open up the possibility of time travel and play havoc with longstanding notions of cause and effect. Einstein himself — the author of modern physics, whose theory of relativity established the speed of light as the ultimate limit — said that if you could send a message faster than light, “You could send a telegram to the past.”

Alvaro de Rujula, a theorist at CERN, called the claim “flabbergasting.”

“If it is true, then we truly haven’t understood anything about anything,” he said, adding: “It looks too big to be true. The correct attitude is to ask oneself what went wrong.”

The group that is reporting the results is known as Opera, for Oscillation Project with Emulsion-Tracking Apparatus. Antonio Ereditato, the physicist at the University of Bern who leads the group, agreed with Dr. de Rujula and others who expressed shock. He told the BBC that Opera — after much internal discussion — had decided to put its results out there in order to get them scrutinized.

“My dream would be that another, independent experiment finds the same thing,” Dr. Ereditato told the BBC. “Then I would be relieved.”

Neutrinos are among the weirdest denizens of the weird quantum subatomic world. Once thought to be massless and to travel at the speed of light, they can sail through walls and planets like wind through a screen door. Moreover, they come in three varieties and can morph from one form to another as they travel along, an effect that the Opera experiment was designed to detect by comparing 10-microsecond pulses of protons on one end with pulses of neutrinos at the other. Dr. de Rujula pointed out, however, that it was impossible to identify which protons gave birth to which neutrino, leading to statistical uncertainties.

Dr. Ellis noted that a similar experiment was reported by a collaboration known as Minos in 2007 on neutrinos created at Fermilab in Illinois and beamed through the Earth to the Soudan Mine in Minnesota. That group found, although with less precision, that the neutrino speeds were consistent with the speed of light.

Measurements of neutrinos emitted from a supernova in the Large Magellanic Cloud in 1987, moreover, suggested that their speeds differed from light by less than one part in a billion.

John Learned, a neutrino astronomer at the University of Hawaii, said that if the results of the Opera researchers turned out to be true, it could be the first hint that neutrinos can take a shortcut through space, through extra dimensions. Joe Lykken of Fermilab said, “Special relativity only holds in flat space, so if there is a warped fifth dimension, it is possible that on other slices of it, the speed of light is different.”

But it is too soon for such mind-bending speculation. The Opera results will generate a rush of experiments aimed at confirming or repudiating it, according to Dr. Learned. “This is revolutionary and will require convincing replication,” he said.

Read more: http://goo.gl/RRHSN


Thursday, September 22, 2011

Ray Kurzweil on How Entrepreneurs Can Live Forever

Ray Kurzweil believes that health and medicine are going to benefit in the next two decades from what he calls the law of accelerating returns. That idea comes from Kurzweil’s 2005 book “The Singularity is Near,” and it reflects his optimism about how “exponential technologies” will come together to deliver a world where we can solve all of our problems.

In fact, one consequence of the Singularity, or the augmenting of humans with technology, will be that we’ll live forever because we’ve solved health and medicine problems via nanotechnology and biotech. Kurzweil anticipates we could start extending our lives indefinitely (supposing we don’t get hit by a bus) within a couple of decades. We caught up with Kurzweil for a video interview on Friday during the closing graduation ceremony for the third class of Singularity University, which Kurzweil co-founded with Peter Diamandis in 2007. The program trains 80 entrepreneurs from dozens of countries over the course of a summer and tasks them with creating a business that can impact the lives of a billion people.

The Singularity is a term coined by Vernor Vinge, a computer scientist and science fiction writer. He used it in 1993 to describe the point at which rapidly accelerating technology makes the future literally impossible to predict. The technological singularity has been most closely associated with the development of artificial intelligence, and the point at which AI is powerful enough that machines can improve themselves.

I heard Kurzweil talk in 2008 about this topic at the Game Developers Conference. He speculated back then that human lifespans would start stretching longer and that we would start becoming immortal at some point as nanotechnology and biological research advances make it possible to repair our bodies as they age. It sounds like a crazy prediction, but Kurzweil has a good track record on predicting the outcome of exponential technologies. In the 1980s, he predicted something like the World Wide Web of the 1990s.

“The power of exponential growth means multiplying by 1,000 in 10 years and a million in 20 years,” he said in our interview. “Life-saving technologies like cell phones are in almost everyone’s hands, 5 billion of them. About 30 percent of Africans have them.”
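The arithmetic behind those figures assumes the measured quantity doubles roughly once a year (the doubling period is an assumption implied by the numbers, not stated by Kurzweil here); a two-line check in Python:

    # Quick check of the growth figures quoted above, assuming annual doubling.
    for years in (10, 20):
        factor = 2 ** years
        print(f"{years} years of annual doubling -> x{factor:,}")   # ~1,000 and ~1,000,000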

On Friday, Kurzweil said that entrepreneurs from the Singularity University program are embarking on ventures that can change life for a billion people.

“The tools required to change the world are in our hands,” Kurzweil said. “It doesn’t require billions of dollars.”

Beyond health and medicine, Kurzweil wants to end evils such as poverty through new technologies.

“We can end the digital divide and the have-have-not society,” he said. “And we can do it ourselves.”

Kurzweil argued that the idea of massive change through entrepreneurship is not so far-fetched, considering that a kid at Harvard who wanted to meet girls created the world’s biggest social network, Facebook.

“We want to bring entrepreneurship to even younger people,” Kurzweil said.

Read more: http://goo.gl/0ZdxP

UPDATE 1-Particles found to break speed of light

* Finding could overturn laws of physics

* Scientists confident measurements correct (Adds background and quotes)

By Robert Evans

GENEVA, Sept 22 (Reuters) - An international team of scientists said on Thursday they had recorded sub-atomic particles travelling faster than light -- a finding that could overturn one of Einstein's long-accepted fundamental laws of the universe.

Antonio Ereditato, spokesman for the researchers, told Reuters that measurements taken over three years showed neutrinos pumped from CERN near Geneva to Gran Sasso in Italy had arrived 60 nanoseconds quicker than light would have done.

"We have high confidence in our results. We have checked and rechecked for anything that could have distorted our measurements but we found nothing," he said. "We now want colleagues to check them independently."

If confirmed, the discovery would undermine Albert Einstein's 1905 theory of special relativity, which says that the speed of light is a "cosmic constant" and that nothing in the universe can travel faster.

That assertion, which has withstood over a century of testing, is one of the key elements of the so-called Standard Model of physics, which attempts to describe the way the universe and everything in it works.

The totally unexpected finding emerged from research by physicists working on an experiment dubbed OPERA, run jointly by the CERN particle research centre near Geneva and the Gran Sasso Laboratory in central Italy.

A total of 15,000 beams of neutrinos -- tiny particles that pervade the cosmos -- were fired over a period of three years from CERN towards Gran Sasso, 730 km (about 450 miles) away, where they were picked up by giant detectors.

Light would have covered the distance in around 2.4 thousandths of a second, but the neutrinos took 60 nanoseconds -- or 60 billionths of a second -- less than light beams would have taken.

"It is a tiny difference," said Ereditato, who also works at Berne University in Switzerland, "but conceptually it is incredibly important. The finding is so startling that, for the moment, everybody should be very prudent."

Ereditato declined to speculate on what it might mean if other physicists, who will be officially informed of the discovery at a meeting in CERN on Friday, found that OPERA's measurements were correct.

"I just don't want to think of the implications," he told Reuters. "We are scientists and work with what we know."

Much science-fiction literature is based on the idea that, if the light-speed barrier can be overcome, time travel might theoretically become possible.

The existence of the neutrino, an elementary sub-atomic particle with a tiny amount of mass created in radioactive decay or in nuclear reactions such as those in the Sun, was first proposed in the 1930s and confirmed experimentally in 1956, but the particle still mystifies researchers.

It can pass through most matter undetected, even over long distances, and without being affected. Millions pass through the human body every day, scientists say.

To reach Gran Sasso, the neutrinos pushed out from a special installation at CERN -- also home to the Large Hadron Collider probing the origins of the universe -- have to pass through water, air and rock.

The underground Italian laboratory, some 120 km (75 miles) from Rome, is the largest of its type in the world for particle physics and cosmic research.

Around 750 scientists from 22 different countries work there, attracted by the possibility of staging experiments in its three massive halls, protected from cosmic rays by some 1,400 metres (about 4,600 feet) of rock overhead.

Read more: http://goo.gl/UzQ2a

Report on the fourth conference on artificial general intelligence

September 3, 2011 by Ben Goertzel

The Fourth Conference on Artificial General Intelligence (AGI-11) was held on Google’s campus in Mountain View (Silicon Valley), California, in the first week of August 2011. This was the largest AGI conference yet, with more than 200 people attending, and it had a markedly different tone from the prior conferences in the series.

A number of participants noted that there was less of an out-of-the-mainstream, wild-eyed maverick feel to the proceedings, and more of a sense of “business as usual” or “normal science” — a sense in the air that AGI is obviously an important, feasible R&D area to be working on, albeit a bit “cutting-edge” compared to the majority of (more narrowly specialized) AI R&D.

I think this difference in tone was due partly to the Google and Bay Area location, and partly to the fact that the conference was held in close spatiotemporal proximity to two larger and older AI-related conferences, AAAI-11 and IJCNN-11. IJCNN was just before AGI in San Jose, and AAAI was just after AGI in San Francisco — so a number of academic AI researchers who usually go to the larger conferences, but not AGI, decided to try out AGI as well this year. Complementing this academic group, there was also a strong turnout from the Silicon Valley software industry, and the Bay Area futurist and transhumanist community.

Tutorials

The first day of the conference was occupied by tutorials on the LIDA and OpenCog systems, and the Church probabilistic logic programming language. The second day comprised two workshops: one on self-programming in AGI systems, and the other the traditional “Future of AGI” workshop, which was particularly lively due to the prominence of future-of-technology issues in Bay Area culture (the conference site was not far from the headquarters of a variety of futurist organizations like Singularity University, the Singularity Institute for AI, the Foresight Institute, etc.). Most of the talks from the Future of AGI workshop have corresponding papers or presentations on the conference’s schedule page — with themes such as

    Steve Omohundro, Design Principles for a Safe and Beneficial AGI Infrastructure
    Anna Salamon, Can Whole Brain Emulation help us build safe AGI?
    Carl Shulman, Risk-averse preferences as AGI safety technique
    Mark Waser, Rational Universal Benevolence: Simpler, Safer, and Wiser than “Friendly AI”
    Itamar Arel, Reward Driven Learning and the Risk of an Adversarial Artificial General Intelligence
    Ahmed Abdel-Fattah & Kai-Uwe Kuehnberger, Remarks on the Feasibility and the Ethical Challenges of a Next Milestone in AGI
    Matt Chapman, Maximizing The Power of Open-Source for AGI
    Ben Goertzel and Joel Pitt, Nine Ways to Bias Open-Source AGI Toward Friendliness

Norvig, Dickmanns, Sloman, Boyden, Shi

The final two days constituted the conference proper, with technical talks corresponding to papers in the conference proceedings, which were published in Springer’s Lecture Notes in AI book series. Videos of the conference talks, including the workshops and tutorials, will be posted by Google during the next months, and linked from the conference website.

Peter Norvig, Google’s head of research and the co-author of the best-selling AI textbook (whose latest edition does mention AGI, albeit quite briefly), gave brief opening remarks. He didn’t announce any grand Google AGI initiatives, making clear that his own current research focus is elsewhere than the direct pursuit of powerful artificial general intelligence. Yet, he also made clear that he sees a lot of the research going on at Google as part of an overall body of work that is ultimately building toward advanced AGI.

The four keynote speeches highlighted different aspects of the AGI field, as well as the strongly international nature of the AGI community.

Ernst Dickmanns, from Germany, reviewed his pioneering work on self-driving cars from the 1980s, which in some ways was more advanced than the current work on self-driving cars being conducted by Google and others. He wrapped up with a discussion of general lessons for AGI implied by his experience with self-driving cars, including the importance of adaptive learning and of “dynamic vision” that performs vision in a manner closely coordinated with action.

Aaron Sloman, from Britain, discussed “toddler theorems” — the symbolic understandings of the world that young children learn and create based on their sensorimotor and cognitive experiences. He challenged the researchers in the audience to understand and model the kind of learning and world-modeling that crows or human babies do, and sketched some concepts that he felt would be useful for this sort of modeling.

MIT’s Ed Boyden reviewed his recent work on optogenetics, one of the most exciting and rapidly developing technologies for imaging the brain — a very important area, given the point raised in the conference’s Special Track on Neuroscience and AGI that the main factor holding back the design of AGI systems based on human brain emulation is currently the lack of appropriate tools for measuring what’s happening in the brain. We can’t yet measure the brain well enough to construct detailed dynamic brain simulations. Boyden’s work is one of the approaches that, step by step, is seeking to overcome this barrier.

Zhongzhi Shi, from the Chinese Academy of Sciences in Beijing, described his integrative AGI architecture, which incorporates aspects from multiple Western AGI designs into a novel overall framework. He also stressed the importance of cloud computing for enabling practical experimentation with complex AGI architectures like the one he described.

Neuroscience and AGI

As well as the regular technical AGI talks, there was a Special Session on Neuroscience and AGI, led by neuroscientist Randal Koene, who is probably the world’s most successful advocate of mind uploading, or what he now calls “substrate independent minds.” Most of the AGI field today is only loosely connected to neuroscience; and yet, in principle, nearly every AGI researcher would agree that careful emulation of the brain is one potential path to AGI, with a high probability of succeeding eventually.

The Special Session served to bring neuroscientists and AGI researchers together, to see what they could learn from each other. Neuroscience is not yet at the point where one can architect an AGI based solely on neuroscience knowledge, yet there are many areas where AGI can draw inspiration from neuroscience.

Demis Hassabis emphasized that AGI currently lacks any strong theory of how sensorimotor processing interfaces with abstract conceptual processing, and suggested some ways that neuroscience may provide inspiration here, e.g., analysis of cortical-hippocampal interactions. Another point raised in discussion was that reinforcement learning could draw inspiration from the various ways the brain treats internal, intrinsic rewards (such as alerting or surprise) comparably to explicit external rewards.
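
To make that last point concrete, here is a minimal sketch of a tabular Q-learning update in which a "surprise"-style intrinsic bonus is simply added to the external reward, so both drive learning through the same channel. All names and the count-based novelty formula are illustrative assumptions, not anything presented at the session.

```python
from collections import defaultdict

ALPHA, GAMMA, BETA = 0.1, 0.95, 0.5   # learning rate, discount, intrinsic-reward weight

q_values = defaultdict(float)         # (state, action) -> estimated return
visit_counts = defaultdict(int)       # state -> number of visits

def intrinsic_bonus(state):
    """Novelty bonus that shrinks as a state becomes familiar (a stand-in for 'surprisingness')."""
    visit_counts[state] += 1
    return BETA / (visit_counts[state] ** 0.5)

def q_update(state, action, ext_reward, next_state, actions):
    """One Q-learning step driven by external reward plus the intrinsic bonus."""
    total_reward = ext_reward + intrinsic_bonus(next_state)
    best_next = max(q_values[(next_state, a)] for a in actions)
    td_target = total_reward + GAMMA * best_next
    q_values[(state, action)] += ALPHA * (td_target - q_values[(state, action)])

# Illustrative usage: with zero external reward, the intrinsic bonus alone
# still nudges the agent toward novel states.
q_update("s0", "a1", ext_reward=0.0, next_state="s1", actions=["a0", "a1"])
```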

Kurzweil and Solomonoff prizes

Three prizes were awarded at the conference: two Kurzweil Prizes and one Solomonoff Prize.

The Kurzweil Prize for Best AGI Paper was awarded to Linus Gisslen, Matt Luciw, Vincent Graziano and Juergen Schmidhuber for their paper entitled Sequential Constant Size Compressors and Reinforcement Learning. The paper is an effort to bridge the gap between the general mathematical theory of AGI (which in its purest form applies only to AI programs that achieve massive general intelligence by using unrealistically large amounts of processing power) and the practical business of building useful AGI programs.

Specifically, one of the key ideas in the general theory of AGI is "reinforcement learning" (learning via reward signals from the environment), but the bulk of the mathematical theory of reinforcement learning assumes that the AI system has complete visibility into the environment. Obviously this is unrealistic: no real-world intelligence has full knowledge of its environment. The award-winning paper describes a novel, creative method of using recurrent neural networks to apply reinforcement learning to partially observable environments, indicating a promising research direction for those who wish to make reinforcement learning algorithms scale up to real-world problems, such as those human-level AGIs will have to deal with.
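
As a rough illustration of the general idea (not the paper's actual architecture), the sketch below uses a small recurrent network to fold an arbitrarily long observation history into a constant-size vector, which a value function then treats as if it were the environment's true, hidden state. The dimensions and the linear value head are assumptions made for the example.

```python
import numpy as np

OBS_DIM, HIDDEN_DIM, NUM_ACTIONS = 8, 16, 4
rng = np.random.default_rng(0)

W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, OBS_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_q = rng.normal(scale=0.1, size=(NUM_ACTIONS, HIDDEN_DIM))

def compress(history):
    """Fold a growing observation history into a fixed-size hidden state."""
    h = np.zeros(HIDDEN_DIM)
    for obs in history:
        h = np.tanh(W_in @ obs + W_rec @ h)
    return h

def q_values(history):
    """Action values computed from the compressed history, not the raw history."""
    return W_q @ compress(history)

# Illustrative usage: the agent picks actions from the compressed state.
history = [rng.normal(size=OBS_DIM) for _ in range(5)]
action = int(np.argmax(q_values(history)))
```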

The 2011 Kurzweil Award for Best AGI Idea was awarded to Paul Rosenbloom for his paper entitled From Memory to Problem Solving: Mechanism Reuse in a Graphical Cognitive Architecture. Rosenbloom has a long history in the AI field, including co-creating the classic SOAR AI architecture in the 1980s. While still supporting the general concepts underlying his older AI work, his current research focuses more heavily on scalable probabilistic methods, but more flexible and powerful ones than Bayes nets, Markov Logic Networks and other currently popular techniques.

Extending his previous work on factor graphs as a core construct for scalable uncertainty management in AGI systems, his award-winning paper shows how the factor graph mechanisms he had previously described for memory can also be used for problem-solving tasks. In the human brain there is no crisp distinction between memory and problem-solving, so it is conceptually satisfying to see AGI approaches that also avoid this sort of crisp distinction. It is as yet unclear to what extent any single mechanism can be used to achieve all the capabilities needed for human-level AGI. But it is a very interesting and valuable research direction to take a single powerful and flexible mechanism like factor graphs and see how far one can push it, and Dr. Rosenbloom's paper is a wonderful example of this sort of work.
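
The "one mechanism, several cognitive uses" idea can be illustrated with a toy factor graph whose single query operation serves both as memory retrieval and as constraint-style problem solving. The variables, factors and brute-force marginalization below are purely illustrative assumptions; they are not Rosenbloom's graphical architecture.

```python
import itertools

VARIABLES = {"obj": ["apple", "pear"], "color": ["red", "green", "blue"]}

# Factors map full assignments to non-negative weights.
def stored_association(assign):            # "memory": the object 'apple' was seen as red
    return 1.0 if (assign["obj"], assign["color"]) == ("apple", "red") else 0.1

def constraint(assign):                    # "problem solving": the goal excludes green
    return 0.0 if assign["color"] == "green" else 1.0

FACTORS = [stored_association, constraint]

def query(target, evidence):
    """Marginalize the product of all factors onto one variable (brute force)."""
    scores = {v: 0.0 for v in VARIABLES[target]}
    free = [v for v in VARIABLES if v != target and v not in evidence]
    for value in VARIABLES[target]:
        for combo in itertools.product(*(VARIABLES[v] for v in free)):
            assign = {target: value, **evidence, **dict(zip(free, combo))}
            weight = 1.0
            for f in FACTORS:
                weight *= f(assign)
            scores[value] += weight
    return max(scores, key=scores.get)

# Same query mechanism, two different cognitive uses:
print(query("color", {"obj": "apple"}))    # memory-style recall -> "red"
print(query("obj", {"color": "red"}))      # goal-directed inference -> "apple"
```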

The 2011 Solomonoff AGI Theory Prize (named in honor of AGI pioneer Ray Solomonoff, who passed away in 2010) was awarded to Laurent Orseau and Mark Ring for a pair of papers titled Self-Modification and Mortality in Artificial Agents and Delusion, Survival, and Intelligent Agents. These papers explore aspects of theoretical generally intelligent agents inspired by Marcus Hutter's AIXI model, a theoretical AGI system that would achieve massive general intelligence using infeasibly large computational resources, but that may potentially be approximated by more feasible AGI approaches.

The former paper considers some consequences of endowing an intelligent agent of this nature with the ability to modify its own code; the latter analyzes what happens when this sort of theoretical intelligent agent is interfaced with the real world. Together they constitute important steps in bridging the gap between the abstract mathematical theory of AGI and the practical business of creating AGI systems and embedding them in the world.
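
For readers unfamiliar with this line of work, the toy sketch below shows the general flavor of agent these papers reason about: an expectimax planner that weighs a small set of candidate environment models by prior probability and picks the action maximizing expected discounted reward. The tiny model class, static priors and short horizon are illustrative stand-ins for the uncomputable universal mixture used in AIXI itself.

```python
GAMMA = 0.9
ACTIONS = ["left", "right"]

# Two hypothetical environment models, each with a prior weight and an
# expected reward per action (kept stateless for simplicity).
MODELS = [
    {"prior": 0.7, "reward": {"left": 1.0, "right": 0.0}},
    {"prior": 0.3, "reward": {"left": 0.0, "right": 2.0}},
]

def expected_value(action, horizon):
    """Expected discounted reward of taking 'action' now, then acting optimally."""
    immediate = sum(m["prior"] * m["reward"][action] for m in MODELS)
    if horizon == 1:
        return immediate
    future = max(expected_value(a, horizon - 1) for a in ACTIONS)
    return immediate + GAMMA * future

best_action = max(ACTIONS, key=lambda a: expected_value(a, horizon=3))
print(best_action)   # "left": the high-prior model favors it
```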

Hybridization

While a lot of strong and interesting research was presented at the AGI-11 conference, I think it's fair to say there were no dramatic breakthroughs; rather, there was a feeling of steady incremental progress. Also, compared to previous years, there was less of a sense of separate, individual research projects working in a vacuum: the connections between different AI approaches seem to be getting clearer each year, in spite of the absence of a clearly defined common vocabulary or conceptual framework among AGI researchers.

Links were built between abstract AGI theory and practical work, and between neuroscience and AGI engineering. Hybridization of previously wholly different AGI architectures was reported (e.g., the paper I presented, describing the incorporation of aspects of Joscha Bach's MicroPsi system into my OpenCog system). All signs of a field that's gradually maturing.

A Sputnik of AGI

These observations lead me inexorably to some more personal musings on AGI. I can’t help wondering: Can we get to human-level AGI and beyond via step-by-step, incremental progress, year after year?

It’s a subtle question, actually. It’s clear that we are far from having a rigorous scientific understanding of how general intelligence works. At some point, there’s going to be a breakthrough in the science of general intelligence — and I’m really looking forward to it! I even hope to play a large part in it. But the question is: will this scientific breakthrough come before or after the engineering of an AGI system with powerful, evidently near-human-level capability?

It may be that we need a scientific breakthrough in the rigorous theory of general intelligence before we can engineer an advanced AGI system. But … I presently suspect that we don't. My current opinion is that it should be possible to create a powerful AGI system by proceeding step by step from the current state of knowledge, doing engineering inspired by an integrative conceptual (though not fully rigorous) understanding of general intelligence.

If this is right, then we can build a system with the impact of a "Sputnik of AGI" by combining variants of existing algorithms in a reasonable cognitive architecture, guided by a solid conceptual understanding of mind. And then, by studying this Sputnik AGI system and its successors and variants, we will be able to arrive at the foreseen breakthrough in the science of general intelligence. This, of course, is what my colleagues and I are trying to do with the OpenCog project, but the general point I'm making here is independent of our specific OpenCog AGI design.

Anyway, that's my personal view of the near- to mid-term future of AGI, which I advocated in asides during my OpenCog tutorial and in various discussions at the Future of AGI Workshop. But my view on these matters is far from universal among AGI researchers: even as the AGI field matures and becomes less marginal, it is still characterized by an extremely healthy diversity of views and attitudes! I look forward to ongoing discussions of these matters with my colleagues in the AGI community as the conference series proceeds and develops.

Mostly, it’s awesome to even have a serious AGI community. It’s hard sometimes to remember that 10 years ago this was far from the case!

Read more: http://goo.gl/FV90i
