Sometimes my mind wanders over topics I am sure have been discussed at length by experts.
For example, today I was wondering whether we're going about it all wrong in our attempts to build a computer that mimics the human brain in our quest for artificial intelligence. As Jonah Lehrer harps on and on about on his blog, the human brain is probably the most complex (yet efficient) machine in the Milky Way Galaxy. In fact, it's orders of magnitude more complex than anything else we've built or found, including the Large Hadron Collider.
If we are really committed to A.I., shouldn't we start at the bottom of the food chain, at the far end of the timeline?
Why not build some super simple software with a single purpose: it takes in streaming data, transforms it, and outputs the result. For example, it could take in groups of 3D coordinates and output a single centerpoint for the polygon those points form.
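Something like this minimal Python sketch, where "centerpoint" is interpreted as the plain average of the vertices (a true area-weighted polygon centroid would be more involved, but the simple version captures the idea):

```python
def centroid(points):
    """Average a group of (x, y, z) tuples into a single centerpoint."""
    n = len(points)
    xs, ys, zs = zip(*points)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# The centerpoint of a triangle lying in the z = 0 plane:
print(centroid([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]))
# -> (0.333..., 0.333..., 0.0)
```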
As the amount of data gradually increases, the software copies itself so that it can keep handling the data in real time. Crucially, when the software copies itself, it does not necessarily do so perfectly. And if the data flow decreases, copies terminate themselves, oldest versions first, until the data is being handled by the fewest copies that can still keep up in real time.
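Here's a rough sketch of those copy-and-cull mechanics in Python. Everything in it is illustrative: the Worker class, its genome of numbers standing in for code, the mutation rate, and the capacity figures are all inventions for the sake of the example.

```python
import random

MUTATION_RATE = 0.01  # chance any one "gene" is miscopied (illustrative number)

class Worker:
    """One running copy of the software; the genome stands in for its code."""
    def __init__(self, genome, generation=0):
        self.genome = genome
        self.generation = generation

    def replicate(self):
        # Copying is deliberately imperfect: any value may be perturbed.
        child_genome = [g + random.gauss(0.0, 0.1)
                        if random.random() < MUTATION_RATE else g
                        for g in self.genome]
        return Worker(child_genome, self.generation + 1)

def rebalance(population, demand, capacity_per_worker):
    """Copy workers to meet demand; terminate oldest-first when demand falls."""
    needed = max(1, -(-demand // capacity_per_worker))  # ceiling division
    while len(population) < needed:
        population.append(random.choice(population).replicate())
    population.sort(key=lambda w: w.generation)  # lowest generation = oldest
    while len(population) > needed:
        population.pop(0)  # oldest versions die first
    return population
```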
Studies of Drosophila (fruit flies) have found that about 70% of random mutations are harmful and kill the creature. The remaining 30% are almost entirely neutral; only a tiny, tiny fraction are beneficial.
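If you wanted those odds wired into the simulation, the mutation step could sample its effect directly; the 0.5% beneficial figure below is my own stand-in for "tiny, tiny":

```python
import random

def mutation_effect():
    """Sample one mutation's effect using the rough Drosophila proportions."""
    r = random.random()
    if r < 0.70:
        return "harmful"      # ~70% kill the creature
    if r < 0.995:
        return "neutral"      # the rest are almost entirely neutral
    return "beneficial"       # a tiny, tiny sliver
```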
So I imagine a scenario where, by rapidly ramping the flow of data into the software population up and then cutting it back, you could accumulate large numbers of software mutations and then trigger die-offs of the older, less-mutated copies.
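Continuing the sketch above, the boom-and-bust schedule might look like this (the cycle lengths and demand numbers are arbitrary):

```python
# Drive the population with alternating floods and droughts of data.
population = [Worker([0.0] * 8)]  # a single ancestral copy
for cycle in range(1_000):
    demand = 10_000 if cycle % 10 < 5 else 100  # five cycles of boom, five of bust
    population = rebalance(population, demand, capacity_per_worker=50)
# The survivors of each bust are the youngest, most-copied, most-mutated lineages.
```

One caveat on the sketch: it keeps the die-off logic centralized in rebalance, outside the copies themselves, so the runaway-replication bug imagined next couldn't actually arise there. In the thought experiment proper, each copy carries, and can miscopy, its own termination logic.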
How long until one copy developed a bug in its "die off when the data stream is cut back" logic, and that copy and its progeny replicated out of control... then sat idle when there was no data to process? Voila! Evolution. Given how much faster computers work than living organisms, you could imagine evolving highly advanced software in a relatively short amount of time. Of course, you might end up with massive, inefficient software doing a relatively simple job. Just look at the millions of base pairs of junk DNA filling our cells... useless filler that costs replication energy every time a cell divides.
Or what if you built a little plastic robot that lived in a soup of water and vinyl monomers? It would spend its days polymerizing the vinyl out of the water into PVC, which it would use to fabricate replicas of its own parts. Once all its parts were replicated, it would split in two and repeat.
But then you'd add some amide monomer to the slurry. The little plastic bots could, if they somehow developed the ability, polymerize that into polyamide instead of polyvinyl chloride. Occasionally you'd flash the vat with a blast of ultraviolet light to encourage spontaneous polymerization via the added energy.
How long until your plastic robots were nylon robots? Maybe then you'd add a little alcohol to the mix, and the plastic robots would inadvertently turn themselves into glue while the nylon robots lived on. Voila! Evolution.
The point is that the human brain, and the body in which it resides, didn't just come about all at once. It all started with a little disorganized cell, billions of years ago. It seems like the simplest way to build an artificial brain would be to start the same way.
Wednesday, 7 July 2010