Thursday, May 15, 2008

Neural Modeling & Immortality

The human brain has about 10^11 neurons, and each neuron has on average a few thousand connections. Setting aside the details of how the connections and the neurons themselves work, to have enough labels to uniquely identify every neuron we would need at least 5 bytes or so, and conveniently enough we are moving into an era of 64-bit computing. That is, we will soon be operating on platforms big and bad enough to handle huge structures, such as things with 10^11 parts. Since each neuron has thousands of connections, it is reasonable to say we could encode a neuron and all of its connections in about 6.4*10^5 bits (roughly the space the 64-bit addresses of its connected neurons would take, on the order of 10^4 * 64 bits, plus a little). Thus just storing the connections, without weights or any of the ancillary information such as connection strength, time of signal propagation, and so on, would take about 6.4*10^16 bits. That is on the order of 10^4 TB (about 8 petabytes) of data.

Massive storage of that magnitude is not something you come across in your everyday desktop computer; however, on the basis of Moore's law we could expect desktops to have it (conservatively) within about 32 years. Right now we can (and do) support data structures of that size on servers, and the computing power necessary to do effective computations on structures of such massive size will probably develop in parallel. The amount of data needed to store the connections gives an order-of-magnitude expectation of what is required to store meaningful data about the structure of the brain. If we add extra data into the mix, storing connection strength, information about neuron type and sensitivity, and things like that, the required data changes only by some small multiplicative factor. Connection structure is certainly not the only important thing about the human brain, but even if there are hundreds or thousands of similarly important characteristics, the size of the required data changes by only two or three orders of magnitude, which means waiting another 12 or 18 years for computers to improve, or being willing to build a supercomputer now that can handle that sort of load.

At any rate, it certainly appears that within the next 50 years or so computers will be powerful enough to encode and run processes similar to the human brain. Somehow I doubt that by that time we will have scanning technology or knowledge of the human brain sufficient to scan a brain and put that information into a computer. The point, however, is that some time between 2040 and 2070, I am betting, personal computers will be powerful enough, in principle, to represent and support a structure equivalent in essence to the functioning of a human brain.
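To keep the arithmetic honest, here is a small Python sketch of the estimate above. The neuron count, connections per neuron, 2008 desktop capacity, and two-year doubling period are all assumed round numbers rather than measured values, so the printed figures are only order-of-magnitude guides.

    import math

    NEURONS = 1e11                 # assumed neuron count
    CONNECTIONS_PER_NEURON = 1e4   # assumed connections per neuron (rounded up)
    BITS_PER_ADDRESS = 64          # one 64-bit label per connected neuron

    label_bits = math.log2(NEURONS)                               # ~36.5 bits, i.e. ~5 bytes
    bits_per_neuron = CONNECTIONS_PER_NEURON * BITS_PER_ADDRESS   # ~6.4e5 bits
    total_bits = NEURONS * bits_per_neuron                        # ~6.4e16 bits
    total_tb = total_bits / 8 / 1e12                              # convert to bytes, then TB

    DESKTOP_TB_2008 = 1.0   # assumed desktop storage circa 2008
    DOUBLING_YEARS = 2.0    # assumed Moore's-law doubling period
    years_to_wait = DOUBLING_YEARS * math.log2(total_tb / DESKTOP_TB_2008)

    print(f"label size per neuron : {label_bits:.1f} bits")
    print(f"data per neuron       : {bits_per_neuron:.1e} bits")
    print(f"total connection data : {total_bits:.1e} bits (~{total_tb:,.0f} TB)")
    print(f"Moore's-law wait      : ~{years_to_wait:.0f} years")

With these inputs it prints about 8,000 TB and a wait of roughly 26 years; the 32-year figure above is just a more conservative reading of the same extrapolation.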

There are possibly some very large holes in this line of reasoning: for instance, if the specific spatial layout of the neurons is very important to functioning, or if previously unknown structures in the brain actually account for a large part of its computational power, or if chemical diffusion and the specifics of chemical processes in the brain are very important, and so on. I can't really know how much of the processing power of the brain is tied up in things like the properties of diffusive chemical kinetics or neural knotting. If it turns out that adequately describing those processes takes many orders of magnitude more processing power and space, rather than (as I was assuming) an amount of processing power and information similar to what is needed to encode and process the connection structure, then it could take considerably longer to really be able to represent the mind. At the same time, this is really the brute-force approach: try to make a computer model, as precisely as possible, exactly what is happening in a human brain. This approach has a certain allure, since it is easy to see that its success is on some level relatively certain (assuming, as many people do not, that the physical functioning of the brain is what gives rise to the mind).
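As a rough sensitivity check on that worry, the same Moore's-law extrapolation shows how much each extra order of magnitude of required data would push the timeline out, again assuming a two-year doubling period (an assumption, not a measurement):

    import math

    DOUBLING_YEARS = 2.0   # assumed Moore's-law doubling period

    # Extra waiting time if the data requirement grows by 10^k for a few values of k
    for extra_orders in (2, 3, 6):
        factor = 10 ** extra_orders
        extra_years = DOUBLING_YEARS * math.log2(factor)
        print(f"{extra_orders} extra orders of magnitude -> ~{extra_years:.0f} more years")

A couple of extra orders of magnitude adds only a decade or two, but if chemistry or geometry multiplies the requirement by something like 10^6 or more, the brute-force timeline stretches toward half a century or beyond.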

The possibility stands, however, that the particular form of neural computing the human body employs is really very inefficient when it comes to producing intelligence. Perhaps we will stumble upon a better way of doing things somewhere along the way, but I find that unlikely. Very probably we will not understand how to do intelligence with finesse until we are capable of doing it with sheer brute force.