“A feedback loop involves four distinct stages. First comes the data: A behavior must be measured, captured, and stored. This is the evidence stage. Second, the information must be relayed to the individual, not in the raw-data form in which it was captured but in a context that makes it emotionally resonant. This is the relevance stage. But even compelling information is useless if we don’t know what to make of it, so we need a third stage: consequence. The information must illuminate one or more paths ahead. And finally, the fourth stage: action. There must be a clear moment when the individual can recalibrate a behavior, make a choice, and act. Then that action is measured, and the feedback loop can run once more, every action stimulating new behaviors that inch us closer to our goals.”—
This article explains my work and the universe too, I guess. A great read, or a listen via the MP3 audio.
I find myself comparing programming languages to brain functions all the time.
While I have never designed a programming language myself, I have used many of them, and I think I know their strengths and weaknesses quite well by now.
But more importantly, I have studied brain functions and the neural sciences to some extent over the years (especially starting with reading von Neumann’s and Turing’s books).
It is no accident that von Neumann was also one of the founders of the computer paradigm. After all, he modeled computers on the human prototype he was studying.
Let me explain why:
Preamble: I must explain some basics of brain functionality in order to explain my conclusions:
- The brain is a neural network
- Neural networks are connections of neurons
- Neurons are cells
So what’s the brain for?
The brain is a world simulator. It’s a big approximator. It takes input from its sensors (the eyes, etc.) and produces output (which in some way should reproduce what’s happening, or will happen, in the real world).
How does the brain do that? It does it by a process called “learning”, which was studied quite extensively by von Neumann, who also reached very surprising conclusions about the learning process itself. All that neural networks do is modify their connection weights all the time, until the output produced by the network is coherent with what the sensor input says. It’s a sort of “calibration” that takes place during learning. When you learn things right, chances are you will draw the right conclusions before things happen, and you will probably survive. Yes, you see that car rushing at you at 100 km/h. Well, your brain should be able to precalculate that if it continues on its trail like that, it will hit you in the next minute. Therefore it’s better to jump into the bush first. Your brain might even be able to precalculate that the driver might be drunk, or that he will see you in the next 20 seconds, and therefore tell you to wait a moment so as not to jump to conclusions too fast or too early. Your brain is fantastic!
Ok. Probably you knew this already.
What does this have to do with programming languages?
Well. If you think about the above, there is a powerful conclusion you can make, which is:
The objects that run through such functions are objects encoded as electrical impulses, coming directly from our sensors.
In computers, sensors quantize this information into 0/1 bit streams.
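A minimal sketch of that quantization step, assuming an arbitrary 4-bit resolution and an input normalized to the range 0.0 to 1.0 (both assumptions are mine, for illustration):

```python
# Toy quantization: turn an analog sensor reading (0.0 to 1.0) into a
# fixed-width 0/1 bit string, the way an ADC digitizes its input.
def quantize(value, bits=4):
    levels = (1 << bits) - 1                  # 15 discrete levels for 4 bits
    clamped = max(0.0, min(1.0, value))       # clip out-of-range readings
    level = round(clamped * levels)           # nearest quantization level
    return format(level, f"0{bits}b")         # e.g. level 10 -> "1010"

print(quantize(0.66))  # -> "1010"
```

Real analog-to-digital converters use many more bits and stream samples continuously, but the principle is the same.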
This article was really fun to write. I think creating the brain analogy was a good idea. Comparing programming-language features to brain functions works quite well, which suggests a strong correlation between the two.
Have another brain analogy? Drop me a line.
Brain reasoning, Bayesian networks, abstraction and Markov chains
Surely I’m just reinventing the wheel here, but the human brain looks like a sophisticated Bayesian Markov-chain machine with abstraction capabilities.
It is a Markov-chain machine in the sense that our brain, through learning and experience, builds and updates over time a large matrix of conditional probabilities. By counting the number of instances of concurrent events A, B, C, …, I, X in real life, the brain constantly updates the probability of outcome X given the occurrence of A, B, C, …, I. Or in mathematical terms, P(X | A, B, C, …, I).
Abstraction capabilities then allow the brain to build upon fundamental conditional probabilities P1, P2, P3, …, Px based on direct experience, to create a second, third and, iteratively, an n-th layer of more complex probabilities, for example P(Px | P1, P2, P3, …).
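The counting-based estimate described above can be sketched as follows. The events, observation list, and the second-layer “umbrella” rule are all invented for illustration; the point is only the mechanism of estimating P(X | A, B) from co-occurrence counts and then feeding that estimate into a higher layer:

```python
from collections import Counter

# Toy conditional-probability machine: estimate P(X | A, B) by counting
# co-occurrences of events across observed "experiences".
joint = Counter()    # counts of (A, B, X) triples
context = Counter()  # counts of (A, B) pairs

observations = [
    ("dark_clouds", "wind", "rain"),
    ("dark_clouds", "wind", "rain"),
    ("dark_clouds", "wind", "sun"),
    ("clear_sky", "wind", "sun"),
]
for a, b, x in observations:
    joint[(a, b, x)] += 1
    context[(a, b)] += 1

def p(x, a, b):
    """P(X = x | A = a, B = b), estimated from the running counts."""
    return joint[(a, b, x)] / context[(a, b)]

print(p("rain", "dark_clouds", "wind"))  # 2 of 3 such contexts -> 0.666...

# Second "abstraction layer": a first-layer estimate becomes the input
# to a higher-level decision, e.g. act when P(rain | ...) exceeds 0.5.
take_umbrella = p("rain", "dark_clouds", "wind") > 0.5
```

Each new observation updates the counts, so the estimates keep recalibrating as experience accumulates.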
This wealth of information could be encoded in chromosome-like strings to be passed on and further processed.
Mon 5:55 Spontaneous Creativity
Uprooted from these desolate mental trenches, we flash forward into scenes not quite real in the ordinary sense, yet still vivid in their proliferation round neurons somewhere near the frontal cortex of now wildly ecstatic electric insights. An unseen flurry of activity happening with both text and synapses takes hold of imaginations, overlapping the interiors of enclosed eyelids with an orchestra of spontaneous cerebral events, lighting and coloring in a myriad of shadowy imagery. As the written word trails on, the conceptual landscape of this printed text etches into thought the forefront of everlasting creativity.