Thursday, June 1, 2017

One Notion on "Black Mirror" Season 3, Episode 2

Jumping straight into the notion that kept firing in my brain as I got closer to the end of the episode: will we reach a state in which intelligent software can "learn and adapt on the fly" by monitoring our brain activity and working out how best "to frighten us" (or, in other scenarios, how best to treat or guide us)?

This would be the very pinnacle of AI overpowering humans: the ability to understand what is going on in the human brain by monitoring its activity. The capacity to decrypt neuronal firings into their exact meanings would leave humans existentially naked.

But for now we should recognize, first of all, that our brains are made of neurons and neuroglia, not of images or audio. We cannot open a person's brain and see the car he was dreaming of, or his first sexual encounter. The difficulty of the idea mentioned above is well known to anyone who is into neuroscience. Our external behaviors arise from impulses that take place in our brains. Our memories arise when certain factors stimulate a certain neural pathway, and by that the perception of a path treaded upon earlier is experienced. We should therefore understand the limits of our current tools and realize that what we can actually measure are neural firings (creating brain activity that can be tracked), the opening and closing of ion channels that mediate excitatory or inhibitory mechanisms in neurons, and brain activity observed during certain behaviors (certain regions being active or inactive). We can also assess certain neurotransmitters being released that further excite or inhibit. Between our neurons there are no images or audio saved in any tangible way; it is the firings that create them.

For a piece of software to learn and adapt by monitoring our brain activity is outrageously dumbfounding. How could a machine understand that certain neural impulses meant a certain thing? Firings fall within a range of similarity, but the subtle nuances lie in the subjectively formed neural network of each individual. The probability of a machine "guessing" what a certain pattern of brain activity meant just seems confusing, and rather "impossible".



The ability to see a certain brain region become active in correspondence with a certain behavior exists, but looking at brain activity and then saying what was being thought, or what is roaming in the brain, is not yet there, and it seems far-fetched as well. So far, AI is greatly advanced in quantitative knowledge: it can analyze huge databases, produce wonderful predictive results, and give amazing guidance. As someone noted, "after all, being advanced in quantitative analysis gives high qualitative results." I have been telling friends that it would be better to feed your blood results into a system that works on them and returns a diagnosis and differential diagnosis, rather than relying on a well-experienced GP. Even with all this advancement, the capacity to interpret firings as particular memories, and to create simulations based on what was acquired from those firings, "still" seems a rather astonishing, or even scary, thing.
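As a toy illustration of that quantitative point, here is a minimal sketch of the "blood results in, diagnosis out" idea. Everything here is invented for illustration: the feature names (hemoglobin, glucose), the value ranges, and the labels are hypothetical, the data is synthetic, and the "diagnosis" is just a nearest-neighbour lookup against past cases; it is not medical logic.

```python
import random

random.seed(0)

# Synthetic "patients": (hemoglobin g/dL, fasting glucose mg/dL) -> label.
# Ranges and labels are made up for the sketch.
def make_patient():
    if random.random() < 0.5:
        return (random.uniform(13, 16), random.uniform(75, 99)), "normal"
    else:
        return (random.uniform(8, 11), random.uniform(75, 99)), "anemia"

data = [make_patient() for _ in range(200)]

# Nearest-neighbour "diagnosis": return the label of the most similar
# past case (glucose is scaled down so both features weigh comparably).
def diagnose(hb, glucose):
    def dist(case):
        (h, g), _ = case
        return (h - hb) ** 2 + ((g - glucose) / 10) ** 2
    return min(data, key=dist)[1]

print(diagnose(9.5, 85))   # low hemoglobin -> matches the "anemia" cases
print(diagnose(14.5, 85))  # normal panel -> matches the "normal" cases
```

The point is only that pattern-matching over many past cases is exactly the kind of quantitative task machines already do well; interpreting neural firings as memories is a different problem entirely.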
