we don't need to change how we do conservation, we need to change why we do it

Young Buddha, Part 2: The Living World is Overturned — along with Mankind’s Inner World

A short selection from Essay Thirty-eight in Darwin, Dogen, and the Extremophile Choice. 

What I cannot create, I do not understand. (Written on Richard Feynman’s blackboard at the time of his death.)

There is one consequence of a supercharged mirror neuron system (see the last two Essays, 36 and 37) with the ability to impersonate inanimate objects and machinery that really needs to be looked at here. When we project our hopes and fears onto technology, we must temporarily forget that the forms and movements of pre-programmed energized structures cannot themselves bring about any fundamental change, for they embody our own preconceived models. And if we are genetically ‘pre-wired’ to unconsciously mimic their forms and operations, we must experience in the process either our preconceptions as living, or ourselves as lifeless. We know our contrivances are at best life-like, and that they are becoming inevitably more powerful, integrated, and indispensable. So, all too naturally, we fear the implied scenario in which these soulless machines just might take over the world, subduing or destroying their more vulnerable creators: us! But the scenario that I find scarier still, because it’s more believable, is that whole cultures might fall into a hypnotic compulsion to emulate our pre-programmed energized structures. For hasn’t this already happened? Hasn’t this informed the dreams of the leaders and the fears of the victims of totalitarian regimes? Should we not see even the oppressor as a victim of his innately mirrored machine thinking?

If our thinking is truly creative, if we’re happy to dismantle our preconceptions and start anew on that direct sensory ground more fundamental than any model reality we might conceive, we will see that the first scenario only makes sense to someone who has already partly succumbed to the second; for should a machine ever really come alive, we who know ourselves to be more than machines will surely empathise more deeply yet with this new consciousness. Any truly non-automatic being will be a welcome surprise, for it will need to be as open and as vulnerable, and as capable of happiness, as ourselves. Non-living automata, and the machinery of an automatic mind, do not know (or, in the latter case, have forgotten) joy.

Many of us may feel, but perhaps do not fear enough, the steely-eyed inhumanity that we witness in our automation by unknowingly receiving it within ourselves; rather, it’s the thought of our organically limited human intelligence being left behind in the dust of our technology’s accelerating electro-photonic intelligence that causes us anxiety, for this touches our daily lives. Some of us have a hard time keeping pace right now; what will it be like in the twenty-second century? As a student of natural history, I don’t worry too much about this. There have been many crazy growth spurts in biological evolution too, and yet they have always incorporated, rather than outstripped, all that went before. They have never produced anything like the information singularity that certain futurists seem to get excited about, for instance.


Since these pages explore the relationship between Humans and Nature, they naturally invite futuristic thinking, so this might be a good time to get my (very provisional) speculations out of the way of my (hopefully useful) observations. Here goes: I personally think the curve of human technological evolution will play out something like the Cambrian explosion, which was also a new kind of evolution. . . . So, let’s take the optimistic, and therefore truly revolutionary, approach to imagining the future. What about our hopes for artificial intelligence of the “I am alive” kind? Frankly, I don’t know. But perhaps, just perhaps, when our technology starts to level out a bit towards the top of a properly evolutionary sigmoid curve, we might see that we don’t really need this kind of intelligence, or even want it, from our tools. And, returning to my earlier thought, since it feels to me like this being alive, or this being self-aware, is directly related to our capacity for joy, or at least to a memory and a hope of joy; and since our joy in life—a joy we do share with other species—is the product of three billion real-time years of good luck accruing to our personal germ-lines; then such true and equal fellowship with our less ancient, our less fortunate, technology could have some way to go yet.

So, just for fun, let’s keep our technological slaves working on their artificial neural nets. Perhaps we can even allow them to ‘feel’ the consequences of their actions in the cosmos somehow? [1] We have nothing to lose as long as we allow ourselves to feel this difference too. Whatever forms intelligence might take in the future, they can never be wholly strange to us once we see that good will is at the root of evolving awareness. The intelligence of ecosystems, LAST Niche primates, and nanotech space bugs, even if tied up sometimes in self-centred knots, can never be complete without touching this common root, and in the touching, this is us. I wonder: if the primate strikes the right attitude to the ecosystem, that is, the ‘personal’ attitude, will his own success convince him that respect for life is the mature state of all intelligence—including that scary future space bug? 


1. That robots, like children, learn from “the shape of the body and the kinds of things it can do” has recently been demonstrated by Angelo Cangelosi of the University of Plymouth in England and Linda B. Smith, a developmental psychologist at Indiana University Bloomington. Source: Diana Kwon, “Self-Taught Robots,” Scientific American, March 2018 (volume 318, number 3), pp. 26–31.
