2016-08-30

Immortality, a false good idea

Immortality is trendy. According to some so-called "transhumanists", it is the promise of artificial intelligence in the short or medium term, at the very least before the end of the 21st century. Considering the current advances in this field, we are bound to see amazing achievements which will shake our very notions of identity (what I am) and humanity (what we are). If I can transfer, one piece after another, neuron after neuron, organ after organ, each and every element which makes up my identity into a human or machine clone of myself, supposing this is sound in theory and doable in practice, will this duplicate of myself still be myself? The same one? Another one? And if I make several clones, which one will be the "true" one? Do such questions make any sense at all? All this really looks like just another, high-tech, version of the Ship of Theseus, and our transhumanists have no better answers than the ancient philosophers to the difficult questions about permanence and identity that this old story has been raising for more than two thousand years.
None of those dreamers seem to provide a clear idea of how this immortality is supposed to be lived in practice, if we ever achieve it. A never-ending old age? Not really a happy prospect! No, to be sure, immortality is only worth it if it comes with eternal youth! And even so, being alone in this condition, and seeing everyone else grow old and die, friends, family, my children and their children, doesn't that amount to buying an eternity of sorrow? Not sure how long one could stand that. But wait, don't worry, our transhumanists will claim, this is no problem because everybody will be immortal! Everybody? You mean every single one of the 10 billion people expected to be living by 2100? Or only a very small minority of wealthy happy few? But let's assume the (highly unlikely) prospect of generalized immortality by 2050. In that case there will be not 10 but 15 billion immortal people at the end of the century if natality does not abate. That's clearly not sustainable. But maybe when everyone is immortal, there will be no need to have children anymore, and maybe at some point it will even be forbidden due to shrinking resources. Instead of seeing your children die, as in the first scenario, you will not see children anymore. Not sure which is the worse prospect!
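For what it's worth, a rough back-of-envelope sketch gives the same order of magnitude as that 15-billion figure. The numbers below are assumed round values, not taken from the post: about 10 billion people alive in 2050, about 130 million births per year held constant, and essentially no deaths once ageing is "cured".

    # Back-of-envelope only, not a demographic model; all inputs are rough assumptions.
    population_2050 = 10e9      # assumed people alive in 2050 (round number)
    births_per_year = 130e6     # assumed constant ("natality does not abate")
    deaths_per_year = 0         # generalized immortality from 2050 onwards
    years = 2100 - 2050

    population_2100 = population_2050 + (births_per_year - deaths_per_year) * years
    print(f"around {population_2100 / 1e9:.0f} billion people in 2100")  # around 16 billion

The exact figure matters less than the direction: with births continuing and deaths gone, the curve only goes up.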
Either way, alone or all together, immortality is definitely not a good idea. And if it were, life would certainly have invented and adopted it long ago. But for billions of years, the evolution and resilience of life on this planet, despite all kinds of cataclysms (the latest being humanity itself), have been based on a completely different strategy. For a species to survive and evolve, individual beings have to die and be replaced by fresh ones, and for life itself to continue, species have to evolve and eventually disappear, replaced by ones better fitted to changing conditions.
So let's forget about actual immortality. We have many technical means to record and keep alive for as long as possible the memory of those who are gone, if they deserved it. To our transhumanists I would simply suggest making their lives something worth remembering. It's a proven recipe for the only kind of immortality worth having, the one that lives on in our memories.

[This post is available in French here]

2016-03-15

Handwriting questions and answers

Why stick to handwriting? 
It's so painful and slow! 


There are so many efficient technical ways to write and communicate now.


It might be good for museums and art schools, for poetry and diaries.
But why should I bother?


My handwriting is so ugly anyway. 
Why should I show it at all and why should I make others suffer from deciphering it?


But it shows too much about me ... I don't want it to be analyzed by graphologists.

In other words ...

2016-03-13

In praise of handwriting

This started with a post shared by +Teodora Petkova suggesting that people share their handwriting. I found the idea cool, so I started a Google+ collection. For the record, here is my contribution for today - complete with a spelling mistake (thousands of times).


2016-01-14

Otherwise said in French

With the new year I have started a kind of mirror of this blog in French, a long overdue return to my native language. I hope some readers of in other words will be fluent enough in French to make sense of, and hopefully enjoy, those choses autrement dites. The first posts are listed and linked below, each with a short abstract.
  • Toute chose commence par un trait on Shitao, the unity of painting, calligraphy and poetry in classical Chinese culture, and how the continuum of nature is divided into things by the single brushstroke. 
  • Cosmographie en orange et bleu a "just so story" about the separation of heavens and earth as seen and described by the first ontologist in the first days, and what happened to him on the seventh day.
  • L'ontologiste sur le rivage des choses on the illusion of so-called ontologies (in the modern sense of the term) thinking they have said what things are, when they only define how things differ from each other.
To be continued ... stay tuned!

2016-01-05

Desperately seeking the next scientific revolution

If you still believe the ambient narrative about the accelerating pace of scientific and technological progress, it's time to read You Call this Progress? on Tom Murphy's excellent blog Do the Math. I'm a bit older than the author, just enough to have seen a few last but not least scientific achievements of the past century happen between my birth and his. The paper by Watson & Crick on the structure of DNA was published in Nature a few days after I was born. My childhood saw the discovery of the cosmic microwave background and the general acceptance of the Big Bang theory, as well as the experimental confirmation and acceptance of plate tectonics. While I was a student, the standard model of microphysics was completed. Meanwhile chaos theory, whose mathematical foundations had been laid by Poincaré at the very beginning of the century, was setting limits on the predictability of the evolution of natural systems, even under deterministic laws.

This set of discoveries was in a way the grand finale of a golden age of scientific revolutions which contributed to our current vision of the world, starting in the 19th century with thermodynamics, the theory of the evolution of species, the foundation of microbiology and the unification of electromagnetism, followed at the beginning of the 20th century by relativity and quantum mechanics, two pillars of our current understanding of microphysics and cosmology, from energy production and nucleosynthesis in stars to the structure of galaxies and of the visible universe at large. Put together, those revolutions spanning about 150 years from 1825 to 1975 set the basis for the mainstream scientific narrative, giving an awesome but broadly consistent (if you don't drill too deep into the details, see below) account of the history of our universe, from the Big Bang to the formation and evolution of galaxies, stars and planets, our small Earth and the life at its surface, bacteria, dinosaurs, and you and me. A narrative we've come to like and make ours thanks to excellent popularization. We like to be children of the stars, and to wonder, looking at the night sky, whether we are the only ones.

As Tom Murphy clearly argues, this narrative has not substantially changed in 40 years, nor has it been seriously challenged by further discoveries. Many details of the story have been clarified, thanks to improved computing power, data acquisition and space exploration. We discovered thousands of exoplanets as soon as we had the technical ability to detect them, but that did not come as a surprise; in fact, what would have been really disturbing would have been not discovering any. The same lack of surprise came with gravitational lenses, first observed in 1979 but predicted by general relativity. And no unexpected new particle has been discovered despite the billions of dollars dedicated to the Large Hadron Collider, the largest experimental infrastructure ever built.

Could that mean that the golden age of scientific revolutions is really behind us, and that all we have to do in the future is keep building, on top of those revolutions, an apparently unbounded number of technological applications? In other words, that no new radical paradigm shift, similar to the ones of the 1825-1975 period, is likely to happen? Before making such a bold prediction, it would be wise to remember those who famously proved wrong in the past by claiming that there was nothing left to be discovered.

Actually, major issues already known by 1975 are still open. In physics, the unification of interactions has to resolve deep inconsistencies between relativity and quantum theory, an issue with which Albert Einstein himself struggled until his death, not to speak of the mysterious dark matter, and of the dark energy needed by theory to account for the accelerated expansion of the universe. The latter is actually one of the rare important and unexpected discoveries of the end of the 20th century. In natural science, the process by which life appeared on Earth has still to be clarified, as does the related question of the existence of extraterrestrial life.

The number of scientists and scientific publications has kept growing exponentially since 1975, as has the power of data acquisition, storage and computing technology, yet with no result comparable in importance for our understanding of the universe to what Galileo discovered in the single year 1610 simply by turning the first telescope towards the Moon, Venus and Jupiter. The general pattern of science and technology evolution in the past has been that improved technology and instrumentation yield new results pushing towards theoretical revolutions and paradigm shifts. But strangely enough, the unprecedented explosion of technologies over the past half-century has produced nothing of the kind.

Is it really so? Some scientists claim that there actually is a revolution going on, but that, as usual, the mainstream scientific establishment is rejecting it. This is, for example, the position of Rupert Sheldrake in his 2012 article The New Scientific Revolution. Indeed, the theories Sheldrake defends, such as Morphic Resonance and Morphic Fields, are disruptive and alluring, but they are dismissed as non-scientific by the majority of his peers. I'm not a biologist, so I won't venture into this debate, and will let readers make up their own minds about it.

2015-12-17

Two cents of (natural) intelligence

Several months ago, my previous attempt to speak here about artificial intelligence, wondering whether computers could participate in the invention of language, met a total lack of feedback (it's not too late for second thoughts, dear reader). I found it quite frustrating, hence another attempt to venture onto this slippery debating ground.
In the follow-up to the previous post on facets, +Emeka Okoye makes strong points. When I wonder how much intelligence we want to delegate to machines, and for which tasks, his answer comes as a clear declaration of intent.
We are not delegating "intelligence" to machines rather we are delegating "tasks" ... We can have a master-slave relationship with machines ... We, humans, must be in control.
I appreciate the cautious quote marks in the above. But can it be that simple? Or is it just wishful thinking, as +Gideon Rosenblatt warns us in a post entitled Artificial Intelligence as a Force of Nature? The ecosystem of connected machines, distributed agents, neural networks and the like is likely to evolve into systems (whether to call them intelligent or not is a moot point) which might soon escape, or have already escaped if we believe some other experts on the topic, the initial purpose and tasks assigned by their human creators, and explore totally new and unexpected paths. This hypothesis, not completely new, is backed here by a comparison with the evolution of life, of which the emergent ambient intelligence would be a natural (in all meanings of the term) follow-up.

But the evolution of technologies, from primitive pots, knives and looms up to our sophisticated information systems, is difficult to compare to the evolution of life and intelligence. The latter is very slow, driven by species selection on time scales of millions of years, spanning thousands of generations. Behind each success we witness, each species whose perfect fit to its environment we marvel at, lie zillions of forgotten, miserable failures eliminated by the pitiless struggle for life. Nothing supports the hypothesis of an original design and intention behind such stories.
It's often said, as in this recent Tech Insider article, that comparing natural and artificial intelligence is like comparing birds to planes. I agree, but the article misses an important argument. Birds can fly, but at no moment did Mother Nature sit down at her engineering desk and decide to design animals able to fly. They just happened to evolve that way over millions of years from awkward feathered dinosaurs, jumping and flying better and better, and we now have eagles, terns and falcons. Planes, on the contrary, were designed from the beginning with the purpose of flying, and in barely half a century they were able to fly higher and faster than these natural champions of flight.

To make it short, technology evolves based on purpose and design; life (nature) has neither predefined purpose nor design. Intelligence is no exception. Natural intelligence (ants, dolphins, you and me) is a by-product of evolution, like wings and flight. We were not designed to be intelligent, we just happened to become so, as birds happened to fly. But computers were built with a purpose, even if they now behave beyond their original design and purpose, like many other technologies, because the world is complex, open and interconnected.

Let's make a different hypothesis here. Distributed intelligent agents could escape the original purpose and design of their human creators, maybe. But in that case, they are not likely to emerge as the single super-intelligence some hope for and others fear. Rather, like the prebiotic soup more than three billion years ago, their spontaneous evolution would probably follow the convoluted and haphazard paths of natural evolution, struggle for survival and all the rest. A recipe for success over billions of years, maybe, but not for tomorrow morning.

2015-12-14

Rage against the mobile

The conversation around the previous post about facets led me to investigate mobile a bit more, and what it means for the web of text. This is something I'd never really considered so far, and thanks to +Aaron Bradley for drawing my attention to it. Bear in mind I'm just an old baby-boomer who has never adopted mobile devices; touchscreens drive me crazy, and I still wonder how people can write anything beyond a two-word sentence on such devices. To be honest I do have a mobile phone, but it is as dumb as can be (see below). It's a nice, light, small object, feeling a bit like a pebble in my pocket, but I barely use it (by today's standards), just for quick calls and messages. Most of the time I don't even carry it along with me, let alone check messages, to the despair of my family, friends and former colleagues. But they eventually got used to it.


To make it short, I do not belong to the mobile generation, and my experience of the Web has been from the beginning, is, and is bound to remain a desk activity, even if the desktop has become a laptop over the years. I'm happy with my keyboard and full screen, so why should I change? And when the desk is closed, I'm glad to be offline and unreachable. I wish and hope things can stay that way for as long as I'm able to read, think and write.

With such a long disclaimer, what am I entitled to say about mobile? Only to quote what others who seem to know better have already written. In this article, among others, about the so-called mobile tipping point, I read this clear and quite depressing account of the consequences of mobile access for Web content.
The prospect for people who like to read and browse and sample human knowledge, frankly, is of a more precipitous, depressing decline into a black-and-white world without nuance [...] The smaller screens and less nimble navigation on phones lend themselves to consuming directory, video, graphic and podcast content more easily than full sentences. If the text goes much beyond one sentence, it is likely to go unread just because it looks harder to read than the next slice of information on the screen. [...] Visitors who access information via a mobile device don’t stay on sites as long as they do when using a desktop computer. So if you’re counting on people using their smartphones or tablets to take the same deep reading dive into the wonders of your printed or normal Web page messages, you’re probably out of luck.
Given the frantic efforts of Web content providers to keep their audience captive, everything is in place for a demagogic vicious circle of simplification: short sentences, and more and more black-and-white so-called facts. If this is where the Web is heading, count me out. I won't write for mobile any more than I use mobile to read and write.

I still have hope, though, looking at this blog's analytics. Over 80% of the traffic still seems to come from regular (non-mobile) browsers and operating systems. But I guess many of you visitors also have a mobile (smart)phone that you use otherwise. I wonder if and how you balance which device you use for which purpose. Are you smart enough to use mobile for apps, and to switch to a proper desk screen when you take the time to read (and write)? I'm curious to know.