
Sunday, 23 February 2014

Is Human Extraterrestrial Migration Banned by (Monotheistic) Religious Ethics – And Maybe Some Secular Too?

As you know, the ethical assessment and political evaluation of technological risk is one of my main areas of interest, and a focus of one of my main research publications, the book The Price of Precaution and the Ethics of Risk. In that book, I consider a number of futuristic scenarios to illustrate and test my theoretical ideas. One thing I do not consider there, however, is the vision of human migration to other planets. But this very scenario has now become the topic of some inflamed debate between a visionary entrepreneurial endeavour to that effect and the opinions of highly authoritative religious scholars.

As infantile, unrealistic and uneconomic as they may seem, there are actual plans for having humans migrate from Earth to other planets – Mars being one in immediate focus, for instance through the initiative Mars-One. I'm one of those who think that, while it may be prudent to actually work on such contingencies (this is one reason why I have accepted to serve as scientific adviser to the Lifeboat Foundation), making them a primary priority seems to me an immoral waste of resources in light of more pressing needs where there are no technological barriers to doing good (such as securing clean drinking water and sewerage installations for all people globally, or fixing the rules of global trade to be at least somewhat less to the disbenefit of those needing it the most). I don't, however, hold any principled objection to the idea of human extraterrestrial migration – to my mind it's about needs, likelihoods of success and priorities in light of what stakes are up for humanity at the moment.

Others, however, seem to take a more rigid stance. Thus, apparently, the General Authority of Islamic Affairs and Endowment (GAIAE) in the United Arab Emirates has issued a fatwa (i.e., a scholarly, allegedly authoritative interpretation of the tenets of Islam), according to which a one-way trip to Mars in the Mars-One style would be too risky and uncertain to be allowed under the ban against recklessly endangering human life:

The committee, presided by Professor Dr Farooq Hamada, said: “Protecting life against all possible dangers and keeping it safe is an issue agreed upon by all religions and is clearly stipulated in verse 4/29 of the Holy Quran: Do not kill yourselves or one another. Indeed, Allah is to you ever Merciful.”
Apparently, the strong wording of these learned clerics is partly motivated by the fact that...

Thousands of volunteers, including some 500 Saudis and other Arabs, have reportedly applied for the mission which costs $6 billion. The committee indicated that some may be interested in travelling to Mars for escaping punishment or standing before Almighty Allah for judgment.
 “This is an absolutely baseless and unacceptable belief because not even an atom falls outside the purview of Allah, the Creator of everything.  This has also been clearly underscored in verse 19&20/93 of the Holy Quran in which Allah says: There is no one in the heavens and earth but that he comes to the Most Merciful as a servant. (Indeed) He has enumerated them and counted them a (full) counting.”
The Mars-One initiative has chosen to respond to this assault on their project (and, I strongly suspect, on its financial viability) not primarily with ridicule or resentment, but in kind, arguing that the mission is in the general spirit of what some famous Muslim explorers have done in the past (which is not really relevant to the argument) and, more interestingly, that central parts of Islamic teaching would rather seem to condone the planned mission, and that the implied risk assessment of the GAIAE committee is flawed from an intellectual perspective:

Space Exploration, just like Earth exploration throughout history, will come with risks and rewards. We would like to respectfully inform the GAIAE about elements of the Mars One mission that reduce the risk to human life as much as possible. It may seem extremely dangerous to send humans to Mars today, but the humans will be preceded by at least eight cargo missions. Robotic unmanned vehicles will prepare the habitable settlement. Water and a breathable atmosphere will be produced inside the habitat and the settlement will be operational for two years, even before the first crew leaves Earth. Each of the cargo missions will land in a system very similar to the human landing capsule. An impressive track record of the landing technology will be established before risking human lives. It should be noted that the moon lander was never tested on the Moon before Neil Armstrong and Buzz Aldrin landed successfully on the Moon.
If we may be so bold: the GAIAE should not analyze the risk as they perceive it today. The GAIAE should assess the potential risk for humans as if an unmanned habitable outpost is ready and waiting on Mars. Only when that outpost is established will human lives be risked in Mars One's plan. With eight successful consecutive landings and a habitable settlement waiting on Mars, will the human mission be risk-free? Of course not. Any progress requires taking risks, but in this case the reward is 'the next giant leap for mankind'. That reward is certainly worth the risks involved in this mission.
It remains to be seen how the GAIAE committee will respond. The Mars-One reasoning isn't exactly fail-safe, since it comes down to how the importance of the mission is weighed in light of its cost, what that money could have been used for instead, and what those alternative activities might have implied in terms of truly valuable gain and of risk to human life and limb. My own theory would probably give the Mars-One option rather low priority in such light, I dare say without having made any more precise analysis (which, given the wide range of uncertainty, I doubt to be possible anyway). And I dare venture the guess that my theory is more permissive of technological adventure than any of the Abrahamic religions.

For this is my final reflection, inspired by an aside from my colleague Anders Herlitz: the reaction of the Islamic scholars of the UAE is a pretty logical one in light of the strong stance against the human taking of human life, not least one's own, in the scriptures of Christianity, Islam and Judaism. As noted by the pioneer theorist of the ethics of risk, the theologian Hans Jonas, this stance would seem to warrant a high degree of risk aversion as soon as such scenarios are among the options. For sure (I would say; it's more uncertain whether Jonas would be prepared to follow me), taking such risks may – as Mars-One suggests – be justified, but it takes special considerations and circumstances for that to be the case. In particular, venturing on risky missions just for the hell of it, or for making money, or for "doing something different", or for feeling important, or for expanding human boundaries, or some such, would in fact not seem to suffice. What would seem to be necessary is the presence of some realistic threat to human life or humanity, where the activity in question would be a necessary or, at least, reasonable response of escape. At the very least, the story of Noah's Ark would seem to suggest as much.

So, my wonder is really why the GAIAE committee is so alone in its critical response to the Mars-One initiative. Where are the other Islamic leaders? Where's the Pope? Where are the Lutheran archbishops or the many preachers of the free churches? Where are the chief rabbis? And, since there are also secular versions of the stance on the importance of human life, in particular one's own – where are the penetrating analyses from the Future of Humanity Institute and the Institute for Ethics and Emerging Technologies of the Kantian and (late) Wittgensteinian positions on this matter, just to mention the most obvious ones that would seem to qualify?

Sunday, 30 September 2012

Are Drones More Advanced than Human Brains?

'What?', you may rightfully ask – has the philosopher joined the club of positive futurists that he word-whipped so badly recently? How could the US distance-controlled search-and-destroy flying units popularly known as "drones" ever be compared to the complexity of the wiring or functionality of a real brain? Especially since said drones evidently fail massively (see also here) to do what they are supposed to. Not that I find the activities of humans in military operations much more tasteful, mind you – just so that we can set that little debate aside for now.

But it's not me, folks! It's no less an intellectual giant than the very President of Yemen, Abed Rabbo Mansour Hadi, elected by a massive majority as the sole candidate in 2012, who says so – or seems to be saying so, according to the Washington Post (reported also in my own country here and here). Yes, that's right, the very same Yemen where the activities of drones have recently been heavily criticised for inefficiency, inhumanity and political counterproductivity (see also here). What he says more precisely, in response to the exposure of the increased use of drones in Yemen, is this:

Every operation, before taking place, they take permission from the president /.../ The drone technologically is more advanced than the human brain.

Now, it is not my place here to criticise the decisions of the president as such; I'm sure there is more than one political delicacy for him to consider in these matters. However, since he seems to be basing his decision at least partly on the above assessment of the capacities of drones, there seems to be a tiny bit here for the philosopher to have a word about. Simply put: are there any reasons to hold what he says about drones and brains to be true?

I'm sure that your initial reaction is the same as mine was: obviously not! The laughably narrow computational, sensory and behavioral capacity of a drone, comparable to the immensely complex biological wiring of the human brain and its sensory and nervous system, capable of so much more than merely killing people – come off it! So, why not just say that? you may wonder. Because, on further inspection, I changed my mind. I confess that the view just stated is indeed one interpretation of what the president says, but it is far from the only one, and even less the most reasonable one.

Consider again the comparison made in the quote above.

Note, for instance, that it is made between a part of humans (their brain) and the whole of the drone. Human brains are in fact not capable of doing much unless assisted by the rest of the human body. This is in contrast to a drone, which includes not only its computer and sensory mechanisms, but a whole lot of mechanics as well. This makes the drone capable of, e.g., flying and bombing, which the human brain as such is clearly not capable of.

You may retort that the brain can feel and think much better about many more things than the drone computer (plus sensors), but that's also a simplification. For sure, a drone is probably far too simple a machine to be ascribed anything like beliefs or feelings (or any sort of sentiment or attitude beyond purely behavioral dispositions of the kind that can be ascribed to any inanimate object). But we also know that a computer has a capacity for computation and quantitative data processing far beyond any human with regard to complexity and speed. So when it comes to getting a well-defined type of task done, the drone computer and sensors may very well do much better than any single human brain, or any group of them.

That something like this is the intended meaning of the statement is actually hinted at by the use of the qualifier "technologically". One interpretation of that could perhaps be the same as synthetic or manufactured, in which case the statement would become trivially true, but also empty of interesting information: we already knew that brains are not artifacts, didn't we? But the word "technology" may also signify something other than the distinction between natural and artificial; it may rather signify the idea of technology as any use of any type of instrument for the realisation of human plans. In effect, the qualitative comparison between drones and human brains has to be made relative to the assumed goals of a specific plan. In this case, I suppose, that of killing certain people while avoiding killing certain other people. This, of course, opens the issue of whether one should attempt to kill anybody at all, but it is rather obvious that the president does not signal that question to be open for debate, in spite of the fact that pondering it would be a task where a human brain would for sure be vastly superior to a drone.

A pretty boring retort at this stage could be to point to the fact that if it hadn't been for human brains, there wouldn't be any drones. One could add, perhaps, that the operation of drones takes place under the active guidance and control of humans (including their brains). But surely, what the president is getting at is how things would have gone had humans tried to carry out whatever orders they are trying to carry out without access to the drones.

And, plausibly, this is what the president means and claims: that humans using drones get more of the people killed who are supposed (according to given orders) to be killed, and fewer of those who are not supposed to be killed, than if human soldiers or fighter planes had been used. The statement carries no deeper ramifications for cognitive science or philosophy, except perhaps that our celebration of the capacities of the human mind and brain tends to become less obvious, and to look more self-serving, when taken down from more general and unspecific levels.

It is, of course, an empirical question whether the claim about greater efficiency (relative to some particular set of orders or goals) is correct (as seen, some doubts are expressed in the Washington Post stories), but it is not an a priori obvious falsehood. To assess it would, however, require access not only to body-count data and such, but also to the precise content of said orders with regard to, e.g., accepted degrees of collateral killings, losses to one's own troops (guaranteed to stay at zero when using drones) and so on. Which, of course, will not be forthcoming. The dear President Hadi can say whatever he wants about the relative capacities of drones and brains and never be faulted.

For my own part, I cannot but remember the rendering (from the book The Man Who Knew Too Much) of a response by Alan Turing in a radio debate on artificial intelligence in the 1950s to the challenge that no computer could ever compose sonnets (or any other poem, one supposes) of similar quality to those of Shakespeare. Turing said that while it was possibly true that computer poems would not be enjoyable for humans, he rather thought that a computer might be able to compose poems of great enjoyment to other computers. If anyone has a more exact reference for this, I would be happy to receive it.