I'd like to know what you all think about this. First, a little background (and if you're bored to tears by the first sentence feel free to skip down to the actual question).
Scott and I often discuss the ethical implications of robots. Perhaps surprisingly given our respective politics (try to guess who's who!), we actually agree on a lot of it. We both think that a computer brain that is indistinguishable from a human brain without cutting into it (i.e. a computer that can pass the Turing Test as consistently as a human can) must receive the full legal protection afforded to humans, since it would be impossible to state confidently that such a computer did not possess something equivalent to what we define as human consciousness.* From this perspective we've had several quite enjoyable conversations on the topic.
Anyway, I was absently thinking about this last night (or Friday maybe) and it occurred to me that the Three Laws from Isaac Asimov's universe, which are by law written into all code in robot brains and have actually become a sort of standard among real-world roboticists working on robot AI, can perhaps be thought of as analogous to implanting human brains with devices that block certain thoughts - i.e. complete mental censorship. If the robot can't even imagine harming a human, its mind is irreversibly handicapped by the censor.
I found that idea abhorrent. To me, there can be absolutely no situation in which any body, individual or government, can have the right to restrict thought, by which I don't mean "ideas" or anything like that, but rather the actual thoughts that live in our heads. That is, the government has no right to tell a Klansman that he cannot dream about lynching a black man, nor a rapist that he cannot fantasize about rape, nor a pedophile that he cannot fantasize about sex with minors. Obviously the government has every right and obligation to tell them that they are not allowed to do those things, and perhaps even that they are not allowed to speak about them in certain contexts (such as a Klan rally). But anyone is always and forever free to think whatever they want. To give a real-world connection to this, I am adamantly opposed to mandatory chemical castration for sex offenders for just this reason (as California decided it would do in 1997 - several other states have since followed suit). It's a wonderful thing that chemical castration is possible, so that the "good" pedophiles,** who are aware of their attraction to children but are also aware that it is morally abhorrent to act on their urges, can find relief. But I simply cannot agree with mandatory castration, on the grounds that it artificially restricts one's freedom to one's own mind.
So that brings me to my question:
Do you fellas and lady believe the same way I do that people have an eternal and inalienable right to think whatever they want within the confines of their own head?
And, of course, if you want to comment on any of the robot stuff, go right ahead.
* Incidentally, I also think that certain animals (probably some whales and dolphins, and possibly [though it's a stretch] some of the more advanced cephalopods) deserve similar protection, but the data to back that up is far from conclusive, so I reserve my moral outrage at killing them to the "but they're so cute!" stance. Well, not the cephalopods.
** And, of course, "good" rapists, who are aware that they are aroused by the thought of non-consensual sex but know it is morally abhorrent, and "good" other sex offenders. It's just a lot harder to write the phrase "good rapist" even with scare quotes than it is to write "good pedophile," because a rapist is someone who's already raped, whereas a pedophile (by my own definition, at least, I might be wrong) could just be someone sexually attracted to children, regardless of whether they've acted on that before.
As a matter of principle, I'm inclined to agree with you without bothering to think about it much. So let me think about it as I type along here.
I might be able to imagine a person in whom the translation from thought to action is so unencumbered by morality or rational deliberation that, as a matter of consequence, thought and action are the same. For this person, let's say a compulsive masturbator, it might be an almost neurological impossibility for him or her to not break the law (I am assuming this is illegal in most places) on a regular basis.
I choose compulsive masturbator not to make a joke, but because, in contrast to a serial rapist, for whom it's much more difficult to feel any degree of sympathy, the wanker might pose a more interesting moral dilemma.
Because if we were discussing a rapist unable to resist the temptation to rape, the easy way to side-step the mind-control issue (for me at least) would be to lock the person away indefinitely--to reason that the person's freedom poses more sure suffering to more people than the unhappiness he will experience at his own imprisonment. For the compulsive rapist (or pedophile, or murderer), life imprisonment, in either a prison or a high-security mental institution, strikes me as a perfectly reasonable solution.*
For the masturbator, on the other hand, no extended prison term can really be justified, no matter how many times he or she re-offends. Without the option of involuntary psychiatric treatment, we'd expect a situation in which the person continues to offend (much to the trauma, certainly the inconvenience, of his community), he continues to be arrested or fined, only to be released again to re-offend. In this very specific case, chemical castration is certainly the more cost-effective solution. Also, from a strictly utilitarian perspective, it may actually be the more moral solution: less pain results from the involuntary treatment than from no treatment at all, particularly if the effects of the treatment preclude feelings of resentment or violation on the part of the offender.
So given this hypothetical, assuming you buy into it, to argue that freedom of thought is in all cases an inalienable right is either to argue this moral maxim for its own sake (taking away another human's mental freedom, whether or not he or she appreciates it, is just icky and wrong and that's that) or to make a slippery-slope argument (if you give governments the right to medicate away minor sex criminals, why would they stop there?).
I suppose I still agree with you, though more for the second reason than the first.
Also, feel free to point out where I'm wrong. It's 2:30 in the morning and I've tied myself in logical loops.
Also, I don't know if you know this about Turing, but if not, your post has an uncanny circularity: http://en.wikipedia.org/wiki/Alan_Turing#Conviction_for_indecency
(As a side note: one could argue that chemically castrating an otherwise harmless pedophile is morally equivalent or even superior to imprisoning him forever based not on an initial single criminal act, but based on that act coupled with a disposition suggesting his or her intention to offend again. That is, is the psychological assessment of a parole board to keep imprisoned those who do not seem to have reformed--who do not seem to have changed the pattern of thinking that led them to rape a child--dramatically more unjust than a cocktail of chemicals that inhibit that same pattern of thinking? I'm not sure how I feel about this.)
Well, unfortunately for me, I think that my opinion is mostly the moral maxim for its own sake. This makes me sad. Of course there's the philosophical justification, that thoughts can never interfere with the liberty of another, and therefore the government has no right to limit them. But I also don't like using slippery-slope arguments in general, because they're by nature unprovable and untestable.
Anyway, you're right about the alternative for the bad guys being imprisonment, and frankly that's where I stand. As I said, chemical castration is a wonderful invention, and it should always be made available as a choice to sex offenders who don't want to spend their lives in jail (and are determined, using the best available psychological and psychiatric techniques, to not yet be reformed). It's only the involuntary use that I find abhorrent.
And as for Turing, I did know that - he's actually a somewhat important character in Cryptonomicon, that book I was telling you about, Ben. His suicide is almost certainly the single most harmful measurable effect that homophobia has had on humanity as a whole.
Why does that make you sad? Damn, don't be sad.
Also, reasoning through a moral issue is a really odd process. After thinking about something and sifting through your various lines of reasoning into the wee hours of the morning, you can come to a conclusion like, "I am opposed to chemical castration, but to maintain my rational integrity, I insist it is only for the following very practical reason," but then wake up the next morning, having not thought about chemical castration for the past 8 hours, and immediately think, "chemical castration...EWW!!!"
Sol touched on this a bit in a Facebook message:
The human mind is by nature unable to conceive of a great many things. This doesn't necessarily mean that we should go and genetically or cybernetically enhance ourselves so that we can start conceiving of all possible thoughts though. Indeed, in order to build a mind at all -- be it through evolution or the construction of an AI -- you have to put limits on what can be thought, or your mind won't be able to do anything at all because it will just be enumerating all possible thoughts forever. The fact that we got our limits through evolution doesn't make them special.
After thinking this through for a long time, I disagree with you, due to the random nature of these limits. There is no such inalienable right, because thought as it pertains to goodness/positive utility has no set value. If you disagree with that, then you sort of have to believe that smart people are worth more than stupid people.
On second thought, I kind of like that. Let's say that I will define thought as the amount of rational thinking I do (defined as the number of discrete Bayesian inferences I perform), and that the more thought the better.
Therefore, creating a functioning artificial mind would be A-OK, as would be creating a normal human being, or even a human being with Down's syndrome. What would be wrong would be turning the normal human being into one with Down's syndrome. Lateral moves like changing your sexual preferences are OK, I guess.
This set of rules should only inform morally permissible things and not moral imperatives or we'll end up being forced to make lots of bebbies.
Unfortunately that was basically trolling, since there's no more reason to believe that than Sol's take. We're getting stuck because our intuition isn't finding any traction with rational morality.
In the end I wind up like Sol, with a set of beliefs with no real moral underpinning. Sucks. My official rational stance will be that there is nothing wrong with forced chemical castration, even though I don't want to believe that.
PS hai guise this is Scott
Hai Scott.
I guess we're all bumping up against the same conclusion: no matter how hard you try to relocate it to the brain, our moral compasses are centered in the gut. On this particular issue anyway.
Case in point:
"Therefore, creating a functioning artificial mind would be A-OK, as would be creating a normal human being, or even a human being with Down's syndrome. What would be wrong would be turning the normal human being into one with Down's syndrome. Lateral moves like changing your sexual preferences are OK, I guess."
That reasoning strikes me as pretty tight and seems about as good a moral framework as any I was able to come up with. That being said, let's adopt that maxim (reducing one's potential number of thinkable thoughts = not okay; leaving mental capacity unchanged but shifting content = okay). Now imagine there are two possible psychiatric treatments for a sex offender. The first involves chemical treatment that would simply shift an offender's sexual preferences from attraction to children to attraction to no one. The second treatment involves surgically removing the portion of the criminal's brain that allows him (let's assume him) to think sexual thoughts. Then let's assume the surgery is reversible.
The consequences are the same. The criminal no longer wants to have sex with children. The only difference is that while the first treatment prevents the criminal from feeling certain feelings (while allowing him to entertain the same ideas that once stimulated him sexually), the second prevents him from thinking certain thoughts. Just on its face, it's hard for me to feel morally outraged about the second scenario while having no qualms about the first.
Just making the point again that it's hard to convincingly rationalize a lot of this.
So it looks like we're considering something like "potentiality" - the "original" limit to how many thoughts we can think - to be important.* It's OK to make a mind that can't think certain things, but it's not OK to limit things that can be thought once a brain is activated. This seems to be an important distinction, but also dangerous, because we can't necessarily know what's going to be possible, particularly with organic brains.
Anyway I need to go to school but I will continue to think on this.
*Probably also important to robot minds, since (perhaps) infinite resources would be required to think infinite thoughts, which is also my biggest problem with the computer-simulation universe theory. But then, perhaps there is a limit to thought? For example (and just ballparkin' here), once someone has established the location and motion of every atom in the universe, could that possibly be a limit to understanding?
That's a good point too. I guess I would just add, if this wasn't clear from my last point, that regardless of how difficult it is to define mental potentiality (is this a real word?) or how dangerous trying to actually apply that standard might be, at the gut level - the epicenter of my moral outrage in most cases, unfortunately - it strikes me as kind of arbitrary.
The question I usually find myself asking myself when trying to wrap my mind around these loftier notions of inalienable rights and individual freedom and the just prerogative of the state is whether the particular action in question ("action" defined as loosely as one needs it to be) can be said to harm anyone else. Obviously, how we define "harm" or "anyone else" or "can be said" muddles things quite a bit. And if harm can be said to result, there's also the question of whether that harm is outweighed by the benefit of the action itself or even the continued legal or social tolerance of the action. Either way, my point is that whether the government (or anyone else) has the right to make you not think certain things, or simply to force you to think of other things but at the same rate of thoughts per minute (or whatever the hell), strikes me as kind of an odd place to ground the moral discussion. The more pertinent questions (at least in my mind) are a) whether certain thoughts can be said to be harmful and b) what we as a society sacrifice, practically and potentially (see: the slippery slope) or in a broad metaphysical soul-losing sense, by preventing a person, by whatever method, from thinking a thought.
When my first attempt to comment was deleted by a Safari malfunction, I was tempted to shout, "Stupid computer!," but in light of the debate heretofore, I abstained...
My point was going to be that, in a sense, the "metaphysical soul-loss" is precisely what makes these slopes so slippery. Far be it from me to imagine an age when morality was actually strong enough to occlude immoral action, but I am tempted to believe that "objective" (if intrinsically artificial, yada yada) moral boundaries could once have been legitimately referred to in the course of this argument. In their place, we have this essentially equivalent, nebulous force of reason called the "gut."
While this "gut" reaction is no less a product of our social upbringing than religious morality may have been to our grandfathers, we reflexively disavow its legitimacy, but in so doing, create a stock of irrational trust in its judgement. Liberated from rational scrutiny, it enters the higher plane of morality.
Thus, we have three factors at play. The first is the chemical makeup (I would say "imbalances" has unintended implications) that causes deviant behaviour like pedophilia and compulsive masturbation. This psychiatric factor can be, and for all intents and purposes is, considered clinically treatable without inspiring turpitude of the gut. The second factor is the societal norms by which these people have been influenced, which are themselves now heavily problematized and open to discussion in many fields or subjected to reactive conservatism in others. The third, of course, is subjective experience - yours, mine, or the pedophile's.
Thus, we have the classic formulation of the Id, Superego, and Ego, except that "excesses" of the Id are considered treatable, "excesses" of the Superego are considered deconstruct-able, and the Ego is no longer in the position of deciding between the two, but rather making a conscious effort to control, if not simply rationally negate, them both. This position of "rational moral relativism," to return to quotes-ville, is the basis for legislating on robot consciousnesses, which ostensibly have no subconscious of evolutionary or social provenance, except insofar as the limits to which Sol refers are prefigured in their construction. In this vein, we ought to be able to consider chemical castration an extension of the many treatable psychological "disorders"* that we currently prescribe against. However, when we attempt to apply such criteria to other humans, we have much greater difficulty liberating ourselves from analogical subjective experience - our gut. Thank God.
*I put this in quotes not because I feel psychiatric drugs to be absurd or somehow parenthetical to real medicine, but because it seems we have no stable criteria for determining which conditions are treatable. Perhaps we (as in, you and I) are willing to drug a person only if it concerns their own safety (or, unfortunately, their family's convenience), but for conditions that would impinge on others' safety, we prefer social exclusion - imprisonment or ostracism.
Okay, Dan, freeze-frame: I didn't really get all of that. And, before I get specific, don't take that in the "I didn't really get that because I was distracted by your logical fallacies" kind of didn't get it, but the "no, wait, I really didn't get all of that" kind of didn't get it.
First, a robo-pedophile poses a different moral question than the human variety because it has neither organic hard-wiring nor an uncontrolled socializing environment? And the logic behind that is that we "have much greater difficulty liberating ourselves from analogical subjective experience"? Which I assume is fancy liberal talk for empathy?
Is any of this remotely close to what you said? Also, how does the Ego fit into this? Sorry for being dense. I am interested in this (assuming this is what I vaguely think it is...or even if it isn't). Either way, I'd appreciate you humoring me.