Monday, November 26, 2007

Bias, generalization and stereotypes: A half-baked lesson in Ethics

[[extra-curricular]]

[ACM suggests that some percentage of undergraduate CS courses should be spent on discussing
ethics. Maybe this will fill that role... At any rate, I have been sending it out since Fall 2003, and I see no reason
to break the tradition this year ;-) ]


Inductive generalizations are what allow organisms with their limited
minds to cope with the staggering complexity of the real world. Faced
with novel situations, our ancestors had to make rapid "fight or
flight" decisions, and they had to do biased learning to get anywhere
close to survival. So we can't really seriously ask people not to
generalize or not to have biases!

The problem, of course, is where this leaves us vis-a-vis
stereotypes--the "all Antarcticans are untrustworthy", "all
Krakatoans are smelly" variety. After all, they too are instances of
our mind's highly useful ability to induce patterns from limited
samples.

So what, if any, is the best computational argument against stereotyping? One
standard argument is that the stereotype may simply be wrong--in
other words, that it is a faulty (non-PAC) generalization, either
because it is based on selective (non-representative) samples, or
because the learner intentionally chose to ignore training samples
disagreeing with its hypothesis. True, some
stereotypes--the "women
can't do math", "men can't cook" variety--are of this form.

However, this argument alone will not suffice, as it leaves open the
possibility that it is okay to stereotype if the stereotype is
correct. (By correct, we must, of course, mean "probably approximately
correct," since there are few instances where you get metaphysical
certainty of generalization.)

What exactly could be wrong in distrusting a specific Antarctican because
you have come across a large sample of untrustworthy Antarcticans?

I think one way to see it is in terms of "cost-based
learning". In these scenarios, you, the learning agent, have
a high cost on false negatives--if you miss identifying an
untrustworthy person, or a person who is likely to mug you on a dimly
lit street, or a person who is very likely to be a "bad" employee in
your organization, your success/survival chances shrink.
 
At the same time, the agent has much less cost on false positives, despite
the fact that the person who is falsely classified by your
(negative) stereotype suffers a very large cost. Since the false
positive *is* a member of the society, the society incurs a cost for
your false positives, and we have the classic case of individual good
clashing with societal good.
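
To make the cost asymmetry concrete, here is a minimal sketch (Python, with invented cost numbers purely for illustration, not anything measured) of the usual expected-cost decision rule: the probability threshold above which "acting on the suspicion" becomes the cheaper option depends entirely on whose costs you put in the ledger.

# Minimal cost-sensitive decision sketch (invented numbers, purely illustrative).
# The agent decides whether to act on a negative stereotype (e.g., distrust
# someone) given its estimated probability p that the suspicion is true.

def decision_threshold(cost_false_negative: float, cost_false_positive: float) -> float:
    """Smallest p at which acting has lower expected cost than not acting.
    Expected cost of acting     = (1 - p) * cost_false_positive
    Expected cost of not acting =       p * cost_false_negative
    Acting wins when p >= C_fp / (C_fp + C_fn)."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Private ledger: huge cost for a missed threat, near-zero cost for wrongly
# distrusting an innocent person.
private = decision_threshold(cost_false_negative=100.0, cost_false_positive=1.0)

# Societal ledger: the falsely suspected person is a member of society, so
# their large cost is added back in.
societal = decision_threshold(cost_false_negative=100.0, cost_false_positive=80.0)

print(f"private threshold:  {private:.3f}")   # ~0.010 -- act on the flimsiest evidence
print(f"societal threshold: {societal:.3f}")  # ~0.444 -- demand much stronger evidence

The point is not the particular numbers, but that the same "rational" rule recommends very different behavior once the false positive's cost is counted.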

This, then, is the reason civil societies must go the extra mile to
discourage acting on negative stereotypes, so we do not round up all
Antarcticans and put them in boot camps, or stop all Krakatoans at
airport security and douse them with Chanel No. 5. And societies, the
good ones, by and large do, or at least try to do. The golden rule,
the "better to let a thousand guilty go free than imprison one
innocent" principle, and the general societal strictures against
negative stereotypes--are all measures towards this.

You need good societal laws (economists call these "Mechanism Design")
precisely when the individual good/instinct clashes with the societal good.

So, you are forced to learn to sometimes avoid acting on the highly
efficient, probably PAC, generalizations that your highly evolved
brain makes. I think.


Yours illuminatingly... ;-)
Rao

Epilogue/can skip:

It was a spring night in College Park, Maryland, sometime in
1988. The Terrapins were doing fine.  The Len Bias incident was slowly
being forgotten.  It was life as usual at UMD. About the only big
(if a week-old) news was that of a non-Caucasian man assaulting a
couple of women students in parking lots.  I was a graduate student,
and on this particular night I made my obligatory late-evening visit to
my lab to feign some quality work. My lab was towards the edge of the campus;
just a couple more buildings down Paint Branch Drive, and you get
to the poorly lit open-air parking lots.

On that night I parked my car and walked the couple of blocks to my
lab, only to remember that I had left a book in the car. So I turned and
started walking back to the parking lot. As I was walking, I noticed
that a woman walking in front of me turned a couple of times to look back. I remembered
that I had passed her going the opposite direction. Presently I
noticed her turning into the cryogenics building, presumably her
lab. As I passed by the cryo lab, however, I saw her standing
behind the glass doors of the lab and looking at me.


A few more steps along, it hit me with lightning
force--I was a false positive! The woman was ducking into
the lab to avoid the possibility that I might be the non-Caucasian
male reportedly assaulting campus women. I knew, at a rational level,
that what she was exhibiting was a reasonably rational survival
instinct. But that did precious little to assuage the shock and
diminution I felt (as evidenced by the fact that I still remember the
incident vividly, after all these years).
 
There is no better way to assess the cost of false positives than to be a false positive
yourself sometime in your life...

--------------
     ....not to make up your minds, but to open them.

    To make the agony of decision-making so intense that 
    you can escape only by thinking. 
                        -Tag line from Columbia School of Journalism Seminars

   "Induction  extends your expectation, not your experience"
 
