Monday, May 09, 2005

Good dog!

After reading about the stray dog that rescued the abandoned baby, I was reminded of one of my many odd philosophical views -- the view that animals can be morally praiseworthy. In my view, what makes one an appropriate object for moral praise is an intrinsic desire to help others. (Intrinsic desires are to be contrasted with instrumental desires. If you want to help others only because someone promises you a bone for helping others, you have an instrumental desire and that's not morally praiseworthy.) So any creature that can be motivated by feelings of sympathy or benevolence is a candidate for moral praise.

As often happens, my big enemy here is Immanuel Kant. Kant believed that motivations rooted in our desires rather than in reason itself couldn't be moral. I agree that animals don't have the capacity that's necessary for moral esteem on Kant's view -- they can't consider their reasons for acting, look for reasons why those are good reasons, and discover the foundation of all their reasons in their own nature as free and rational agents. Kant expressed the common belief that some kind of reflection or deliberation, which animals probably can't do, is necessary for moral praise, and that's the belief I don't share.

5 comments:

Neil Sinhababu said...

I'm not exactly sure how broad I want the scope of moral praise to be. But at the very least, I want to praise all creatures (and robots) who desire to help others. If they feel pleasure in knowing of another's happiness and are motivated to help others, they get praise from me. I don't know which of these your AIs have, so it's a bit hard to answer exactly.

Blue said...

I think this deterministic outlook kinda makes the difference between intrinsic desires and instrumental desires very ambiguous. An AI programmed to want to help people? A mother who doesn't desire to increase world utility so much as to pass on her genes when she feels compassion for her doe-eyed child?

Blue said...

Also (inspired by a certain online quiz going around), I wonder just how far the ability to have moral worth goes. Not just AIs or animals, but what about whole cultures? Can you judge the "belief of a nation" as ethically good or not, or is it a meaningless statement?

Anonymous said...

According to Candace Vogler, Kant would go so far as to say (I think) that an intrinsic desire to help others makes it less plausible that you are behaving morally. If you're helping others out of the warm fuzzy feelings you get from helping people, you may be fulfilling your duties through your actions, but you're doing so from your own inclinations, not for the sake of fulfilling duty. It's more plausible to believe that someone who's a cold-hearted bastard but who helps others because he feels obliged to is behaving morally.

This is why I did badly on my Kant midterm.

Neil Sinhababu said...

Tony, I think I can make sense of the moral worth of an entire culture. If people in one culture are more benevolent than people in another culture, I'd assign it higher moral worth. Cultures with more cruelty are worse.

You're right, Julian. The biggest role desire can play in Kant (as I interpret him) is in making someone aware of a particular option. But desire can't play any role whatsoever in the agent's justification of the action, or the agent will be trapped in heteronomy. This view sucks.