Sunday, November 11, 2012

Best Guesses

As people do not have objective knowledge about what is right and wrong, they must base their ethical systems on a combination of reason and intuition.  Mass intuitions are, except in a few specific circumstances, presumably more reliable than individual intuitions.  This does not mean that mass intuitions, or mass beliefs, are foolproof - it is entirely possible that every single person in the world believes something false.  The past provides evidence, however, that this is extremely unlikely.

It is also entirely possible that trees are pancakes and dogs fly around with jet engines.  This is also extremely unlikely, and relatively few people dispute that one may safely act on the assumption that what one observes is roughly equivalent to reality.  It seems no less justifiable to incorporate mass intuition into one's code of ethics.  In the end, all 'knowledge' is based upon certain unprovable assumptions - why must ethics be any different?

A Problem With Utilitarianism

While many philosophers reject utilitarianism as a guiding code of ethics, I think that it does have some value.  It seems quite commonsensical to suppose that one person's life counts for less than the lives of that same person and one other combined.  In fact, without any hint of utilitarianism, codes of ethics tend to fall to pieces.

However, utilitarianism is not suitable to serve as one's only source of ethical guidelines.  The most significant reason I see for this is that it requires inaccessible knowledge to function correctly.

Utilitarianism attempts to quantify the moral value of states and actions by assigning levels of value to different outcomes.  For example, if one can make either one person or two people a little bit happy, one should do the latter.  If one can make one person either a little bit happy or very happy, one should again do the latter.

Unfortunately, while it is fairly easy to tell one person apart from two, telling 'a little bit happy' apart from 'very happy' is more challenging.  The difficulty becomes acute when one must guess at exact values - is it better to make three thousand people a little bit happy, or three thousand and one people a little bit happy, but each slightly less happy than the three thousand mentioned previously?  To make the correct moral judgement, one would have to know the exact difference in the amounts of happiness experienced by the two groups, and that is something we cannot currently measure.  This, among other reasons, makes utilitarianism unsuitable as one's sole system of ethics.
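The three-thousand-person dilemma can be made concrete with a small sketch.  The happiness values below are entirely hypothetical - the point of the argument is precisely that no such numbers can actually be measured - but the sketch shows how the 'correct' utilitarian answer flips depending on which unmeasurable values one happens to guess.

```python
# A toy total-utility comparison.  Every happiness value here is an
# invented guess; utilitarianism would need the true values, which
# (as argued above) are inaccessible.

def total_happiness(people: int, happiness_per_person: float) -> float:
    """Total utility, assuming happiness simply sums across people."""
    return people * happiness_per_person

option_a = total_happiness(3000, 1.0)    # 3000 people, "a little bit happy"
option_b = total_happiness(3001, 0.999)  # 3001 people, each slightly less happy

print(option_a > option_b)  # True: under this guess, favor the 3000

# Shift the guessed per-person happiness by a mere 0.0009...
option_c = total_happiness(3001, 0.9999)
print(option_a > option_c)  # False: now the 3001 come out ahead
```

A change in the guessed value far smaller than anything we could hope to detect is enough to reverse the verdict, which is the inaccessible-knowledge problem in miniature.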