I'm going to keep chiseling away at the utilitarian ethic represented by Kenny's essay on poverty and Peter Singer's essay, "Famine, Affluence, and Morality." I want to emphasize, again, that I am not out to prove them wrong. If I did that, I doubt anyone would hear me out, because they would just assume I'm one of those ethically challenged people who don't want to admit they should chip in to Oxfam. I wouldn't be bugged by the whole thing unless I thought they were at least partially right. It's the paradox that bugs me – the fact that it seems like we should care more than we do, but don't. I am led by Harry's Law (in honor of my boss, Harry Shaughnessy): "If what you're doing seems to be really, really hard, you're probably not doing it right."
So, I did some homework on Singer. It turns out there is a name for this reluctance to accept his conclusions: the Demandingness Objection. Various philosophers have taken a crack at explaining why Singer is wrong. Some do little more than restate the observed intuition: we don't feel like it's a profound moral obligation, so we shouldn't treat it as such; or they say it's simply too demanding to be considered reasonable. Those approaches just beg the question: why is it unreasonable? A few others, such as Thomas Nagel and Philip Pettit, dig into the intuition that most people feel: that an infinite demand of the world's needy upon our resources somehow compromises our own interests too much. "What about me? Don't my happiness and suffering matter?" is the usual, sometimes explicit response. It defies our sensibilities to have our own interests swamped and made insubstantial by the demands of millions of others.
Even Singer, it seems, doesn't take his own prescriptions to that length. He claims to give 25% of his income to overseas relief, which is certainly generous, and yet still far short of the full-on sacrifices his ethics seem to demand. Even Singer, at some undetermined point, seems to think his own interests trump those of the little girl drowning from hunger right in front of him in far-away Namibia.
Of all the approaches to the Demandingness Objection, Pettit's seemed the most sensible: we are not responsible for ALL the world's suffering, merely our fair share of it. If everyone in the First World nations chipped in a little for the Third, then the problems could really be solved without anyone having to make superhuman sacrifices. At least that approach allows us to accept some responsibility for our fellow human beings, without turning ourselves into victims.
But even that approach has a certain cold, number-crunching aspect that doesn't sit well . . . not to mention that it opens up a whole new set of questions: how much is "my fair share"? Which needs are the ones that exert a moral demand? Is it enough to keep people from dying, or do I need to bring them up to a standard of living identical to my own? What constitutes suffering, or happiness, and are they completely correlated with material wealth? And how do we measure it?
In fact, the whole utilitarian project, once you truly start to implement it, runs into lots of problems with measurement. There is another well-known objection called the Mere Addition Paradox, which tries to run with the assumptions of utilitarianism and arrives at some weird conclusions. If we add people to the world who are somewhat less happy than everyone else, is the world diminished? If you say yes, then you might be led to conclude that the solution to inequality is to kill all the sad people (a la Monty Python's King Otto). If you say no, then through a series of calculations you might ultimately conclude that having an enormous number of marginally happy people is better than a smaller number of quite happy people, and the goal of our ethical manipulations becomes the multiplication of misery. The paradoxes suggest what I had outlined in the beginning of our discussion: rather than the moral intuition being right, or the ethical rule being right, perhaps neither is right, and we're trying to rationalize something that is not altogether rational.
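The "series of calculations" behind that second horn is just multiplication. Here is a minimal sketch of the arithmetic, assuming a crude total-utility measure (all the numbers and world names are made up for illustration):

```python
# Toy arithmetic behind the Mere Addition Paradox's second horn:
# if goodness is total happiness (population x happiness per person),
# a vast, barely-happy world can outscore a small, very happy one.

def total_utility(population: int, happiness_per_person: float) -> float:
    """Crude total-utility measure: sum of everyone's happiness."""
    return population * happiness_per_person

world_a = total_utility(1_000, 100)    # small population, very happy lives
world_z = total_utility(200_000, 1)    # huge population, lives barely worth living

print(world_a)              # 100000
print(world_z)              # 200000
print(world_z > world_a)    # True -- the "better" world by total-utility lights
```

The point is only that any measurement scheme that simply sums happiness will rank World Z above World A; the paradox lives in the arithmetic, not in any particular numbers.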
For the record, I don't consider myself a utilitarian, because I don't think that the ultimate goal of life is "happiness." Actually, one of the essays I keep meaning to write is my own arguments against happiness as the ultimate goal, some of which may resemble the "mere addition paradox." (I have to admit, I didn't follow your link there, so I'm not sure.)
But I also reject Harry's Law. The people we admire most--whether it's Mother Teresa or Michelangelo--are the people who overcame their natural desire for short-term gratification, made the unnatural and unintuitive choices, and then accomplished the very difficult task of living by those choices day in and day out.
I also reject the "fair share" argument. Imagine a world in which a hundred people (including you) are born millionaires, and another hundred people are born into near-starvation. And let's assume the other 99 rich people aren't doing their fair share. Should you help one poor person, and reason, "the other 99 should each also help one poor person, so it's not my problem that they aren't?"
I think you misunderstood Harry's Law. It is an observation that most (but not all) extremely difficult tasks can be made much easier (though not necessarily easy) with the correct insight. Or, put another way, most of our problems are due to a lack of understanding. Yes, I admire people who are willing to suffer for a goal, but I admire even more the people who can make needless suffering go away. I can't count how many times I would be struggling with a problem that was bogging me down, and Augie would ask me what I was doing, and I would explain it to him, and he would make three phone calls and suddenly the problem was solved. It pissed me off, because here I was thinking I was being all noble, suffering through the inevitable hardship of a noble goal, and he comes along and proves to me I was just wasting my time. There were days that if I heard "think outside the box" one more time I would scream.
Your response would be more appropriate if I had stated Harry's law as "If something seems really, really hard, you're probably doing the wrong thing."
BTW, I'm not all that thrilled with the "fair share" argument, either. It seems to make a different mistake, focusing on the "fairness" of the situation rather than the "goodness" of the act. But that also reminds me of Pinker's dissection of the moral impulse: the circuits for calculating "fairness" in an exchange relationship are part of a different algorithm than the one used for navigating communal relationships.