Friday, May 30. 2008
Is it better to strive for big changes, or little changes? The answer is, "Yes."
Little changes are, by definition, little . . . if they are done in isolation. You might feel good about kicking a coffee habit or losing a couple pounds, but those changes will remain relatively pointless if you stop at that. Those changes are only worthy goals if they are part of a larger overall mission. Self-improvement for its own sake is shallow, self-centered and ultimately pointless. So the only changes worth striving for, ultimately, are the big ones.
On the other hand, big changes are often the result of long, slow, step-by-step effort. I was reading in the New Yorker about the secret of Toyota's success, which, it turns out, is no secret: just keep getting better. The company's culture of kaizen – "continuous improvement" – holds every worker responsible for making things a little bit better every day. Making tiny improvements in manufacturing processes doesn't sound very sexy to the American ear – we celebrate breakthroughs and overnight sensations, not meticulous incremental change. And yet that relentless little-by-little push is what defines most of the business successes of our time. Toyota, now the leading maker of cars, is one example. Wal-Mart is another; Sam Walton conquered the world of retailing because he believed it was always possible to make the price a little bit lower.
Does that notion of kaizen apply to individuals as well as companies? Should we, as Émile Coué advised, tell ourselves "every day, in every way, I'm getting better and better"? Yes . . . but not in the way you might think. I don't think a human life can be optimized in quite the same way as an assembly line. Tiny tweaks to your diet or exercise or habits will not relentlessly move you toward happiness. However, the idea of continuous improvement is important because:
Thursday, May 29. 2008
I started reading a few more articles on Steve Pavlina's website. Some students turned me on to the site when I mentioned I was writing more – Pavlina wrote an evidently popular article entitled, for maximum googleability, "How to make money from your blog." I love his site's tagline: "Self-improvement for smart people." It has just enough elitist zing to cut through the squishiness that usually surrounds self-help sites.
I started examining some of Pavlina's other offerings, and I couldn't help but notice that most of his advice seemed to fall into two categories:
Why does advice run to one side of the spectrum or the other? Why do people seem intent on either swinging for the fences with monumental changes, or else making tiny adjustments? I have a few theories:
Wednesday, May 28. 2008
Dan Ariely's Predictably Irrational, a behavioral economist's notebook on how human decision-making is consistently flawed, is a book about meta-surprise. That is, not only are the results surprising, but what is even more surprising is that any of it should come as a surprise at all. Anyone who is a serious student of human nature should not be shocked at some of Ariely's scientific findings:
Ok, so human decision-making is biased in some consistent ways. I'm sure no one ever thought of that before . . . except, of course, marketers, advertisers, salesmen, politicians, dieticians, doctors, lawyers, stock brokers, preachers, ethicists, policemen, writers, designers, editors, entertainers, coaches, middle managers, psychologists, psychoanalysts, childcare professionals, nagging children, nagging spouses, prostitutes, drug dealers, advice columnists, spammers, and anyone who has ever tried to seduce anyone else. In fact, every profession that in any way deals with human beings has, as its primary mission, to exploit or defend against the biases in human judgment.
And yet we're still surprised.
The significance of that surprise is not lost on Ariely. He himself is not surprised that these biases exist, but he's utterly exasperated that no one sees them coming. One reason his book strikes so deep is that he is calling attention to the elephant in the room: if people are so obviously and consistently acting as non-rational agents, why do our primary economic, business, and justice models assume people behave in their rational self-interest? Ariely is addressing traditional macroeconomic theory most directly: how can we speak of supply and demand, when both are somewhat arbitrary fabrications of our minds? Can you completely trust the "wisdom of the free market," when all those agents informing the market share consistent biases? But Ariely takes aim at other spheres as well, such as criminal justice; does it make sense to have laws based on deterrence, when scientific research shows that the actual psychology of law-breaking does not coldly calculate risk/reward ratios? Is it truly just, to severely punish thieves and robbers, when white-collar fraud causes ten times as much economic damage?
The real appeal of the book is not its high-level philosophical implications, though – it's the tactical philosophy, the personal introspection it provokes. With every chapter, you're asking yourself: do I do that? Do I keep options open, even when it no longer makes sense to do so? Do I cheat in small ways that add up to a big impact? The armchair psychology is entertaining . . . at least, right up until it provokes some real cognitive dissonance. And if you don't walk away from the book with a little cognitive dissonance, suddenly distrusting all the supposedly rational decisions that have shaped your life . . . well, you'll just be surprised later.
Tuesday, May 27. 2008
After my long and tedious picking apart of the Golden Rule, Kenny forwarded to me a link to Steven Pinker's recent New York Times Magazine article deconstructing the moral sense. Behold, there is nothing new under the sun. I can't even finish my own critique without finding out that someone else has done much the same thing, with much more information and research, and clearer reasoning. Pinker provides a very clear description of how certain moral categories can be more or less universal, while still allowing that different cultures will put different priorities on those categories, resulting in differing conclusions.
This is a trend I'm running into more and more with my writing – the ubiquity of information. I can't make a single assertion without realizing that I ought to be linking to something to provide more evidence. If I don't, someone else will google up contradictory evidence in a heartbeat. The end result is:
Monday, May 26. 2008
I almost always go to see the Coen Brothers' films. Raising Arizona was a high school cult favorite, and then after writing about Blood Simple for my first film class in college, I decided that the Coen Brothers were cool and deserved my full attention.
Sometimes the attention paid off. Miller's Crossing is still, to my mind, the most entertaining gangster film ever made. (The Godfather, of course, will remain the best gangster film, but sometimes less than the best is more entertaining.) O Brother, Where Art Thou? was a truly original creation that brought American folk music into the spotlight; I found myself saying, defensively, "Hey, I was into Ralph Stanley before Ralph Stanley was cool."
But, then there are the disappointments. The slightly surreal visual style that was so engaging in Miller's Crossing became oddly disturbing and pointless in Barton Fink. The Big Lebowski was a Big Waste; I felt like I had discovered a once-formidable but now washed-up movie in a shabby hotel room, strung out after mainlining Cheech and Chong scripts. Fargo, I admit I enjoyed, though the foot sticking out of the wood chipper was a sad signal: grotesque humor, rather than magnificent plotting, may be the calling card of the Brothers from now on.
Which brings me to No Country for Old Men, their latest genre-bending installment. My wife and I have been so busy we can't even watch a whole movie in one sitting. After watching the first half of No Country for Old Men, we were eager to see the rest. It had been a long time since I had seen such memorable characters: the Hunter, every inch the cool, competent-yet-sympathetic action hero; the homespun, folk-wise old Sheriff; and perhaps the creepiest assassin-villain ever to pump tension into a thriller. (Javier Bardem's Oscar for Best Supporting Actor in the role of Anton Chigurh is thoroughly deserved.) With breakneck action, building suspense, plot twists at once unexpected and consistent . . . we couldn't wait to see how it ended.
Which it . . . didn't. At least, not in the conventional movie sense. You don't fully become aware of Hollywood movie conventions until you see one violated, and boy, do I feel violated. I can handle movies with tragic endings – that is, movies ending with the death(s) of the hero(es). But convention normally demands that something transcend their deaths: their mission survives them, or their heroic example, or their moral purity, or their love for each other. Even in death, something has to rise up and say, "Death can't touch this," in order for the audience to feel closure.
But after all that action, the relentless hunt, the suspense, the action . . . No Country for Old Men just ends with death, period. Everyone remotely likeable in the film winds up abruptly snuffed out, or anxiously, helplessly awaiting death. Only the most villainous of villains walks away, cheating death just when you think he might get his just deserts. So what is this movie about, anyway? A rule of thumb of literary analysis is that the last person to speak is the main character, and the last thing he says points to the meaning of the story. In this case, it's the Sheriff, newly retired, anxious, reflecting on his own end prefigured in a dream. The message of No Country for Old Men is: "Your end is coming. Ain't nuthin' you can do about it. Sucks, huh?" Thanks a lot, Coen Brothers. I suppose Cormac McCarthy, the author of the novel upon which the film is based, will have to take the blame for the plot. It sounds like a great novel . . . but then again, great doesn't always equal entertaining.
Thursday, May 22. 2008
As far as I can see, there is only one way to come to a purely rational derivation of the Golden Rule. It goes something like this:
Ok, ok, I know what you're thinking. I said I was going to put forward a rational explanation for the Golden Rule, and proclaiming that "I am you and you are me" sounds more like pop Buddhism or an overused song lyric. But just run with it for a moment.
All apparent conflicts between self-interest and the Golden Rule dissolve once you assume they are identical. Helping others is then as self-evidently rational as helping yourself. Choosing to do the right thing to help others then becomes as easy (and as hard) as making choices to benefit yourself. Of course that process won't always be easy or perfect . . . but then again, it was never easy for people to consistently do things that served their own self-interest, either. Someone may know that it's in their best long-term interests to diet and lose twenty pounds . . . but they don't always feel like dieting, and chocolate cake beckons. Balancing one's own self-interest against the interests of others is still a logistical problem, but it doesn't require a leap into the irrational.
I find this hypothesis intriguing for a few reasons:
Wednesday, May 21. 2008
There was another way to attempt rationalizing the Golden Rule, using a philosophy I'll call "ethical hedonism." In a sentence: "I help others because it makes me feel good. It may appear to be altruistic, but really I'm just doing it to please myself, so it's really just another form of self-interest."
It's certainly true that helping others can be pleasurable. And that pleasure does motivate lots of people to perform altruistic acts. But this approach also fails to cover all aspects of the Golden Rule. Most especially, it fails to explain moral imperative – the sense that certain things are the right thing to do, whether you feel like doing them or not. The Golden Rule does not say, "Do unto others as you would have them do unto you . . . except when you don't feel like it." Feelings got nothin' to do with it. The right thing would still be the right thing, even if it made you miserable.
Even when doing right feels good, you have to ask: is it right because it makes me feel good, or does it make me feel good because it's right? I think most ethical people would go with the latter formulation. Pleasure is a by-product, but not the ultimate reason, for ethical behavior.
So we still don't have a rational justification for the Golden Rule . . . yet.
(to be continued)
Tuesday, May 20. 2008
How can we rationally justify the Golden Rule, since "doing the right thing" often seems to be contrary to one's own individual happiness or well-being? The only reason we talk about morality or ethics at all is because we so often see that ethical decisions are contrary to one's immediate self-interest. (Please note the emphasis on "rationally". I am not saying that I disagree with the Golden Rule -- I'm just trying to establish why humanity believes in it, and if that reason can be rationally derived.)
People have approached the problem in several ways. One way is simply to stop right there, and declare the Golden Rule to be self-evident, just like wanting one's own happiness is self-evident. I do not find that satisfactory, though – it just doesn't seem irreducible yet.
Another tack is to assume the Golden Rule springs from enlightened self-interest: helping others is just another way to help yourself. By sacrificing some of our self-interest to serve others, we create a society in which everyone enjoys the benefits of peace and collaboration. This approach is especially popular amongst Objectivists and other laissez-faire free-market capitalists, who see almost all apparent altruism as just a manifestation of people freely working together toward their mutual benefit.
The problem with enlightened self-interest is that it explains some kinds of altruism and morality, but not all kinds. Under that rationale, it makes perfect sense to loan my lawn mower to my next door neighbor -- after all, there's a high probability that my neighbor will reciprocate with some favor in the future. But it doesn't do a good job of explaining the Marine who jumps on a grenade to save the rest of his platoon. It's really difficult to see how getting blown to bits is in one's self-interest . . . at least, not without making all kinds of other assumptions about afterlife, heavenly rewards, etc. Some evolutionary theorists would just wave their hands and chalk it up to an imperfectly evolved creature -- "some people just have too much of that altruistic impulse." The ultimate sacrifice is just a fluke.
Somehow, I doubt it. Were enlightened self-interest the true source of the Golden Rule, you would see limits on altruism reflected in the rule itself. The Golden Rule would be something like "do unto others as you would have them do unto you -- but don't go too far with it, ok?" But actually, the consensus of world ethical traditions is just the opposite -- they actively celebrate altruism that sacrifices self-interest.
In fact, the measure of one's moral superiority is just how far one is willing to forgo one's own interests for the sake of others. We hold up as paragons people like Mother Teresa, who sacrifice nearly all personal comfort, security, and pleasure for the sake of helping others. How can you rationally justify that?
Again, evolutionary theorists can muster an answer. The altruistic impulse, they would say, is designed to serve the self-interest of the genes, not the individual. We are genetically programmed to sacrifice ourselves for the sake of others because such an impulse can ultimately lead to preserving our genes, since the beneficiaries of such altruism are usually the people closest to us: our own children and relatives.
That's a pretty good explanation for how altruism might have evolved, but it still doesn't give a rational explanation for why self-sacrifice is good. Again, you would think that if this were the ultimate source of the moral impulse, it would be reflected in the Golden Rule: the wording would be something like, "Do unto your tribe as you would have your tribe do unto you . . . but screw everyone else." But instead, many traditional formulations of the Golden Rule explicitly call for universality. And the boundaries of moral responsibility do not even end with our own species -- many people feel some sense of moral obligation to all living things. Does the Golden Rule, in its fullest expression, really reflect the interests of our genes? Or is it something even bigger than that?
So . . . we still haven't gotten to the bottom of the matter. What is the rational basis for the Golden Rule?
Monday, May 19. 2008
So, where was I? Oh yeah, I was trying to deconstruct secular ethics and see if one could construct a completely rational basis for moral behavior. If you examine the teleology of every action, trying to see what end it serves, you eventually must come to something that is self-evidently good. Whatever is deemed "self-evidently good" is not the result of rational reasoning, but is seen as irreducibly good, good in and of itself. (Let's ignore, for the moment, the question of whether that judgment of goodness is a mere opinion or some kind of objective truth. Either way, you must ultimately accept some final point of reference for goodness, if you want to have a consistently rational morality.)
So what's "self-evidently good?" A lot of contemporary Americans would agree with Aristotle's assessment that happiness is the goal of life. People differ in the sophistication of their notion of "happiness" -- some think purely in terms of maximizing pleasure (hedonists and epicures), while others see happiness as a settled state of well-being that transcends circumstance. And, of course, people have vastly different personal conceptions of what would make them happy. But the overall direction of the idea is the same -- if it brings pleasure and well-being, it's good.
This notion of "self-evidently good" jibes well with evolutionary theory. Organisms evolve to appreciate and take pleasure in those things that serve their well-being, that is, their survival and propagation. "Happiness" seems to correlate strongly with "evolutionarily successful."
We still haven't touched on what most people would call "morality," though. All we've said so far is, "People strive to serve their own happiness." Well, duh. We haven't explained why people care about other people's happiness, and why caring about other people's happiness is, by and large, considered good. In fact, almost all systems of ethics and morals assume that serving the interests of others is the core of morality. The Golden Rule in its various formulations is about the closest we get to universal consensus on the nature of morality. "Do unto others as you would have them do unto you." "Love thy neighbor as thyself."
So, if we're assuming that happiness and well-being are the self-evident good, and that securing happiness for others as well as yourself is the essence of ethical or moral behavior . . . how do we reconcile these two principles, given that they seem to conflict with each other? In fact, morality isn't even all that interesting until we get into the conflicts, when "doing the right thing" is contrary to one's own apparent self-interest and happiness. What is the rational basis for the Golden Rule?
(to be continued)
Thursday, May 15. 2008
So . . . how do we get to a purely rational justification for moral, ethical behavior? Well, part of the problem is that a lot of how we judge something to be good or bad depends on teleology -- that is, actions are seen as means to some ultimate end. So, if you say, "Having lots of money is good," you have to immediately qualify it; having money is not good in and of itself, but rather because you can use money to buy things. And then you're still not done; the things you buy with money are not good, but rather they also are means to some other end. And so, any action or pursuit can be deconstructed into a long chain of means: you work the job, so you can make money, so you can buy the expensive car, so you can impress your friends and peers, so you can feel important, so you can . . . etc.
If you follow that teleological chain of means far enough, you should eventually bump into something that is not a means, but rather something that is good in and of itself. Aristotle ran down that chain in his Nicomachean Ethics and what he found as the ultimate end was eudaimonia -- sometimes translated as happiness, though it's a little more involved than our usual notion of the word. You might agree with Aristotle or not, but the fact remains that if you want to logically, rationally derive an ethical code, you have to eventually converge on something that is self-evidently good. Something -- happiness, pleasure, virtue, union with God, something -- must be good in and of itself.
But . . . wait a minute. Saying something is "self-evident" is not a matter of rationality, per se. Rationality -- at least, in the sense of "logic" -- is all about correctly deriving conclusions based on formal rules from correct starting assumptions. But all logic must start with assumptions. In mathematics, they are referred to as axioms. For instance, "x = x" -- "any value is equal to itself" -- is an axiom. We can't prove it, but we accept it to be true because . . . Well, just because. It's true, dammit. We don't know how we know, we just know.
As in mathematics, so in philosophy. To accept something as good in and of itself means that you are accepting an unquestionable axiom. You are -- dare I say it -- accepting something on faith. And this is exactly where things get really interesting. What, if anything, are the unquestionable axioms of ethics? And, more importantly, what faculty are we using to perceive them?