The Effective Decision
Effective executives do not make a great many decisions. They concentrate on what is important. They try to make the few important decisions on the highest level of conceptual understanding. They try to find the constants in a situation, to think through what is strategic and generic rather than to “solve problems.” They are, therefore, not overly impressed by speed in decision making; rather, they consider virtuosity in manipulating a great many variables a symptom of sloppy thinking. They want to know what the decision is all about and what the underlying realities are which it has to satisfy. They want impact rather than technique. And they want to be sound rather than clever.
Effective executives know when a decision has to be based on principle and when it should be made pragmatically, on the merits of the case. They know the trickiest decision is that between the right and the wrong compromise, and they have learned to tell one from the other. They know that the most time-consuming step in the process is not making the decision but putting it into effect. Unless a decision has degenerated into work, it is not a decision; it is at best a good intention. This means that, while the effective decision itself is based on the highest level of conceptual understanding, the action commitment should be as close as possible to the capacities of the people who have to carry it out. Above all, effective executives know that decision making has its own systematic process and its own clearly defined elements.
The elements do not by themselves “make” the decisions. Indeed, every decision is a risk-taking judgment. But unless these elements are the stepping stones of the decision process, the executive will not arrive at a right, and certainly not at an effective, decision. Therefore, in this article I shall describe the sequence of steps involved in the decision-making process.
1. Classifying the problem. Is it generic? Is it exceptional and unique? Or is it the first manifestation of a new genus for which a rule has yet to be developed?
2. Defining the problem. What are we dealing with?
3. Specifying the answer to the problem. What are the “boundary conditions”?
4. Deciding what is “right,” rather than what is acceptable, in order to meet the boundary conditions. What will fully satisfy the specifications before attention is given to the compromises, adaptations, and concessions needed to make the decision acceptable?
5. Building into the decision the action to carry it out. What does the action commitment have to be? Who has to know about it?
6. Testing the validity and effectiveness of the decision against the actual course of events. How is the decision being carried out? Are the assumptions on which it is based appropriate or obsolete?
Let us take a look at each of these individual elements.
The effective decision maker asks: Is this a symptom of a fundamental disorder or a stray event? The generic always has to be answered through a rule, a principle. But the truly exceptional event can only be handled as such and as it comes.
Strictly speaking, the executive might distinguish among four, rather than between two, different types of occurrences.
First, there is the truly generic event, of which the individual occurrence is only a symptom. Most of the “problems” that come up in the course of the executive’s work are of this nature. Inventory decisions in a business, for instance, are not “decisions.” They are adaptations. The problem is generic. This is even more likely to be true of occurrences within manufacturing organizations. For example:
A product control and engineering group will typically handle many hundreds of problems in the course of a month. Yet, whenever these are analyzed, the great majority prove to be just symptoms—and manifestations—of underlying basic situations. The individual process control engineer or production engineer who works in one part of the plant usually cannot see this. He might have a few problems each month with the couplings in the pipes that carry steam or hot liquids, and that’s all.
Only when the total workload of the group over several months is analyzed does the generic problem appear. Then it is seen that temperatures or pressures have become too great for the existing equipment and that the couplings holding the various lines together need to be redesigned for greater loads. Until this analysis is done, process control will spend a tremendous amount of time fixing leaks without ever getting control of the situation.
The second type of occurrence is the problem which, while a unique event for the individual institution, is actually generic. Consider:
The company that receives an offer to merge from another, larger one, will never receive such an offer again if it accepts. This is a nonrecurrent situation as far as the individual company, its board of directors, and its management are concerned. But it is, of course, a generic situation which occurs all the time. Thinking through whether to accept or to reject the offer requires some general rules. For these, however, the executive has to look to the experience of others.
Next there is the truly exceptional event that the executive must distinguish. To illustrate:
The huge power failure that plunged into darkness the whole of Northeastern North America from the St. Lawrence to Washington in November 1965 was, according to first explanations, a truly exceptional situation. So was the thalidomide tragedy which led to the birth of so many deformed babies in the early 1960s. The probability of either of these events occurring, we were told, was one in ten million or one in a hundred million, and concatenations of these events were as unlikely ever to recur again as it is unlikely, for instance, for the chair on which I sit to disintegrate into its constituent atoms.
Truly unique events are rare, however. Whenever one appears, the decision maker has to ask: Is this a true exception or only the first manifestation of a new genus? And this—the early manifestation of a new generic problem—is the fourth and last category of events with which the decision process deals. Thus:
We know now that both the Northeastern power failure and the thalidomide tragedy were only the first occurrences of what, under conditions of modern power technology or of modern pharmacology, are likely to become fairly frequent occurrences unless generic solutions are found.
All events but the truly unique require a generic solution. They require a rule, a policy, or a principle. Once the right principle has been developed, all manifestations of the same generic situation can be handled pragmatically—that is, by adaptation of the rule to the concrete circumstances of the case. Truly unique events, however, must be treated individually. The executive cannot develop rules for the exceptional.
The effective decision maker spends time determining which of the four situations he or she is dealing with. Classify the situation incorrectly, and the decision will be wrong.
By far the most common mistake of the decision maker is to treat a generic situation as if it were a series of unique events—that is, to be pragmatic when lacking the generic understanding and principle. The inevitable result is frustration and futility. This was clearly shown, I think, by the failure of most of the policies, both domestic and foreign, of the Kennedy Administration. Consider:
For all the brilliance of its members, the Administration achieved fundamentally only one success, and that was in the Cuban missile crisis. Otherwise, it achieved practically nothing. The main reason was surely what its members called “pragmatism”—namely, the Administration’s refusal to develop rules and principles, and its insistence on treating everything “on its merits.” Yet it was clear to everyone, including the members of the Administration, that the basic assumptions on which its policies rested—the valid assumptions of the immediate postwar years—had become increasingly unrealistic in international, as well as in domestic, affairs in the 1960s.
Equally common is the mistake of treating a new event as if it were just another example of the old problem to which, therefore, the old rules should be applied:
This was the error that snowballed the local power failure on the New York–Ontario border into the great Northeastern blackout. The power engineers, especially in New York City, applied the right rule for a normal overload. Yet their own instruments had signaled that something quite extraordinary was going on which called for exceptional, rather than standard, countermeasures.
By contrast, the one great triumph of President Kennedy in the Cuban missile crisis rested on acceptance of the challenge to think through an extraordinary, exceptional occurrence. As soon as he accepted this, his own tremendous resources of intelligence and courage effectively came into play.
Once a problem has been classified as generic or unique, it is usually fairly easy to define. “What is this all about?” “What is pertinent here?” “What is the key to this situation?” Questions such as these are familiar. But only the truly effective decision makers are aware that the danger in this step is not the wrong definition; it is the plausible but incomplete one. For example:
The American automobile industry held to a plausible but incomplete definition of the problem of automotive safety. It was this lack of awareness—far more than any reluctance to spend money on safety engineering—that eventually, in 1966, brought the industry under sudden and sharp Congressional attack for its unsafe cars and then left the industry totally bewildered by the attack. It simply is not true that the industry has paid scant attention to safety.
On the contrary, it has worked hard at safer highway engineering and at driver training, believing these to be the major areas for concern. That accidents are caused by unsafe roads and unsafe drivers is plausible enough. Indeed, all other agencies concerned with automotive safety, from the highway police to the high schools, picked the same targets for their campaigns. These campaigns have produced results. The number of accidents on highways built for safety has been greatly lessened. Similarly, safety-trained drivers have been involved in far fewer accidents.
But although the ratio of accidents per thousand cars or per thousand miles driven has been going down, the total number of accidents and the severity of them have kept creeping up. It should therefore have become clear long ago that something would have to be done about the small but significant probability that accidents will occur despite safety laws and safety training.
This means that future safety campaigns will have to be supplemented by engineering to make accidents themselves less dangerous. Whereas cars have been engineered to be safe when used correctly, they will also have to be engineered for safety when used incorrectly.
There is only one safeguard against becoming the prisoner of an incomplete definition: check it again and again against all the observable facts, and throw out a definition the moment it fails to encompass any of them.
Effective decision makers always test for signs that something is atypical or something unusual is happening, always asking: Does the definition explain the observed events, and does it explain all of them? They always write out what the definition is expected to make happen—for instance, make automobile accidents disappear—and then test regularly to see if this really happens. Finally, they go back and think the problem through again whenever they see something atypical, when they find unexplained phenomena, or when the course of events deviates, even in details, from expectations.
These are in essence the rules Hippocrates laid down for medical diagnosis well over 2,000 years ago. They are the rules for scientific observation first formulated by Aristotle and then reaffirmed by Galileo 300 years ago. These, in other words, are old, well-known, time-tested rules, which an executive can learn and apply systematically.
The next major element in the decision process is defining clear specifications as to what the decision has to accomplish. What are the objectives the decision has to reach? What are the minimum goals it has to attain? What are the conditions it has to satisfy? In science these are known as “boundary conditions.” A decision, to be effective, needs to satisfy the boundary conditions. Consider:
“Can our needs be satisfied,” Alfred P. Sloan, Jr. presumably asked himself when he took command of General Motors in 1922, “by removing the autonomy of our division heads?” His answer was clearly in the negative. The boundary conditions of his problem demanded strength and responsibility in the chief operating positions. This was needed as much as unity and control at the center. Everyone before Sloan had seen the problem as one of personalities—to be solved through a struggle for power from which one man would emerge victorious. The boundary conditions, Sloan realized, demanded a solution to a constitutional problem—to be solved through a new structure: decentralization which balanced local autonomy of operations with central control of direction and policy.
A decision that does not satisfy the boundary conditions is worse than one which wrongly defines the problem. It is all but impossible to salvage the decision that starts with the right premises but stops short of the right conclusions. Furthermore, clear thinking about the boundary conditions is needed to know when a decision has to be abandoned. The most common cause of failure in a decision lies not in its being wrong initially. Rather, it is a subsequent shift in the goals—the specifications—which makes the prior right decision suddenly inappropriate. And unless the decision maker has kept the boundary conditions clear, so as to make possible the immediate replacement of the outflanked decision with a new and appropriate policy, he may not even notice that things have changed. For example:
Franklin D. Roosevelt was bitterly attacked for his switch from conservative candidate in 1932 to radical president in 1933. But it wasn’t Roosevelt who changed. The sudden economic collapse which occurred between the summer of 1932 and the spring of 1933 changed the specifications. A policy appropriate to the goal of national economic recovery—which a conservative economic policy might have been—was no longer appropriate when, with the Bank Holiday, the goal had to become political and social cohesion. When the boundary conditions changed, Roosevelt immediately substituted a political objective (reform) for his former economic one (recovery).
Above all, clear thinking about the boundary conditions is needed to identify the most dangerous of all possible decisions: the one in which the specifications that have to be satisfied are essentially incompatible. In other words, this is the decision that might—just might—work if nothing whatever goes wrong. A classic case is President Kennedy’s Bay of Pigs decision:
One specification was clearly Castro’s overthrow. The other was to make it appear that the invasion was a “spontaneous” uprising of the Cubans. But these two specifications would have been compatible with each other only if an immediate island-wide uprising against Castro would have completely paralyzed the Cuban army. And while this was not impossible, it clearly was not probable in such a tightly controlled police state.
Decisions of this sort are usually called “gambles.” But actually they arise from something much less rational than a gamble—namely, a hope against hope that two (or more) clearly incompatible specifications can be fulfilled simultaneously. This is hoping for a miracle; and the trouble with miracles is not that they happen so rarely, but that they are, alas, singularly unreliable.
Everyone can make the wrong decision. In fact, everyone will sometimes make a wrong decision. But no executive needs to make a decision which, on the face of it, seems to make sense but, in reality, falls short of satisfying the boundary conditions.
The effective executive has to start out with what is “right” rather than what is acceptable precisely because a compromise is always necessary in the end. But if what will satisfy the boundary conditions is not known, the decision maker cannot distinguish between the right compromise and the wrong compromise—and may end up by making the wrong compromise. Consider:
I was taught this lesson in 1944 when I started on my first big consulting assignment. It was a study of the management structure and policies of General Motors Corporation. Alfred P. Sloan, Jr., who was then chairman and chief executive officer of the company, called me to his office at the start of my assignment and said: “I shall not tell you what to study, what to write, or what conclusions to come to. This is your task. My only instruction to you is to put down what you think is right as you see it. Don’t you worry about our reaction. Don’t you worry about whether we will like this or dislike that. And don’t you, above all, concern yourself with the compromises that might be needed to make your conclusions acceptable. There is not one executive in this company who does not know how to make every single conceivable compromise without any help from you. But he can’t make the right compromise unless you first tell him what right is.”
The effective executive knows that there are two different kinds of compromise. One is expressed in the old proverb, “Half a loaf is better than no bread.” The other, in the story of the judgment of Solomon, is clearly based on the realization that “half a baby is worse than no baby at all.” In the first instance, the boundary conditions are still being satisfied. The purpose of bread is to provide food, and half a loaf is still food. Half a baby, however, does not satisfy the boundary conditions. For half a baby is not half of a living and growing child.
It is a waste of time to worry about what will be acceptable and what the decision maker should or should not say so as not to evoke resistance. (The things one worries about seldom happen, while objections and difficulties no one thought about may suddenly turn out to be almost insurmountable obstacles.) In other words, the decision maker gains nothing by starting out with the question, “What is acceptable?” For in the process of answering it, he or she usually gives away the important things and loses any chance to come up with an effective—let alone the right—answer.
Converting the decision into action is the fifth major element in the decision process. While thinking through the boundary conditions is the most difficult step in decision making, converting the decision into effective action is usually the most time-consuming one. Yet a decision will not become effective unless the action commitments have been built into it from the start. In fact, no decision has been made unless carrying it out in specific steps has become someone’s work assignment and responsibility. Until then, it is only a good intention.
The flaw in so many policy statements, especially those of business, is that they contain no action commitment—to carry them out is no one’s specific work and responsibility. Small wonder then that the people in the organization tend to view such statements cynically, if not as declarations of what top management is really not going to do.
Converting a decision into action requires answering several distinct questions: Who has to know of this decision? What action has to be taken? Who is to take it? What does the action have to be so that the people who have to do it can do it? The first and the last of these questions are too often overlooked—with dire results. A story that has become a legend among operations researchers illustrates the importance of the question, “Who has to know?”:
A major manufacturer of industrial equipment decided several years ago to discontinue one of its models that had for years been standard equipment on a line of machine tools, many of which were still in use. It was, therefore, decided to sell the model to present owners of the old equipment for another three years as a replacement, and then to stop making and selling it. Orders for this particular model had been going down for a good many years. But they shot up immediately as customers reordered against the day when the model would no longer be available. No one had, however, asked, “Who needs to know of this decision?”
Consequently, nobody informed the purchasing clerk who was in charge of buying the parts from which the model itself was being assembled. His instructions were to buy parts in a given ratio to current sales—and the instructions remained unchanged.
Thus, when the time came to discontinue further production of the model, the company had in its warehouse enough parts for another 8 to 10 years of production, parts that had to be written off at a considerable loss.
The action must also be appropriate to the capacities of the people who have to carry it out. Thus:
A large U.S. chemical company found itself, in recent years, with fairly large amounts of blocked currency in two West African countries. To protect this money, top management decided to invest it locally in businesses which would: (1) contribute to the local economy, (2) not require imports from abroad, and (3) if successful, be the kind that could be sold to local investors if and when currency remittances became possible again. To establish these businesses, the company developed a simple chemical process to preserve a tropical fruit—a staple crop in both countries—which, up until then, had suffered serious spoilage in transit to its Western markets.
The business was a success in both countries. But in one country the local manager set the business up in such a manner that it required highly skilled and technically trained management of a kind not easily available in West Africa. In the other country, the local manager thought through the capacities of the people who would eventually have to run the business. Consequently, he worked hard at making both the process and the business simple, and at staffing his operation from the start with local nationals right up to the top management level.
A few years later it became possible again to transfer currency from these two countries. But, though the business flourished, no buyer could be found for it in the first country. No one available locally had the necessary managerial and technical skills to run it, and so the business had to be liquidated at a loss. In the other country, so many local entrepreneurs were eager to buy the business that the company repatriated its original investment with a substantial profit.
The chemical process and the business built on it were essentially the same in both places. But in the first country no one had asked, “What kind of people do we have available to make this decision effective? And what can they do?” As a result, the decision itself became frustrated.
This action commitment becomes doubly important when people have to change their behavior, habits, or attitudes if a decision is to become effective. Here, the executive must make sure not only that the responsibility for the action is clearly assigned, but that the people assigned are capable of carrying it out. Thus the decision maker has to make sure that the measurements, the standards for accomplishment, and the incentives of those charged with the action responsibility are changed simultaneously. Otherwise, the organization’s people will get caught in a paralyzing internal emotional conflict. Consider these two examples:
• When Theodore Vail was president of the Bell Telephone System 60 years ago, he decided that its business was service. This decision explains in large part why the United States (and Canada) has today an investor-owned, rather than a nationalized, telephone system. Yet this policy statement might have remained a dead letter if Vail had not at the same time designed yardsticks of service performance and introduced these as a means to measure, and ultimately to reward, managerial performance. The Bell managers of that time were used to being measured by the profitability (or at least by the cost) of their units. The new yardsticks resulted in the rapid acceptance of the new objectives.
• In sharp contrast is the recent failure of a brilliant chairman and chief executive to make effective a new organization structure and new objectives in an old, large, and proud U.S. company. Everyone agreed that the changes were needed. The company, after many years as leader of its industry, showed definite signs of aging. In many markets newer, smaller, and more aggressive competitors were outflanking it. But contrary to the action required to gain acceptance for the new ideas, the chairman—in order to placate the opposition—promoted prominent spokesmen of the old school into the most visible and highest salaried positions—in particular into three new executive vice presidencies. This meant only one thing to the people in the company: “They don’t really mean it.” If the greatest rewards are given for behavior contrary to that which the new course of action requires, then everyone will conclude that this is what the people at the top really want and are going to reward.
Only the most effective executive can do what Vail did—build the execution of his decision into the decision itself. But every executive can think through what action commitments a specific decision requires, what work assignments follow from it, and what people are available to carry it out.
Finally, information monitoring and reporting have to be built into the decision to provide continuous testing, against actual events, of the expectations that underlie the decisions. Decisions are made by people. People are fallible; at best, their works do not last long. Even the best decision has a high probability of being wrong. Even the most effective one eventually becomes obsolete.
This surely needs no documentation. And every executive always builds organized feedback—reports, figures, studies—into his or her decision to monitor and report on it. Yet far too many decisions fail to achieve their anticipated results, or indeed ever to become effective, despite all these feedback reports. Just as the view from the Matterhorn cannot be visualized by studying a map of Switzerland (one abstraction), a decision cannot be fully and accurately evaluated by studying a report. That is because reports are, of necessity, abstractions.
Effective decision makers know this and follow a rule which the military developed long ago. The commander who makes a decision does not depend on reports to see how it is being carried out. The commander or an aide goes and looks. The reason is not that effective decision makers (or effective commanders) distrust their subordinates. Rather, they learned the hard way to distrust abstract “communications.”
With the coming of the computer this feedback element will become even more important, for the decision maker will in all likelihood be even further removed from the scene of action. Unless he or she accepts, as a matter of course, that he or she had better go out and look at the scene of action, he or she will be increasingly divorced from reality. All a computer can handle is abstractions. And abstractions can be relied on only if they are constantly checked against concrete results. Otherwise, they are certain to mislead.
To go and look is also the best, if not the only way, for an executive to test whether the assumptions on which the decision has been made are still valid or whether they are becoming obsolete and need to be thought through again. And the executive always has to expect the assumptions to become obsolete sooner or later. Reality never stands still very long.
Failure to go out and look is the typical reason for persisting in a course of action long after it has ceased to be appropriate or even rational. This is true for business decisions as well as for governmental policies. It explains in large measure the failure of Stalin’s cold war policy in Europe, but also the inability of the United States to adjust its policies to the realities of a Europe restored to prosperity and economic growth, and the failure of the British to accept, until too late, the reality of the European Common Market. Moreover, in any business I know, failure to go out and look at customers and markets, at competitors and their products, is also a major reason for poor, ineffectual, and wrong decisions.
Decision makers need organized information for feedback. They need reports and figures. But unless they build their feedback around direct exposure to reality—unless they discipline themselves to go out and look—they condemn themselves to a sterile dogmatism.
Decision making is only one of the tasks of an executive. It usually takes but a small fraction of his or her time. But to make the important decisions is the specific executive task. Only an executive makes such decisions.
An effective executive makes these decisions as a systematic process with clearly defined elements and in a distinct sequence of steps. Indeed, to be expected (by virtue of position or knowledge) to make decisions that have significant and positive impact on the entire organization, its performance, and its results characterizes the effective executive.
Author’s note: This article is derived from a chapter in my forthcoming book, The Effective Executive, to be published by Harper & Row, Publishers, Inc.