The Information Executives Truly Need
Ever since the new data processing tools first emerged 30 or 40 years ago, businesspeople have both overrated and underrated the importance of information in the organization. We—and I include myself—overrated the possibilities to the point where we talked of computer-generated “business models” that could make decisions and might even be able to run much of the business. But we also grossly underrated the new tools; we saw in them the means to do better what executives were already doing to manage their organizations.
Nobody talks of business models making economic decisions anymore. The greatest contribution of our data processing capacity so far has not even been to management. It has been to operations—for example, computer-assisted design or the marvelous software that architects now use to solve structural problems in the buildings they design.
Yet even as we both overestimated and underestimated the new tools, we failed to realize that they would drastically change the tasks to be tackled. Concepts and tools, history teaches again and again, are mutually interdependent and interactive. One changes the other. That is now happening to the concept we call a business and to the tools we call information. The new tools enable us—indeed, may force us—to see our businesses differently:
• as generators of resources, that is, as organizations that can convert business costs into yields;
• as links in an economic chain, which managers need to understand as a whole in order to manage their costs;
• as society’s organs for the creation of wealth; and
• as both creatures and creators of a material environment, the area outside the organization in which opportunities and results lie but in which the threats to the success and the survival of every business also originate.
This article deals with the tools executives require to generate the information they need. And it deals with the concepts underlying those tools. Some of the tools have been around for a long time, but rarely, if ever, have they been focused on the task of managing a business. Some have to be refashioned; in their present form they no longer work. For some tools that promise to be important in the future, we have so far only the briefest specifications. The tools themselves still have to be designed.
Even though we are just beginning to understand how to use information as a tool, we can outline with high probability the major parts of the information system executives need to manage their businesses. So, in turn, can we begin to understand the concepts likely to underlie the business—call it the redesigned corporation—that executives will have to manage tomorrow.
From Cost Accounting to Yield Control
We may have gone furthest in redesigning both business and information in the most traditional of our information systems: accounting. In fact, many businesses have already shifted from traditional cost accounting to activity-based costing. Activity-based costing represents both a different concept of the business process, especially for manufacturers, and different ways of measuring.
Traditional cost accounting, first developed by General Motors 70 years ago, postulates that total manufacturing cost is the sum of the costs of individual operations. Yet the cost that matters for competitiveness and profitability is the cost of the total process, and that is what the new activity-based costing records and makes manageable. Its basic premise is that manufacturing is an integrated process that starts when supplies, materials, and parts arrive at the plant’s loading dock and continues even after the finished product reaches the end user. Service is still a cost of the product, and so is installation, even if the customer pays.
Traditional cost accounting measures what it costs to do a task, for example, to cut a screw thread. Activity-based costing also records the cost of not doing, such as the cost of machine downtime, the cost of waiting for a needed part or tool, the cost of inventory waiting to be shipped, and the cost of reworking or scrapping a defective part. The costs of not doing, which traditional cost accounting cannot and does not record, often equal and sometimes even exceed the costs of doing. Activity-based costing therefore gives not only much better cost control, but increasingly, it also gives result control.
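The distinction can be made concrete with a small sketch in Python. The activities and figures below are invented for illustration; the point is only that activity-based costing adds the costs of not doing to the costs of doing that traditional accounting already captures:

```python
# Illustrative figures only -- not drawn from any real plant.
costs_of_doing = {
    "machining": 120_000,
    "assembly": 80_000,
    "finishing": 40_000,
}
costs_of_not_doing = {
    "machine downtime": 25_000,
    "waiting for parts": 15_000,
    "inventory awaiting shipment": 30_000,
    "rework and scrap": 20_000,
}

# Traditional cost accounting sees only the operations performed.
traditional_total = sum(costs_of_doing.values())

# Activity-based costing records the whole process, idle time included.
abc_total = traditional_total + sum(costs_of_not_doing.values())

print(traditional_total)  # 240000
print(abc_total)          # 330000
```

On these invented numbers, the costs of not doing add more than a third to the visible cost of operations, which is why leaving them unrecorded can make or break competitiveness.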
Traditional cost accounting assumes that a certain operation—for example, heat treating—has to be done and that it has to be done where it is being done now. Activity-based costing asks, Does it have to be done? If so, where is it best done? Activity-based costing integrates what were once several activities—value analysis, process analysis, quality management, and costing—into one analysis.
Using that approach, activity-based costing can substantially lower manufacturing costs—in some instances by a full third or more. Its greatest impact, however, is likely to be in services. In most manufacturing companies, cost accounting is inadequate. But service industries—banks, retail stores, hospitals, schools, newspapers, and radio and television stations—have practically no cost information at all.
Activity-based costing shows us why traditional cost accounting has not worked for service companies. It is not because the techniques are wrong. It is because traditional cost accounting makes the wrong assumptions. Service companies cannot start with the cost of individual operations, as manufacturing companies have done with traditional cost accounting. They must start with the assumption that there is only one cost: that of the total system. And it is a fixed cost over any given time period. The famous distinction between fixed and variable costs, on which traditional cost accounting is based, does not make much sense in services. Neither does the basic assumption of traditional cost accounting: that capital can be substituted for labor. In fact, in knowledge-based work especially, additional capital investment will likely require more, rather than less, labor. For example, a hospital that buys a new diagnostic tool may have to add four or five people to run it. Other knowledge-based organizations have had to learn the same lesson. But that all costs are fixed over a given time period and that resources cannot be substituted for one another, so that the total operation has to be costed—those are precisely the assumptions with which activity-based costing starts. By applying them to services, we are beginning for the first time to get cost information and yield control.
Banks, for instance, have been trying for several decades to apply conventional cost-accounting techniques to their business—that is, to figure the costs of individual operations and services—with almost negligible results. Now they are beginning to ask, Which one activity is at the center of costs and of results? The answer: serving the customer. The cost per customer in any major area of banking is a fixed cost. Thus it is the yield per customer—both the volume of services a customer uses and the mix of those services—that determines costs and profitability. Retail discounters, especially those in Western Europe, have known that for some time. They assume that once a unit of shelf space is installed, the cost is fixed and management consists of maximizing the yield thereon over a given time span. Their focus on yield control has enabled them to increase profitability despite their low prices and low margins.
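The banking logic can be sketched the same way. The customers, service mix, and fixed cost below are all hypothetical; what matters is that once the cost of serving a customer is treated as fixed, profitability is read directly off the yield per customer:

```python
# Hypothetical bank: serving any customer costs roughly the same,
# so profitability turns on the yield each customer generates.
FIXED_COST_PER_CUSTOMER = 400  # assumed annual cost of serving one customer

customers = {
    # customer: annual revenue from the mix of services used
    "A": {"checking": 120, "mortgage": 900, "cards": 180},
    "B": {"checking": 120},
    "C": {"checking": 120, "cards": 180},
}

margins = {}
for name, services in customers.items():
    yield_per_customer = sum(services.values())
    margins[name] = yield_per_customer - FIXED_COST_PER_CUSTOMER
    print(name, yield_per_customer, margins[name])
```

Customer A is profitable on both volume and mix; customers B and C cost more to serve than they yield, even though every individual service they use may look "profitable" under conventional costing.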
Service businesses are only beginning to apply the new costing concepts. In some areas, such as research labs, where productivity is nearly impossible to measure, we may always have to rely on assessment and judgment rather than on measurement. But for most knowledge-based and service work, we should, within 10 to 15 years, have developed reliable tools to measure and manage costs and to relate those costs to results.
Thinking more clearly about costing in services should yield new insights into the costs of getting and keeping customers in all kinds of businesses. If GM, Ford, and Chrysler had used activity-based costing, for example, they would have realized early on the utter futility of their competitive blitzes of the past few years, which offered new-car buyers spectacular discounts and hefty cash rewards. Those promotions actually cost the Big Three automakers enormous amounts of money and, worse, enormous numbers of potential customers. In fact, every one resulted in a nasty drop in market standing. But neither the costs of the special deals nor their negative yields appeared in the companies’ conventional cost-accounting figures, so management never saw the damage. Conventional cost accounting shows only the costs of individual manufacturing operations in isolation, and those were not affected by the discounts and rebates in the marketplace. Also, conventional cost accounting does not show the impact of pricing decisions on such things as market share.
Activity-based costing shows—or at least attempts to show—the impact of changes in the costs and yields of every activity on the results of the whole. Had the automakers used it, it soon would have shown the damage done by the discount blitzes. In fact, because the Japanese already use a form of activity-based costing—though still a fairly primitive one—Toyota, Nissan, and Honda knew better than to compete with U.S. automakers through discounts and thus maintained both their market share and their profits.
From Legal Fiction to Economic Reality
Knowing the cost of your operations, however, is not enough. To succeed in the increasingly competitive global market, a company has to know the costs of its entire economic chain and has to work with other members of the chain to manage costs and maximize yield. Companies are therefore beginning to shift from costing only what goes on inside their own organizations to costing the entire economic process, in which even the biggest company is just one link.
The legal entity, the company, is a reality for shareholders, for creditors, for employees, and for tax collectors. But economically, it is fiction. Thirty years ago, the Coca-Cola Company was a franchisor. Independent bottlers manufactured the product. Now the company owns most of its bottling operations in the United States. But Coke drinkers—even those few who know that fact—could not care less. What matters in the marketplace is the economic reality, the costs of the entire process, regardless of who owns what.
Again and again in business history, an unknown company has come from nowhere and in a few short years overtaken the established leaders without apparently even breathing hard. The explanation always given is superior strategy, superior technology, superior marketing, or lean manufacturing. But in every single case, the newcomer also enjoys a tremendous cost advantage, usually about 30%. The reason is always the same: the new company knows and manages the costs of the entire economic chain rather than its costs alone.
Toyota is perhaps the best-publicized example of a company that knows and manages the costs of its suppliers and distributors; they are all, of course, members of its keiretsu. Through that network, Toyota manages the total cost of making, distributing, and servicing its cars as one cost stream, putting work where it costs the least and yields the most.
Managing the economic cost stream is not a Japanese invention, however, but a U.S. one. It began with the man who designed and built General Motors, William Durant. About 1908, Durant began to buy small, successful automobile companies—Buick, Oldsmobile, Cadillac, Chevrolet—and merged them into his new General Motors Corporation. In 1916, he set up a separate subsidiary called United Motors to buy small, successful parts companies. His first acquisitions included Delco, which held Charles Kettering’s patents to the automotive self-starter.
Durant ultimately bought about 20 supplier companies; his last acquisition—in 1919, the year before he was ousted as GM’s CEO—was Fisher Body. Durant deliberately brought the parts and accessories makers into the design process of a new automobile model right from the start. Doing so allowed him to manage the total costs of the finished car as one cost stream. In fact, Durant invented the keiretsu.
However, between 1950 and 1960, Durant’s keiretsu became an albatross around the company’s neck, as unionization imposed higher labor costs on GM’s parts divisions than on their independent competitors. As the outside customers, the independent automobile companies such as Packard and Studebaker, which had bought 50% of the output of GM’s parts divisions, disappeared one by one, GM’s control over both the costs and quality of its main suppliers disappeared with them. But for 40 years or more, GM’s systems costing gave it an unbeatable advantage over even the most efficient of its competitors, which for most of that time was Studebaker.
Sears, Roebuck and Company was the first to copy Durant’s system. In the 1920s, it established long-term contracts with its suppliers and bought minority interests in them. Sears was then able to consult with suppliers as they designed the product and to understand and manage the entire cost stream. That gave the company an unbeatable cost advantage for decades.
In the early 1930s, London-based department store Marks & Spencer copied Sears with the same result. Twenty years later, the Japanese, led by Toyota, studied and copied both Sears and Marks & Spencer. Then in the 1980s, Wal-Mart Stores adapted the approach by allowing suppliers to stock products directly on store shelves, thereby eliminating warehouse inventories and with them nearly one-third of the cost of traditional retailing.
But those companies are still rare exceptions. Although economists have known the importance of costing the entire economic chain since Alfred Marshall wrote about it in the late 1890s, most businesspeople still consider it a theoretical abstraction. Increasingly, however, managing the economic cost chain will become a necessity. In their article, “From Lean Production to the Lean Enterprise” (HBR, March–April 1994), James P. Womack and Daniel T. Jones argue persuasively that executives need to organize and manage not only the cost chain but also everything else—especially corporate strategy and product planning—as one economic whole, regardless of the legal boundaries of individual companies.
A powerful force driving companies toward economic-chain costing will be the shift from cost-led pricing to price-led costing. Traditionally, Western companies have started with costs, put a desired profit margin on top, and arrived at a price. They practiced cost-led pricing. Sears and Marks & Spencer long ago switched to price-led costing, in which the price the customer is willing to pay determines allowable costs, beginning with the design stage. Until recently, those companies were the exceptions. Now price-led costing is becoming the rule. The Japanese first adopted it for their exports. Now Wal-Mart and all the discounters in the United States, Japan, and Europe are practicing price-led costing. It underlies Chrysler’s success with its recent models and the success of GM’s Saturn. Companies can practice price-led costing, however, only if they know and manage the entire cost of the economic chain.
The same ideas apply to outsourcing, alliances, and joint ventures—indeed, to any business structure that is built on partnership rather than control. And such entities, rather than the traditional model of a parent company with wholly owned subsidiaries, are increasingly becoming the models for growth, especially in the global economy.
Still, it will be painful for most businesses to switch to economic-chain costing. Doing so requires uniform or at least compatible accounting systems at companies along the entire chain. Yet each one does its accounting in its own way, and each is convinced that its system is the only possible one. Moreover, economic-chain costing requires information sharing across companies, and even within the same company, people tend to resist information sharing. Despite those challenges, companies can find ways to practice economic-chain costing now, as Procter & Gamble is demonstrating. Using the way Wal-Mart develops close relationships with suppliers as a model, P&G is initiating information sharing and economic-chain management with the 300 large retailers that distribute the bulk of its products worldwide.
Whatever the obstacles, economic-chain costing is going to be done. Otherwise, even the most efficient company will suffer from an increasing cost disadvantage.
Information for Wealth Creation
Enterprises are paid to create wealth, not control costs. But that obvious fact is not reflected in traditional measurements. First-year accounting students are taught that the balance sheet portrays the liquidation value of the enterprise and provides creditors with worst-case information. But enterprises are not normally run to be liquidated. They have to be managed as going concerns, that is, for wealth creation. To do that requires information that enables executives to make informed judgments. It requires four sets of diagnostic tools: foundation information, productivity information, competence information, and information about the allocation of scarce resources. Together, they constitute the executive’s tool kit for managing the current business.
The oldest and most widely used set of diagnostic management tools is cash-flow and liquidity projections and such standard measurements as the ratio between dealers’ inventories and sales of new cars; the earnings coverage for the interest payments on a bond issue; and the ratios between receivables outstanding more than six months, total receivables, and sales. Those may be likened to the measurements a doctor takes at a routine physical: weight, pulse, temperature, blood pressure, and urine analysis. If those readings are normal, they do not tell us much. If they are abnormal, they indicate a problem that needs to be identified and treated. Those measurements might be called foundation information.
The second set of tools for business diagnosis deals with the productivity of key resources. The oldest of them—of World War II vintage—measures the productivity of manual labor. Now we are slowly developing measurements, though still quite primitive ones, for the productivity of knowledge-based and service work. However, measuring only the productivity of workers, whether blue or white collar, no longer gives us adequate information about productivity. We need data on total-factor productivity.
That explains the growing popularity of economic value-added analysis. EVA is based on something we have known for a long time: what we generally call profits, the money left to service equity, is usually not profit at all.1 Until a business returns a profit that is greater than its cost of capital, it operates at a loss. Never mind that it pays taxes as if it had a genuine profit. The enterprise still returns less to the economy than it devours in resources. It does not cover its full costs unless the reported profit exceeds the cost of capital. Until then, it does not create wealth; it destroys it. By that measurement, incidentally, few U.S. businesses have been profitable since World War II.
By measuring the value added over all costs, including the cost of capital, EVA measures, in effect, the productivity of all factors of production. It does not, by itself, tell us why a certain product or a certain service does not add value or what to do about it. But it shows us what we need to find out and whether we need to take remedial action. EVA should also be used to find out what works. It does show which product, service, operation, or activity has unusually high productivity and adds unusually high value. Then we should ask ourselves, What can we learn from those successes?
The most recent of the tools used to obtain productivity information is benchmarking—comparing one’s performance with the best performance in the industry or, better yet, with the best anywhere in business. Benchmarking assumes correctly that what one organization does, any other organization can do as well. And it assumes, also correctly, that being at least as good as the leader is a prerequisite to being competitive. Together, EVA and benchmarking provide the diagnostic tools to measure total-factor productivity and to manage it.
A third set of tools deals with competencies. Ever since C.K. Prahalad and Gary Hamel’s pathbreaking article, “The Core Competence of the Corporation” (HBR, May–June 1990), we have known that leadership rests on being able to do something others cannot do at all or find difficult to do even poorly. It rests on core competencies that meld market or customer value with a special ability of the producer or supplier.
Some examples: the ability of the Japanese to miniaturize electronic components, which is based on their 300-year-old artistic tradition of putting landscape paintings on a tiny lacquer box, called an inro, and of carving a whole zoo of animals on the even tinier button that holds the box on the wearer’s belt, called a netsuke; or the almost unique ability GM has had for 80 years to make successful acquisitions; or Marks & Spencer’s also unique ability to design packaged and ready-to-eat luxury meals for middle-class budgets. But how does one identify both the core competencies one has already and those the business needs in order to take and maintain a leadership position? How does one find out whether one’s core competence is improving or weakening? Or whether it is still the right core competence and what changes it might need?
So far the discussion of core competencies has been largely anecdotal. But a number of highly specialized midsize companies—a Swedish pharmaceutical producer and a U.S. producer of specialty tools, to name two—are developing the methodology to measure and manage core competencies. The first step is to keep careful track of one’s own and one’s competitors’ performances, looking especially for unexpected successes and unexpected poor performance in areas where one should have done well. The successes demonstrate what the market values and will pay for. They indicate where the business enjoys a leadership advantage. The nonsuccesses should be viewed as the first indication either that the market is changing or that the company’s competencies are weakening.
That analysis allows for the early recognition of opportunities. For example, by carefully tracking an unexpected success, a U.S. toolmaker found that small Japanese machine shops were buying its high-tech, high-priced tools, even though it had not designed the tools with them in mind or made sales calls to them. That allowed the company to recognize a new core competence: the Japanese were attracted to its products because they were easy to maintain and repair despite their technical complexity. When that insight was applied to designing products, the company gained leadership in the small-plant and machine-shop markets in the United States and Western Europe, huge markets where it had done practically no business before.
Core competencies are different for every organization; they are, so to speak, part of an organization’s personality. But every organization—not just businesses—needs one core competence: innovation. And every organization needs a way to record and appraise its innovative performance. In organizations already doing that—among them several topflight pharmaceutical manufacturers—the starting point is not the company’s own performance. It is a careful record of the innovations in the entire field during a given period. Which of them were truly successful? How many of them were ours? Is our performance commensurate with our objectives? With the direction of the market? With our market standing? With our research spending? Are our successful innovations in the areas of greatest growth and opportunity? How many of the truly important innovation opportunities did we miss? Why? Because we did not see them? Or because we saw them but dismissed them? Or because we botched them? And how well do we convert an innovation into a commercial product? A good deal of that, admittedly, is assessment rather than measurement. It raises rather than answers questions, but it raises the right questions.
The last area in which diagnostic information is needed to manage the current business for wealth creation is the allocation of scarce resources: capital and performing people. Those two convert into action whatever information management has about its business. They determine whether the enterprise will do well or do poorly.
GM developed the first systematic capital-appropriations process about 70 years ago. Today practically every business has a capital-appropriations process, but few use it correctly. Companies typically measure their proposed capital appropriations by only one or two of the following yardsticks: return on investment, payback period, cash flow, or discounted present value. But we have known for a long time—since the early 1930s—that none of those is the right method. To understand a proposed investment, a company needs to look at all four. Sixty years ago, that would have required endless number crunching. Now a laptop computer can provide the information within a few minutes. We also have known for 60 years that managers should never look at just one proposed capital appropriation in isolation but should instead choose the projects that show the best ratio between opportunity and risks. That requires a capital-appropriations budget to display the choices—again, something far too many businesses do not do. Most serious, however, is that most capital-appropriations processes do not even ask for two vital pieces of information:
• What will happen if the proposed investment fails to produce the promised results, as do three out of every five? Would it seriously hurt the company, or would it be just a flea bite?
• If the investment is successful—and especially if it is more successful than we expect—what will it commit us to? No one at GM seems to have asked what Saturn’s success would commit the company to. As a result, the company may end up killing its own success because of its inability to finance it.
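The four yardsticks named above—return on investment, payback period, cash flow, and discounted present value—can be applied together in a short sketch. The formulas are the simplified textbook versions, and the project figures are hypothetical:

```python
def appraise(outlay: float, cash_flows: list[float], discount_rate: float) -> dict:
    """Apply all four yardsticks to one proposed investment.
    Simplified textbook formulas; real appraisals vary in the details."""
    total_cash = sum(cash_flows)
    # Return on investment: average annual profit relative to the outlay.
    avg_annual_profit = (total_cash - outlay) / len(cash_flows)
    roi = avg_annual_profit / outlay
    # Payback period: years until cumulative cash recovers the outlay.
    cumulative, payback = 0.0, None
    for year, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if payback is None and cumulative >= outlay:
            payback = year
    # Discounted present value of the cash flows, net of the outlay.
    npv = sum(cf / (1 + discount_rate) ** t
              for t, cf in enumerate(cash_flows, start=1)) - outlay
    return {"roi": roi, "payback_years": payback,
            "total_cash_flow": total_cash, "npv": round(npv, 2)}

# Hypothetical project: $100,000 outlay, five years of $30,000 cash flows, 10% rate.
print(appraise(100_000, [30_000] * 5, 0.10))
```

The arithmetic takes a laptop a few milliseconds; the discipline lies in looking at all four numbers, and at the whole portfolio of proposals, rather than at any single yardstick in isolation.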
In addition, a capital-appropriations request requires specific deadlines: When should we expect what results? Then the results—successes, near successes, near failures, and failures—need to be reported and analyzed. There is no better way to improve an organization’s performance than to measure the results of capital appropriations against the promises and expectations that led to their authorization. How much better off the United States would be today had such feedback on government programs been standard practice for the past 50 years.
Capital, however, is only one key resource of the organization, and it is by no means the scarcest one. The scarcest resources in any organization are performing people. Since World War II, the U.S. military—and so far no one else—has learned to test its placement decisions. It now thinks through what it expects of senior officers before it puts them into key commands. It then appraises their performance against those expectations. And it constantly appraises its own process for selecting senior commanders against the successes and failures of its appointments. In business, by contrast, placement with specific expectations as to what the appointee should achieve and systematic appraisal of the outcome are virtually unknown. In the effort to create wealth, managers need to allocate human resources as purposefully and as thoughtfully as they do capital. And the outcomes of those decisions ought to be recorded and studied as carefully.
Where the Results Are
Those four kinds of information tell us only about the current business. They inform and direct tactics. For strategy, we need organized information about the environment. Strategy has to be based on information about markets, customers, and noncustomers; about technology in one’s own industry and others; about worldwide finance; and about the changing world economy. For that is where the results are. Inside an organization, there are only cost centers. The only profit center is a customer whose check has not bounced.
Major changes also start outside an organization. A retailer may know a great deal about the people who shop at its stores. But no matter how successful it is, no retailer ever has more than a small fraction of the market as its customers; the great majority are noncustomers. It is always with noncustomers that basic changes begin and become significant.
At least half the important new technologies that have transformed an industry in the past 50 years came from outside the industry itself. Commercial paper, which has revolutionized finance in the United States, did not originate with the banks. Molecular biology and genetic engineering were not developed by the pharmaceutical industry. Though the great majority of businesses will continue to operate only locally or regionally, they all face, at least potentially, global competition from places they have never even heard of before.
Not all of the needed information about the outside is available, to be sure. There is no information—not even unreliable information—on economic conditions in most of China, for instance, or on legal conditions in most of the successor states to the Soviet empire. But even where information is readily available, many businesses are oblivious to it. Many U.S. companies went into Europe in the 1960s without even asking about labor legislation. European companies have been just as blind and ill informed in their ventures into the United States. A major cause of the Japanese real estate investment debacle in California during the 1990s was the failure to find out elementary facts about zoning and taxes.
A serious cause of business failure is the common assumption that conditions—taxes, social legislation, market preferences, distribution channels, intellectual property rights, and many others—must be what we think they are or at least what we think they should be. An adequate information system has to include information that makes executives question that assumption. It must lead them to ask the right questions, not just feed them the information they expect. That presupposes first that executives know what information they need. It demands further that they obtain that information on a regular basis. It finally requires that they systematically integrate the information into their decision making.
A few multinationals—Unilever, Coca-Cola, Nestlé, the big Japanese trading companies, and a few big construction companies—have been working hard on building systems to gather and organize outside information. But in general, the majority of enterprises have yet to start the job.
Even big companies, in large part, will have to hire outsiders to help them. To think through what the business needs requires somebody who knows and understands the highly specialized information field. There is far too much information for any but specialists to find their way around. The sources are totally diverse. Companies can generate some of the information themselves, such as information about customers and noncustomers or about the technology in one’s own field. But most of what enterprises need to know about the environment is obtainable only from outside sources—from all kinds of data banks and data services, from journals in many languages, from trade associations, from government publications, from World Bank reports and scientific papers, and from specialized studies.
Another reason there is need for outside help is that the information has to be organized so it questions and challenges a company’s strategy. To supply data is not enough. The data have to be integrated with strategy, they have to test a company’s assumptions, and they must challenge a company’s current outlook. One way to do that may be a new kind of software, information tailored to a specific group—say, to hospitals or to casualty insurance companies. The Lexis database supplies such information to lawyers, but it only gives answers; it does not ask questions. What we need are services that make specific suggestions about how to use the information, ask specific questions regarding the users’ business and practices, and perhaps provide interactive consultation. Or we might “outsource” the outside-information system. Maybe the most popular provider of the outside-information system, especially for smaller enterprises, will be that “inside outsider,” the independent consultant.
Whichever way we satisfy it, the need for information on the environment where the major threats and opportunities are likely to arise will become increasingly urgent.
It may be argued that few of those information needs are new, and that is largely true. Conceptually, many of the new measurements have been discussed for many years and in many places. What is new is the technical data processing ability. It enables us to do quickly and cheaply what, only a few short years ago, would have been laborious and very expensive. Seventy years ago, the time-and-motion study made traditional cost accounting possible. Computers have now made activity-based cost accounting possible; without them, it would be practically impossible.
But that argument misses the point. What is important is not the tools. It is the concepts behind them. They convert what were always seen as discrete techniques to be used in isolation and for separate purposes into one integrated information system. That system then makes possible business diagnosis, business strategy, and business decisions. That is a new and radically different view of the meaning and purpose of information: as a measurement on which to base future action rather than as a postmortem and a record of what has already happened.
The command-and-control organization that first emerged in the 1870s might be compared to an organism held together by its shell. The corporation that is now emerging is being designed around a skeleton: information, both the corporation’s new integrating system and its articulation.
Our traditional mind-set—even if we use sophisticated mathematical techniques and impenetrable sociological jargon—has always somehow perceived business as buying cheap and selling dear. The new approach defines a business as the organization that adds value and creates wealth.
1. I discussed EVA at considerable length in my 1964 book, Managing for Results, but the last generation of classical economists, Alfred Marshall in England and Eugen Böhm-Bawerk in Austria, were already discussing it in the late 1890s.
Peter F. Drucker is the Clarke Professor of Social Science and Management at the Claremont Graduate School in Claremont, California, where the Drucker Management Center was named in his honor.