TM600.88 Spring 1997 Module 3
Instructional Overview
The following material distinguishes between the management of technology development and the management of innovation. Practices for technology development and methods of evaluating opportunities for technology development are examined. Approaches for managing risk and uncertainty are also considered.
Introduction
So many innovations, so little time. The opportunities for technological innovation are arguably endless, yet our capacity to adopt, absorb, and leverage them is quite limited. How do we decide which innovations to pursue, which to hold, and which to discard? Herein lies the crux of technology management.
Again, the answers resonate in the themes of our class:
Steele (1989) suggests that there is a widely held misconception that technological advances or discoveries usually are adopted eventually. The reality is that "most don't succeed -- and shouldn't" (p. 56). He attributes this to two phenomena:
These phenomena hold true even when the technological development is not internally driven. As we have discussed in class, the software industry is renowned for distributing new releases every year; many organizations choose to skip a release or two because the costs (and hardware requirements) of keeping up with every release are too high.
To proceed or not to proceed -- that is the question. The following materials, in conjunction with the assigned readings, provide alternative frameworks for selecting and developing technologies.
Managing Technology Development
The selection of a technology for development is not a single or solitary decision. Screening, evaluation, prioritization, and portfolio decisions may be repeated several times over the life cycle of a project in response to emerging technologies and changing environmental, financial, or commercial circumstances (Shtub et al., 1994, p. 113). Steele (1989) suggests that, at any point in time, only 15% of development funding goes to new programs; the other 85% is applied to ongoing efforts. These efforts, however, must be evaluated repeatedly for merit.
Kumar et al. (1996) present an activity-decision stage model of the innovation process with four decision points. The first comes after the Initial Screening, with preliminary market, financial, and technical assessments. The second decision point comes after the more detailed assessments of the Commercial Evaluation. Development is the third stage, with specifications, detailed designs, prototypes, and preliminary testing -- followed by the third decision point. The last decision point comes after the Manufacturing/Marketing Launch. Projects can -- and should -- be terminated at any one of these decision points, based on an evaluation process.
A former professor of mine, Dr. Al Rubenstein (1989, p. 289) writes, "Evaluation is a touchy subject. Everyone is in favor of it in principle, but most people resist being evaluated themselves." As a formal technology management process, evaluation:
Rubenstein (1989, p. 300) offers the following characteristics of an effective evaluation process, which he calls a project selection and resource allocation (PS/RA) system, to support research and development (R&D)/innovation planning:
This last point is critical. There must be a mechanism for taking projects out of the portfolio as priorities and circumstances change.
There are a variety of techniques used to accomplish this, depending on the status of the program under evaluation.
Evaluating Opportunities for Technology Development
Steele (1989) uses the metaphor of a funnel to describe the process of selecting programs for development. At any point in time, an organization will have different programs at different points in the funnel. There will be more projects in the early stages of development (i.e., the wide point of the funnel); these will be progressively filtered as they pass through the process (i.e., funnel). The filters vary at different points in the process.
In the later stages of the process, more specific evaluations of technical and economic feasibility are required. Fundamentally, these evaluations are intended to produce an assessment of value (V), such that:
V = R*P/I
where R is the net return, P is the probability of success, and I is the total investment required (Steele, 1989, p. 103).
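As a minimal sketch of this value index (the program names, returns, probabilities, and investments below are invented purely for illustration), the comparison might look like:

```python
def value_index(R, P, I):
    """Steele's value index: V = R * P / I, where R is the net return,
    P is the probability of success, and I is the total investment."""
    return R * P / I

# Hypothetical candidate programs (all figures assumed for illustration).
candidates = {
    "Program X": (5_000_000, 0.6, 2_000_000),  # higher return, lower odds
    "Program Y": (3_000_000, 0.9, 1_500_000),  # lower return, safer bet
}

for name, (R, P, I) in candidates.items():
    print(f"{name}: V = {value_index(R, P, I):.2f}")
```

In this invented comparison, the safer Program Y (V = 1.8) edges out Program X (V = 1.5), illustrating how a higher probability of success can offset a smaller net return.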
Scoring Methods
There are a variety of methods used in such assessments. Checklists and scoring models can be used to prioritize alternatives with criteria-based evaluations of such factors as:
In his survey of 37 research and development organizations, Rubenstein (1989, p. 329) identified seven general criteria used to judge progress and/or results (with the number of companies using the criteria in parentheses):
He found that:
In your evaluations, you should consider how to weight the criteria. The simplest approach, of course, is to use uniform weights. However, this is often invalid. For example, is the success of the technical solution (in terms of the number of patents) of equal importance to the profitability of the endeavor? Probably not -- but what if the technical solution is measured in terms of the number of problems addressed?
There are many different ways to assign weights to the criteria. One is proportional; i.e., the weights are assigned as a percent of 100. This is how I develop your grades for the class.
Another approach is to prioritize the criteria and use rank reciprocal weights (Shtub et al., 1994, p. 116), such that:

wi = (1/ri) / [sum over k = 1, ..., N of (1/rk)]

where wi is the weight of criterion i, with rank ri, for all criteria 1, ..., N. The formula looks daunting, but if you try it on your own with a small example of four criteria, you will gain the intuition. I will provide the illustration if asked.
Scoring methods are expensive and can lack credibility if not used judiciously and consistently. However, they do facilitate a thorough examination of a development program, and provide a useful audit trail for organizational learning and the continuous improvement of the development process.
Cost-benefit Analyses
The more conventional measures of evaluation such as ROI (return on investment) and NPV (net present value) are better used in the later stages of the development process. This is because the costs and benefits of a program are better understood, the closer it is to implementation. However, these calculations are still likely to be stochastic rather than deterministic (i.e., based on probabilities of outcomes).
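A brief sketch of this stochastic view of NPV (the discount rate, cash flows, and scenario probabilities below are all hypothetical):

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at period t (t = 0 is now)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Stochastic rather than deterministic: weight each scenario's NPV by
# its probability. All figures are assumed for illustration.
scenarios = [
    (0.7, [-1000, 500, 500, 500]),  # likely-demand case
    (0.3, [-1000, 100, 100, 100]),  # weak-demand case
]
expected_npv = sum(p * npv(0.10, flows) for p, flows in scenarios)
print(round(expected_npv, 2))
```

In this invented example the likely case has a comfortably positive NPV, yet the probability-weighted result is slightly negative: a reminder that a deterministic calculation on the "expected" scenario alone can be misleading.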
The challenge is to capture all of the costs and benefits: primary, secondary, external, and intangible. The primary costs are usually well understood and simply require thorough "legwork" to assess them. The secondary costs -- the "second-level effects" (Sproull and Kiesler, 1991) -- are harder to anticipate and quantify, requiring careful and creative thought. In the same way, you should anticipate external benefits and costs, those that arise when a project produces a spillover effect on someone other than the intended group (Shtub et al., 1994, p. 125).
Of course the intangibles are the most difficult to capture in a cost-benefit analysis; how do you quantify customer satisfaction, quality, flexibility, and employee morale, for example? In today's current business climate, these intangibles are being given more emphasis, if only to explicitly recognize them in the analysis. Sometimes these are captured as the opportunity costs of not developing the technology. Occasionally pro-forma (i.e., "what-if") analysis is used to gauge their value.
You are expected to know how to use ROI and NPV techniques, based on your earlier coursework. If you are uncomfortable applying them, please ask for help.
Decision Trees
Another useful tool for evaluation is a decision tree. Decision trees depict and facilitate the analysis of problems that involve sequential decisions and variable outcomes over time (Shtub et al., 1994, p. 136). They are composed of two different kinds of nodes. One kind is a decision node, which branches to alternatives. The other is a chance node, which branches to different outcomes (e.g., levels of market demand) with probabilities assigned to them. The outcomes at a chance node must be mutually exclusive, and their probabilities must sum to 1.
For example, a decision point might be the type of microprocessor to use in a product, with the branches from the node being choice A, choice B, choice C, etc. Each of these alternatives has a different unit profitability and will result in different levels of performance (sales). So each alternative branch will lead to the second kind of node, a chance node.
(Instructor's note: I know this is tough to visualize. I have spent hours creating a diagram to illustrate the example, but cannot seem to translate the formatting in the html document.)
To develop a solution, you would apply dynamic programming techniques, working backwards from the end to the beginning. For each chance node, calculate the expected value (the probability of each outcome multiplied by the revenues for that outcome, summed over the possible outcomes at that node). For each decision node, choose the branch with the highest expected value. Continue to work backwards until all of the decision nodes have been evaluated.
In our small example, the expected value of A is [.6(300,000)+.4(30,000)]*100 = $19,200,000. The expected value of B is [.4(250,000)+.6(50,000)]*200 = $26,000,000. The expected value of C is [.2(100,000)+.8(15,000)]*500 = $16,000,000. So the best alternative for this decision point is option B. Presumably, this decision point would be a segment of a larger decision tree with multiple decision and chance nodes.
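The backward-induction arithmetic of the example can be sketched as follows. (Reading each bracketed quantity as expected unit sales, and the trailing multiplier as a unit profit, is my interpretation of the example's notation.)

```python
# Each alternative: a unit profit and a chance node of (probability, units).
# The figures are those of the example in the text.
alternatives = {
    "A": (100, [(0.6, 300_000), (0.4, 30_000)]),
    "B": (200, [(0.4, 250_000), (0.6, 50_000)]),
    "C": (500, [(0.2, 100_000), (0.8, 15_000)]),
}

def expected_value(unit_profit, outcomes):
    """Roll back a chance node: sum of probability x payoff for each outcome."""
    return unit_profit * sum(p * units for p, units in outcomes)

# Roll back the decision node: pick the branch with the highest expected value.
evs = {name: expected_value(*alt) for name, alt in alternatives.items()}
best = max(evs, key=evs.get)
print(evs, "-> choose", best)
```

In a larger tree, the same two steps -- expectation at chance nodes, maximization at decision nodes -- would simply be repeated from the leaves back to the root.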
Shtub et al. (1994, p. 143) summarize the technique as follows:
Decision trees are especially helpful with risk assessments.
Managing Risk and Uncertainty
The terms "risk" and "uncertainty" are often used interchangeably. This is incorrect. Although there is an element of unknown in both terms, they are distinctly different.
Uncertainty is a measure of the limits of knowledge. Because these unknowns cannot be assigned meaningful probabilities, they are hard to analyze.
Risk is a combination of the probability of failure and the consequences of that failure (Shtub et al., 1994, p. 131). It can be measured by multiplying the magnitude of the consequence (often measured in terms of expenditures) by the probability. This formula is deceptively simple.
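A trivial sketch of this measure (the probabilities and dollar figures are assumed) also hints at why the formula is deceptive:

```python
def risk_exposure(p_failure, consequence):
    """Risk as probability of failure times magnitude of the consequence."""
    return p_failure * consequence

# A 10% chance of a $500,000 overrun scores the same as a 50% chance of a
# $100,000 overrun -- yet these are very different management problems.
print(risk_exposure(0.10, 500_000), risk_exposure(0.50, 100_000))
```

Two projects with identical exposure can demand very different responses, which is why the single number should inform, not replace, judgment.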
Perceptions of risk (i.e., assessment of probabilities) are prone to many biases, such as overestimating proximal risks and underestimating distant ones. Part of the issue is the difficulty in holding and applying a holistic view of the technology's interactions. Proven technologies can especially create a false sense of security.
Steele (1989) provides an elegant framework for examining risk. In it, risk has three dimensions, in terms of changes in the technology, the product, and the market.
So you might be applying a new technology in an existing product, or using existing technologies in a new product. Approaching a new customer base or using a new channel of distribution also presents risks.
It is interesting to apply this framework to the EMI case we covered in Module 1. EMI was high on all three dimensions (Steele calls this position the "suicide square"). GE, which ultimately acquired EMI's scanner business, was using existing technologies in comfortable markets; only the product was new.
One option for managing risk is to stay in a "comfort zone", i.e., only pursue change along one dimension at a time. That is not always feasible or practical. Steele offers specific suggestions for mitigating the risks along any of the dimensions (p. 124):
Roles for Developing Technology
An important contributing factor to the risk of a development project is the people involved in the effort. Two roles are particularly critical. One is that of the sponsor, the person with the money -- and hopefully, the strong positional power -- to sustain the project. The other is the champion, often the project leader, who maneuvers the project through the organization and over the various hurdles. Steele (1989) likens the champion to a "ground commander" and the sponsor to the "air cover". Together, these players provide the vision and determination to advance the technology development. Their reputation and influence in the organization are key to the continued support of a program -- apart from the formal evaluation process.
Of course, the composition of the development team itself is important. As we have noted before, the use of a multi-disciplinary approach to technology development is recommended. The different perspectives provided by each discipline:
This is covered extensively in your reading assignment.