
LINEAR DEMAND CURVE AND NON-CONSTANT PRICE ELASTICITY

Information on price elasticity is useful for assessing the effect of a price change on sales. However, you should be careful to determine the price elasticity in each price range, since elasticity is not constant along the demand curve.

 

EXAMPLE 3

We are given the following demand function: P = 10 – 2Q, or equivalently Q = 5 – 0.5P

In the case of a linear demand function (see Figure 4), the slope of the straight-line demand curve is the same at all points, but the elasticity varies from one point to the next. The reason is that in the ep formula, ep = (dQ/dP)(P/Q),

 

  1. dQ/dP is constant along the line, but
  2. P/Q falls as we move down and to the right.
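As a sketch of this point, the ep formula can be evaluated at several prices along the demand curve from Example 3 (the function name below is illustrative):

```python
# Point elasticity along the linear demand curve P = 10 - 2Q,
# i.e. Q = 5 - 0.5P, where dQ/dP = -0.5 at every point.

def point_elasticity(p):
    """Return |ep| = |dQ/dP| * (P/Q) for Q = 5 - 0.5*P."""
    q = 5 - 0.5 * p
    return 0.5 * p / q

for p in (2, 5, 8):
    print(f"P = {p}: |ep| = {point_elasticity(p):.2f}")
# P = 2: |ep| = 0.25 (inelastic)
# P = 5: |ep| = 1.00 (unit elastic, at the midpoint)
# P = 8: |ep| = 4.00 (elastic)
```

The slope term is the same everywhere; only the P/Q ratio changes, which is why elasticity falls as we move down and to the right along the curve.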

 

TOTAL REVENUE AND PRICE ELASTICITY

Economists have established the following relationships between price elasticity (ep) and total revenue (TR), which can aid a firm in setting its price.

 

                 ep > 1        ep = 1         ep < 1
Price rises      TR falls      No change      TR rises
Price falls      TR rises      No change      TR falls
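These relationships can be checked numerically against the demand curve from Example 3, where total revenue is TR = P × Q (a small illustrative sketch; on this curve the elastic region is P > 5 and the inelastic region is P < 5):

```python
# Verify the TR rules on the demand curve Q = 5 - 0.5*P.

def revenue(p):
    """Total revenue TR = P * Q at price p."""
    return p * (5 - 0.5 * p)

# Elastic region (|ep| > 1, here P > 5): a price rise lowers TR.
assert revenue(9) < revenue(8)

# Inelastic region (|ep| < 1, here P < 5): a price rise raises TR.
assert revenue(3) > revenue(2)

# Unit elasticity (P = 5): TR is at its maximum of 12.5.
assert revenue(5) == max(revenue(p / 10) for p in range(0, 101))
```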


General Approaches to Forecasting

All firms forecast demand, but it would be difficult to find any two firms that forecast demand in exactly the same way. Over the last few decades, many different forecasting techniques have been developed in a number of different application areas, including engineering and economics. Many such procedures have been applied to the practical problem of forecasting demand in a logistics system, with varying degrees of success. Most commercial software packages that support demand forecasting in a logistics system include dozens of different forecasting algorithms that the analyst can use to generate alternative demand forecasts. While scores of different forecasting techniques exist, almost any forecasting procedure can be broadly classified into one of the following four basic categories based on the fundamental approach towards the forecasting problem that is employed by the technique.

  1. Judgmental Approaches. The essence of the judgmental approach is to address the forecasting issue by assuming that someone else knows and can tell you the right answer. That is, in a judgment-based technique we gather the knowledge and opinions of people who are in a position to know what demand will be. For example, we might conduct a survey of the customer base to estimate what our sales will be next month.

 

  2. Experimental Approaches. Another approach to demand forecasting, which is appealing when an item is “new” and when there is no other information upon which to base a forecast, is to conduct a demand experiment on a small group of customers and to extrapolate the results to a larger population. For example, firms will often test a new consumer product in a geographically isolated “test market” to establish its probable market share. This experience is then extrapolated to the national market to plan the new product launch. Experimental approaches are very useful and necessary for new products, but for existing products that have an accumulated historical demand record it seems intuitive that demand forecasts should somehow be based on this demand experience. For most firms (with some very notable exceptions) the large majority of SKUs in the product line have long demand histories.

 

  3. Relational/Causal Approaches. The assumption behind a causal or relational forecast is that, simply put, there is a reason why people buy our product. If we can understand what that reason (or set of reasons) is, we can use that understanding to develop a demand forecast. For example, if we sell umbrellas at a sidewalk stand, we would probably notice that daily demand is strongly correlated to the weather – we sell more umbrellas when it rains. Once we have established this relationship, a good weather forecast will help us order enough umbrellas to meet the expected demand.

 

  4. “Time Series” Approaches. A time series procedure is fundamentally different from the first three approaches we have discussed. In a pure time series technique, no judgment or expertise or opinion is sought. We do not look for “causes” or relationships or factors which somehow “drive” demand. We do not test items or experiment with customers. By their nature, time series procedures are applied to demand data that are longitudinal rather than cross-sectional. That is, the demand data represent experience that is repeated over time rather than across items or locations. The essence of the approach is to recognize (or assume) that demand occurs over time in patterns that repeat themselves, at least approximately. If we can describe these general patterns or tendencies, without regard to their “causes”, we can use this description to form the basis of a forecast.

In one sense, all forecasting procedures involve the analysis of historical experience into patterns and the projection of those patterns into the future in the belief that the future will somehow resemble the past. The differences in the four approaches are in the way this “search for pattern” is conducted. Judgmental approaches rely on the subjective, ad-hoc analyses of external individuals. Experimental tools extrapolate results from small numbers of customers to large populations. Causal methods search for reasons for demand. Time series techniques simply analyze the demand data themselves to identify temporal patterns that emerge and persist.

Judgmental Approaches to Forecasting

By their nature, judgment-based forecasts use subjective and qualitative data to forecast future outcomes. They inherently rely on expert opinion, experience, judgment, intuition, conjecture, and other “soft” data. Such techniques are often used when historical data are not available, as is the case with the introduction of a new product or service, and in forecasting the impact of fundamental changes such as new technologies, environmental changes, cultural changes, legal changes, and so forth. Some of the more common procedures include the following:

Surveys: This is a “bottom up” approach where each individual contributes a piece of what will become the final forecast. For example, we might poll or sample our customer base to estimate demand for a coming period. Alternatively, we might gather estimates from our sales force as to how much each salesperson expects to sell in the next time period. The approach is at least plausible in the sense that we are asking people who are in a position to know something about future demand. On the other hand, in practice there have proven to be serious problems of bias associated with these tools. It can be difficult and expensive to gather data from customers. History also shows that surveys of “intention to purchase” will generally over-estimate actual demand – liking a product is one thing, but actually buying it is often quite another. Sales people may also intentionally (or even unintentionally) exaggerate or underestimate their sales forecasts based on what they believe their supervisors want them to say. If the sales force (or the customer base) believes that their forecasts will determine the level of finished goods inventory that will be available in the next period, they may be sorely tempted to inflate their demand estimates so as to ensure good inventory availability. Even if these biases could be eliminated or controlled, another serious problem would probably remain. Sales people might be able to estimate their weekly dollar volume or total unit sales, but they are not likely to be able to develop credible estimates at the SKU level that the logistics system will require. For these reasons it will seldom be the case that these tools will form the basis of a successful demand forecasting procedure in a logistics system.

Consensus methods: As an alternative to the “bottom-up” survey approaches, consensus methods use a small group of individuals to develop general forecasts. In a “Jury of Executive Opinion”, for example, a group of executives in the firm would meet and develop through debate and discussion a general forecast of demand. Each individual would presumably contribute insight and understanding based on their view of the market, the product, the competition, and so forth. Once again, while these executives are undoubtedly experienced, they are hardly disinterested observers, and the opportunity for biased inputs is obvious. A more formal consensus procedure, called “The Delphi Method”, has been developed to help control these problems. In this technique, a panel of disinterested technical experts is presented with a questionnaire regarding a forecast. The answers are collected, processed, and re-distributed to the panel, making sure that all information contributed by any panel member is available to all members, but on an anonymous basis. Each expert reflects on the emerging group opinion. A second questionnaire is then distributed to the panel, and the process is repeated until a consensus forecast is reached. Consensus methods are usually appropriate only for highly aggregate and usually quite long-range forecasts. Once again, their ability to generate useful SKU level forecasts is questionable, and it is unlikely that this approach will be the basis for a successful demand forecasting procedure in a logistics system.

Judgment-based methods are important in that they are often used to determine an enterprise’s strategy. They are also used in more mundane decisions, such as determining the quality of a potential vendor by asking for references, and there are many other reasonable applications. It is true that judgment based techniques are an inadequate basis for a demand forecasting system, but this should not be construed to mean that judgment has no role to play in logistics forecasting or that salespeople have no knowledge to bring to the problem. In fact, it is often the case that sales and marketing people have valuable information about sales promotions, new products, competitor activity, and so forth, which should be incorporated into the forecast somehow. Many organizations treat such data as additional information that is used to modify the existing forecast rather than as the baseline data used to create the forecast in the first place.

Experimental Approaches to Forecasting

In the early stages of new product development it is important to get some estimate of the level of potential demand for the product. A variety of market research techniques are used to this end.

Customer Surveys are sometimes conducted over the telephone or on street corners, at shopping malls, and so forth. The new product is displayed or described, and potential customers are asked whether they would be interested in purchasing the item. While this approach can help to isolate attractive or unattractive product features, experience has shown that “intent to purchase” as measured in this way is difficult to translate into a meaningful demand forecast. This falls short of being a true “demand experiment”.

Consumer Panels are also used in the early phases of product development. Here a small group of potential customers are brought together in a room where they can use the product and discuss it among themselves. Panel members are often paid a nominal amount for their participation. Like surveys, these procedures are more useful for analyzing product attributes than for estimating demand, and they do not constitute true “demand experiments” because no purchases take place.

Test Marketing is often employed after new product development but prior to a full-scale national launch of a new brand or product. The idea is to choose a relatively small, reasonably isolated, yet somehow demographically “typical” market area. In the United States, this is often a medium sized city such as Cincinnati or Buffalo. The total marketing plan for the item, including advertising, promotions, and distribution tactics, is “rolled out” and implemented in the test market, and measurements of product awareness, market penetration, and market share are made. While these data are used to estimate potential sales to a larger national market, the emphasis here is usually on “fine-tuning” the total marketing plan and ensuring that no problems or potential embarrassments have been overlooked. For example, Procter & Gamble extensively test-marketed its Pringles potato chip product made with the fat substitute Olestra to ensure that the product would be broadly acceptable to the market.

Scanner Panel Data procedures have recently been developed that permit demand experimentation on existing brands and products. In these procedures, a large set of household customers agrees to participate in an ongoing study of their grocery buying habits. Panel members agree to submit information about the number of individuals in the household, their ages, household income, and so forth. Whenever they buy groceries at a supermarket participating in the research, their household identity is captured along with the identity and price of every item they purchased. This is straightforward due to the use of UPC codes and optical scanners at checkout. This procedure results in a rich database of observed customer buying behavior. The analyst is in a position to see each purchase in light of the full set of alternatives to the chosen brand that were available in the store at the time of purchase, including all other brands, prices, sizes, discounts, deals, coupon offers, and so on. Statistical models such as discrete choice models can be used to analyze the relationships in the data. The manufacturer and merchandiser are now in a position to test a price promotion and estimate its probable effect on brand loyalty and brand switching behavior among customers in general. This approach can develop valuable insight into demand behavior at the customer level, but once again it can be difficult to extend this insight directly into demand forecasts in the logistics system.

Relational/Causal Approaches to Forecasting

Suppose our firm operates retail stores in a dozen major cities, and we now decide to open a new store in a city where we have not operated before. We will need to forecast what the sales at the new store are likely to be. To do this, we could collect historical sales data from all of our existing stores. For each of these stores we could also collect relevant data related to the city’s population, average income, the number of competing stores in the area, and other presumably relevant data. These additional data are all referred to as explanatory variables or independent variables in the analysis. The sales data for the stores are considered to be the dependent variable that we are trying to explain or predict.

The basic premise is that if we can find relationships between the explanatory variables (population, income, and so forth) and sales for the existing stores, then these relationships will hold in the new city as well. Thus, by collecting data on the explanatory variables in the target city and applying these relationships, sales in the new store can be estimated. In some sense the posture here is that the explanatory variables “cause” the sales. Mathematical and statistical procedures are used to develop and test these explanatory relationships and to generate forecasts from them. Causal methods include the following:
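As an illustrative sketch of this premise (the store data below are invented, not from the text), a one-variable version of the idea fits a least-squares line relating sales to city population and then evaluates it at the new city:

```python
# Hypothetical data: sales ($M) and population (millions) for five
# existing stores.  A least-squares fit gives sales ~= b0 + b1 * population.
population = [1.2, 0.8, 2.5, 1.9, 0.5]
sales = [10.1, 8.3, 19.0, 14.6, 4.9]

n = len(population)
mean_x = sum(population) / n
mean_y = sum(sales) / n

# Slope and intercept via the normal equations for simple regression.
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(population, sales)) \
     / sum((x - mean_x) ** 2 for x in population)
b0 = mean_y - b1 * mean_x

# Apply the fitted relationship to the target city's explanatory variable.
new_city_population = 1.5
forecast = b0 + b1 * new_city_population
print(f"Forecast sales for the new store: ${forecast:.1f}M")
```

In practice, more explanatory variables (income, number of competing stores) would enter a multiple regression, but the logic is the same: estimate the relationship on existing stores, then evaluate it at the new city's values.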

Econometric models, such as discrete choice models and multiple regression. More elaborate systems involving sets of simultaneous regression equations can also be attempted. These advanced models are beyond the scope of this book and are not generally applicable to the task of forecasting demand in a logistics system.

Input-output models estimate the flow of goods between markets and industries. These models ensure the integrity of the flows into and out of the modeled markets and industries; they are used mainly in large-scale macro-economic analysis and have not been found useful in logistics applications.

Life cycle models look at the various stages in a product’s “life” as it is launched, matures, and phases out. These techniques examine the nature of the consumers who buy the product at various stages (“early adopters,” “mainstream buyers,” “laggards,” etc.) to help determine product life cycle trends in the demand pattern. Such models are used extensively in industries such as high technology, fashion, and some consumer goods facing short product life cycles. This class of model is not distinct from the others mentioned here as the characteristics of the product life cycle can be estimated using, for example, econometric models. They are mentioned here as a distinct class because the overriding “cause” of demand with these models is assumed to be the life cycle stage the product is in.

Simulation models are used to model the flows of components into manufacturing plants based on MRP schedules and the flow of finished goods throughout distribution networks to meet customer demand. There is little theory to building such simulation models. Their strength lies in their ability to account for many time lag effects and complicated dependent demand schedules. They are, however, typically cumbersome and complicated.

 

Time Series Approaches to Forecasting

Although all four approaches are sometimes used to forecast demand, the time-series approach is generally the most appropriate and the most accurate approach for generating the large number of short-term, SKU-level, locally disaggregated forecasts required to operate a physical distribution system over a reasonably short time horizon. On the other hand, these time series techniques will not always prove accurate. If the firm has knowledge or insight about future events, such as sales promotions, which can be expected to dramatically alter the otherwise expected demand, some incorporation of this knowledge into the forecast through judgmental or relational means is also appropriate.

Many different time series forecasting procedures have been developed. These techniques include very simple procedures such as the Moving Average and various procedures based on the related concept of Exponential Smoothing. These procedures are extensively used in logistics systems, and they will be thoroughly discussed in this chapter. Other more complex procedures, such as the Box-Jenkins (ARIMA) Models, are also available and are sometimes used in logistics systems. However, in most cases these more sophisticated tools have not proven to be superior to the simpler tools, and so they are not widely used in logistics systems.
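The two simple procedures named above can be sketched as follows (the weekly demand series is invented, and the parameter choices n = 4 and alpha = 0.2 are arbitrary):

```python
# Hypothetical demand history, in units per week.
demand = [102, 95, 110, 98, 104, 97, 108, 101]

def moving_average_forecast(history, n=4):
    """Forecast the next period as the mean of the last n observations."""
    return sum(history[-n:]) / n

def exponential_smoothing_forecast(history, alpha=0.2):
    """Forecast as a weighted blend: new = alpha*actual + (1-alpha)*old."""
    forecast = history[0]  # initialize with the first observation
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

print(moving_average_forecast(demand))                   # 102.5
print(round(exponential_smoothing_forecast(demand), 2))  # 102.15
```

Exponential smoothing weights recent demand more heavily than older demand, which is one reason it is often preferred to the plain moving average in logistics systems.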

 

Generally, there are four major methods used in qualitative research:

  • Observation
  • Analyzing texts and documents
  • Interviews
  • Recording and transcribing

 

In this research, mainly interviews (primary sources) and the analysis of texts and documents (secondary sources) are used. In qualitative research, textual analysis is concerned with understanding participants’ categories. Interviews in qualitative research are mostly semi-structured, putting open questions to small samples. The advantage of the qualitative approach in this research is that, by getting in close proximity to the travel agents, one is able to explain and describe the dynamic processes in the local travel agency sector. The best approach for this research is to obtain in-depth, rich information about how local travel agencies can respond to the challenges they currently face by interviewing their managers or owners to gain primary data.

 

Further, textual analysis will be conducted in order to obtain secondary data. The knowledge this method is able to create, in combination with the secondary data gained about the travel agencies, should be sufficient to answer the research question. In detail, this means that the body of knowledge will include the influence of globalization on the local market: whether it exists at all and, if so, in what form. Thus, knowledge will be gained about the development of the local market and the current position and strategy of the company.

 

The interview

The interview is probably the most extensively employed method in qualitative research. It is the flexibility of the interview that makes it so attractive. Interviewing, the transcription of interviews, and the analysis of transcripts all require hard work and are very time-consuming, but they can be accommodated relatively easily into researchers’ personal lives. Despite the proliferation of terms describing types of interview in qualitative research, the two main types are the unstructured interview and the semi-structured interview. Sometimes the term qualitative interview is employed to encapsulate these two types.

Qualitative interviewing usually differs from interviewing in quantitative research in several ways. For instance, the approach in qualitative research tends to be much less structured. In quantitative research, the approach is structured to maximize the reliability and validity of measurement of key concepts. In qualitative interviewing, there is generally much greater interest in the interviewee’s point of view; in quantitative research, the interview reflects the researcher’s concerns. Furthermore, in qualitative interviewing, interviewers can depart more easily from any schedule or guide that is being used. For example, they can ask new questions that follow up interviewees’ replies and can change the order of questions. In quantitative research, this is unthinkable, because such departures would compromise the standardization of the interview process and therefore the reliability and validity of measurement. In qualitative interviewing, the researcher wants relatively rich and detailed answers; in quantitative research, the interview is supposed to produce answers that can be coded and processed rapidly. Another difference is that in qualitative interviewing, the interviewee may be interviewed on more than one and sometimes even several occasions, whereas in quantitative research, unless the research is longitudinal in character, a person will be interviewed on one occasion only.

 

Unstructured and semi-structured interview

Qualitative interviewing varies, to a large degree, in the approach taken by the interviewer. Generally, two major types are distinguished: the unstructured interview and the semi-structured interview. In the unstructured interview, there may be just a single question that the interviewer asks, and the interviewee is then allowed to respond quite freely, with the interviewer merely responding to points that seem worth following up. In fact, unstructured interviewing tends to be very much like a normal conversation.

When a semi-structured interview is conducted, the researcher has a list of questions or rather specific topics to be covered, but the interviewee has a great deal of flexibility in how to reply. Questions that are not included in the guide may be asked spontaneously to follow up on aspects mentioned by interviewees. Nonetheless, all of the questions will generally be asked, and similar wording will be used from interviewee to interviewee.

 


Deming’s Profound Knowledge consists of four elements. Answer the following three parts relating to the “variation” element of Deming’s Profound Knowledge. Your discussion should relate to this element of Deming’s Profound Knowledge and not variation in general.

  1. Explain how a quincunx can be used to explain variation. (10 points)
  2. Why is understanding variation important, and what do we need to do about it? (10 points)
  3. What tools do we need to use to understand variation, and why is using these tools important to our decision-making process? (10 points)

Answer:

Ans1: In a quincunx, small balls are dropped from a hole in the top and hit a series of pins as they fall toward collection boxes. The pins cause each ball to move randomly to the left or the right as it strikes each pin on its way down. The frequency distribution of where the balls land approximates a symmetrical bell shape. Even though all balls are dropped from the same position, the end result shows variation. The same kind of variation exists in any production and service process, due to factors inherent in the design of the system, which cannot easily be controlled.
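A minimal simulation makes the point concrete (an illustrative sketch; the number of pin rows and the number of balls are arbitrary choices):

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the histogram is reproducible

def drop_ball(rows=10):
    """One ball deflects left (-1) or right (+1) at each row of pins."""
    return sum(random.choice((-1, 1)) for _ in range(rows))

# Drop many balls from the same position and count where they land.
bins = Counter(drop_ball() for _ in range(10_000))
for position in sorted(bins):
    print(f"{position:+3d} {'#' * (bins[position] // 100)}")
```

Every ball starts identically, yet the landing positions spread into a bell-shaped pile: variation produced entirely by the system itself.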

Ans2: Excessive variation results in products that fail and in services that are performed erratically and inconsistently. Management should understand variation and work to reduce it through improvements in technology, process design, and training. With less variation, both the producer and the consumer benefit. The consumer has the advantage of knowing that all products and services have similar quality characteristics and will perform or be delivered consistently. Statistical methods are the primary tools used to identify and quantify variation. Every employee in the firm should be familiar with statistical techniques and other problem-solving tools. Statistics can then become the common language that every employee, from top executives to line workers, uses to communicate with one another.

Ans3: Statistical methods are the primary tools used to identify and quantify variation. At the organizational level, statistical methods help managers and top management understand the business system, use data from the organization to assess performance, and encourage employees to experiment to improve their work. Thus, every manager and employee can benefit from statistical thinking and from using total quality tools in their decision-making process at the organization, process, and individual levels.