Business Decision Models Course, Unit 4 Summary

Linear Programming

Introduction

Enterprises aim to take the most profitable actions in a competitive market. Since all of their rivals also try to supply the market, production inputs such as materials, workforce, and production and distribution units are scarce. Inefficient use of these inputs leads to higher costs and thus reduces profits. Customer demands for services and products are almost unlimited and often contradictory: customers demand the best at the lowest cost. The task is to meet this demand as well as possible. The enterprise therefore has to make the best decisions on product (or service) design, production planning, the product delivery program, and other relevant processes. Operations research methods support these decisions scientifically and in a holistic manner.

The foundation of operations research is usually traced to studies and applications in the U.S. Army just before World War II. However, Leonid V. Kantorovich is recognized as the first to explore this area. In 1939, Kantorovich published a paper on a method for constructing a production plan that he claimed to be optimal. This study included a linear program but no systematic solution procedure. The systematic solution, called the “simplex algorithm”, was later introduced by George B. Dantzig, who had planning experience in the U.S. Army Air Force. Tjalling C. Koopmans, who built a model for ship routes, was another contributor to the field. He and Kantorovich shared the 1975 Nobel Prize in economics for their studies on the optimal allocation of resources.

Industrial applications of linear programming stemmed from the oil industry, following publications by William W. Cooper and his colleagues on the efficient blending of aviation fuels. As the computational power of solver systems advanced rapidly, operations research came into wide use. Operations research applications, particularly linear programming, are now standard in a variety of areas and processes, including urban planning, currency arbitrage, investment, production planning, inventory control, blending and refining, manpower planning, agricultural planning, diet and nutrition, transportation, and logistics.

This chapter is concerned with linear programming and its graphical solution. The first section introduces the concepts and approach of operations research and develops the basics of linear programming, including its terminology, taxonomy, and assumptions. The second section covers modeling with linear programming, with examples from the variety of uses mentioned above. The last section presents the graphical solution of linear programming, a solution method for a special form of linear programming models.

Linear Programming Basics

Operations research is an analytical approach to the management of organizations that provides mathematical solutions to decision-making problems. The well-defined steps of this approach are what make it analytical:

  1. Problem definition
  2. System analysis for the problem and data gathering
  3. Model construction
  4. Model solution and testing
  5. The decision of implementing the solution

Problems arise in every division of the organization, and the operations researcher has to collaborate with decision-makers from various backgrounds. Therefore, an important aspect of operations research is that it is interdisciplinary. A mathematical solution needs quantifiable inputs, and these have to be the key factors that influence the outputs for the solution to be effective. System analysis ensures that the model scheme and the variables are suitable. The next phase is modeling what system analysis has derived from the real situation. A model is a representative proxy of the real matter of concern; in particular, a mathematical model is an abstraction of the analyzed system expressed in mathematical terms. The modeling phase is crucial as it sets the balance between the suitability and the solvability of the problem. The most prominent model type is the linear model, as it adequately represents the real situation and is relatively manageable to solve. Linear models consist of constraints and an objective in the form of linear expressions. Linear programming is used to solve these types of models.

Terminology, Taxonomy, and Assumptions

A linear expression appears below; the powers of its variables x and y are equal to one.

2x – 3y

A linear constraint requires a linear expression to be equal to, less than or equal to, or greater than or equal to a number.

x + y = 4

x – 3y ≤ 2

x ≥ 0

The last expression above is called a non-negativity constraint; it requires the value of x to be greater than or equal to zero. A linear program either minimizes or maximizes a linear expression subject to a set of linear constraints.
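In general terms, a linear program with decision variables x1, x2, ..., xn can be written in the following compact form (this general statement is a standard illustration rather than a passage from the text; c1, ..., cn, the aij, and b1, ..., bm denote the constants of the program):

Maximize (or Minimize) Z = c1x1 + c2x2 + ... + cnxn
subject to a11x1 + a12x2 + ... + a1nxn ≤ (or =, or ≥) b1
...
am1x1 + am2x2 + ... + amnxn ≤ (or =, or ≥) bm
x1, x2, ..., xn ≥ 0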

A feasible solution is a solution that satisfies all of the constraints. An optimal solution is a feasible solution that attains the most favorable value of the objective function.

A feasible linear program has at least one feasible solution; a bounded one has an objective value that cannot be improved indefinitely over the feasible region. To sum up, in order to have an optimal solution, a linear program must be both feasible and bounded.
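Two tiny illustrative programs (not from the text) show how these conditions can fail:

Maximize Z = x subject to x ≤ 1 and x ≥ 2 (infeasible: no value of x satisfies both constraints)
Maximize Z = x subject to x ≥ 0 (feasible but unbounded: Z can be increased without limit)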

In practice, linear programming has some limitations that weaken the fit of the model to real-life situations. As mentioned before, there is a trade-off between the suitability and the solvability of the problem. Linear programming provides solutions under certain assumptions: proportionality, additivity, divisibility, and certainty. These are important to mention, as they make it easier for the operations researcher to evaluate how well linear programming applies to a given problem.

In both the objective function and the constraints, the contribution of every decision variable is proportional to its value. As a result, the proportionality assumption rules out any variable with an exponent other than 1. Consider a firm that aims to maximize its profits under the constraint of its production costs. In practice, production costs are not the same for different levels of production, since higher production lowers unit costs through economies of scale. To avoid violating the proportionality assumption, some characteristics of real-life situations, such as economies of scale, must be ignored. That undoubtedly makes the model less reliable.

Whether in the objective or in the constraints, every function is the sum of the individual contributions of the respective variables. This is called the additivity assumption, and it prohibits cross-product terms in the expressions of a linear program. This assumption restricts the model design as well. For instance, a linear maximization model of the aggregate revenue from complementary products has to exclude the cross-product terms that would capture the multiplier effect of each product on the other. In a linear program, decision variables are not limited to integer values; they are divisible. In certain situations the divisibility assumption does not hold, as some decision variables can only take integer values. Methods of integer programming, which are beyond the scope of this book, overcome this obstacle.
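As a purely illustrative contrast (these expressions are not from the text), a squared term such as 4x1² violates the proportionality assumption because its exponent is not 1, a cross-product term such as 3x1x2 violates the additivity assumption, and an expression such as 3x1 + 2x2 satisfies both.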

The last assumption is certainty, which means that every parameter of a linear programming model is a known constant. In real applications, the factors affecting decisions are not as stable as this strict assumption requires. In the modeling phase, all these assumptions need to be considered while preserving an acceptable level of similarity between reality and the model that represents it.

Modeling With Linear Programming

The modeling phase is the transformation of the verbal description of the problem into a mathematical model under the linear programming assumptions. This model is a system of equations and inequalities representing the objective and the constraints. The decision to be derived from the model concerns quantifiable variables that are under the decision maker's control. These variables x1, x2, ..., xn are called the decision variables, and the constants (i.e., the coefficients and the right-hand-side values of the expressions) are called the parameters. The objective function (for example, Z = 2x1 + x2) is an appropriate measure of performance for the problem; it can be maximized or minimized depending on the desired achievement. Finally, a constraint is a restriction on the decision variables expressed as an equality or inequality (for example, 5x1 + 6x2 ≤ 30).
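To make the terminology concrete with the expressions already given above: in the objective function Z = 2x1 + x2, the quantities x1 and x2 are the decision variables, and the coefficients 2 and 1 are parameters; in the constraint 5x1 + 6x2 ≤ 30, the coefficients 5 and 6 and the right-hand-side value 30 are parameters as well.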

Graphical Solution of Two-Variable Linear Programs

Linear programs that have only two decision variables can be solved by drawing a graph. However, linear programs often have more than two decision variables, which makes the graphical solution impractical; nevertheless, it is a handy tool for demonstrating how an optimal solution is reached. The graphical solution begins with the determination of the solution space, which is called the feasible region. The next step is to determine the optimal solution from the feasible region. As an example, the steps of the graphical solution of Program 4.1 are explained below. First, the program is transformed into the notation introduced in the former section:

Maximize Z = 2x1 + x2
subject to 5x1 + 6x2 ≤ 30
–x1 + 4x2 ≤ 12
x1, x2 ≥ 0

First and foremost, the non-negativity constraints of the model restrict the solution area to the first quadrant, which lies above the x1 axis and to the right of the x2 axis. This is the same for all linear programming models with non-negativity constraints: their solutions cannot lie anywhere other than the first quadrant.

To account for a constraint given as an inequality, it first needs to be transformed into an equation. Then the corresponding line is drawn by locating two distinct points on it. A common way to do so is to set one of the variables to zero and solve the equation for the other, which ensures the line intercepts both the vertical and the horizontal axes. The line divides the first quadrant of the plane into two zones, one of which satisfies the constraint in its original inequality form. A reference point (e.g., the origin) easily determines this zone: if the inequality holds for the point, then the side on which it lies is the feasible one, and it is often highlighted with hatching or shading.
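The following minimal Python sketch mirrors this drawing procedure for a constraint written as a·x1 + b·x2 ≤ c (the function names are illustrative and not part of the text): it computes the two axis intercepts of the boundary line and tests the origin to decide which side is feasible.

def axis_intercepts(a, b, c):
    # Intercepts of the boundary line a*x1 + b*x2 = c with the axes:
    # set x2 = 0 to reach the x1 axis, and x1 = 0 to reach the x2 axis.
    x1_intercept = (c / a, 0.0) if a != 0 else None
    x2_intercept = (0.0, c / b) if b != 0 else None
    return x1_intercept, x2_intercept

def origin_side_is_feasible(a, b, c):
    # The origin (0, 0) is a convenient reference point for the test.
    return a * 0 + b * 0 <= c

# First constraint of Program 4.1: 5*x1 + 6*x2 <= 30
print(axis_intercepts(5, 6, 30))          # ((6.0, 0.0), (0.0, 5.0))
print(origin_side_is_feasible(5, 6, 30))  # True: shade the origin's side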

  • In maximization models, the objective function is alternatively called the profit function. The logic behind this is that the ultimate goal of maximizing something is to gain a benefit, or a “profit”.
  • In maximization models, draw a line parallel to the objective function as far as possible from the origin while still containing at least one point of the feasible region. Find the coordinates of this point by solving the equations of the drawn line and the boundary line(s) of the feasible region (a numerical check appears after this list).
  • In minimization models, the objective function is alternatively called the cost function. The reason for this is that the ultimate goal of minimizing something is to reduce loss, or “cost”.
  • In minimization models, draw a line parallel to the objective function as close as possible to the origin while still containing at least one point of the feasible region. Find the coordinates of this point by solving the equations of the drawn line and the boundary line(s) of the feasible region.
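As a cross-check on the graphical procedure, the following sketch solves Program 4.1 with SciPy's linear programming routine (this assumes the scipy package is available; it is not part of the text). Since linprog minimizes by default, the objective coefficients are negated to maximize Z = 2x1 + x2; with the constraints as written above, the optimum is expected at the corner point (6, 0) with Z = 12.

from scipy.optimize import linprog

c = [-2, -1]                       # negated objective: maximize 2*x1 + x2
A_ub = [[5, 6],                    #  5*x1 + 6*x2 <= 30
        [-1, 4]]                   # -x1 + 4*x2 <= 12
b_ub = [30, 12]
bounds = [(0, None), (0, None)]    # x1 >= 0, x2 >= 0 (non-negativity)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)     # optimal corner point, expected (6, 0)
print(-res.fun)  # optimal objective value, expected Z = 12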
