POP* - Process Optimization Program
POP* helps companies optimize their plants' operation. The final target is to maximize the Gross Margin (GM) while achieving the required product qualities with minimum usage of utilities.
POP* can be applied at three levels:
- Optimization of production with existing equipment under standard conditions. An example is the optimization of a distillation column's reflux flow. Such an activity requires no investment and can be implemented in a short time.
- Optimization based on improved maintenance. Typical examples are the optimal cleaning of heat exchangers or good maintenance of steam traps. In such cases some additional OPEX is required.
- Optimization of the process flowsheet layout, either by better placement of individual pieces of equipment (retrofitting) or by installing new equipment and reconstructing existing units (revamping). An example is a revamp based on Pinch technology. At this level a significant investment may be needed, a detailed economic analysis is required, and such changes cannot be implemented in a short time.
Every POP* activity should start with reliable process data. Such data must be free of gross errors and as precise as possible. It is not possible to work with raw process data as measured by instrumentation systems (the well-known "garbage in - garbage out" effect could devalue all the effort).
The standard technique used nowadays for data validation is Data Reconciliation (DR), complemented by other related techniques. In this way, data free of gross errors are obtained.
As DR is based on physical laws (models), mostly mass and energy balances, data sets are enhanced by originally unmeasured but now calculated variables (unmeasured flows, concentrations, etc.). More details about data enhancement by modeling are given in a later section.
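As an illustration, the core of linear DR is a weighted least-squares adjustment of measurements subject to balance constraints. The sketch below uses an invented three-stream splitter (F1 = F2 + F3) and the classical closed-form solution; it is not tied to any particular commercial package:

```python
import numpy as np

# Measured flows (t/h) and their standard deviations - invented numbers.
# The raw measurements are inconsistent: 62 + 35 != 100.
m = np.array([100.0, 62.0, 35.0])
sigma = np.array([2.0, 1.5, 1.0])

A = np.array([[1.0, -1.0, -1.0]])   # balance constraint: A @ x = 0
S = np.diag(sigma**2)               # measurement covariance matrix

# Classical weighted least-squares reconciliation:
#   x = m - S A^T (A S A^T)^-1 A m
x = m - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)
print(x)        # reconciled flows
print(A @ x)    # ~0: the balance now closes exactly
```

Each measurement is adjusted in proportion to its assumed uncertainty, so the least precise instrument absorbs the largest correction.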
In general, data validation and enhancement plays a key role in the whole POP*:
- in the beginning it provides reliable data for the problem analysis
- in the middle of the solution it is a source of the detailed data needed, for example, for simulation or pinch analysis
- at the end it plays a key role in monitoring of processing units after a POP* implementation, to make the process improvement transparent and sustainable.
After gathering good data, the next step is process analysis. It usually starts with the Process Flow Diagram (PFD), which should already carry the completed mass and energy balances from the previous step.
Such a PFD is a key to identifying the opportunities (reserves) the process offers for economies of feedstocks and utilities. It is good to start by mapping the individual unit operations and sorting them according to their consumption of utilities or their influence on product yields and qualities.
Process analysis leads to a good understanding of the process. It helps to apply a systems approach, which is essential for understanding the interaction between yields and energy consumption. It is typical in the process industries that the minimum consumption of utilities does not correspond to the maximum yield at the required product quality specifications. At the overall optimum there must be some tradeoff between yields and energy consumption.
There are many ways to identify opportunities. Some of the most frequent ones are
- optimization and what-if studies by simulation
- statistical analysis of past records; for example, a significant fluctuation of GM indicates opportunities for process improvement
- Pinch technology applied to energy intensive processes
- last but not least, experience and chemical engineering common sense.
An important result of the problem analysis should be the setting of targets based on the identified opportunities for improvement. Even if there is some uncertainty in individual targets, targeting is important for setting the direction in which the next steps of POP* will go.
Modeling is nowadays a very broad group of activities. The following types of modeling are important for POP*.
Balances shown on a PFD also serve for aggregating individual utility streams into more complex indicators. In this way so-called Key Process Indicators (KPIs) are created, which can later be used for process monitoring purposes.
These are sometimes called First Law models. Such models are frequently used for process data validation, but they are also a basis for other sorts of models. They are created mostly with the aid of commercial balancing programs, which are also capable of validating measured data (for example RECON). Process data are significantly enhanced by the calculation of unmeasured process variables.
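Aggregating utility streams into a KPI, as described above, can be as simple as the following sketch (stream names, flows, and the feed rate are invented for illustration):

```python
# Illustrative aggregation of individual steam streams into one KPI.
# All numbers below are assumed, not taken from a real plant.
steam_flows = {"HP steam": 12.0, "MP steam": 8.0, "LP steam": 5.0}  # t/h
feed = 100.0                                                        # t/h

total_steam = sum(steam_flows.values())
specific_steam = total_steam / feed   # typical KPI: t of steam per t of feed
print(f"Specific steam consumption: {specific_steam:.2f} t/t")
```

Computed on reconciled rather than raw flows, such a KPI becomes a trustworthy monitoring signal.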
These models are created by correlation and regression analysis of historical measured process data (raw or reconciled). In this way we can obtain models of complicated processes for which it is difficult to create models in other ways. Empirical models are also an important source of data enhancement during POP*.
A good example here are the so-called Quality Estimators (QE), sometimes called software sensors. Such models express chemical/physical properties of process streams as functions of process variables such as temperatures, pressures, or flows. QEs can partially substitute for laboratory analyses, which are laborious, expensive, and not available at a sufficient frequency.
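A minimal soft-sensor sketch is shown below, assuming a linear dependence of product purity on top temperature and pressure. The historical data are invented; a real QE would be fitted on reconciled plant records and validated against fresh lab samples:

```python
import numpy as np

# Invented historical data: column top temperature (degC), pressure (kPa),
# and the lab-analysed product purity (%) they should predict.
T = np.array([80.0, 82.0, 85.0, 88.0, 90.0, 93.0])
P = np.array([101., 103., 102., 105., 104., 106.])
purity = np.array([99.1, 98.8, 98.2, 97.5, 97.1, 96.4])

# Linear soft sensor: purity ~ a*T + b*P + c, fitted by least squares.
X = np.column_stack([T, P, np.ones_like(T)])
coef, *_ = np.linalg.lstsq(X, purity, rcond=None)

# Estimate the purity between lab samples from on-line measurements.
est = np.array([86.0, 103.0, 1.0]) @ coef
print(f"Estimated purity at T=86 degC, P=103 kPa: {est:.2f} %")
```

The estimate is only trustworthy inside the range of the training data; extrapolating an empirical model beyond it is a common pitfall.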
Other examples are regression models of KPIs, which relate unmeasured KPIs to directly measured process variables. Such models can be used directly for process optimization.
These are First Law models complemented by other models based on physical laws (phase equilibria, transport processes, etc.). Even if in practice nothing is completely "rigorous", good models can be created with the aid of commercial simulation programs. So-called rigorous modeling may be time consuming and expensive, but it is warranted in some situations. A good rigorous model that is in tune with actual process operation can be used directly for process optimization.
There is voluminous literature available on optimization, both from academia and from the process industries. For example, RTO (Real Time Optimization) denotes a procedure in which a process is optimized on-line on the basis of a model evaluated from the relevant measured process variables. RTO is usually complemented by some form of APC (Advanced Process Control). Such solutions (whose real benefits are not well documented in the literature) are quite expensive and maintenance intensive, and their use is warranted only for the biggest processing units. The following are simpler methods whose applicability is much broader.
When good housekeeping is in place, there is good evidence of how a process unit is run (data validation, regular daily balancing, monitoring of yields, relevant qualities, energy consumption, etc.). Even if there is no explicit "optimization" in this case, it is generally acknowledged that good housekeeping improves the profitability of any plant. This results from the early detection of deteriorating KPIs, whereupon the encountered problems can be quickly solved in standard ways.
Well maintained information and monitoring systems also provide the historical data needed for building empirical models.
There are several well established methods of empirical optimization which can be applied to processing plants.
The classical method uses so-called factorial experiments (Box-Wilson, etc.) in which selected process variables are changed in a predefined manner. The optimized function (for example GM) is evaluated for different combinations of process variables and the optimum is found on the basis of simple polynomial equations. To reduce the influence of process noise, relatively large variations of the primary variables should be used. This can be a difficult problem due to equipment limitations and product quality requirements.
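One factorial step can be sketched as follows. The GM response below is invented; in a real plant each combination of levels would be run on the unit and the GM evaluated from plant data:

```python
import itertools

import numpy as np

# Invented GM response standing in for plant measurements.
def gm(temp, reflux):
    return 100 - 0.02*(temp - 190)**2 - 0.5*(reflux - 3.0)**2

levels_T = [175.0, 185.0]   # low/high temperature levels (assumed)
levels_R = [2.0, 2.5]       # low/high reflux ratio levels (assumed)

# Run the 2^2 factorial design and record the response.
runs = list(itertools.product(levels_T, levels_R))
y = np.array([gm(t, r) for t, r in runs])

# First-order polynomial fit: GM ~ b0 + b1*T + b2*R.
X = np.array([[1.0, t, r] for t, r in runs])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"Move T {'up' if b[1] > 0 else 'down'}, "
      f"reflux {'up' if b[2] > 0 else 'down'}")
```

The signs of the fitted coefficients give the direction of steepest ascent, along which the next set of experiments would be placed.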
To overcome the problem of large variations, the EVOP (EVolutionary OPeration) method was developed. This method, which was quite popular in the late sixties and seventies, is based on smaller changes of selected process variables, so that they do not disturb the production process. Process noise is handled by repeating the optimization cycle several times, so that statistical methods can suppress its influence.
In general, empirical methods of optimization can be useful in complicated cases where it is difficult to set up models based on physical laws (biotechnology, pharmaceuticals). They can be very efficient when some prior knowledge is available about the key process variables that influence the optimized process. The major disadvantage of these methods is that they are quite time consuming and require disturbances of the normal production process. Nevertheless, these methods can be important for proving the results of other optimization methods by plant tests.
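The EVOP idea of small, repeated perturbation cycles whose averages beat the process noise can be sketched as follows (the response function, noise level, and step size are illustrative assumptions):

```python
import random

# EVOP sketch: repeated small perturbations around the current setpoint;
# averaging over many cycles suppresses the process noise.
random.seed(1)

def measured_gm(temp):
    true_gm = 100 - 0.02*(temp - 190)**2    # hidden "true" response (assumed)
    return true_gm + random.gauss(0, 0.3)   # plus process noise

setpoint, step, cycles = 185.0, 1.0, 50
low = [measured_gm(setpoint - step) for _ in range(cycles)]
high = [measured_gm(setpoint + step) for _ in range(cycles)]

effect = sum(high)/cycles - sum(low)/cycles
print(f"Estimated effect of +2 degC on GM: {effect:.2f}")
# A clearly positive average effect suggests nudging the setpoint one step up.
```

The single-run noise (standard deviation 0.3) is comparable to the true effect, but averaging over 50 cycles shrinks the uncertainty enough to make the direction of improvement unambiguous.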
First come the empirical (regression) models. Such models can be created by regression between a target function (e.g. GM) and some process variables, usually based on historical data. Such an approach can be successful if there is enough variability in the process and target variables; if not, regression cannot provide a good model. The approach is quite straightforward and requires minimum effort (it is easy to collect data from modern process information systems, and it is also very easy to carry out regression and correlation analysis using commercial software). As empirical models are usually relatively simple, there is no problem in the optimization step, which can be done easily with simple optimization tools such as Excel's Solver module.
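In code, the step delegated to Excel's Solver above reduces to fitting a simple model and locating its optimum. The historical reflux/GM pairs below are invented for illustration:

```python
import numpy as np

# Invented historical records: reflux ratio and the achieved GM.
reflux = np.array([2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
gm     = np.array([88.1, 91.0, 92.6, 92.9, 91.8, 89.5])

# Quadratic empirical model: GM ~ a*r^2 + b*r + c.
a, b, c = np.polyfit(reflux, gm, 2)

# For a concave parabola (a < 0) the optimum is at the vertex.
r_opt = -b / (2 * a)
print(f"Model optimum at reflux ratio ~ {r_opt:.2f}")
```

For such low-dimensional models the optimum can be found analytically or by a simple grid search, which is exactly why the optimization step is the easy part here.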
The use of rigorous models for optimization is very straightforward, as good simulation programs have built-in optimizers. The major investment in this approach therefore consists "only" in building a reliable model.
Although this is not an explicit optimization technique, it can contribute to better and more efficient production. There are many situations where operators must decide within a relatively short time how to react to some change in their plant. It is good if they can evaluate the consequences of their decisions on the process, including the KPIs. If their process is modeled, they can also evaluate several possible alternatives.
The ideal solution is the integration of the process information system with a simulator. In practice, satisfactory results can be achieved by special process calculators into which operators can enter process parameters and see the results. Such interactive calculators can now be created efficiently as part of the company's Intranet.
There are some issues specific to optimization in actual practice:
The region in which process variables can be changed to approach the overall optimum (the so-called feasible region) is usually constrained significantly for many reasons (process safety, product quality, etc.). Unlike the cases described in textbooks on optimization theory, the optimum in practice lies mostly on the border of the feasible region. This has two consequences:
- an optimum strategy can be expressed in a simple way, for example "keep temperature T1 as low as possible"
- the approach to the optimum on the border is limited by the quality of the measured data (to be sure where the process really is) and by the quality of the process control. Validated data are therefore essential. The distance from the optimum which must be maintained for such reasons may represent a significant loss of GM.
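The cost of backing off from a constrained optimum can be quantified directly. The sketch below assumes GM rises linearly with temperature up to a safety limit; all numbers are invented:

```python
# Invented example: GM rises monotonically with temperature...
def gm(temp):
    return 50.0 + 0.8 * temp

t_max = 210.0     # ...but a safety limit caps the temperature (assumed)
sigma_t = 1.5     # standard deviation of the temperature measurement (assumed)

# To stay below t_max with high confidence, operate 3 sigma away from it.
t_set = t_max - 3 * sigma_t
loss = gm(t_max) - gm(t_set)
print(f"Setpoint {t_set} degC, GM lost to the measurement back-off: {loss:.2f}")
```

Halving the measurement uncertainty halves the required back-off, and hence the GM loss, which is one concrete way validated data pay for themselves.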
Moreover, the position of the optimum can move in time, as can the borders of the feasible region. This can have many causes: changes in the process itself (fouling of heat exchangers, catalyst deactivation, etc.) or changes in the prices of feedstocks and utilities. Optimization in practice therefore needs to be reviewed as a continuous process.
In classical mathematical optimization, the job is finished by presenting the position of the optimum, sometimes complemented by the sensitivity of the optimum to some variables. In industrial practice the problem of implementation is not so simple. In what follows we concentrate on off-line optimization, where the optimization algorithm only provides new setpoints, which should be endorsed and used (if found feasible) by the operators.
In summary, the basic steps of POP* discussed so far are:
- data validation and enhancement
- process analysis
- modeling
- optimization and its implementation
Although the implementation of these steps is not straightforward, the most challenging part is to make the optimization sustainable.
The relevance of monitoring the values of important process variables and KPIs was already stressed in previous sections. Such a system should be limited to the real key variables, because a person's ability to absorb information on a continuous basis is limited. A good method of process monitoring is the so-called Operating Window (OW, sometimes called a Technological Card). OWs are sets of process variables and KPIs with specified limits within which these variables should be maintained. In practice these limits can be constant, dependent on the mode of operation, or dynamic (dependent on the values of other variables). It is easy to check whether a process plant is operated within the OW limits or not, and deviations can easily be visualized. Furthermore, it is also possible to create long-term statistics on how well process variables are kept within their limits.
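An Operating Window check is straightforward to implement. In the sketch below the variable names, limits, and current values are all illustrative:

```python
# Operating Window: each key variable with its (low, high) limits - invented.
window = {
    "T_top":  (78.0, 86.0),     # column top temperature, degC
    "P_col":  (100.0, 108.0),   # column pressure, kPa
    "reflux": (2.5, 3.5),       # reflux ratio
}

# Current snapshot of the process (invented values).
current = {"T_top": 84.2, "P_col": 109.5, "reflux": 3.1}

def violations(values, window):
    """Return the variables currently outside their Operating Window."""
    return {k: v for k, v in values.items()
            if not window[k][0] <= v <= window[k][1]}

print(violations(current, window))   # {'P_col': 109.5}
```

Logging the output of such a check over time directly yields the long-term "time within limits" statistics mentioned above.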
A standard method for process monitoring is Statistical Process Control (SPC). The GM should be at the top of the list of SPC-monitored variables.
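A minimal SPC sketch with Shewhart-style 3-sigma control limits on the GM follows; the daily GM series is invented for illustration:

```python
import statistics

# Invented daily GM history from a stable period of operation.
gm_history = [95.2, 96.1, 94.8, 95.5, 96.0, 95.1, 94.9, 95.7, 96.3, 95.4]

mean = statistics.mean(gm_history)
s = statistics.stdev(gm_history)
ucl, lcl = mean + 3*s, mean - 3*s   # 3-sigma control limits

new_gm = 93.0                        # today's GM (invented)
in_control = lcl <= new_gm <= ucl
print(f"LCL={lcl:.2f}, UCL={ucl:.2f}, GM={new_gm}, in control: {in_control}")
```

A point outside the limits signals a special cause (a process change, a price change, or a data problem) that deserves investigation before it erodes profitability.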
Besides monitoring physical/chemical variables, it is recommended to monitor the major feedstocks and utilities also in terms of their monetary value (flow times price). In this way operators get a feeling for the importance of the individual variables.
Sometimes it is also good to monitor losses (for example, heat lost in water coolers). In some cases such information is more transparent and informative than other indicators. Another type of loss is the so-called quality giveaway, where the product sent to a customer has a better quality than specified, which can represent a financial loss in business opportunities.
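Quality giveaway can be priced with a back-of-the-envelope calculation like the one below, where the specification, the actual shipped quality, and the value of a purity percent are all assumed numbers:

```python
# Invented figures for a quality-giveaway estimate.
spec_purity = 99.0      # % purity required by the customer
actual_purity = 99.4    # % purity actually shipped
production = 500.0      # t/day
value_per_pct = 2.0     # $/t of product per % of purity above spec (assumed)

giveaway_cost = (actual_purity - spec_purity) * value_per_pct * production
print(f"Quality giveaway: {giveaway_cost:.0f} $/day")
```

Even a few tenths of a percent above specification, accumulated over a year of production, can justify tighter control around the quality target.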
POP* may look quite complicated, which stems from the complexity of the problem when analyzed in a general way. In practice the solution need not be too difficult, as only a limited number of techniques will be applied in each individual case, depending on the actual situation. Good housekeeping based on validated data is essential in all cases. It is important that nowadays most processing plants already have some infrastructure available for POP* implementation (DCS, process historians, PC networks, etc.). Now is a good time to reap the economic benefits of the investment made in this infrastructure.