 Research
 Open Access
Optimization of biotechnological systems through geometric programming
Theoretical Biology and Medical Modelling volume 4, Article number: 38 (2007)
Abstract
Background
In the past, tasks of model-based yield optimization in metabolic engineering were approached either with stoichiometric models or with structured nonlinear models such as S-systems or linear-logarithmic representations. These models stand out among most others because they allow the optimization task to be converted into a linear program, for which efficient solution methods are widely available. For pathway models not in one of these formats, an Indirect Optimization Method (IOM) was developed, in which the original model is sequentially represented as an S-system model, optimized in this format with linear programming methods, reinterpreted in the initial model form, and further optimized as necessary.
Results
A new method is proposed for this task. We show here that the model format of a Generalized Mass Action (GMA) system may be optimized very efficiently with techniques of geometric programming. We briefly review the basics of GMA systems and of geometric programming, demonstrate how the latter may be applied to the former, and illustrate the combined method with a didactic problem and two examples based on models of real systems. The first is a relatively small yet representative model of the anaerobic fermentation pathway in S. cerevisiae, while the second describes the dynamics of the tryptophan operon in E. coli. Both models have previously been used for benchmarking purposes, thus facilitating comparisons with the proposed new method. In these comparisons, the geometric programming method was found to be as good as or better than the earlier methods in terms of successful identification of optima and efficiency.
Conclusion
GMA systems are of importance because they contain stoichiometric models, mass action systems and S-systems as special cases, along with many other models. Furthermore, it was previously shown that algebraic equivalence transformations of variables are sufficient to convert virtually any type of dynamical model into the GMA form. Thus, efficient methods for optimizing GMA systems have multifold appeal.
Background
Model-based optimization of biotechnological processes is a key step towards the establishment of rational strategies for yield improvement, be it through genetic engineering, refined setting of operating conditions, or both. As such, it is a key element in the rapidly emerging field of metabolic engineering [1, 2]. Optimization tasks involving living organisms are notoriously difficult, because they almost always involve large numbers of variables, representing biological components that dominate cell operation, and must account for multitudinous and complex nonlinear interactions among them [3]. The steady increase in readily available computing power has somewhat alleviated the challenge, but it has also, together with other technological breakthroughs, raised the level of expectation. Specifically, modelers are more and more expected to account for complex biological details and to include variables of diverse types and origins (metabolites, RNA, proteins...). This trend is to be welcomed, because it promises improved model predictions, yet it easily outweighs the advances in computer technology and often overwhelms available hardware and software methods. As a remedy, effort has been expended to develop computationally efficient algorithms that scale well with the growing number of variables in typical optimization tasks.
The most straightforward attempts toward improved efficiency have been based, in one form or another, on the reduction of the originally nonlinear task to linearity, because linear optimization tasks are rather easily solved, even if they involve thousands of variables. One variant of this approach is the optimization of stoichiometric flux distribution models [4]. The two great advantages of this method are that the models are linear and that minimal information is needed to implement them, namely flux rates, and potentially numerical values characterizing metabolic or physicochemical constraints. The significant disadvantage is that no regulation can be considered in these models.
An alternative is the use of S-system models within the modeling framework of Biochemical Systems Theory [5–7]. These models are highly nonlinear, thus allowing suitable representations of regulatory features, but have linear steady-state equations, so that optimization under steady-state conditions again becomes a matter of linear programming [8]. The disadvantages here are that much more (kinetic) information is needed to set up numerical models and that S-systems are based on approximations that are not always accepted as valid. Linear-logarithmic models [9] similarly have the advantage of linearity at steady state and the disadvantage of being a local approximation.
An extension of these linear approaches is the Indirect Optimization Method [10]. In this method, any type of kinetic model is locally represented as an S-system. This S-system is optimized with linear methods, and the resulting optimized parameter settings are translated back into the original model. If necessary, this linearized optimization may be executed in sequential steps.
An alternative to using S-system models is the Generalized Mass Action (GMA) representation within BST. GMA systems are very interesting for several reasons. First, they contain both stoichiometric and S-system models as direct special cases, which allows the optimization of combinations of the two. Second, mass action systems are special cases of GMA models, so that, in some sense, Michaelis-Menten functions and other kinetic rate laws are special cases, if they are expressed in their elemental, non-approximated form. Third, it was shown that virtually any system of differential equations may be represented exactly as a GMA system, upon equivalence transformations of some of the functions in the original system. Thus, GMA systems, as a mathematical representation, are capable of capturing any differentiable nonlinearity that one might encounter in biological systems. We show here that GMA systems, while highly nonlinear, are structured enough to permit the application of efficient optimization methods based on geometric programming.
Formulation of the optimization task
Pertinent optimization problems in metabolic engineering can be stated as the targeted manipulation of a system in the following way:

max or min f_{0}(X)     (1)

subject to:

operation in steady state     (2)

metabolic and physicochemical constraints     (3)

cell viability     (4)
In this generic representation, (1) usually targets a flux or a yield. The optimization must occur under several constraints. The first set (2) ensures that the system will operate under steadystate conditions. Other constraints (3) are imposed to retain the system within a physically and chemically feasible state and so that the total protein or metabolite levels do not impede cell growth. Yet other constraints (4) guarantee that no metabolites are depleted below minimal required levels or accumulate to toxic concentrations. These sets of constraints are designed to allow sustained operation of the system.
Biochemical Systems Theory (BST)
Biological processes are usually modeled as systems of differential equations in which the variation in the metabolites X is represented as:

$\frac{dX}{dt} = N \cdot v$     (5)
The elements n_{i,j} of the stoichiometric matrix N are constant. The vector v contains the reaction rates, which are in general functions of the variables and parameters of the system. This structure is usually associated with metabolic systems, but it is similarly valid for models describing gene expression, bioreactors, and a wide variety of other processes in biotechnology. In typical stoichiometric analyses, the reaction rates are considered constant. Furthermore, the analysis is restricted to steady-state operation, with the consequence that (5) is set equal to 0 and thereby becomes a set of linear algebraic equations, which are amenable to a huge repertoire of analyses.
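As a small numerical sketch of this steady-state condition, the following example computes the feasible flux distributions of a toy pathway as the null space of N. The three-reaction linear pathway and its stoichiometric matrix are invented for this illustration and do not come from any model in the paper.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical stoichiometric matrix N for the linear pathway -> X1 -> X2 ->
# Columns are reactions: v1 (influx to X1), v2 (X1 -> X2), v3 (efflux of X2)
N = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# At steady state N·v = 0, so feasible flux vectors lie in the null space of N
basis = null_space(N)          # a single basis vector for this pathway
v = basis[:, 0] / basis[0, 0]  # normalize so that v1 = 1

# In this simple chain, all three fluxes must be equal at steady state
assert np.allclose(N @ v, 0)
```

For this chain the null space is one-dimensional, reflecting the intuitive fact that at steady state the influx, the intermediate reaction, and the efflux must all carry the same flux.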
In analyses accounting for regulation, the reaction rates become functions that depend on system variables and outside influences. Even at steady state, these may be very complex, thereby rendering direct analysis of the system a formidable task [11]. As a remedy, BST suggests representing these rate functions with power laws:

$v_i = \gamma_i \prod_{j=1}^{n+m} X_j^{f_{i,j}}$     (6)
In analogy with chemical kinetics, γ_{ i } is called the rate constant and the f_{i,j} are kinetic orders, which may be any real numbers. Positive kinetic orders indicate augmentation, whereas negative values are indicative of inhibition. Kinetic orders of 0 result in the automatic removal of the corresponding variable from the term. In the notation of BST, the first n variables are often considered the dependent variables, which change dynamically under the action of the system, while the remaining variables X_{ i } for i = n + 1 ... n + m are considered independent variables and typically remain constant throughout any given simulation study. Thus, metabolites, enzymes, membrane potentials or other system components can easily be made dependent or independent by the modeler without requiring alterations in the structure of the equations. The notation of BST is very compact and explicitly distinguishes variables from parameters.
Because we will later introduce concepts of geometric programming, it is noted that the power-law term in Eq. 6 is also called a monomial. If this monomial is an approximation of a reaction rate V, its parameters can be directly related to V, by virtue of the fact that the monomial is in fact a Taylor linearization in logarithmic space [12]. Thus, choosing an operating point with index 0, one obtains:

$\ln v_i = \ln V_{i,0} + \sum_j \left. \frac{\partial \ln V_i}{\partial \ln X_j} \right|_0 \left( \ln X_j - \ln X_{j,0} \right)$     (7)
Thus, it follows directly from 7 that the parameters of a power-law (monomial) term can be computed as

$f_{i,j} = \left. \frac{\partial V_i}{\partial X_j} \right|_0 \frac{X_{j,0}}{V_{i,0}}$     (8)

$\gamma_i = V_{i,0} \prod_j X_{j,0}^{-f_{i,j}}$     (9)
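This computation can be sketched in a few lines: the kinetic order is the logarithmic sensitivity of the rate at the operating point, and the rate constant is fixed by requiring exactness there. The Michaelis-Menten rate and the parameter values Vmax, Km, X0 below are assumptions invented for this illustration.

```python
import numpy as np

# Hypothetical Michaelis-Menten rate to be approximated by a power law
Vmax, Km = 2.0, 0.5
V = lambda X: Vmax * X / (Km + X)

X0 = 1.0          # chosen operating point
V0 = V(X0)

# Kinetic order f = d(ln V)/d(ln X) at X0, here via a central difference
h = 1e-6
f = (np.log(V(X0 + h)) - np.log(V(X0 - h))) / (np.log(X0 + h) - np.log(X0 - h))

# Rate constant chosen so that the monomial matches V exactly at X0
gamma = V0 / X0**f

# The power law gamma * X**f agrees with the original rate at the operating point
assert abs(gamma * X0**f - V0) < 1e-9
```

For a Michaelis-Menten rate the kinetic order at X0 is analytically Km/(Km + X0), so the numerically estimated `f` should be close to 1/3 for these invented values.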
System equations in BST may be designed in slightly different ways. For the GMA form, each reaction is represented by its own monomial, and the result is therefore

$\frac{dX_i}{dt} = \sum_k \gamma_{i,k} \prod_{j=1}^{n+m} X_j^{f_{i,k,j}}, \quad i = 1, \ldots, n$     (10)
Note that this is actually a spelled-out version of Eq. 5, where the reaction rates are monomials as in Eq. 6. As an alternative to the GMA format, one may, for each dependent variable, collect all incoming reactions in one term ${V}_{i}^{+}$ and do the same with all outgoing fluxes, which are collectively called ${V}_{i}^{-}$. These aggregated terms are now represented as monomials, and the result is

$\frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n+m} X_j^{g_{i,j}} - \beta_i \prod_{j=1}^{n+m} X_j^{h_{i,j}}, \quad i = 1, \ldots, n$     (11)
Thus, each S-system equation contains at most one positive and one negative term.
The conversion of a GMA into an S-system will become important later. It is achieved by collecting the aggregated fluxes into vectors

$V^{+} = N^{+} \cdot v, \quad V^{-} = N^{-} \cdot v$
where N^{+} and N^{−} are matrices containing respectively the positive and negative coefficients of N, such that N = N^{+} − N^{−}. With these definitions, we can derive the matrices of kinetic orders of S-systems from those of the corresponding GMA representation. Namely,

$G = (V^{+})^{-1} N^{+} V F, \quad H = (V^{-})^{-1} N^{-} V F$
where V, V^{+} and V^{−} are square matrices of zeros having the corresponding vectors as their main diagonals. G and H contain the kinetic orders of the S-system, while F contains those of the GMA [13]. GMA systems may be constructed in three manners [11]. First, given a pathway diagram, each reaction rate is represented by a monomial, and equations are assembled from all reaction rates involved. Second, it is possible (though not often actually done) to dissect enzyme-catalyzed reactions into their underlying mass action kinetics, without evoking the typical quasi-steady-state assumption. The result is directly the special case of a GMA system where most kinetic orders are zero, one, or in some cases two. Third, it has been shown that virtually any nonlinearity can be represented equivalently as a GMA system [14]. As an example of this recasting technique, consider a simple equation where production and degradation are formulated as traditional Michaelis-Menten rate laws:

$\frac{dX_1}{dt} = \frac{V_{max,1} X_0}{K_{M,1} + X_0} - \frac{V_{max,2} X_1}{K_{M,2} + X_1}$
where X_{0} is a dependent or independent variable describing the substrate for the generation of X_{1}. To effect the transformation into a GMA equation, define auxiliary variables as X_{2} = K_{M,2} + X_{1} and X_{3} = K_{M,1} + X_{0}. The equation then becomes

$\frac{dX_1}{dt} = V_{max,1} X_0 X_3^{-1} - V_{max,2} X_1 X_2^{-1}$
For simplicity of discussion, suppose that X_{0} is a constant, independent variable. Thus, X_{3} is also constant and does not need its own equation. By contrast, X_{2} is a new dependent variable; from its definition we can calculate its initial value and see that its derivative must be equal to that of X_{1}. Therefore, the equations

$\frac{dX_1}{dt} = V_{max,1} X_0 X_3^{-1} - V_{max,2} X_1 X_2^{-1}$

$\frac{dX_2}{dt} = V_{max,1} X_0 X_3^{-1} - V_{max,2} X_1 X_2^{-1}$
form a system that is an exact equivalent of the original system but in GMA format.
Recasting can be useful with equations that are difficult to handle otherwise or for purposes of streamlining a model structure and its analysis. One must note though that often the number of variables increases significantly. In the case shown, the number of equations rises from one to two if X_{0} is independent or to three if it is a dependent variable.
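The algebraic equivalence at the heart of recasting is easy to check numerically: with the auxiliary variable X2 = K_M + X1, the GMA monomial reproduces the Michaelis-Menten rate exactly. The parameter values below are invented solely for this check.

```python
# Numerical check that a recast GMA rate equals the original
# Michaelis-Menten rate (illustrative parameter values assumed)
Vmax2, Km2 = 1.0, 0.8
X1 = 0.6

# Original rate law for the degradation of X1
v_mm = Vmax2 * X1 / (Km2 + X1)

# Recast form: define X2 = Km2 + X1; then v = Vmax2 * X1 * X2**(-1)
# is a product of power laws, i.e. a GMA monomial
X2 = Km2 + X1
v_gma = Vmax2 * X1 * X2**(-1)

assert abs(v_mm - v_gma) < 1e-12
```

The identity holds for any positive X1, which is why the recast system is an exact equivalent of the original, not an approximation.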
Current optimization methods based on BST
The overall task is to reset some of the independent variables so that some objective is optimized. The independent variables in question are typically enzyme activities, which are experimentally manipulated through genetic means, such as the application of customized promoters or plasmids. The objective is usually the maximization of a metabolite concentration or a flux. Three approaches have been proposed in the literature.
Pure S-systems
Among a number of convenient properties, the steady states of an S-system can be computed analytically by solving a system of linear algebraic equations [6]. Equating Eq. 11 to zero and rearranging, one obtains:

$\alpha_i \prod_{j} X_j^{g_{i,j}} = \beta_i \prod_{j} X_j^{h_{i,j}}$
which is a monomial equation of the form

$\frac{\alpha_i}{\beta_i} \prod_{j} X_j^{g_{i,j} - h_{i,j}} = 1$     (18)
Monomial equations become linear by taking logarithms on both sides, thus reducing the steady-state computation to a linear task:

A·y = b
where

$A_{i,j} = g_{i,j} - h_{i,j}, \quad y_i = \ln X_i$
Monomial objective functions become linear by taking logarithms, and the same holds for many constraints on metabolites or fluxes. Therefore, constrained optimization of pathways modeled as S-systems becomes a straightforward linear program [8].
Any other relevant constraint or objective function that is not a power law can also be approximated using the above-mentioned methods. Then logarithms can be taken and Eqns. 1–4 can be rewritten as:

max or min F(y)     (19)

subject to:

A·y = b     (20)

B·y = d     (21)

C·y ≤ e     (22)

y^{L} ≤ y ≤ y^{U}     (23)
where F is the logarithm of the flux or variable to be optimized, and superscripts L and U refer to lower and upper bounds. Eq. 20 assures operation at steady state. Matrix B and vector d account for additional equality constraints, and C and e are analogous terms for additional inequalities, which could, for instance, limit the magnitude of a metabolite concentration or flux and improve the chances of viability. Optimization problems of this type are called linear programs (LPs) and can be solved very efficiently for large numbers of variables and constraints [15].
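A linear program of this shape can be sketched with scipy.optimize.linprog. The matrix A, vector b, objective vector c and bounds below are invented numbers standing in for a small S-system steady-state constraint in log space; they are not taken from any model in this paper.

```python
import numpy as np
from scipy.optimize import linprog

# Invented steady-state constraint A·y = b in log space (y = ln X) for a
# system with two metabolites (y[0], y[1]) and one enzyme level (y[2])
A = np.array([[0.5, -0.5, 1.0],
              [0.5, -1.0, 0.0]])
b = np.array([0.0, -0.2])

# Maximize a flux whose logarithm is c·y  <=>  minimize -c·y
c = np.array([0.5, 0.0, 1.0])

# Bounds: metabolites within 5-fold of basal, enzyme increase at most 4-fold
bnd = [(-np.log(5), np.log(5))] * 2 + [(0.0, np.log(4))]

res = linprog(-c, A_eq=A, b_eq=b, bounds=bnd)
assert res.success
```

For these numbers the equality constraints leave one degree of freedom, and the optimum sits at the vertex y = (0.4, 0.4, 0.0), i.e. the optimized flux logarithm is 0.2.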
The advantage of the pure S-system approach is its great speed, combined with the fact that S-system models have proven to be excellent representations of many pathways. The disadvantage is that the optimization process, by design, moves the system away from the chosen operating point, so that questions arise as to how accurate the S-system representation is at the steady state suggested by the optimization.
Indirect Optimization Method
If the pathway is not modeled as an S-system, the reduction of the optimization task to linearity is jeopardized. A compromise solution that has turned out to be quite effective is the Indirect Optimization Method (IOM) [10]. The first step of IOM is the approximation of the given model with an S-system. This S-system is optimized as shown above. The solution is then translated back into the original system in order to confirm that it constitutes a stable steady state and is really an improvement over the basal state of the original model. The S-system solution typically differs somewhat from a direct optimization result with the original model, but since it is obtained so fast, it is possible to execute IOM in several steps with relatively tight bounds, each time choosing a new operating point and not deviating too much from this point in the next iteration [16]. The speed of the process is slower than in the pure S-system case, but still reasonable. Variations on IOM search for subsets of independent variables to be manipulated for optimal yield at lower cost, and address multi-objective optimization tasks [17, 18].
Global GMA optimization
A global optimization method for GMA systems has recently been proposed [19], based on branch-and-reduce methods combined with convexification. These methods are interesting because of the variety of roles that GMA models can play (see above). The disadvantage of the global method is that it quickly leads to very large nonconvex systems, although these still admit relatively efficient solutions.
Geometric programming
Geometric programming (GP) [20] addresses a class of problems that includes linear programming (LP) and other tasks within the broader category of convex optimization problems. Convex problems are among the few nonlinear tasks where, thanks to powerful interior point methods, the efficient determination of global optima is feasible even for large-scale systems. For example, a geometric program of 1,000 variables and 10,000 constraints can be solved in less than a minute on a desktop computer [21]; the solution is even faster for sparse problems as they are found in metabolic engineering. Furthermore, easy-to-use solvers are starting to become available [22, 23].
GP addresses optimization programs where the objective function and the constraints are sums of monomials, i.e., power-law terms as shown in Eq. 6. Because of their importance in GP, sums of monomials, all with positive sign, are called posynomials. If some of the monomials enter the sum with negative signs, the collection is called a signomial. The peculiarities of convexity and GP methods render the difference between posynomials and signomials crucial.
A GP problem has the generic form:

min P_{0}(x)

subject to:

P_{ i }(x) ≤ 1,  i = 1...n

M_{ i }(x) = 1,  i = 1...m
where P_{ i }(x) and M_{ i }(x) must fulfill strict conditions. Every function M_{ i }(x) must be a monomial, while the objective function P_{0}(x) and the functions P_{ i }(x) involved in inequalities must be posynomials. Signomials are not permitted, and optimization problems involving them require additional effort.
The equivalence between monomials and power laws immediately suggests the potential use of GP for optimization problems formulated within BST. In the next sections, several methods will be proposed to develop such potential.
Results and discussion
It is easy to see that the steady-state equations of S-systems are readily arranged as monomials, as shown in Eq. 18, and that optimization tasks for S-systems directly adhere to the format of a GP, except that GP mandates minimization. However, this is easily remedied for maximization tasks by minimizing the inverse of the objective, which again is a monomial. By contrast, steady-state GMA equations as shown in Eq. 10 do not automatically fall within the GP structure, because GMA systems usually include negative terms, thus making them signomials. Furthermore, the inverse of an objective that contains more than one monomial is not equivalent to a monomial.
When the objective or some restriction falls outside the GMA formalism, it can be recast into proper form as has been discussed above and will be shown in one of the case studies.
Two strategies
The proposed solutions for adapting GP solvers to treat GMA systems rely on condensation [24], but they do it in different ways. Condensation is a standard procedure in GP which is exactly equivalent to aggregation in BST. Namely, a sum of monomials is approximated by a single monomial. In the terminology of GP, the condensation $\widehat{C}()$ of a posynomial is generically denoted as

$\widehat{C}\left( \sum_k u_k \right)(x) = \prod_k \left( \frac{u_k(x)}{w_k} \right)^{w_k}, \quad w_k = \frac{u_k(x_0)}{\sum_l u_l(x_0)}$     (28)
and, in the terminology of Eqs. 10 and 11, defined as:

$\sum_k \gamma_{i,k} \prod_j X_j^{f_{i,k,j}} \approx \alpha_i \prod_j X_j^{g_{i,j}}$     (29)
where α_{ i } and g_{i,j} are chosen such that equality holds at a chosen operating point; thus, the result is equivalent to the Taylor linearization that is fundamental in BST, as was shown in Eqn. 7 [5, 7, 12]. As in the Taylor series, the condensed form is equal to the original equation at the operating point. For any other point, it can be shown that the left- and right-hand sides of Eqn. 29 correspond to those of the arithmetic-geometric mean inequality:

$\sum_k w_k z_k \ge \prod_k z_k^{w_k}, \quad w_k \ge 0, \quad \sum_k w_k = 1$     (30)

and therefore the condensed form is an underestimation of the original.
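The condensation operation and its underestimation property can be verified numerically. The sketch below condenses an arbitrary two-term posynomial at an operating point, using the standard GP weights (the relative contribution of each monomial at that point); the posynomial P(x) = x² + 3/x and the operating point are invented for this illustration.

```python
import numpy as np

def condense(terms, x0):
    """Condense a posynomial (given as a list of monomial functions u_k)
    into a single monomial, exact at the operating point x0."""
    u0 = np.array([u(x0) for u in terms])
    w = u0 / u0.sum()                        # weights at the operating point
    # Condensed monomial: prod_k (u_k(x)/w_k)**w_k
    return lambda x: np.prod([(u(x) / wk) ** wk for u, wk in zip(terms, w)])

# Example posynomial P(x) = x**2 + 3/x, condensed at x0 = 1
terms = [lambda x: x**2, lambda x: 3.0 / x]
x0 = 1.0
P = lambda x: sum(u(x) for u in terms)
P_hat = condense(terms, x0)

assert abs(P_hat(x0) - P(x0)) < 1e-9         # exact at the operating point
for x in (0.5, 2.0, 4.0):                    # underestimate everywhere else
    assert P_hat(x) <= P(x) + 1e-12
```

The assertions check exactly the two properties stated above: equality at the operating point, and underestimation elsewhere, which follows from the arithmetic-geometric mean inequality.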
Objective functions can only be minimized in GP. This is seldom a problem, because the functions to be maximized are often monomials, such as a variable, a reaction rate or a flux ratio, which can simply be inverted. Posynomial objectives, such as the sum of certain variables, are typically intended for minimization. Nonetheless, it is also relevant in metabolic engineering to maximize posynomials, such as sums of variables or fluxes; in such cases, condensation or recasting can be used. For an extensive introduction to GP modeling, see [25].
A local approach: Controlled Error Method
The steady-state equation of a GMA system may be written as the single difference of two posynomials:

P(x) − Q(x) = 0     (31)
If both posynomials are condensed, every equation is reduced to the standard form for monomial equations:

$\frac{\widehat{C}(P)(x)}{\widehat{C}(Q)(x)} = 1$     (32)

because the division of one monomial by another is itself a monomial.
Since the steady-state equations of the GMA system have been condensed to those of an S-system, this method can be regarded as a direct generalization of classical IOM methods. One of the advantages of this approach is the possibility of keeping posynomial inequalities and objectives as they are, thereby reducing the amount of condensation (approximation) needed, but there is another interesting possibility. When a posynomial is approximated by condensation, the AG inequality, Eq. 30, guarantees that the monomial is an underestimation of the constraint. Furthermore, the posynomial structure is not altered when divided by a monomial, so the quotient between a posynomial and its condensed form is always greater than or equal to 1 and provides the exact error as a posynomial function. Therefore, the problem can be constrained to allow a maximum error per condensed constraint:

$\frac{P(x)}{\widehat{C}(P)(x)} \le 1 + \varepsilon_{max}$     (33)
Thus, the original problem is solved as a series of GPs in which the GMA equations are successively condensed, using the previous solution as the reference point. To assure validity, an extra set of constraints is added so that every iteration only explores the neighborhood of the feasible region in which the error due to condensation remains below a tolerance set by the user.
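The core of this sequential condensation can be illustrated on a one-variable toy problem (all numbers invented): the outgoing posynomial is repeatedly condensed at the current reference point, the resulting monomial balance is solved in closed form, and the solution becomes the next reference point. This sketch omits the GP solver and the error constraints of Eq. 33, showing only the condensation loop.

```python
import math

# Toy one-variable GMA steady state: 2*X**0.5 - X - 0.5*X**2 = 0.
# The outgoing posynomial Q(X) = X + 0.5*X**2 is condensed into a
# monomial beta*X**h at the current reference point in every iteration.
X = 1.0                                   # initial reference point
for _ in range(50):
    q1, q2 = X, 0.5 * X**2                # the two monomial terms of Q
    Q = q1 + q2
    w1, w2 = q1 / Q, q2 / Q               # condensation weights at X
    h = w1 * 1 + w2 * 2                   # kinetic order of the condensed monomial
    beta = Q / X**h                       # rate constant, exact at X
    # The monomial balance 2*X**0.5 = beta*X**h has a closed-form solution
    X = (2.0 / beta) ** (1.0 / (h - 0.5))

# At convergence the condensation is exact, so X solves the original equation
residual = 2.0 * math.sqrt(X) - X - 0.5 * X**2
assert abs(residual) < 1e-8
```

Any fixed point of this iteration satisfies the original signomial equation exactly, because condensation is exact at the reference point; for these numbers the loop converges in a handful of iterations.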
A global approach: Penalty Treatment
A similar yet distinct strategy that minimizes the use of condensation is an extension of the penalty treatment method [26], a classic algorithm for signomial programming. In this method, a signomial constraint such as

P(x) − Q(x) = 0
where P and Q are posynomials, is replaced by two posynomial equalities through the creation of an ancillary variable t:

P(x) = t,  Q(x) = t
These are not valid GP constraints, so the following relaxed version is used:

$\frac{P(x)}{t} \le 1, \quad \frac{Q(x)}{t} \le 1$
Upon division by t, the feasible region of the original problem is contained in the feasible region of the new relaxed version, and approximation by condensation is not needed. In order to force these inequalities to be tight in the final solution, the objective function is augmented with penalty terms that grow with the slackness of the constraints, namely the inverses of the condensations of the relaxed constraints. The result of this procedure is a legal GP:
where the condensed terms are calculated at the basal steady state. If the obtained solution falls within the feasible region of the original problem, it is accepted as the solution; if it does not (i.e., any of the relaxed inequalities falls below 1), the solution is used as the next reference point: the condensations are recalculated, the weights of the violated constraints are increased, and the new problem is solved. This procedure is repeated until a satisfactory solution is obtained. The original method used 1 as the initial value of all weights and increased them all in every iteration; two modifications are useful for our purposes:
The initial weights are selected such that the overall penalty terms amount to only a fraction of the total objective at the initial point. In the case studies explored in this paper, this fraction was 10%.
The weights are only increased if their corresponding constraint was violated in the last iteration. In such cases, the weight is multiplied by a fixed value. For the case studies considered here, the choice of this multiplier did not have a significant impact on the performance of the method.
These variations on the original method serve to prevent the penalty terms from dominating the objective function and pushing the relaxed problem towards the boundaries of the feasible region from the very beginning.
Case studies
In order to illustrate the combination of GP with BST, several optimization tasks were explored. The first example demonstrates the procedure with a very simple two-variable GMA system. The second example is a model of the anaerobic fermentation pathway in Saccharomyces cerevisiae. The third example revisits an earlier case study concerned with the tryptophan operon in E. coli. These systems were optimized using the Matlab-based solver ggplab [23] running on an ordinary laptop (1.6 GHz Pentium Centrino, 512 MB RAM). Matlab scripts were written to perform all the transformations required by the two methods described. For comparison, the models were also optimized using IOM [10] as well as Matlab's optimization toolbox. The function used in this toolbox, fmincon(), is based on an iterative algorithm called Sequential Quadratic Programming (SQP), which uses the BFGS formula to update the estimated Hessian matrix in every iteration [27, 28].
A seemingly simple problem
A distinctive difference between the alternative methods for GMA optimization can be illustrated by a problem modified from [24], which presents the simplest possible fragmented feasible region (see Fig. 1).
The feasible region of this problem consists of two points, (1.178, 2.178) and (3.823, 4.823), of which the first is clearly superior, because X_{1} is to be minimized. As these points are not connected, local methods are not able to find one solution using the other as a starting point. The problem was solved using IOM, the controlled error method and the penalty treatment method. The initial point was set to (3.823, 4.823), which is disconnected from the true optimal solution. While both IOM and the controlled error method reported the initial point as the solution, the penalty treatment algorithm found the global optimum at (1.178, 2.178).
In this case, most methods failed to find the optimal solution because the approximating S-system had the operating point as its only feasible solution, while the relaxed problem of the penalty treatment algorithm had a feasible region (shaded in Fig. 1) that included and connected both feasible solutions.
Anaerobic fermentation in S. cerevisiae
This GMA model [29] (see also the appendix) is derived from a previous version [30], formulated with traditional Michaelis-Menten kinetics to explain experimental data, and has been used to illustrate other optimization methods [10, 17, 19]. It has the following structure (see Fig. 2):
The model was already formulated [29] as a GMA system, so that all its fluxes are monomials:
The objective is the (constrained) maximization of the ethanol production rate, v_{ PK }. Together with the upper and lower bounds of the variables, two extra constraints will be studied. The first is an upper limit on the total amount of protein. This is especially important for pathways of central carbon metabolism, as they represent a significant fraction of the total cell protein, and increasing the expression of their enzymes by large amounts might compromise cell viability. As a first example, we assume that the activity-to-protein ratio is the same for every enzyme and set an arbitrary limit of four times the amount of enzymes in the basal state. As an alternative, we explore the effect of limiting the total substrate pool. This constraint will later be subjected to trade-off analysis in order to assess its influence on the optimal steady state (see Fig. 3). Being posynomial functions, these constraints are supported by GP without any transformation. The Appendix contains a complete formulation of the optimization problem.
The results are summarized in Table 1. Both GP methods and SQP found the same solution, although GP finished in 0.5 s while SQP was significantly slower, taking 1.5 s for the calculation. The IOM method was as fast as GP, but its solution violated one constraint.
Tryptophan operon
The third example addresses the tryptophan operon in E. coli, as illustrated in Fig. 4. This is an appealing benchmark system, because it has already been optimized with other methods [16, 31].
A model of the system was recently presented in [32] and includes transcription, translation, chemical reactions and tryptophan consumption for growth. It is thus more than a simple pathway model and demonstrates that GP and BST are applicable in more complex contexts. Finally, this model does not follow the structure of any standard formalism, so it is a good example of how recasting widens the applicability of the method. The model takes the form
Here X_{1}, X_{2} and X_{3} are dimensionless quantities representing mRNA, enzyme levels and the tryptophan concentration, respectively. The rate equations are:
The GMA format is obtained by defining the following ancillary variables:
which turns the rates into power laws:
The objective function consists simply of v_{8}, which may be regarded as an aggregate term for growth and tryptophan excretion.
A recurrent feature of previously found IOM solutions was the noticeable violation of a constraint retaining a minimum tryptophan concentration. This discrepancy provides a point of comparison between the methods beyond computational efficiency. The Appendix contains a complete formulation of the optimization problem.
In order to test the effectiveness of the controlled error approach, two variants were used in this model:

Fixed tolerance. The standard method, in which every iteration is limited to a maximum condensation error of 10% by the constraints described in Eq. 33.

Fixed step. No limit on the condensation error; instead, the variation of the variables in every iteration is limited to a 10% distance from the reference state.
In the fixed step variant, where the error constraints are absent, the variation of the variables was restricted to a fraction of the total range in every iteration, in order to prevent them from moving too far from the operating point. Fig. 5 shows the evolution of the objective function and of the condensation errors over the iterations for both the fixed step and the fixed tolerance variants. Though both methods find the same solution, the fixed tolerance method is much faster and keeps the error within a limit specified a priori. The fixed step method also remained within a low margin of error in this case, owing to the good quality of the condensed approximation, but this margin is not under direct control and will depend on the step size and on the model in an unforeseeable way. When the error tolerance was lowered to match the values observed for the fixed step method, both performed very similarly, with a slight advantage for fixed tolerance.
Both the controlled error and the penalty treatment methods yielded the same results, while SQP returned a solution that was feasible but yielded a lower flux. As can be seen in Table 2, no constraint violations occurred with GP. When the lower bound was extended to include the levels reached by the other methods, all previous results were reproduced. The trade-off curve resulting from solving the problem for different tryptophan lower bounds is depicted in Fig. 6. SQP and the controlled error method took about 1 s to find the solution, while the penalty treatment took 0.3 s.
Conclusion
The main challenge of nonlinear optimization is dealing with nonconvexities. In some cases, as in GP, an elegant transformation convexifies the problem without adding undue complexity. This is seldom the case, however, and dealing with nonconvexities usually requires ad hoc devices such as subdividing the system into many subsystems, finding convex relaxations of the constraints, adding extra variables, or a combination of several of these strategies.
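The convexifying transformation in question is the classical GP change of variables x<sub>i</sub> = exp(y<sub>i</sub>) followed by taking logarithms, which turns every posynomial into a log-sum-exp of affine functions of y, and these are convex. A minimal numeric sketch, using a hypothetical two-variable posynomial:

```python
import math

# Hypothetical posynomial g(x1, x2) = x1*x2 + x1**-2 over positive variables.
def g(x1, x2):
    return x1 * x2 + x1**-2

# GP convexification: substitute x_i = exp(y_i) and take the logarithm.
# F(y) = log g(exp(y1), exp(y2)) is a log-sum-exp of affine functions
# of y, hence convex -- no ad hoc relaxation is needed.
def F(y1, y2):
    return math.log(g(math.exp(y1), math.exp(y2)))

# Numeric midpoint check of convexity along one segment in y-space:
ya, yb = (0.0, 0.5), (1.0, -1.0)
mid = ((ya[0] + yb[0]) / 2, (ya[1] + yb[1]) / 2)
assert F(*mid) <= 0.5 * (F(*ya) + F(*yb))  # convexity at the midpoint
```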
Geometric programming provides a simple and efficient tool for the optimization of biotechnological systems that takes advantage of the structural regularity and flexibility of GMA systems. In this work we have presented two different strategies to do so, of which the penalty treatment seems the most promising. The methods are quite general: the combined treatment of GP and recasting can be applied to any rational function, a class that includes almost all rate functions used in representations of metabolic processes.
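As a concrete instance of the recasting argument, a Michaelis-Menten rate (a rational function) becomes a product of power laws once an auxiliary variable is introduced for its denominator. A minimal sketch with hypothetical parameter values:

```python
# Recasting a Michaelis-Menten rate (a rational function) into power-law
# form via an auxiliary variable for the denominator. Parameter values
# are hypothetical, chosen only for illustration.
Vmax, Km = 1.0, 0.5

def v_rational(S):
    return Vmax * S / (Km + S)

def v_gma(S, z):
    # GMA-style product of power laws in the variables S and z
    return Vmax * S**1 * z**-1

for S in (0.1, 1.0, 10.0):
    z = Km + S  # auxiliary variable z = Km + S, with dz/dt = dS/dt
    assert abs(v_rational(S) - v_gma(S, z)) < 1e-12
```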
The use of geometric programming also addresses the problem of constraint violations in the two strategies considered. The possibility of keeping an arbitrarily small approximation error in every iteration prevents the buildup of discrepancies in the controlled error method, resulting in a "safer" condensation, while the penalty treatment does not rely on condensation to define the feasible region. It has been shown elsewhere [21] that GP can deal with large systems, and the sparse nature of problems in metabolic engineering further improves the capabilities of the approach. It is therefore reasonable to expect both strategies considered here to scale well to large problems, although it remains to be seen which of the two behaves better in such cases.
Geometric programming is a relatively recent and active area of operations research, so further improvements and refinements relevant to the optimization of GMA systems are to be expected. Even with existing methods, however, the optimization of this large class of systems, which is further expanded by the technique of recasting, has become feasible for moderately sized tasks even on simple desktop computers.
Appendix A: Optimization problems
References
 1.
Stephanopoulos G, Aristidou A, Nielsen J: Metabolic Engineering: Principles and Methodologies. 1998, Academic Press
 2.
Torres N, Voit E: Pathway Analysis and Optimization in Metabolic Engineering. 2002, Cambridge University Press
 3.
Mendes P, Kell D: Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998, 14 (10): 869-83. 10.1093/bioinformatics/14.10.869.
 4.
Varma A, Boesch BW, Palsson BO: Metabolic flux balancing: Basic concepts, scientific and practical use. BioTechnology. 1994, 12: 994-998.
 5.
Savageau M: Biochemical systems analysis. I. Some mathematical properties of the rate law for the component enzymatic reactions. J Theor Biol. 1969, 25 (3): 365-9. 10.1016/S0022-5193(69)80026-3.
 6.
Savageau M: Biochemical systems analysis. II. The steady-state solutions for an n-pool system using a power-law approximation. J Theor Biol. 1969, 25 (3): 370-9. 10.1016/S0022-5193(69)80027-5.
 7.
Voit E: Computational Analysis of Biochemical Systems. A Practical Guide for Biochemists and Molecular Biologists. 2000, Cambridge University Press
 8.
Voit E: Optimization in integrated biochemical systems. Biotechnol Bioeng. 1992, 40: 572-582. 10.1002/bit.260400504.
 9.
Hatzimanikatis V, Bailey JE: MCA has more to say. J Theor Biol. 1996, 182: 233-242. 10.1006/jtbi.1996.0160.
 10.
Torres N, Voit E, Glez-Alcon C, Rodriguez F: An indirect optimization method for biochemical systems. Description of method and application to ethanol, glycerol and carbohydrate production in Saccharomyces cerevisiae. Biotechnol Bioeng. 1997, 55 (5): 758-772. 10.1002/(SICI)1097-0290(19970905)55:5<758::AID-BIT6>3.0.CO;2-A.
 11.
Shiraishi F, Savageau MA: The tricarboxylic acid cycle in Dictyostelium discoideum. III. Analysis of steady state and dynamic behavior. J Biol Chem. 1992, 267 (32): 22926-22933.
 12.
Savageau M: Biochemical Systems Analysis. A Study of Function and Design in Molecular Biology. 1976, Addison-Wesley, Reading, Massachusetts
 13.
De Atauri P, Curto R, Puigjaner J, Cornish-Bowden A, Cascante M: Advantages and disadvantages of aggregating fluxes into synthetic and degradative fluxes when modelling metabolic pathways. Eur J Biochem. 1999, 265 (2): 671-679. 10.1046/j.1432-1327.1999.00760.x.
 14.
Savageau M, Voit E: Recasting nonlinear differential equations as S-systems: A canonical nonlinear form. Math Biosci. 1987, 87: 83-115. 10.1016/0025-5564(87)90035-6.
 15.
Dantzig G: Linear Programming and Extensions. 1963, Princeton University Press, Princeton, New Jersey
 16.
Xu G, Shao C, Xiu Z: A Modified Iterative IOM Approach for Optimization of Biochemical Systems. e-print arXiv:q-bio/0508038. 2005
 17.
Vera J, de Atauri P, Cascante M, Torres N: Multicriteria optimization of biochemical systems by linear programming: application to production of ethanol by Saccharomyces cerevisiae. Biotechnol Bioeng. 2003, 83 (3): 335-43. 10.1002/bit.10676.
 18.
Alvarez-Vasquez F, Gonzalez-Alcon C, Torres N: Metabolism of citric acid production by Aspergillus niger: model definition, steady-state analysis and constrained optimization of citric acid production rate. Biotechnol Bioeng. 2000, 70: 82-108. 10.1002/1097-0290(20001005)70:1<82::AID-BIT10>3.0.CO;2-V.
 19.
Polisetty P, Voit E, Gatzke EP: Yield Optimization of Saccharomyces cerevisiae using a GMA Model and a MILP-based piecewise linear relaxation method. Proceedings of: Foundations of Systems Biology in Engineering. 2005
 20.
Zener C: Engineering Design by Geometric Programming. 1971, John Wiley and Sons, Inc
 21.
Boyd S, Vandenberghe L: Convex Optimization. 2004, Cambridge University Press
 22.
Grant M, Boyd S, Ye Y: CVX: Matlab Software for Disciplined Convex Programming. 2005
 23.
Koh K, Kim S, Mutapic A, Boyd S: GGPLAB: A simple Matlab toolbox for Geometric Programming. 2006, [Version 0.95]
 24.
Floudas CA: Deterministic Global Optimization. 2000, Kluwer Academic Publishers
 25.
Boyd S, Kim S, Vandenberghe L, Hassibi A: A tutorial on geometric programming. [To be published in Optimization and Engineering]
 26.
Rountree D, Rigler A: A penalty treatment of equality constraints in generalized geometric programming. Journal of Optimization Theory and Applications. 1982, 38 (2): 169-178. 10.1007/BF00934080.
 27.
Goldfarb D: A Family of Variable Metric Updates Derived by Variational Means. Mathematics of Computation. 1970, 24: 23-26. 10.2307/2004873.
 28.
Fletcher R, Powell M: A rapidly convergent descent method for minimization. Computer Journal. 1963, 6: 163-168.
 29.
Curto R, Sorribas A, Cascante M: Comparative characterization of the fermentation pathway of Saccharomyces cerevisiae using biochemical systems theory and metabolic control analysis: model definition and nomenclature. Math Biosci. 1995, 130: 25-50. 10.1016/0025-5564(94)00092-E.
 30.
Galazzo J, Bailey J: Fermentation pathway kinetics and metabolic flux control in suspended and immobilized Saccharomyces cerevisiae. Enzyme Microb Technol. 1990, 12: 162-172. 10.1016/0141-0229(90)90033-M.
 31.
Marin-Sanguino A, Torres NV: Optimization of tryptophan production in bacteria. Design of a strategy for genetic manipulation of the tryptophan operon for tryptophan flux maximization. Biotechnol Prog. 2000, 16 (2): 133-145. 10.1021/bp990144l.
 32.
Xiu Z, Chang Z, Zeng A: Nonlinear dynamics of regulation of bacterial trp operon: model analysis of integrated effects of repression, feedback inhibition, and attenuation. Biotechnol Prog. 2002, 18 (4): 686-93. 10.1021/bp020052n.
Acknowledgements
This work was supported by a research grant from the Spanish Ministry of Science and Education ref. BIO200508898C0202.
Competing interests
The author(s) declare that they have no competing interests.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords
 Metabolic Engineering
 Optimization Task
 Geometric Programming
 Feasible Area
 General Mass Action