Fundamental Time Scale Assumptions

Mathieu Westerweele
Posted on:
31 Jan 2014
Physical topologies are the abstract representation of the containment of the process in the physical sense. They visualise the principal dynamic contents of the process model, and therefore the construction of a physical topology is the most fundamental part of the modelling process. Any changes in the physical topology will substantially affect the structure and contents of the final model.
The structuring of the process implements the first set of assumptions in the modelling process. The resulting decomposition is, in general, not unique. However, the resulting model depends strongly on the choice of the decomposition. As a rule: the finer the decomposition, the more complex the resulting model will be.
The decision to define subsystems is largely based on the phase argument, where the phase boundary separates two systems. The second decision criterion utilises the relative size of the capacities, and the third argument is based on the relative velocity with which subsystems exchange extensive quantities. Another argument is the location of activities such as reactions. The relative size of the model components, measured in ratios of transfer resistances and capacities, termed time constants, is referred to as “granularity”. A model with large granularity describes a system more crudely, and thus more simply, than a model with a finer granular structure. It seems apparent that one usually aims at a relatively uniform granularity, as such systems are best balanced and thus more conveniently solved by numerical methods.
Models of different granularity help in analysing the behaviour of the process in different time scales. The finer the granularity, the more the dynamic details and thus the more of the faster effects are captured in the process description. Since each model is constructed for a specific goal, a process model should reflect the physical reality as accurately as needed. The accuracy of a model intended for numerical simulation, for example, should (in most cases) be higher than the accuracy of a model intended for control design.
To illustrate the concepts mentioned above, consider the following example concerning a stirred tank reactor:
[Figures I and II: a stirred tank reactor and its ISTR representation]
Figure I shows a stirred tank reactor, which consists of an inlet and outlet flow, a mixing element, a heating element and liquid contents. If the model of this tank is to be used for a rough approximation of the concentration of a specific component in the outlet flow or for the liquid-level control of the tank, a simple model suffices. The easiest way to model this tank is to view it as an ideally stirred tank reactor (ISTR), as shown in figure II. This implies that a number of assumptions have been made regarding the behaviour and properties of the tank. The most important one is that the contents of the tank are ideally mixed and hence display uniform conditions over the volume. Another assumption can be that heat losses to the environment are negligible.
After making these and maybe some more assumptions, the modeller can write the component mass balances and the energy balance of the reactor. With these equations and some additional information (e.g. kinetics of reaction, physical properties of the contents, geometrical relations, state variable transformations, etc.) the modeller can describe the dynamic and/or static behaviour of the reactor.
[Figures III and IV: the mixing process in the tank and its division into volume elements]
If the tank has to be described on a much smaller time-scale and/or the behaviour of the tank has to be described in more detail, then the ISTR model will not suffice. A more accurate description often asks for a more detailed model. In order to get a more detailed description the modeller could, for example, choose to try to describe the mixing process in the tank (see figure III). Figure IV shows a possible division of the contents of the tank into smaller parts. In this drawing a circle represents a volume element which consists of a phase with uniform conditions. Each volume element can thus be viewed as an ISTR. The arrows represent the mass flows from one volume to another. In order to describe the behaviour of the whole tank, the balances of the fundamental extensive quantities (component mass and energy usually suffice) must be established for each volume element. The set of these equations, supplemented with information on the extensive quantity transfer between the volumes and other additional information, will constitute the mathematical description of the dynamic and/or static behaviour of the reactor.
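The volume-element decomposition of figure IV can be sketched in code. The snippet below (a sketch with illustrative values of my own, assuming a simple tanks-in-series flow pattern rather than the full network of arrows in the figure) writes one component mass balance per volume element, each treated as an ISTR:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tanks-in-series approximation of the mixing process: each volume
# element is an ISTR with the component mass balance
#   d(V * c_i)/dt = F * c_{i-1} - F * c_i
# All values are illustrative, not from the original post:
N = 5          # number of volume elements
V = 1.0        # volume of each element [m3]
F = 1.0        # volumetric flow between elements [m3/s]
c_in = 1.0     # inlet concentration [kmol/m3]

def balances(t, c):
    """One component mass balance per volume element."""
    upstream = np.concatenate(([c_in], c[:-1]))  # feed seen by each element
    return F * (upstream - c) / V

sol = solve_ivp(balances, (0.0, 50.0), y0=np.zeros(N), rtol=1e-8)
print(sol.y[-1, -1])  # outlet concentration approaches c_in
```

Refining the granularity here is just a matter of increasing `N`; the model equations per element stay identical, which is exactly why a uniform decomposition is convenient.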
The model of the mixing process could, of course, be further extended to get a more accurate description. The number of volume elements could, for example, be increased, or one could consider back mixing or cross mixing of the liquid between the various volume elements. (In principle, as one increases the complexity of this description, one approaches the type of models that result from approximating distributed models using computational fluid dynamics packages.) The conduction of heat to each volume could also be modelled. One could model a heat flow from a heating element to each volume, or only to those volume elements which are presumed to be nearest to the heating element, etc. As one may imagine, there are many ways to describe the same process. Each way usually results in a unique mathematical representation of the behaviour of the process, depending on the designer's view on and knowledge of the process, on the amount of detail he wishes to employ in the description of the process and, of course, on the application of the model.

When employing the term time scale, we use it in the context of splitting the relative dynamics of a process or a signal (the result of a process) into three parts: a central interval in which the process or signal shows dynamic behaviour (1). On one side, this dynamic window is bounded by the parts of the process that are too slow to be considered in the dynamic description and are thus assumed constant (2). On the other side, the dynamic window borders on the sub-processes that occur so fast that they are abstracted as events – they just occur in an instant (3). Any process we consider requires these assumptions, and it is the choice of the dynamic window that largely determines the fidelity of the model in terms of imaging the process dynamics.
[Figure: the dynamic window]
One may argue that one should then simply make the dynamic window as large as possible to avoid any problems; this, however, implies an increase in complexity. Philosophically, all parts of the universe are coupled, but the ultimate model is not achievable. When modelling, a person must thus make choices and place focal points, both in space and in time. The purpose for which the model is being generated thus always controls the generation of the model. And the modeller, being the person establishing the model, is well advised to formulate the purpose for which the model is generated as explicitly as possible.
A window in the time scales must thus be picked between the limits of zero and infinity. On the small time scale one will ultimately enter the zone where the granularity of matter and energy comes to bear, which limits the applicability of macroscopic system theory; at the large end, things quite quickly become infeasible as well, if one extends the scales by orders of magnitude. Whilst this may sound discouraging, having to make a choice usually does not impose any serious constraints, at least not on the large scale. Modelling the movement of tectonic plates or the material exchange in rocks certainly asks for a different time scale than modelling an explosion, for example. There are cases where one touches the limits of the lower scale, that is, when the particulate nature of matter becomes apparent. In most cases, however, a model is used for a range of applications that usually also define the relevant time-scale window.
The dynamics of the process is excited either by external effects, which in general are constrained to a particular time-scale window, or by internal dynamics resulting from an initial imbalance or internal transposition of extensive quantity. Again, these dynamics are usually also constrained to a time-scale window. The maximum dynamic window thus spans the extremes of these two kinds of windows, that is, of the external dynamics and the internal dynamics.
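One way to make the window choice concrete is to estimate a time constant for each capacity (roughly, capacity divided by throughput) and compare it against the chosen window. A minimal sketch, with made-up numbers and names that are purely illustrative:

```python
# Sketch: classify sub-process time constants against a chosen dynamic
# window, following the three-part split (event / dynamic / constant).

def classify(tau, window):
    """Place a time constant tau [s] relative to the window (t_lo, t_hi)."""
    t_lo, t_hi = window
    if tau < t_lo:
        return "event (too fast: abstracted as instantaneous)"
    if tau > t_hi:
        return "constant (too slow: assumed frozen)"
    return "dynamic (described by a differential equation)"

window = (1.0, 1e4)                    # seconds: the chosen dynamic window
time_constants = {                     # illustrative estimates, tau = capacity/throughput
    "valve actuation":    0.05,        # s
    "tank holdup":        600.0,       # s
    "wall heat capacity": 3000.0,      # s
    "catalyst ageing":    3.0e7,       # s (about a year)
}

for name, tau in time_constants.items():
    print(f"{name:20s} tau = {tau:8.2g} s -> {classify(tau, window)}")
```

The valve ends up as an event, the holdup and wall as dynamic states, and the ageing as a constant parameter, which is exactly the three-part split described above.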
A “good” or “balanced” model is in balance with its own time scales and the time scale within which its environment operates. In a balanced model, the scales are adjusted to match the dynamics of the individual parts of the process model. Balancing starts with analysing the dynamics of the excitations acting on the process and deciding what aspects of the process dynamics are of relevance for the conceived application of the model. What has been defined as a system (capacity) before may be converted into a connection later, and vice versa, as part of the balancing process. This makes it difficult, not to say impossible, to fix the definition of systems and connections once and for all. The situation clearly calls for a compromise, which, in turn, opens the door for suggesting alternative compromises. There is not a single correct choice, and there is certainly room for arguments but also for confusion. Nevertheless, a decision must be taken.
Initially one is tempted to classify systems based on their unique property of representing capacitive behaviour of volumes, usually also implying mass. In a second step one may allow for an abstraction of volumes to surfaces, because in some applications it is convenient to also abstract the length scale describing accumulation to occur inside, so-to-speak, or on each side of the surface.
What is your experience with time scale assumptions? Were you aware that you actually always make them when modelling? Are you aware of the effect they have on your end results?
I invite you to post your experiences, insights and/or suggestions in the comment box below, so that we can all learn something from it.
To your success!
Mathieu.
———————————————–

Computational Order

Mathieu Westerweele
Posted on:
30 Oct 2013
In last month's blog post we discussed the substitution of variables and whether these variable transformations are actually necessary to solve the problems at hand. Part of the discussion was concerned with the computational causality of the involved equations, and this month I would like to spend a few more words on that topic.
As discussed in previous blog posts, the dynamic part (i.e. the differential equations) of a process model can be isolated from the static part (i.e. the algebraic equations). The dynamic part can be trivially derived from the model designer’s definition of the Physical Topology and Species Topology of the process. The static part has to be defined by the physical insight of the model designer.
For each modelling object (i.e. system, connection or reaction) the algebraic equations can, in principle, be chosen “randomly” from a database. In doing so, the problem arises that not every numerical equation solver will be able to solve the equations, since the equations are not in the so-called correct computational order and are not always in (correct) explicit form. Nowadays, many solvers (so-called DAE-1 solvers) can easily handle implicit algebraic equations, but when the equations are re-ordered and simplified by performing preliminary symbolic manipulations, a more efficient computational code could be obtained.
If you are using an explicit solver (such as Matlab) to solve your model equations, an important step in achieving efficient computational code for DAEs is to solve the equations for as many algebraic variables as possible. This way it is not necessary to introduce these variables as unknowns in the numerical DAE-solver, since they can be calculated at any call of the residual routine from the information available.
Consider the simple pair of equations:

y – 2x = 4  (1)

x – 7 = 0    (2)

In order to solve these equations directly, they must be rearranged into the form:

x = 7          (3)

y = 2x + 4  (4)

The implicit equation (2) cannot readily be solved for x by a numerical program, whilst the explicit form, namely (3), is easily solved, as it only requires the evaluation of the right-hand-side expression. Equation (1) is rearranged to give (4) for y, so that when x is known, y can be calculated.
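The difference is tangible in code. A sketch in Python (my choice of tooling, not the post's: scipy stands in for a generic root finder): the rearranged forms (3) and (4) evaluate as two assignments, while the original forms (1) and (2) must go through an iterative solver with a starting guess:

```python
from scipy.optimize import fsolve

# Explicit (rearranged) form: equations (3) and (4) evaluate directly.
x = 7.0
y = 2.0 * x + 4.0
print(x, y)  # 7.0 18.0

# Implicit (original) form: equations (1) and (2) handed to an
# iterative root finder as residuals.
def residuals(v):
    x, y = v
    return [y - 2.0 * x - 4.0,   # equation (1)
            x - 7.0]             # equation (2)

x_i, y_i = fsolve(residuals, x0=[1.0, 1.0])
print(x_i, y_i)  # converges to the same solution
```

Both routes give the same numbers, but the explicit route costs two function evaluations, the implicit one an iteration with Jacobian estimates, which is the efficiency argument made above.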
The rearranged form of the set of equations can be solved directly because it has the correct computational causality. As discussed in the blog on substitution, this computational causality is, quite obviously, not a physical phenomenon, but a numerical artefact. Take, for example, the ideal gas law:

pV = nRT

This is a static relation, which holds for any ideal gas. This equation does not describe a cause-and-effect relation. The law is completely impartial with respect to the question whether at constant temperature and constant molar mass a rise in pressure causes the volume of the gas to decrease or whether a decrease in volume causes the pressure to rise. For a solving program, however, it does matter whether the volume or the pressure is calculated from this equation.
It is rather inconvenient that a model designer must determine the correct computational causality of all the algebraic equations that belong to each modelling object, given a particular use of the model (simulation, design, etc.). It would be much easier if the equations could simply be described in terms of their physical relevance, and a computer program automatically determined the desired causality of each equation and solved each equation for the desired variable, for example by means of symbolic manipulation or implicit solving.
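The symbolic route can be sketched with sympy (an assumption on tooling on my part, not something the post prescribes): the acausal ideal gas relation is stated once, and the causality is assigned only at solve time, per application:

```python
import sympy as sp

p, V, n, R, T = sp.symbols("p V n R T", positive=True)
ideal_gas = sp.Eq(p * V, n * R * T)   # the acausal physical relation

# Causality is chosen only when solving, depending on the application:
V_expr = sp.solve(ideal_gas, V)[0]    # volume from pressure, amount, temperature
p_expr = sp.solve(ideal_gas, p)[0]    # pressure from volume, amount, temperature

print(V_expr)  # n*R*T/p (up to term ordering)
print(p_expr)  # n*R*T/V (up to term ordering)
```

The physical relation is entered once; which variable it is "for" is a downstream, tool-level decision, which is precisely the separation argued for above.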
Whether the entered equations are in the correct causal form or not, they always have to adhere to some conditions:
  • For any set of equations to be solvable, there must be exactly as many unknowns as equations.
  • It must be possible to rearrange the equations such that the system of equations can be solved for all unknowns.
The first condition, called the Regularity Assumption, is obviously a necessary condition. It can be checked immediately, and all numerical DAE solvers perform this preliminary check.
In order to solve a set of equations efficiently, the equations must be rearranged in Block Lower Triangular (BLT) form with minimal blocks, which can be solved in a nearly explicit forward sequence.
Several efficient algorithms exist to transform an incidence matrix to block lower triangular form. Many references state that it is, in general, not possible to transform the incidence matrix to a strictly lower triangular form, but that there are most likely to be square blocks of dimension > 1 on the diagonal of the incidence matrix. These blocks correspond to equations that have to be solved simultaneously.
[Figure: incidence matrix of a set of equations transformed to Block Lower Triangular form]
In the above figure the incidence matrix of a set of equations (e), which are transformed to BLT form, is shown. White areas indicate that the variables (v) do not appear in the corresponding equation, grey areas that they may or may not appear, and black areas represent the variables still unknown in the block and which can be computed from the corresponding equations. So, a block of this matrix indicates which set of variables can be computed if the previous ones are known.
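A small illustration (my own example, not from the post) of what such a permutation looks like: three equations in three unknowns, permuted so that a 1×1 block is followed by a 2×2 block that must be solved simultaneously:

```python
import numpy as np

# Incidence matrix: rows = equations, columns = variables (x1, x2, x3).
#   e1:  x3 = 5
#   e2:  x1 + x2 - x3 = 0
#   e3:  x1 - x2 = 1
A = np.array([[0, 0, 1],
              [1, 1, 1],
              [1, 1, 0]])

# Permute columns to (x3, x1, x2); the equation order (e1, e2, e3) stays.
blt = A[:, [2, 0, 1]]
print(blt)
# Row 0 is a 1x1 block: e1 solves x3 alone.
# Rows 1-2 form a 2x2 block: e2 and e3 solve x1, x2 simultaneously.

# Everything above the diagonal blocks is zero, so the blocks can be
# solved in a forward sequence:
assert blt[0, 1] == 0 and blt[0, 2] == 0
```

Real solvers find this permutation automatically (via a matching between equations and variables followed by a strongly-connected-components analysis); the point here is only what the resulting structure means.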
Although it is good to know about computational causality, a model designer does, in general, not have to worry about BLT forms, because most equation based solvers (Mobatec Modeller included) handle this automatically.
The BLT format is used by these solvers to define which variable should be solved from which equation. Only in the case that two (or more) variables have to be solved from two (or more) equations could user input, in some cases, be required, since a “computer guess” could lead to a (numerically) non-optimal choice.
Did you ever have to deal with the computational order of your equations? Does your solver do the sorting automatically, but sometimes make a “bad” choice (maybe without you being aware of it)?
I invite you to post your comments, insights and/or suggestions in the comment box below.
To your success!
Mathieu.
———————————————–

Substitution – Yes or No?

Mathieu Westerweele
Posted on:
29 Sep 2013
It is often seen that model designers insist on eliminating the extensive variables from their model equations. The main reason that is brought up for this preference to write a model that does not involve the extensive variables is that often only the evolution of the application variables (e.g. intensive or geometrical variables) is of interest.
Also, the transfer laws and kinetic laws are usually given in terms of intensive state variables. Therefore most model designers think they must transform the accumulation terms of the (differential) balance equations and perform a so-called state variable transformation.
In most textbooks that cover modelling, these transformations are performed as well, often without even mentioning why. But are these, often cumbersome, state variable transformations necessary to solve the considered problems?
Above you see a simple example of a transformation that is often used when someone is interested in the dynamic response of the level of a tank.
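The transformation referred to above typically looks like this (my sketch, assuming a liquid tank with constant density $\rho$, cross-sectional area $A$, level $h$ and mass holdup $M$):

```latex
\frac{dM}{dt} = F_{in} - F_{out},
\qquad M = \rho A h
\quad\Longrightarrow\quad
\rho A \,\frac{dh}{dt} = F_{in} - F_{out}
```

The constant-density and constant-area assumptions are exactly the kind of hidden assumptions the transformation bakes into the model.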
An even more common transformation (which most model designers often do not even realise they are making!) is the transformation of an Energy Balance (via the Enthalpy Balance) into an equation that represents the dynamic response of the Temperature of a system.
So, usually a modified version of this basic energy balance is employed for modelling a process component. This modified version is derived from the basic energy balance through several simplifications and assumptions.
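One common derivation chain can be sketched as follows (my reconstruction, assuming a single well-mixed liquid phase at constant pressure, negligible kinetic and potential energy, constant heat capacity $c_p$, mass flows $F_{in}$, $F_{out}$ and heat input $Q$):

```latex
\frac{dH}{dt} = F_{in}\,c_p\,(T_{in}-T_{ref}) - F_{out}\,c_p\,(T-T_{ref}) + Q,
\qquad H = M\,c_p\,(T-T_{ref})
```

Combining this with the total mass balance $dM/dt = F_{in} - F_{out}$ and eliminating $dM/dt$ collapses the balance to

```latex
M\,c_p\,\frac{dT}{dt} = F_{in}\,c_p\,(T_{in}-T) + Q
```

Every step in this chain carries an assumption (constant pressure, constant $c_p$, well-mixedness), which is why the caution below matters.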
Caution should be taken, though, when you use derived energy models, because they are often incorrect or used incorrectly. You could easily introduce faults when adapting or further simplifying a derived model because of a lack of knowledge about previous assumptions and derivation steps. Knowledge about the common assumptions and the derivation steps is thus essential for the correct use of the different simplified energy models.
In most cases, the transformations are not necessary. There are several reasons to consider the differential algebraic equations (DAEs) directly, rather than to try to rewrite them as a set of ordinary differential equations (ODEs):
First, when modelling physical processes, the model takes the form of a DAE, depicting a collection of relationships between variables of interest and some of their derivatives. These relationships may be generated by a modelling program (such as Mobatec Modeller). In that case, or in the case of highly nonlinear models, it may be time consuming or even impossible to obtain an explicit model.
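Solving a DAE in its natural residual form can be sketched as follows (a toy example of my own: a tank with level-dependent outflow, stepped with implicit Euler and a generic root finder rather than rewritten as an explicit ODE; all values are illustrative):

```python
import numpy as np
from scipy.optimize import fsolve

# Toy DAE, kept in residual form rather than substituted into one ODE:
#   dM/dt  = F_in - F_out        (differential: mass balance)
#   M      = rho * A * h         (algebraic: geometry)
#   F_out  = k * sqrt(h)         (algebraic: outflow law)
rho, A, k, F_in = 1.0, 1.0, 1.0, 2.0
dt, steps = 0.1, 400

def residuals(z, M_old):
    M, h, F_out = z
    return [(M - M_old) / dt - (F_in - F_out),  # implicit Euler on dM/dt
            M - rho * A * h,                    # geometry, left as-is
            F_out - k * np.sqrt(h)]             # outflow law, left as-is

z = np.array([1.0, 1.0, 1.0])   # initial guess for M, h, F_out
for _ in range(steps):
    z = fsolve(residuals, z, args=(z[0],))
print(z)  # approaches the steady state h = (F_in/k)**2, F_out = F_in
```

No variable was eliminated by hand: each physical relation appears exactly as written, and the solver handles the coupling, which is the working mode argued for in this post.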
Computational causality is, quite obviously, not a physical phenomenon, but a numerical artefact. Take, for example, the ideal gas law:

pV = nRT


This is a static relation, which holds for any ideal gas. This equation does not describe a cause-and-effect relation. The law is completely impartial with respect to the question whether at constant temperature and constant molar mass a rise in pressure causes the volume of the gas to decrease or whether a decrease in volume causes the pressure to rise. For a solving program, however, it does matter whether the volume or the pressure is calculated from this equation.
It is rather inconvenient that a model designer must determine the correct computational causality of all the algebraic equations that belong to each modelling object, given a particular use of the model (simulation, design, etc.). It would be much easier if the equations could simply be described in terms of their physical relevance, and a computer program automatically determined the desired causality of each equation and solved each equation for the desired variable (either numerically or by means of symbolic manipulation).
Also, reformulation of the model equations tends to reduce the expressiveness.
Furthermore, if the original DAE can be solved directly it becomes easier to interface modelling software directly with design software.
Finally, reformulation slows down the development of complex process models, since it must be repeated each time the model is altered, and therefore it is easier to solve the DAE directly.
These advantages enable researchers to focus their attention on the physical problem of interest. There are also numerical reasons for considering DAEs. The change to explicit form, even if possible, can destroy sparsity and prevent the exploitation of system structure.
Small advantages of transforming the model to ODE form can be that for (very) small systems an analytical solution is available, and that sometimes less information on physical properties is needed when substitutions are made (sometimes, some of the parameters can be removed from the system equations by the substitutions).
Another advantage could be that, by doing substitutions, some primary state variables are removed from the model description, which could make the code faster, because fewer variables have to be solved.
In general, though, these advantages do not outweigh the disadvantages.
If one does want to perform substitutions, it is recommended that these are done at the very end of the model development and not, as is generally seen, as soon as possible. Postponing the substitutions as long as possible gives much better insight into the model structure during model development.
I am very interested in your view on this subject:
Are you using substitutions in your dynamic models? Were you aware of the assumptions that are made by applying substitutions?
I invite you to post your comments, insights and/or suggestions in the comment box below.
To your success!
Mathieu.