## A closer look at process model equations

Mathieu Westerweele | Posted on: 28 Jun 2013
“Process modelling is one of the key activities in process systems engineering… In most books on this subject there is a lack of a consistent modelling approach applicable to process systems engineering as well as a recognition that modelling is not just about producing a set of equations. There is far more to process modelling than writing equations”. These are a few lines from the introduction of the book “Process Modelling and Model Analysis” by Katalin Hangos and Ian Cameron. It is one of the best books around on the subject and it gives a comprehensive treatment of process modelling useful to students, researchers and industrial practitioners.
I couldn’t agree more with the statement that modelling is far more than just writing equations; the modelling activity should not be considered separately but as an integrated part of a problem solving activity. But, as promised in the previous blog, I would like to spend some lines on setting up equations for your process model.
Before we start, it is good to repeat that modelling a chemical process requires the use of all the basic principles of chemical engineering science, such as thermodynamics, kinetics, transport phenomena, etc. It should therefore be approached with care and thoughtfulness.
A (mathematical) model of a process is usually a system of mathematical equations, whose solutions reflect certain quantitative aspects (dynamic or static behaviour) of the process to be modelled. The development of such a mathematical process model is initiated by mapping a process into a mathematical object. The main objective of a mathematical model is to describe some behavioural aspects of the process under investigation.
There are many ways to generate these equations and there are many different ways to describe the same process, which will usually result in different models. The approach a modeller takes when constructing a model for a process depends on:
• The application for which the model is to be used. Different models are used for different purposes. For example, a model used for the control of a process will differ from a model used for the design or analysis of that same process;

• The degree of accuracy that has to be achieved. This partially depends on the application of the model and on the time-scale on which the process has to be modelled. In general, a model that needs to describe a process on a small time-scale demands more detail and accuracy than a model of the same process that describes it over a larger time-scale;

• The view and knowledge of the modeller on the process. Different people have different backgrounds and different knowledge and will therefore often approach the same problem in different ways, which can eventually lead to different models of the same process.

The construction of the physical topology and the species topology of a process is rather straightforward. When introducing the equations into the model, however, we are faced with some non-trivialities that deserve a closer look.
Having completed the first two stages of the modelling process, it is quite trivial to construct the dynamic part of the process model, namely the (component) mass and energy balances for all the systems, using the conservation principles. The resulting (differential) equations consist of flow rates and production rates, which should not be further specified at this point.
In order to fully describe the behaviour of the process, all the necessary remaining information (i.e. the mechanistic details) has to be added to the symbolic model of the process. So, in addition to the balance equations, other relationships (i.e. algebraic equations) are needed to express transport rates for mass, heat and momentum, reaction rates, thermodynamic equilibrium, and so on. The resulting set of differential and algebraic equations (DAEs) is called the equation topology.
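To make the split between balance equations and constitutive relations concrete, here is a minimal sketch in Python. It is my own invented example, not one from the book or from any particular tool: a single tank holding mass M, with the differential balance dM/dt = F_in - F_out closed by an assumed linear outflow law F_out = k·M.

```python
from scipy.integrate import solve_ivp

# Invented single-tank example, purely for illustration.
# Balance equation (differential):   dM/dt = F_in - F_out
# Constitutive relation (algebraic): F_out = k * M   (an assumed law)
def outflow(M, k=0.2):
    return k * M            # algebraic closure for the outflow rate

def mass_balance(t, y, F_in=1.0):
    (M,) = y
    return [F_in - outflow(M)]

sol = solve_ivp(mass_balance, (0.0, 50.0), [0.0], rtol=1e-8)
M_end = sol.y[0][-1]        # tends to the steady state F_in / k = 5.0
```

Here the algebraic relation is simply substituted into the balance before integration; a genuine DAE solver would keep the two sets of equations separate, which is exactly the distinction the equation topology records.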
From a certain point of view the modelling process can thus be regarded as a succession of equation-picking and equation-manipulation operations. The modeller has, virtually at least, a knowledge base containing parameterized equations that may be chosen at certain stages in the modelling process, appropriately actualized and included in the model. The knowledge base is, in most cases, simply the physical knowledge of the modeller, or might be a reflection of some of his beliefs about the behaviour of the physical process.
The equation topology forms a very important part of the modelling process, for with the information of this topology the complete model of the process is generated. The objective of the equation topology is the generation of a mathematically consistent representation of the process under the view of the model designer (who mainly judges the relative dynamics of the various parts and thus intrinsically fixes the dynamic window to which the model applies).

I realise that I’m just scratching the surface here, but a thorough discussion would be quite lengthy and probably of interest to only a small audience. In the next blog I will, however, go into a bit more detail and try to convey that once you understand the picture just above this text, you actually understand how any dynamic process model should be set up in order to be structurally solvable.
For now it will remain a bit abstract, but it gives you something to think about in the coming weeks :).
If you think you know what the picture represents, let everybody know by placing a comment in the comment box below. Any other comments, suggestions or questions regarding the topic of this blog would, of course, also be greatly appreciated.
———————————————–

## Assistance in Setting Up Models

Mathieu Westerweele | Posted on: 28 May 2013
More often than not, the time spent on collecting the information necessary to properly define an adequate model of the (part of the) process you are interested in is much greater than the time spent by a simulator program in finding a solution. Most publications and textbooks present the model equations without a description of how the model equations have been developed. Hence, to learn dynamic model development, novice modellers must study examples in textbooks, the work of more skilled modellers, and/or use trial and error.
Over the last decades there has been an increasing demand for models of higher complexity, which makes model construction even more time-consuming and error-prone. Moreover, there are many different ways to model a process (mostly depending on the application for which the model is to be used): different time scales, different levels of detail, different assumptions, different interpretations of (different parts of) the process, etc. Thus a vast number of different models can be generated for the same process.
All this calls for a systematisation of the modelling process, comprising an appropriate, well-structured modelling methodology for the efficient development of adequate, sound and consistent process models. A modelling tool built on such a systematic approach supports teamwork and re-use of models, provides complete and consistent documentation and, not least, improves process understanding and provides a foundation for education in process technology.
As promised in our last blog post, this month’s blog presents some of the concepts of Mobatec Modeller, a computer-aided modelling tool built on a structured modelling methodology, which aims to effectively assist in the development of process models and guides a modeller through the different steps of this methodology. The objective of this tool is to provide a systematic model design method that meets all the mentioned requirements and turns the art of modelling into the science of model design.
Modelling is an acquired skill, and the average user finds it difficult. A modeller may inadvertently introduce errors during the mathematical formulation of a physical phenomenon. Formulation errors, algebraic manipulation errors, and writing and typographical errors are very common when a model is being implemented in a computing environment. Thus any procedure that performs some of the needed modelling operations automatically would eliminate a lot of simple, low-level (and hard to detect) errors.
Mobatec Modeller is a computer-aided modelling tool designed to assist a model designer in mapping a process into a mathematical model, using a systematic modelling methodology. The main task solved by Mobatec Modeller is the construction and manipulation of the structure and definition of process models. The output of Mobatec Modeller is a first-principles based (i.e. based on physical insight) mathematical model, which is easily transformed to serve as input to existing modelling languages and/or simulation packages, such as our Mobatec Solver, but also Process Studio’s e-Modeler (Protomation), gProms (Process Systems Enterprise), Aspen Custom Modeler (Aspentech), Modelica (Dynasim AB), Matlab (Mathworks), or any other Differential Algebraic Equation (DAE) solver. For certain solvers (Mobatec Solver and e-Modeler) a simulation environment is available, such that the built dynamic process models can be executed, tuned, tested, optimised, etc.
One of the handy features of Mobatec Modeller that will help a model designer a lot when setting up his process model is the automatic component distribution.

## Automatic component distribution

The distribution of all involved species (i.e. chemical and/or biological components) as well as all reactions in the various parts of the process must be defined in most process models. This represents the Species Topology, which is superimposed on the physical topology and defines which species and what reactions are present in each part of the physical topology.
The definition of the species topology of a process is initialized by assigning sets of species (and/or reactions) to some systems. Species as well as reactions should be selected from corresponding databases. So, before the species topology can be defined, a species database and a compatible reactions database must be available. Such a database contains a list of species and a list of possible reactions between those species. A species and reactions database should, of course, be editable, in order to satisfy the specific needs of the user.
After the assignment of the injected species and injected reactions to a specific system, the modelling tool will (re)calculate (parts of) the species distribution. This means that the species will propagate into other systems through mass connections. Within the systems, the species may undergo reactions and generate “new” species, which in turn may propagate further and initiate further reactions. This eventually results in a specific species distribution over the elementary systems, which is referred to as the species topology of the processing plant.
To enhance the definition of the species topology, permeability and directionality are introduced as properties of mass connections. They constrain the mass exchange between systems by making the species transfer respectively selective or uni-directional.
The injection of a reaction into a system does not automatically imply that this reaction can “happen” in that system, and thus that the products of this reaction can be formed. If not all reactants of a reaction are available in a system, the reaction cannot take place there: the system will have an injected reaction, but the reaction will not be “active”. When the species distribution changes such that the reaction can take place again, it will automatically be “activated”.
It should be noted that the presence of an “activated” reaction in a system does not imply that this reaction has to happen in this system. It implies that this reaction may happen in this system, depending on the operating conditions in the system and the driving force for this reaction.
Whenever an operation is executed which modifies the current species distribution, a mechanism is activated which updates the species distribution over all elementary systems and connections of the affected mass domains.
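As a rough illustration of how such an update mechanism could work, here is a minimal sketch in Python. It is my own simplification, not Mobatec Modeller's actual algorithm; the system names, connections and the reaction A + B → C are invented. Species injected into one system spread over directed mass connections, and an injected reaction becomes active only once all of its reactants are present in its system:

```python
from collections import deque

# Hypothetical topology: directed mass connections between systems,
# and reactions injected per system as (reactants, products) pairs.
connections = [("feed", "reactor"), ("reactor", "separator")]
injected_reactions = {"reactor": [({"A", "B"}, {"C"})]}

def propagate_species(injected_species, connections, injected_reactions):
    """Update the species distribution until a fixed point is reached."""
    species = {s: set(v) for s, v in injected_species.items()}
    queue = deque(species)
    while queue:
        current = queue.popleft()
        here = species.setdefault(current, set())
        # A reaction 'activates' only if every reactant is present.
        for reactants, products in injected_reactions.get(current, []):
            if reactants <= here and not products <= here:
                here |= products
                queue.append(current)
        # Species travel downstream over directed mass connections.
        for src, dst in connections:
            if src == current:
                new = here - species.setdefault(dst, set())
                if new:
                    species[dst] |= new
                    queue.append(dst)
    return species

topology = propagate_species({"feed": {"A", "B"}}, connections, injected_reactions)
```

In this toy run, A and B flow from the feed into the reactor, where the injected reaction activates and produces C, which then propagates onward to the separator. Permeability or directionality constraints could be modelled by filtering which species a given connection is allowed to pass.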
After the definition of the Species Topology, Mobatec Modeller can automatically generate the dynamic part of the process model, namely the (component) mass and energy balances for all the elementary systems, using the conservation principles. The resulting (differential) equations consist of flow rates and production rates, which are not further specified at this point. In order to fully describe the behaviour of the process, all the necessary remaining information (i.e. the mechanistic details) has to be added to the symbolic model of the process. So, in addition to the balance equations, other relationships (i.e. algebraic equations) are needed to express transport rates for mass, heat and momentum, reaction rates, thermodynamic equilibrium, and so on. The resulting set of differential and algebraic equations (DAEs) is called the equation topology.
In the next blog we will explore how Mobatec Modeller helps you in setting up correct (algebraic) equations for each part of your model.
This blog and the next blog focus a bit on how our tools can help model designers in setting up their models. Normally our blog treats more generic topics, related to modelling, but several people asked me to devote one or more blogs to the differences between our solutions and other available software ;).
Please let us know if you found this information valuable or specify a topic you would like to see discussed on one of the next blogs.
Mathieu.
———————————————–

## So, What’s the Difference?

Mathieu Westerweele | Posted on: 28 Apr 2013
Last week Mobatec was presenting Mobatec Modeller at the ECCE 2013 (9th European Congress of Chemical Engineering) in The Hague (in The Netherlands). It was a great experience to be at this congress for several days and we met a lot of interesting people.
Obviously, the goal of being there was to promote our modelling methodology and tool, since it is Mobatec’s goal to bring this easy-to-grasp modelling methodology to the world and to teach engineers that modelling can actually be quite easy and very valuable.
I noticed that people (before speaking to us) really did not realise there even could be an alternative approach to modelling and they all were genuinely interested in learning what this new modelling approach could offer. Therefore, I decided to devote a blog to this topic.
As I mentioned in a previous blog post on maintainability of process models, there are typically two ways of constructing process models nowadays:
The first and most used approach, the “Unit oriented” or “flow sheeting” approach, I would not call modelling myself, since you are only connecting (sub)models that were completely defined by other people. You are merely “simulating” in this case, hoping that the persons who developed the models did such a good job that you get the results you require for your specific setup, with your required accuracy and your specific goal. In the rare case that you can actually see all the equations used to get to the simulation results (and not just the few listed in some, mostly incomplete, documentation file), the listing is usually very long and very difficult to follow. Don’t get me wrong here, by the way: these tools can be very powerful and useful. I am just referring to the lack of flexibility of these programs when you want to adapt (or even just understand) the actual equations of the used models.
In the other approach, the “Equation oriented” approach, the user has the “freedom” to program everything himself. This is very flexible, but typically also a very tedious job. Much (programming) experience and a lot of patience are required to get a model up and running. The listings usually get very long (and hard to maintain), which paradoxically makes the end result quite inflexible.
At the congress I showed people how both approaches can be visualised with Mobatec Modeller (see picture below).
On the left side you see a “flowsheet” with columns, pumps, indicators, transmitters, etc. And on the right you see a small part of the (very long) listing of the equations that constitute the entire model.
In the next screenshot you can see how you can “look inside” any part of a model with Mobatec Modeller and “zoom in” to the details of, for example, a column. This specific column was defined as a column with a fixed number of three stages. Zooming in further to the bottom stage, you notice that it consists of a liquid phase system, a vapour phase system and a “metal” system. Also, several mass (green) and heat (red) streams to and from these systems are defined.
A little bit of theory is needed at this point: The modelling methodology behind Mobatec Modeller assumes that any process can be broken down into systems and connections. Systems represent a capacity, able to store mass and energy. Connections represent the transfer of mass and energy between the defined systems.
Simply “drawing” all the systems and their interconnecting connections is enough for Mobatec Modeller to automatically setup the mass and energy balances. This is a very big advantage for the user, since he cannot make any mistakes in this important part of any model definition. It’s very easy to add, reconnect, remove or copy connections (or any larger part of the model). This only affects the automatically generated part of the model.
Another big advantage of this approach is that the debugging of the equations of any (part of a) model needs to be done only at “object level”, that is, at the level of each system and connection. Typically, no more than about 10 equations are associated with one system or connection. Moreover, the tool will help you with the correct equation setup and sorting, and will flag any object that has not yet been properly set up, so that you are assured of a structurally solvable model.
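The idea that drawing systems and connections is enough to fix the balance equations can be sketched in a few lines of Python. This is an invented toy, not Mobatec Modeller's internals; the system and stream names are hypothetical. Given only the systems and the directed connections between them, the component mass balances follow mechanically from the conservation principle:

```python
# Hypothetical topology: systems store mass, connections transfer it.
# Each connection is (stream name, source system, destination system);
# None stands for the environment.
systems = ["liquid", "vapour"]
connections = [
    ("F_feed", None, "liquid"),      # inflow from the environment
    ("F_evap", "liquid", "vapour"),  # liquid -> vapour transfer
    ("F_top", "vapour", None),       # outflow to the environment
]

def mass_balances(systems, connections):
    """Generate d(M_sys)/dt = sum(inflows) - sum(outflows) per system."""
    balances = {}
    for s in systems:
        inflows = [name for name, src, dst in connections if dst == s]
        outflows = [name for name, src, dst in connections if src == s]
        rhs = " + ".join(inflows) or "0"
        if outflows:
            rhs += " - " + " - ".join(outflows)
        balances[s] = f"d(M_{s})/dt = {rhs}"
    return balances

eqs = mass_balances(systems, connections)
# eqs["liquid"] -> "d(M_liquid)/dt = F_feed - F_evap"
```

Adding, reconnecting or removing a connection only changes the input lists; the balances are regenerated, so this part of the model can never get out of sync with the drawing.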
For now I have presented only a few differences between Mobatec Modeller and other tools. In the next blog post(s) I will list several more distinguishing features of Mobatec Modeller and elaborate on each one, such that you can understand the benefit of each of them.
Mobatec strives to make process modelling easier and more accessible to a larger group of people. If you agree or disagree with our mission and/or methodology, please post your comments, insights and/or suggestions in the comment box below, such that we can all learn something from it.
Mathieu.
———————————————–

## Quantitative HAZOP: Combining Traditional HAZOP with Dynamic Simulation

Mathieu Westerweele | Posted on: 28 Mar 2013
“A hazard and operability study (HAZOP) is a structured and systematic examination of a planned or existing process or operation in order to identify and evaluate problems that may represent risks to personnel or equipment, or prevent efficient operation. The HAZOP technique was initially developed to analyze chemical process systems, but has later been extended to other types of systems and also to complex operations such as nuclear power plant operation and to use software to record the deviation and consequence. A HAZOP is a qualitative technique based on guide-words and is carried out by a multi-disciplinary team (HAZOP team) during a set of meetings.” (source: Wikipedia)
As you can read from the definition on Wikipedia, the HAZOP study is traditionally a qualitative study. To give it a more quantitative character, several papers have been written in the last decade that suggest to use steady-state analysis and dynamic simulation to complement the HAZOP study. By definition, a (dynamic) simulation is the imitation of the operation of a real-world process or system over time, which means that in principle it should be the most realistic way of representing an actual process. Combining HAZOP with dynamic simulation could provide the means for investigating (and demonstrating) the consequences of deviations from normal operating conditions. Above all, dynamic simulation could enable a HAZOP team to quickly investigate and test the effectiveness of various suggested strategies dealing with emergency situations.
In the article “Combining HAZOP with dynamic simulation—Applications for safety education” the authors claim to have developed a “Quantitative HAZOP” approach which is more adequate for educational application than the qualitative HAZOP procedure. They elaborate extensively on a relatively simple example (by including all equations and variable and parameter values that describe the model) to demonstrate the proposed procedure.
I agree with the authors that using such an integrated approach can help quite a bit in the process safety education, especially when a good graphical user interface (GUI) is provided. It looks, though, that a considerable effort is needed to construct (and test) both the model and the user interface, before it is useful as a helpful tool to aid in HAZOP studies.
The question that arises is: “Can such an integrated approach be extended and also be helpful for ‘real’ industrial HAZOP studies?”. In these cases typically larger parts of large processes are being evaluated and setting up a model in the way the authors did for the educational example could take several weeks to months (maybe even more). That would, in most cases, not be acceptable.
If setting up a dynamic process model with reasonable accuracy could be done in the order of magnitude of a few days or weeks (depending on the size of the process), this would certainly change the attitude towards using dynamic simulations to aid with HAZOP studies.
As a challenge, we decided to build a simulation model (including a user interface), based on the semi-batch reactor (oxidation of 2-octanol) of the quoted article, within at most one day of work. Somewhat against our own modelling methodology, we chose to use the equations as presented in the article, just to see whether it is possible to rebuild such a model in a short period. We did have to change some equations quite a bit, though, since with our methodology mass and energy balances are generated automatically and constitutive relations are normally never time dependent, but only state dependent. The result is, of course, not perfect (and could easily be improved), but it does give an impression of what is possible nowadays. (Side note: most of our time was spent trying to interpret the original model equations, since they gave rise to a lot of questions and I had quite some doubts about the validity of several of them.)
By having a simulation of the process as a support tool, a deeper, easier and more complete study could be carried out. That would provide a systematic screening of process deviation associated with possible hazardous events, determining the threshold values that may lead to such events and enabling the examination of a particular design for the adequate safe range of operation.
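To make such a screening concrete, here is a hedged sketch in Python. The reactor below is a deliberately simplified, invented single-reaction energy balance, not the 2-octanol model from the article, and every parameter value is made up; it only illustrates how sweeping a deviation (here: reduced cooling capacity UA) exposes a threshold below which the simulated temperature runs away:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented batch reactor: NOT the model from the quoted article. The
# Arrhenius constants, heat effect and coolant temperature are made up
# purely to illustrate deviation screening.
def reactor(t, y, UA):
    T, X = y                              # temperature [K], conversion [-]
    k = 1e6 * np.exp(-6000.0 / T)         # assumed Arrhenius rate constant
    dX = k * (1.0 - X)                    # component mass balance
    dT = 200.0 * dX - UA * (T - 300.0)    # energy balance: reaction heat vs cooling
    return [dT, dX]

def peak_temperature(UA):
    sol = solve_ivp(reactor, (0.0, 50.0), [350.0, 0.0],
                    args=(UA,), rtol=1e-6, max_step=0.1)
    return sol.y[0].max()

# Screen the deviation "loss of cooling": sweep the heat-transfer
# capacity UA downward and record the peak reactor temperature.
peaks = {UA: peak_temperature(UA) for UA in (2.0, 0.5, 0.1)}
for UA, Tmax in peaks.items():
    print(f"UA = {UA:4.1f}  ->  peak T = {Tmax:6.1f} K")
```

With ample cooling the temperature never exceeds its initial value; below some threshold value of UA the heat of reaction outruns the cooling and the simulation shows a runaway. Locating exactly this kind of threshold is what a quantitative HAZOP screening would aim for.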
As the article rightfully concludes, dynamic simulation should be seen as a tool that complements the traditional HAZOP procedure; it does not replace it. There are still many processes that cannot be modelled accurately enough due to a lack of quantitative information, particularly in emergency situations.
I am curious what your thoughts on this subject are. Under which conditions would a dynamic simulation tool be interesting for doing HAZOP studies?
Please post your ideas in the comment box below, such that we can go one step closer to enhancing the HAZOP methodology by adding a flexible dynamic simulation of (part of) the process!
Mathieu.
———————————————–

## Maintainability of Process Models

Mathieu Westerweele | Posted on: 27 Feb 2013
In our previous blog we discussed about the integration of modelling and simulation software in the curriculum of High Schools and Universities.
Another very interesting point covered in the paper of the professors from Rowan University is the fact that many (small) companies use self-made macros or programs to solve problems that are readily solved with commercial simulators, simply because they cannot afford the software. This does not mean, of course, that process simulation software is “a tool that graduating chemical engineers should not be familiar with.”
The problem with these self-made or programmed models is that they are normally very difficult to maintain, since they are programs written by individuals. Therefore, it is not uncommon that companies do not allow their engineers to write software. Actually, computer programming (in languages such as FORTRAN, C, or PASCAL) is no longer a vital skill for chemical engineers in industry.
The chemical engineering community thus may have a use for teaching tools and techniques that challenge students to think logically and develop algorithms without necessarily taking the time to learn a full programming language.
A closely related problem is that, even with commercial simulators there always seems to be an issue with maintainability of models (also in big companies). Especially when models become large and/or highly “custom made”.
Let me sketch a typical scenario (that I have seen several times). A company “hires” a graduate student to make a model for them, because they have no time, no money and/or not enough expertise to do it themselves. The student works on the model for months and does a lot of custom programming. After the student finishes his work, the model typically goes “in the closet” for a few months/years, because nobody has time to do something with it. After a long period, the model could actually be useful, but no one at the company knows how to work with it and no one seems to even get it running. So, they wisely decide to hire another student, who does not completely understand what the previous student has done and therefore decides to start from scratch and redo the entire job….
Not very efficient of course, but, unfortunately quite commonplace.
In my opinion the root of the problem is caused by the fact that there are basically two ways of making models nowadays:
• The “Unit oriented” or “flow sheeting” approach. Hardly any programming is required. The user just drags some predefined units on a flow sheet, connects them and configures some parameters. Quite convenient for most users, but very inflexible (sometimes nearly impossible) when deviations from standard equipment are needed. And, as discussed in our previous blog, users sometimes simply don’t know what they are actually doing.
• The “Equation oriented” approach. Nearly everything needs to be programmed out. This is very flexible, but typically also a very tedious job. Much (programming) experience and a lot of patience are required to get a model up and running.

I think a large part of the maintainability issues can be resolved by introducing a “new” approach: The “Equation and system based” approach, which typically is a combination of the best of the two previously mentioned approaches. Without going into details I would state that such a methodology offers a lot of flexibility, but also provides insight in the process that is being investigated. Although this methodology has a learning curve, I have noticed that students/engineers who have mastered it are able to solve problems quicker. Also the generated models tend to be a lot easier to transfer to others, without the need for extra documentation.
What tips do you have to improve maintainability of process models?
I invite you to post your experiences, insights and/or suggestions in the comment box below, such that we can all learn something from it.