Monday, September 7, 2015

PROCESS DYNAMICS AND CONTROL






Process dynamics


Process dynamics is an approach to understanding the behaviour of complex systems over time. It deals with internal feedback loops and time delays that affect the behaviour of the entire system.[1] What makes using process dynamics different from other approaches to studying complex systems is the use of feedback loops and stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity.

Process dynamics (PD) is a methodology and mathematical modeling technique for framing, understanding, and discussing complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, process dynamics is currently being used throughout the public and private sector for policy analysis and design.[2]

Process control


Process control is an engineering discipline that deals with architectures, mechanisms and algorithms for maintaining the output of a specific process within a desired range. For instance, the temperature of a chemical reactor may be controlled to maintain a consistent product output.

Process control is extensively used in industry and enables mass production of consistent products from continuously operated processes such as oil refining, paper manufacturing, chemicals, power plants and many others. Process control enables automation, by which a small staff of operating personnel can operate a complex process from a central control room.

Types of processes using process control


In practice, processes can be characterized as one or more of the following forms:

  • Discrete – Found in many manufacturing, motion and packaging applications. Robotic assembly, such as that found in automotive production, can be characterized as discrete process control. Most discrete manufacturing involves the production of discrete pieces of product, such as metal stamping.

  • Batch – Some applications require that specific quantities of raw materials be combined in specific ways for particular durations to produce an intermediate or end result.
  • Continuous – Often, a physical system is represented through variables that are smooth and uninterrupted in time.

 

System



A system is a set of interacting or interdependent components forming an integrated whole.

Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its environment, described by its structure and purpose and expressed in its functioning.

Fields that study the general properties of systems include systems science, systems theory, systems modeling, systems engineering, cybernetics, dynamical systems, thermodynamics, complex systems, system analysis and design and systems architecture. They investigate the abstract properties of systems' matter and organization, looking for concepts and principles that are independent of domain, substance, type, or temporal scale.

Some systems share common characteristics, including:

  • A system has structure: it contains parts (or components) that are directly or indirectly related to each other;

  • A system has behavior: it exhibits processes that fulfill its function or purpose;
  • A system has interconnectivity: the parts and processes are connected by structural and/or behavioral relationships;
  • A system's structure and behavior may be decomposed via subsystems and sub-processes into elementary parts and process steps;
  • A system has behavior that, relative to its surroundings, may be classified as fast or slow and as strong or weak.

Types of systems


Systems are classified in different ways:

  1. Physical or abstract systems.
  2. Open or closed systems.
  3. 'Man-made' information systems.
  4. Formal information systems.
  5. Informal information systems.
  6. Computer-based information systems.
  7. Real-time system.

Physical systems are tangible entities that may be static or dynamic in operation.

An open system has many interfaces with its environment; that is, it interacts freely with its environment, taking in inputs and returning outputs across its boundary. A closed system does not interact with its environment; changes in the environment and adaptability are not issues for a closed system.

Angular displacement


Angular displacement of a body is the angle in radians (degrees, revolutions) through which a point or line has been rotated in a specified sense about a specified axis. When an object rotates about its axis, the motion cannot simply be analyzed as that of a particle, since in circular motion it undergoes a changing velocity and acceleration at any time t. When dealing with the rotation of an object, it becomes simpler to consider the body itself rigid. A body is generally considered rigid when the separations between all of its particles remain constant throughout the object's motion, so that, for example, parts of its mass are not flying off. In reality all bodies are deformable, but this effect is usually minimal and negligible. Thus the rotation of a rigid body about a fixed axis is referred to as rotational motion.

Angular acceleration


Angular acceleration is the rate of change of angular velocity. In SI units, it is measured in radians per second squared (rad/s²), and is usually denoted by the Greek letter alpha (α).

Degrees of Freedom


In control engineering, a degree of freedom analysis is necessary to determine the regulatable variables within the chemical process. These variables include descriptions of state such as pressure or temperature as well as compositions and flow rates of streams.

The control degrees of freedom are the number of process variables over which the operator or designer may exert control. Specifically, they include:

  1. The number of process variables that may be manipulated once design specifications are set
  2. The number of said manipulated variables used in control loops
  3. The number of single-input, single-output control loops
  4. The number of regulated variables contained in control loops

The following procedure identifies potential variables for manipulation.

The Process


The method we will discuss is the Kwauk method, developed by Kwauk and refined by Smith. The general equation follows:

Degrees of freedom = unknowns - equations

Unknowns are associated with mass or energy streams and include pressure, temperature, and composition. If a unit has Ni inlet streams, No outlet streams, and C components, then for design degrees of freedom, C+2 unknowns can be associated with each stream. This means that the designer would be manipulating the temperature, pressure, and stream composition.

This sums to an equation of

Total Unknowns = Ni*(C+2) + No*(C+2)

If the process involves an energy stream, there is one unknown associated with it, which is added to this value.

Equations may be of several different types, including mass or energy balances and equations of state such as the Ideal Gas Law.


  • After Degrees of Freedom are determined, the operator assigns controls. Carrying out a DOF analysis allows planning and understanding of the chemical process and is useful in systems design.
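The counting rule above can be sketched in a few lines of Python; the function names and the example unit are illustrative, not part of the Kwauk/Smith method itself:

```python
def total_unknowns(n_inlets, n_outlets, n_components, energy_streams=0):
    # Each material stream carries C + 2 unknowns (C compositions/flows
    # plus temperature and pressure); each energy stream adds one more.
    return (n_inlets + n_outlets) * (n_components + 2) + energy_streams

def degrees_of_freedom(unknowns, equations):
    # Kwauk method: degrees of freedom = unknowns - equations
    return unknowns - equations

# Hypothetical example: a unit with 2 inlets, 1 outlet, 3 components
u = total_unknowns(2, 1, 3)               # 3 streams x 5 unknowns = 15
dof = degrees_of_freedom(u, equations=4)  # e.g. 3 mass balances + 1 energy balance
```

The designer would then assign controls to consume these remaining degrees of freedom.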

Applications


Single phase systems

  • All outlet streams have the same composition, and can be assumed to have the same temperature and pressure

Multiple phase systems

  • An additional (C-1) composition variable exists for each phase

Complete Process

  • When connecting units which share streams, one degree of freedom is lost from the total of the individual units

Linear system


A linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the general, nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

Definition


A general deterministic system can be described by an operator, H, that maps an input, x(t), a function of t, to an output, y(t): a type of black box description. Linear systems satisfy the properties of superposition and scaling or homogeneity. Given two valid inputs

x_1(t)

x_2(t)

as well as their respective outputs

y_1(t) = H \left\{ x_1(t) \right\}

y_2(t) = H \left\{ x_2(t) \right\}

then a linear system must satisfy

\alpha y_1(t) + \beta y_2(t) = H \left\{ \alpha x_1(t) + \beta x_2(t) \right\}

for any scalar values \alpha and \beta.

The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, x(t) can be solved for. For example, a simple harmonic oscillator obeys the differential equation:

m \frac{d^2 x}{dt^2} = -kx

If H(x(t)) = m \frac{d^2 x(t)}{dt^2} + kx(t), then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.
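The superposition property of this H can be verified numerically. The sketch below discretizes H{x} = m x'' + k x with a central finite difference; the test signals, step size, and constants are arbitrary choices for illustration:

```python
import math

def H(x, dt, m=1.0, k=4.0):
    # Discretized H{x} = m * x'' + k * x, evaluated at interior points
    # using a central finite difference for the second derivative.
    return [m * (x[i+1] - 2*x[i] + x[i-1]) / dt**2 + k * x[i]
            for i in range(1, len(x) - 1)]

dt = 0.01
t = [i * dt for i in range(200)]
x1 = [math.sin(ti) for ti in t]
x2 = [ti**2 for ti in t]
a, b = 2.0, -3.0

combo = [a*u + b*v for u, v in zip(x1, x2)]
lhs = [a*p + b*q for p, q in zip(H(x1, dt), H(x2, dt))]  # a*H{x1} + b*H{x2}
rhs = H(combo, dt)                                       # H{a*x1 + b*x2}
max_err = max(abs(p - q) for p, q in zip(lhs, rhs))      # ~0 for a linear H
```

Since every operation in H is linear in x, the two sides agree to floating-point precision.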

 

Nonlinear system


In physics and other sciences, a nonlinear system, in contrast to a linear system, is a system which does not satisfy the superposition principle – meaning that the output of a nonlinear system is not directly proportional to the input.

In mathematics, a nonlinear system of equations is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in it (them). It does not matter if nonlinear known functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.

Definition


In mathematics, a linear function (or map) f(x) is one which satisfies both of the following properties:

  • Additivity: f(x + y) = f(x) + f(y);
  • Homogeneity: f(\alpha x) = \alpha f(x).

(Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an antilinear map is additive but not homogeneous.) The conditions of additivity and homogeneity are often combined in the superposition principle

f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)

An equation written as

f(x) = C

is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0.

The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.

Feedback control


There are many different control mechanisms that can be used, both in everyday life and in chemical engineering applications. Two broad control schemes are feedback control and feed-forward control. Feedback control is a control mechanism that uses information from measurements to manipulate a variable to achieve the desired result. Feed-forward control, also called anticipative control, is a control mechanism that predicts the effects of measured disturbances and takes corrective action to achieve the desired result. The focus of this article is to explain the application, advantages, and disadvantages of feedback control.

Feedback control is employed in a wide variety of situations in everyday life, from simple home thermostats that maintain a specified temperature, to complex devices that maintain the position of communication satellites. Feedback control also occurs in natural situations, such as the regulation of blood-sugar levels in the body.

Feedback systems process signals and as such are signal processors. The processing part of a feedback system may be electrical or electronic, ranging from very simple to highly complex circuits. Simple analogue feedback control circuits can be constructed using individual or discrete components, such as transistors, resistors and capacitors, or by using microprocessor-based and integrated circuits (ICs) to form more complex digital feedback systems.

Advantages

The advantages of feedback control lie in the fact that the feedback control obtains data at the process output. Because of this, the control takes into account unforeseen disturbances such as frictional and pressure losses. Feedback control architecture ensures the desired performance by altering the inputs immediately once deviations are observed regardless of what caused the disturbance. An additional advantage of feedback control is that by analyzing the output of a system, unstable processes may be stabilized. Feedback controls do not require detailed knowledge of the system and, in particular, do not require a mathematical model of the process. Feedback controls can be easily duplicated from one system to another. A feedback control system consists of five basic components: (1) input, (2) process being controlled, (3) output, (4) sensing elements, and (5) controller and actuating devices. A final advantage of feedback control stems from the ability to track the process output and, thus, track the system’s overall performance.

Closed Loop System

In a closed loop control system, the input variable is adjusted by the controller in order to minimize the error between the measured output variable and its set point. This control design is synonymous with feedback control, in which the deviations between the measured variable and a set point are fed back to the controller to generate appropriate control actions. The controller C takes the difference e between the reference r and the output to change the inputs u to the system, as shown in the figure below. The output of the system y is fed back through the sensor, and the measured output is compared to the reference value.

[Figure: closed-loop feedback control block diagram]

Open Loop System

On the other hand, any control system that does not use feedback information to adjust the process is classified as open loop control. In open loop control, the controller takes in one or several measured variables to generate control actions based on existing equations or models. Consider a CSTR reactor that needs to maintain a set reaction temperature by means of steam flow: a temperature sensor measures the product temperature, and this information is sent to a computer for processing. But instead of outputting a valve setting by using the error in temperature, the computer (controller) simply plugs the information into a predetermined equation to arrive at an output valve setting. In other words, the valve setting is simply a function of product temperature.
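A minimal numerical sketch of the closed-loop idea: a proportional controller (chosen here purely for illustration; the text does not prescribe a control law) driving a first-order process with Euler integration. All gains and time constants are arbitrary example values:

```python
def simulate_closed_loop(kp, setpoint=1.0, tau=1.0, k_process=1.0,
                         dt=0.01, steps=2000):
    # First-order process dy/dt = (k_process * u - y) / tau,
    # with proportional feedback u = kp * (setpoint - y).
    y = 0.0
    for _ in range(steps):
        error = setpoint - y        # measured output fed back to controller
        u = kp * error              # proportional control action
        y += dt * (k_process * u - y) / tau
    return y

y_final = simulate_closed_loop(kp=10.0)
# Pure proportional control settles with a steady-state offset:
# y_ss = kp*k/(1 + kp*k) = 10/11, slightly below the setpoint of 1.0
```

The offset illustrates why integral action is added in practice; the feedback structure itself needs no model of what caused the disturbance.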

Laplace transform


The Laplace transform is a widely used integral transform in mathematics with many applications in physics and engineering. It is a linear operator of a function f(t) with a real argument t (t ≥ 0) that transforms f(t) to a function F(s) with complex argument s, given by the integral

F(s) = \int_0^\infty f(t) e^{-st}\,dt.

The Laplace transform is related to the Fourier transform, but whereas the Fourier transform expresses a function or signal as a superposition of sinusoids, the Laplace transform expresses a function, more generally, as a superposition of moments. Like the Fourier transform, the Laplace transform is used for solving differential and integral equations. In physics and engineering it is used for analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. In such analyses, the Laplace transform is often interpreted as a transformation from the time-domain, in which inputs and outputs are functions of time, to the frequency-domain, where the same inputs and outputs are functions of complex angular frequency, in radians per unit time.
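The defining integral can be checked numerically. A sketch using simple trapezoidal quadrature; the truncation time T and step count are arbitrary choices that assume the integrand has decayed by t = T:

```python
import math

def laplace_numeric(f, s, T=30.0, n=100000):
    # Trapezoidal approximation of F(s) = integral_0^inf f(t) e^{-st} dt,
    # truncated at t = T (valid when the integrand has decayed by then).
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

# L{e^{-t}}(s) = 1/(s + 1); evaluated at s = 2 this is 1/3
F = laplace_numeric(lambda t: math.exp(-t), s=2.0)
```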

Inverse Laplace transform


In mathematics, the inverse Laplace transform of a function F(s) is the function f(t) which has the property \mathcal{L}\left\{ f \right\}(s) = F(s), or alternatively \mathcal{L}_t\left\{ f(t) \right\}(s) = F(s), where \mathcal{L} denotes the Laplace transform.

It can be proven that if a function F(s) has the inverse Laplace transform f(t), i.e. f is a piecewise-continuous and exponentially-restricted real function satisfying the condition

\mathcal{L}_t\{f(t)\}(s) = F(s),\ \forall s \in \mathbb R

then f(t) is uniquely determined (considering functions which differ from each other only on a point set having Lebesgue measure zero as the same).

The Laplace transform and the inverse Laplace transform together have a number of properties that make them useful for analysing linear dynamic systems.

Transfer function


In engineering, a transfer function (also known as the system function[1] or network function and, when plotted as a graph, transfer curve) is a mathematical representation used to fit or describe the inputs and outputs of a black box model.

Technically it is a representation, in terms of spatial or temporal frequency, of the relation between the input and output of a linear time-invariant system with zero initial conditions and zero-point equilibrium.[2] With optical imaging devices, for example, it is the Fourier transform of the point spread function (hence a function of spatial frequency), i.e. the intensity distribution caused by a point object in the field of view.

Proper transfer function


In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator.

A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator.
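The degree comparison is easy to automate. A quick sketch; the function name and the highest-power-first coefficient convention are illustrative choices:

```python
def classify_transfer_function(num, den):
    # Coefficients are listed highest power first, e.g. [1, 3, 2]
    # represents s^2 + 3s + 2.
    def degree(coeffs):
        while len(coeffs) > 1 and coeffs[0] == 0:
            coeffs = coeffs[1:]     # ignore leading zero coefficients
        return len(coeffs) - 1
    n, d = degree(num), degree(den)
    if n < d:
        return "strictly proper"    # strictly proper is also proper
    if n == d:
        return "proper"
    return "improper"

# G(s) = (s + 2)/(s^2 + 3s + 2) has numerator degree 1 < denominator degree 2
```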

Signal transfer function


The signal transfer function (SiTF) is a measure of the signal output versus the signal input of a system such as an infrared system or sensor. There are many general applications of the SiTF. Specifically, in the field of image analysis, it gives a measure of the noise of an imaging system, and thus yields one assessment of its performance.

Vandermonde matrix


In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row, i.e., an m × n matrix

V=\begin{bmatrix}
1 & \alpha_1 & \alpha_1^2 & \dots & \alpha_1^{n-1}\\
1 & \alpha_2 & \alpha_2^2 & \dots & \alpha_2^{n-1}\\
1 & \alpha_3 & \alpha_3^2 & \dots & \alpha_3^{n-1}\\
\vdots & \vdots & \vdots & \ddots &\vdots \\
1 & \alpha_m & \alpha_m^2 & \dots & \alpha_m^{n-1}
\end{bmatrix}

or

V_{i,j} = \alpha_i^{j-1}

for all indices i and j.

The determinant of a square Vandermonde matrix (where m = n) can be expressed as:

\det(V) = \prod_{1\le i<j\le n} (\alpha_j-\alpha_i).

This is called the Vandermonde determinant or Vandermonde polynomial. If all the numbers \alpha_i are distinct, then it is non-zero (provided the numbers come from an integral domain).

The Vandermonde determinant is sometimes called the discriminant, although many sources, including this article, refer to the discriminant as the square of this determinant. Note that the Vandermonde determinant is alternating in the entries, meaning that permuting the \alpha_i by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the determinant. It thus depends on the order, while its square (the discriminant) does not depend on the order.

Properties


In the case of a square Vandermonde matrix, the Leibniz formula for the determinant gives

\det(V) = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n \alpha_i^{\sigma(i)-1},

where S_n denotes the set of permutations of \{1,\ldots,n\}, and \sgn(\sigma) denotes the signature of the permutation σ. This determinant factors as

\sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n \alpha_i^{\sigma(i)-1} = \prod_{1\le i<j\le n} (\alpha_j-\alpha_i).

Each of these factors must divide the determinant, because the latter is an alternating polynomial in the n variables. It also follows that the Vandermonde determinant divides any other alternating polynomial; the quotient will be a symmetric polynomial.

If m ≤ n, then the matrix V has maximum rank (m) if and only if all αi are distinct. A square Vandermonde matrix is thus invertible if and only if the αi are distinct; an explicit formula for the inverse is known.[2][3][4]
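The determinant identity above can be spot-checked directly. This sketch builds a small Vandermonde matrix, evaluates the Leibniz sum, and compares it with the pairwise product formula (the sample points are arbitrary):

```python
from itertools import permutations

def vandermonde(alphas):
    n = len(alphas)
    return [[a**j for j in range(n)] for a in alphas]   # V[i][j] = alpha_i^j

def det_leibniz(M):
    # Leibniz formula: signed sum over all permutations
    # (exponential cost, fine for the small matrices used here).
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign        # each inversion flips the sign
        prod = 1
        for i, col in enumerate(perm):
            prod *= M[i][col]
        total += sign * prod
    return total

alphas = [1, 2, 4]
lhs = det_leibniz(vandermonde(alphas))
rhs = 1
for i in range(len(alphas)):
    for j in range(i + 1, len(alphas)):
        rhs *= alphas[j] - alphas[i]    # product formula
# lhs == rhs == (2-1)*(4-1)*(4-2) = 6
```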

Applications


The Vandermonde matrix evaluates a polynomial at a set of points; formally, it transforms coefficients of a polynomial a_0+a_1x+a_2x^2+\cdots+a_{n-1}x^{n-1} to the values the polynomial takes at the points \alpha_i.

The Vandermonde determinant plays a central role in the Frobenius formula, which gives the character of conjugacy classes of representations of the symmetric group.

Confluent Vandermonde matrices are used in Hermite interpolation.

The Vandermonde matrix diagonalizes the companion matrix.

The Vandermonde matrix is used in some forms of Reed–Solomon error correction codes.

Advantages of Transfer function


The key advantage of transfer functions is their compactness, which makes them suitable for frequency-domain analysis and stability studies. However, the transfer function approach neglects initial conditions, since it assumes the system starts at rest.

Advantages of state variable analysis


  • It can be applied to nonlinear systems.
  • It can be applied to time-varying systems.
  • It can be applied to multiple-input, multiple-output systems.
  • It gives insight into the internal state of the system.

The Transportation Lag

The transportation lag is the delay between the time an input signal is applied to a system and the time the system reacts to that input signal. Transportation lags are common in industrial applications. They are often called “dead time”.

In the time domain, the lag simply shifts the signal:

x(t)  →  [ transportation lag, delay τ ]  →  x(t − τ)

In the Laplace domain, the delay corresponds to multiplication by e^{−τs}:

X(s)  →  e^{−τs} X(s)
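A dead time is trivial to simulate with a FIFO buffer. A sample-based sketch, with the delay expressed in whole samples (the function name and test signal are illustrative):

```python
from collections import deque

def transport_lag(signal, delay_steps):
    # Dead time of delay_steps samples: y[n] = x[n - delay_steps],
    # emitting zeros until the first input reaches the output.
    buf = deque([0.0] * delay_steps)
    out = []
    for x in signal:
        buf.append(x)
        out.append(buf.popleft())
    return out

step_input = [0, 1, 1, 1, 1, 1]
delayed = transport_lag(step_input, 2)   # the step emerges 2 samples later
```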

What is a Thermocouple?

A thermocouple is a sensor used to measure temperature. Thermocouples consist of two wire legs made from different metals. The wire legs are welded together at one end, creating a junction. This junction is where the temperature is measured. When the junction experiences a change in temperature, a voltage is created. The voltage can then be interpreted using thermocouple reference tables to calculate the temperature.

There are many types of thermocouples, each with its own unique characteristics in terms of temperature range, durability, vibration resistance, chemical resistance, and application compatibility. Type J, K, T, & E are “Base Metal” thermocouples, the most common types of thermocouples.


Types of Thermocouples:


Type K Thermocouple (Nickel-Chromium / Nickel-Alumel): The type K is the most common type of thermocouple. It’s inexpensive, accurate, reliable, and has a wide temperature range.

Type J Thermocouple (Iron/Constantan): The type J is also very common. It has a smaller temperature range and a shorter lifespan at higher temperatures than the Type K. It is equivalent to the Type K in terms of expense and reliability.

Type T Thermocouple (Copper/Constantan): The Type T is a very stable thermocouple and is often used in extremely low temperature applications such as cryogenics or ultra low freezers.

 

Type | Composition (+ / −)                                           | Temperature Range (°F) | Temperature Range (°C)
B    | Platinum 30% Rhodium (+) / Platinum 6% Rhodium (−)            | 2500 to 3100           | 1370 to 1700
C    | W5Re Tungsten 5% Rhenium (+) / W26Re Tungsten 26% Rhenium (−) | 3000 to 4200           | 1650 to 2315
E    | Chromel (+) / Constantan (−)                                  | 200 to 1650            | 95 to 900
J    | Iron (+) / Constantan (−)                                     | 200 to 1400            | 95 to 760
K    | Chromel (+) / Alumel (−)                                      | 200 to 2300            | 95 to 1260
M    | Nickel (+) / Nickel (−)                                       | 32 to 2250             | 0 to 1287
N    | Nicrosil (+) / Nisil (−)                                      | 1200 to 2300           | 650 to 1260
R    | Platinum 13% Rhodium (+) / Platinum (−)                       | 1600 to 2640           | 870 to 1450
S    | Platinum 10% Rhodium (+) / Platinum (−)                       | 1800 to 2640           | 980 to 1450
T    | Copper (+) / Constantan (−)                                   | −330 to 660            | −200 to 350

Control valve sizing


Control valve sizing is based on calculating the flow coefficient Cv for the required flow rate and the associated pressure drop across the control valve. With the flow coefficient Cv calculated, the size of the control valve can be selected, or two control valves from different manufacturers can be compared in terms of flow capacity for a given pressure drop and the same valve size.

A control valve sizing calculator can be used to calculate the maximum flow rate through a control valve for a given pressure drop and a known flow coefficient Cv.

Such a calculator applies to turbulent flow of water or other incompressible fluids. For compressible flow of gases and steam, the gas flow coefficient Cg should be calculated instead.

The flow coefficient Cv of a control valve is expressed as the flow rate of water in U.S. gpm that produces a pressure drop of 1 psi across the flow passage; the metric equivalent, Kv, is expressed in m³/h at a pressure drop of 1 bar.
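The Cv definition implies the usual liquid sizing relation Q = Cv·√(ΔP/SG). A sketch in imperial units, assuming turbulent, non-flashing liquid flow (function names are illustrative):

```python
import math

def liquid_flow_gpm(cv, dp_psi, sg=1.0):
    # Standard liquid sizing relation for turbulent, non-flashing flow:
    # Q = Cv * sqrt(dP / SG), with Q in U.S. gpm and dP in psi.
    return cv * math.sqrt(dp_psi / sg)

def required_cv(q_gpm, dp_psi, sg=1.0):
    # Invert the same relation to size a valve for a given duty.
    return q_gpm / math.sqrt(dp_psi / sg)

# Water (SG = 1) at a 4 psi drop: a valve with Cv = 25 passes 50 gpm
```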

Magnetic flow meter


The third most common flowmeter, behind differential pressure and positive displacement flow meters, is the magnetic flow meter, technically an electromagnetic flow meter or, more commonly, just a mag meter. A magnetic field is applied to the metering tube, which results in a potential difference proportional to the flow velocity perpendicular to the flux lines. The physical principle at work is electromagnetic induction. The magnetic flow meter requires a conducting fluid, for example, water that contains ions, and an electrically insulating pipe surface, for example, a rubber-lined steel tube.

Usually electrochemical and other effects at the electrodes make the potential difference drift up and down, making it hard to determine the fluid-flow-induced potential difference. To mitigate this, the magnetic field is constantly reversed, cancelling out the static potential difference. This, however, impedes the use of permanent magnets for magnetic flowmeters.

  

How Magnetic Flowmeters Work


Magnetic flowmeters use Faraday’s Law of Electromagnetic Induction to determine the flow of liquid in a pipe. In a magnetic flowmeter, a magnetic field is generated and channeled into the liquid flowing through the pipe. Following Faraday’s Law, flow of a conductive liquid through the magnetic field will cause a voltage signal to be sensed by electrodes located on the flow tube walls. When the fluid moves faster, more voltage is generated. Faraday’s Law states that the voltage generated is proportional to the movement of the flowing liquid. The electronic transmitter processes the voltage signal to determine liquid flow.

In contrast with many other flowmeter technologies, magnetic flowmeter technology produces signals that are linear with flow. As such, the turndown associated with magnetic flowmeters can approach 20:1 or better without sacrificing accuracy.
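As a rough sketch of the proportionality above, one can back out flow from the electrode voltage. The unit proportionality constant in E = B·D·v is an assumption for illustration only; real meters apply a calibration constant:

```python
import math

def magmeter_flow(voltage, b_field, diameter):
    # Idealized Faraday relation E = B * D * v (real instruments include
    # a calibration factor); velocity times pipe cross-sectional area
    # gives volumetric flow.
    velocity = voltage / (b_field * diameter)   # m/s
    area = math.pi * diameter ** 2 / 4.0        # m^2
    return velocity * area                      # m^3/s

# 10 mV across a 0.1 m pipe in a 0.1 T field implies v = 1 m/s
q = magmeter_flow(0.01, 0.1, 0.1)
```

The linear voltage-to-velocity relation is what gives mag meters their wide turndown.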

Transducer


A transducer is an electronic device that converts energy from one form to another. Common examples include microphones, loudspeakers, thermometers, position and pressure sensors, and antennas. Although not generally thought of as transducers, photocells, LEDs (light-emitting diodes), and even common light bulbs are transducers.

Efficiency is an important consideration in any transducer. Transducer efficiency is defined as the ratio of the power output in the desired form to the total power input. Mathematically, if P represents the total power input and Q represents the power output in the desired form, then the efficiency E, as a ratio between 0 and 1, is given by:

E = Q/P

Pressure Transducers

A pressure transducer, sometimes called a pressure transmitter, is a transducer that converts pressure into an analog electrical signal. Although there are various types of pressure transducers, one of the most common is the strain-gage based transducer. The conversion of pressure into an electrical signal is achieved by the physical deformation of strain gages which are bonded into the diaphragm of the pressure transducer and wired into a Wheatstone bridge configuration. Pressure applied to the pressure transducer produces a deflection of the diaphragm which introduces strain to the gages. The strain will produce an electrical resistance change proportional to the pressure.
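The Wheatstone bridge arithmetic behind this can be sketched as two voltage dividers; the resistor labels and values here are illustrative, not tied to any particular transducer:

```python
def bridge_output(v_in, r1, r2, r3, r4):
    # Wheatstone bridge modeled as two voltage dividers; the output is
    # the difference between their midpoint voltages. With this labeling
    # the bridge is balanced (zero output) when r1 * r3 == r2 * r4.
    return v_in * (r3 / (r3 + r4) - r2 / (r1 + r2))

balanced = bridge_output(10.0, 100.0, 100.0, 100.0, 100.0)   # all arms equal
strained = bridge_output(10.0, 100.0, 100.1, 100.0, 100.0)   # one gage strained
```

A small resistance change in one arm unbalances the bridge, producing a voltage roughly proportional to the strain.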

In ultrasound investigating technology, an electrical signal is used to create mechanical energy in the transducer, thus creating an outgoing pulse. Then the returning mechanical energy, the echo, is converted into electrical energy by the same transducer. The electrical signal is then used in imaging what the transducer is "looking at" by creating a picture of some sort which will then be evaluated. Put another way, a transducer is a device for sensing and relaying a signal, but keep in mind the idea of a change of "form" of the energy; making that change is what a transducer does. A transducer might be used to detect level, pressure, temperature, flow, displacement, acceleration, velocity, etc., at the sensing location so the measurement can be sent to another place (such as a control room). It consists of different parts, such as a sensing element (the sensor), a signal conditioning unit (filtering, amplification, etc.) and, in some cases, a protocol interface (in order to convert the measured value into a digital frame, for example).

 


Cascade Control


A cascade control system is a multiple-loop system where the primary variable is controlled by adjusting the setpoint of a related secondary variable controller. The secondary variable then affects the primary variable through the process.

The primary objective in cascade control is to divide an otherwise difficult-to-control process into two portions, whereby a secondary control loop is formed around the major disturbances, thus leaving only minor disturbances to be controlled by the primary controller.

The advantages of cascade control are all somewhat interrelated. They include:

  1. Better control of the primary variable
  2. Primary variable less affected by disturbances
  3. Faster recovery from disturbances
  4. Increased natural frequency of the system
  5. Reduced effective magnitude of time lags
  6. Improved dynamic performance
  7. Limits on the secondary variable

Cascade control is most advantageous on applications where the secondary closed loop can include the major disturbance and second order lag and the major lag is included in only the primary loop. The secondary loop should be established in an area where the major disturbance occurs. It is also important that the secondary variable respond to the disturbance. If the slave loop is controlling flow and the disturbance is in the heat content of the fluid, obviously the flow controller will not correct for this disturbance.

  

RGA


The relative gain array (RGA) is an analytical tool used to determine the optimal input-output variable pairings for a multi-input, multi-output (MIMO) system. In other words, the RGA is a normalized form of the gain matrix that describes the impact of each control variable on the output, relative to each control variable's impact on the other variables. The process interactions of open-loop and closed-loop control systems are measured for all possible input-output variable pairings. The ratio of the open-loop 'gain' to the closed-loop 'gain' is determined, and the results are displayed in a matrix.

$$\mathrm{RGA} = \Lambda =
\begin{bmatrix}
   \lambda_{11} & \lambda_{12} & \cdots & \lambda_{1n} \\
   \lambda_{21} & \lambda_{22} & \cdots & \lambda_{2n} \\
   \vdots       & \vdots       & \ddots & \vdots       \\
   \lambda_{n1} & \lambda_{n2} & \cdots & \lambda_{nn}
\end{bmatrix}$$

The array will be a matrix with one column for each input variable and one row for each output variable in the MIMO system. This format allows a process engineer to easily compare the relative gains associated with each input-output variable pair, and ultimately to match the input and output variables that have the biggest effect on each other while also minimizing undesired side effects.

Properties of the RGA:

  • The closer the values in the RGA are to 1, the more decoupled the system is.

  • The element closest to 1 in each row indicates which input-output variables should be coupled or linked.

  • Each row and each column of the RGA sums to 1.

There are two main ways to calculate RGA:

(1) Experimentally determine the effect of input variables on the output variables, then compile the results into an RGA matrix.

(2) Use a steady-state gain matrix to calculate the RGA matrix.
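Method (2) amounts to one matrix identity: for a steady-state gain matrix $K$, the RGA is the element-wise (Hadamard) product $\Lambda = K \otimes (K^{-1})^{T}$. A minimal sketch, using an assumed 2×2 gain matrix:

```python
import numpy as np

# Steady-state gain matrix for a hypothetical 2x2 process (assumed values)
K = np.array([[2.0, 1.5],
              [1.4, 2.0]])

# RGA = K (element-wise product) transpose of the inverse of K
rga = K * np.linalg.inv(K).T
print(rga)

# Each row and each column sums to 1, as the properties above state
print(rga.sum(axis=0), rga.sum(axis=1))
```

Here $\lambda_{11} = k_{11}k_{22}/(k_{11}k_{22} - k_{12}k_{21}) = 4/1.9 \approx 2.1$, well away from 1, signaling significant loop interaction for this assumed process.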

Decouple


Definition of 'Decoupling'


In process control, decoupling is the elimination of loop interactions: compensators are designed so that each manipulated (input) variable affects only its paired controlled (output) variable, allowing an interacting multivariable process to be treated as a set of independent single loops.

A system of inputs and outputs can be described as one of four types: SISO (single input, single output), SIMO (single input, multiple output), MISO (multiple input, single output), or MIMO (multiple input, multiple output).

Multiple input, multiple output (MIMO) systems describe processes with more than one input and more than one output which require multiple control loops. Examples of MIMO systems include heat exchangers, chemical reactors, and distillation columns. These systems can be complicated through loop interactions that result in variables with unexpected effects. Decoupling the variables of that system will improve the control of that process.

There are two ways to see if a system can be decoupled. One way is with mathematical models and the other way is a more intuitive educated guessing method. Mathematical methods for simplifying MIMO control schemes include the relative gain array (RGA) method, the Niederlinski index (NI) and singular value decomposition (SVD). This article will discuss the determination of whether a MIMO control scheme can be decoupled to SISO using the SVD method. It will also discuss a more intuitive way of decoupling a system using a variation of the RGA method.
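As a simpler complement to the SVD and NI analyses mentioned above, the ideal static decoupler for a small system just inverts the steady-state gain matrix, so that each setpoint reaches only its own output. This is a sketch under assumed gains, not a substitute for the full dynamic design:

```python
import numpy as np

# Assumed 2x2 steady-state gain matrix with strong loop interaction
K = np.array([[2.0, 1.5],
              [1.4, 2.0]])

# Ideal static decoupler: D = inv(K), so the compensated process K @ D
# is the identity matrix and each loop sees only its own variable
# at steady state
D = np.linalg.inv(K)
print(K @ D)  # ~identity: the steady-state interactions are cancelled
```

In practice a static decoupler only cancels interactions at steady state; dynamic mismatch between the loops still has to be checked.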

Cutoff frequency


In physics and electrical engineering, a cutoff frequency, corner frequency, or break frequency is a boundary in a system's frequency response at which energy flowing through the system begins to be reduced (attenuated or reflected) rather than passing through.

Typically in electronic systems such as filters and communication channels, cutoff frequency applies to an edge in a lowpass, highpass, bandpass, or band-stop characteristic – a frequency characterizing a boundary between a passband and a stopband. It is sometimes taken to be the point in the filter response where a transition band and passband meet, for example, as defined by a 3 dB corner.
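For a first-order RC low-pass filter, for example, the cutoff frequency is $f_c = 1/(2\pi RC)$, and the magnitude there is $1/\sqrt{2}$ of the passband value, i.e. about $-3$ dB. A short sketch with assumed component values:

```python
import math

R = 1.0e3     # ohms (assumed)
C = 159.0e-9  # farads (assumed); chosen so f_c lands close to 1 kHz

f_c = 1.0 / (2.0 * math.pi * R * C)  # cutoff (corner) frequency

# Magnitude of H(jw) = 1 / (1 + j*w*R*C) evaluated at the cutoff frequency
w = 2.0 * math.pi * f_c
mag = 1.0 / math.hypot(1.0, w * R * C)
db = 20.0 * math.log10(mag)
print(f_c, mag, db)  # ~1 kHz, ~0.707, ~-3.01 dB
```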

Crossover Frequency


A gain of factor 1 (equivalent to 0 dB), where input and output are at the same level, is known as unity gain. The frequency at which the open-loop gain equals unity is referred to as the crossover frequency.

Frequency-response design is practical because we can easily evaluate how gain changes affect certain aspects of a system. With frequency-response design, we can determine the phase margin for any value of the gain without needing to redraw the magnitude or phase information.

Gain and Phase Margin



Gain margin: the gain perturbation that makes the system marginally stable; that is, the additional gain that brings the system to the verge of instability.

Phase margin: the negative phase perturbation that makes the system marginally stable; that is, the additional phase lag that brings the system to the verge of instability.

Consider the following unity feedback system:



where $K$ is a variable (constant) gain and $G(s)$ is the plant under consideration. The gain margin is defined as the change in open-loop gain required to make the system unstable. Systems with greater gain margins can withstand greater changes in system parameters before becoming unstable in closed loop.

The phase margin is defined as the change in open-loop phase shift required to make a closed-loop system unstable.

The phase margin also measures the system's tolerance to time delay. If there is a time delay greater than $180/W_{pc}$ in the loop (where $W_{pc}$ is the frequency at which the phase shift is 180 degrees), the system will become unstable in closed loop. The time delay $\tau_d$ can be thought of as an extra block in the forward path of the block diagram that adds phase to the system but has no effect on the gain. That is, a time delay can be represented as a block with a magnitude of 1 and a phase of $-\omega \tau_d$ (in radians, with $\omega$ in radians/second).

The phase margin is the difference in phase between the phase curve and -180 degrees at the point corresponding to the frequency that gives us a gain of 0 dB (the gain crossover frequency, $W_{gc}$). Likewise, the gain margin is the difference between the magnitude curve and 0 dB at the point corresponding to the frequency that gives us a phase of -180 degrees (the phase crossover frequency, $W_{pc}$).
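These definitions can be checked numerically. For an assumed open-loop transfer function $L(s) = 1/\big(s(s+1)(s+2)\big)$ (an illustration, not an example from the text), bisection on the magnitude and phase conditions gives the two crossover frequencies and both margins; the largest tolerable time delay is then the phase margin in radians divided by $W_{gc}$:

```python
import math

def mag(w):
    """|L(jw)| for L(s) = 1 / (s (s+1) (s+2))."""
    return 1.0 / (w * math.sqrt((w * w + 1.0) * (w * w + 4.0)))

def phase_lag(w):
    """Total phase lag of L(jw) in degrees."""
    return 90.0 + math.degrees(math.atan(w)) + math.degrees(math.atan(w / 2.0))

def bisect(f, lo, hi, tol=1e-10):
    """Root of f on [lo, hi], assuming f increases with w."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Gain crossover frequency W_gc: where |L(jw)| = 1 (magnitude is decreasing)
w_gc = bisect(lambda w: 1.0 - mag(w), 0.1, 2.0)
pm = 180.0 - phase_lag(w_gc)          # phase margin in degrees

# Phase crossover frequency W_pc: where the phase lag reaches 180 degrees
w_pc = bisect(lambda w: phase_lag(w) - 180.0, 0.1, 10.0)
gm = 1.0 / mag(w_pc)                  # gain margin as a factor

max_delay = math.radians(pm) / w_gc   # largest tolerable time delay, seconds
print(w_gc, pm, w_pc, gm, max_delay)
```

For this plant the phase crossover lands exactly at $W_{pc} = \sqrt{2}$ rad/s, giving a gain margin of 6 (about 15.6 dB) and a phase margin of roughly 53 degrees.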
