Bhushan Gopaluni

Professor


Great Supervisor Week Mentions

Each year graduate students are encouraged to give kudos to their supervisors through social media and our website as part of #GreatSupervisorWeek. Below are students who mentioned this supervisor since the initiative was started in 2017.


Prof. Gopaluni strongly supports the professional development of his students by encouraging and funding academic activities such as travelling to conferences, studying abroad, publishing papers, delivering conference workshops and exploring a broad spectrum of relevant academic interests. He engages his students by drawing inspiration and innovative insight from multiple disciplines while encouraging them to pursue excellence and become independent thinkers.

Lee Rippon (2019)


Graduate Student Supervision

Doctoral Student Supervision

Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.

Data-driven degradation modeling of lithium-ion batteries (2024)

Ensuring the safe and reliable usage of lithium-ion batteries (LIBs) necessitates accurate degradation modeling. While data-driven methods offer promising prospects for modeling battery degradation, their intricate structures often make them specific to particular datasets. Furthermore, the black-box nature of data-driven models complicates the understanding of their decision-making process. In this thesis, we delve into data-driven modeling of battery degradation, focusing on capacity estimation and cycle life prediction, to address the challenges of generalizability and interpretability. To improve the generalizability of the model, we first propose adopting a simple and robust machine learning model, partial least squares regression (PLSR), for joint battery capacity estimation and remaining useful life (RUL) prediction. Experimental results on three battery cells cycled at varied conditions demonstrate superior generalizability of the suggested model over complex and sophisticated methods. Another approach we propose to improve generalizability is transfer learning. This approach performs well in handling the significant diversity across battery types, as it can transfer the knowledge contained in well-studied batteries to a new battery. The key idea involves training a model on one type of battery with sufficient data; the model can then be applied to a new type of battery by fine-tuning some parameters with limited data. Experimental results confirm that transfer learning can effectively enhance the generalizability of data-driven models in capacity estimation and cycle life prediction across different battery types. To build interpretable models, we advocate the use of decision trees for capacity estimation. We start with a classic regression tree with parallel splits, but it requires a tree depth of 11 to achieve satisfactory performance. To address this challenge, we adopt optimal regression trees with hyperplane splits and propose a novel algorithm, DE-LR-ORTH, to train such a tree. DE-LR-ORTH initially conducts a one-step optimal hyperplane split for each branch node via differential evolution, followed by logistic regression-based fine-tuning to achieve overall optimality. Additionally, a GPU-accelerated implementation is proposed to significantly reduce the training time. Experimental results reveal a 1.0% capacity estimation error at depth 6 while maintaining high interpretability.
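To make the first idea concrete, the sketch below fits a partial least squares regression jointly to capacity and RUL targets, in the spirit of the thesis. It is a minimal illustration on synthetic data; the per-cycle health features and degradation trend are placeholders, not the thesis's datasets or feature pipeline.

```python
# Minimal sketch, assuming synthetic data: joint capacity / RUL estimation
# with PLS regression. Features and degradation model are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cycles = 500
cycles = np.arange(n_cycles)
capacity = 1.0 - 0.0008 * cycles + 0.005 * rng.normal(size=n_cycles)
X = capacity[:, None] + 0.01 * rng.normal(size=(n_cycles, 20))  # noisy health features
rul = (n_cycles - cycles).astype(float)                         # remaining useful life
Y = np.column_stack([capacity, rul])                            # joint targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, shuffle=False)
pls = PLSRegression(n_components=5)      # few latent components keep the model robust
pls.fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)
print("capacity RMSE:", np.sqrt(np.mean((Y_hat[:, 0] - Y_te[:, 0]) ** 2)))
```

Restricting the model to a handful of latent components is what keeps PLSR simple and robust relative to more elaborate networks.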


High T-cell concentration bioprocessing for cell therapy manufacturing (2024)

The full abstract for this thesis is available in the body of the thesis, and will be available when the embargo expires.


Interpretable and stable soft sensor modeling for industrial applications (2024)

Soft sensor technology is an effective way to estimate, in real time, process parameters that are hard to measure directly. It is of significant importance for the monitoring, control, and optimization of industrial production processes. With the growing richness of process data and the rapid development of machine learning techniques, data-driven soft sensor technologies are increasingly favored. Although soft sensor models have great potential and value in industrial applications, they still face significant challenges, particularly in the areas of model interpretability and stability. Ensuring interpretability and stability is crucial because it directly impacts the reliability and safety of operations in hazardous industrial environments. This dissertation provides a detailed exploration of soft sensor technologies, focusing on enhancing their interpretability and stability for industrial process monitoring. Chapters 2 and 3 focus on improving the interpretability of soft sensors. Chapter 2 introduces the Extra Trees (ET) algorithm and employs SHapley Additive exPlanations (SHAP) to enhance the interpretability of this inherently accurate but complex model. Chapter 3 explores interpretable feature selection techniques, particularly emphasizing the role of SHAP in selecting meaningful features from complex industrial data. Subsequently, we utilize the selected interpretable features to establish a simple soft sensor model. In Chapter 4, the main topic shifts to the stability of the soft sensor model; we propose a stable learning algorithm based on the generation of virtual samples to improve stability in the face of industrial disturbances and data scarcity. Chapter 5 delves into the role of causality in soft sensor modeling, demonstrating how mining causal relationships between variables can significantly improve both stability and interpretability. It also emphasizes the importance of incorporating process knowledge to ensure precision in the discovery of causal relationships. Chapter 6 presents two methods for extracting unsupervised and supervised latent causal features. By extracting latent causal features, not only is interpretability retained, but the model also becomes more robust. Finally, we analyze the main contributions and consider how they can be utilized in industrial contexts to improve the efficiency, safety, reliability, and interpretability of soft sensors.
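As a rough illustration of the Chapter 2 recipe (an accurate tree ensemble made interpretable post hoc), the sketch below trains an Extra Trees soft sensor on synthetic data and attributes its predictions with SHAP; the data, variables, and hyperparameters are placeholders rather than the industrial case study.

```python
# Minimal sketch, assuming placeholder data: an Extra Trees soft sensor
# explained with SHAP, mirroring the Chapter 2 idea.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                     # easy-to-measure process variables
y = 2.0 * X[:, 0] - X[:, 3] ** 2 + 0.1 * rng.normal(size=1000)  # hard-to-measure target

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # fast, exact for tree ensembles
shap_values = explainer.shap_values(X[:100])       # per-sample feature attributions
print("mean |SHAP| per variable:", np.abs(shap_values).mean(axis=0).round(3))
```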


Applications of process analytics and machine learning in pyrometallurgy and kraft pulping (2023)

The optimization of legacy industrial processes is critical for the economic viability of many rural communities in Canada. Automation and advanced process control are of paramount importance for many large-scale industrial processes to maintain viability in a tightening regulatory environment and an increasingly competitive economy. However, with legacy industrial processes comes a rich history of automation, including large quantities of underappreciated process data. This dissertation is about leveraging existing historical data with machine learning and process analytics to generate novel data-driven solutions to outstanding process faults. An application-driven approach provides insights into the full stack of considerations, including identifying and framing data-driven opportunities for control of complex industrial processes, acquiring the necessary resources, preparing the data, developing and evaluating methods, and deploying sustainable solutions. Contributions are made to help address highly troublesome faults in two distinct industrial processes. The first industrial case study involves mitigating the impact of unexpected loss of plasma arc in an electric arc furnace that is key to a 60,000 tonne/year pyrometallurgy operation. A convolutional neural network classifier is trained to learn a representation from the operating data that enables prediction of the arc loss events. The operating data and problem formulation are published as a novel benchmark challenge to address observed shortcomings in the existing fault detection benchmark literature. The second industrial case study involves advanced monitoring of a rotary lime kiln in a 152,000 tonne/year kraft pulp mill to mitigate faults such as ring formation and refractory wear. A novel shell temperature visualization strategy is published that enables improved monitoring and empowers researchers and industry professionals to obtain value from thermal camera data. Various approaches are studied for monitoring ring formation. Aberrations in shell temperatures led to the discovery of a novel phenomenon known as rotational aliasing that has important implications for measurement and analysis of shell temperature data. Finally, inferential sensing of residual calcium carbonate content is studied to help optimize specific energy and reduce emissions.


Deep reinforcement learning agents for industrial control system design (2023)

Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies without explicit reliance on process models. Powerful new methods in RL are often showcased for their performance on difficult simulated tasks. In contrast, industrial control system design has many intrinsic features that make "nominal" RL methods unsafe and inefficient. We develop methods for automatic control based on RL techniques while balancing key industrial requirements, such as interpretability, efficiency, and stability. A practical testbed for new control techniques is proportional-integral (PI) control due to its simple structure and prevalence in industry. In particular, PI controllers are elegantly compatible with RL methods as trainable policy "networks". We deploy this idea on a pilot-scale two-tank system, elucidating the challenges in real-world implementation and the advantages of our method. To improve the scalability of RL-based controller tuning, we propose an extension based on "meta-RL" wherein a generalized agent is trained for fast adaptation across a broad collection of dynamics. A key design element is the ability to leverage model-based information offline during training while maintaining a model-free policy structure for interacting with novel processes. Beyond PI control, we propose a framework for the design of feedback controllers that combines the model-free advantages of deep RL with the stability guarantees provided by the Youla-Kučera parameterization, which defines the search domain. This is accomplished through a data-driven realization of the Youla-Kučera parameterization working in tandem with a neural network representation of stable nonlinear operators. Ultimately, our approach is flexible, modular, and decouples the stability requirement from the choice of RL algorithm.
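A toy version of the central idea, PI gains as the trainable parameters of a policy, is sketched below. It tunes (kp, ki) on a simulated first-order plant with simple finite-difference updates; the thesis uses deep RL algorithms and real hardware, so this only illustrates the "controller as policy" framing.

```python
# Toy sketch, assuming a first-order plant: a PI controller treated as a
# trainable policy, with finite-difference updates standing in for the
# deep RL algorithms used in the thesis.
import numpy as np

def episode_cost(kp, ki, T=200, dt=0.1, sp=1.0):
    """Run one closed-loop episode and return the tracking cost."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(T):
        e = sp - y
        integ += e * dt
        u = kp * e + ki * integ            # PI policy: action is linear in (e, integral of e)
        y += dt * (-y + u)                 # first-order plant dy/dt = -y + u
        cost += e ** 2 * dt
    return cost

kp, ki, lr, eps = 0.5, 0.1, 0.05, 1e-3
for it in range(200):
    # two-sided finite-difference estimate of the cost gradient w.r.t. the gains
    g_kp = (episode_cost(kp + eps, ki) - episode_cost(kp - eps, ki)) / (2 * eps)
    g_ki = (episode_cost(kp, ki + eps) - episode_cost(kp, ki - eps)) / (2 * eps)
    kp, ki = kp - lr * g_kp, ki - lr * g_ki
print(f"tuned gains: kp={kp:.2f}, ki={ki:.2f}, cost={episode_cost(kp, ki):.3f}")
```

With only two parameters, the PI structure keeps the learned policy directly interpretable, which is part of what makes it such a convenient industrial testbed.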


Targeted feature extraction : a deep learning approach (2023)

This thesis details the progressive development of a machine learning workflow aimed at multi-class problems in the engineering and clinical fields. We select deep learning as the basis of this modelling framework, as the universal approximation property of neural networks renders them agnostic to different types of underlying data structures. We propose an optimal deep learning model which extracts interpretable features, capturing the decisive, salient characteristics of each data class. This is accomplished by revising the traditional deep learning objective, introducing an additional term which enhances class separation and identity. Using mathematical properties of the discovered latent space, we introduce a feature extractor based on weight traceback, which connects the decisive class-specific neurons to the raw variables in the input layer. The efficacy and necessity of the proposed strategy are demonstrated across six case studies. The first two studies highlight the inconsistency across clusters discovered by traditional unsupervised learning models, as well as the misconception of traditional deep learning as a magical solution to every problem. The following two studies demonstrate proof-of-concept for the proposed strategy on two machine learning benchmark datasets, showing visible improvements in both classification accuracy and feature extraction compared to baseline models. Finally, the remaining two studies explore clinical applications concerning the diagnosis of COVID-19 and scleroderma patients. In each case, the proposed machine learning strategy is compared against traditional, state-of-the-art models with respect to class cluster separability, prediction accuracy, and biomarker discovery. The results show clear improvements in each aforementioned area; moreover, computational complexity analysis shows that our method scales linearly with the number of samples in the dataset, and in a linearithmic fashion with respect to the number of raw variables. The main practical contributions of this thesis include a significant improvement in prediction accuracy through the reduction of false discovery rates, as well as the discovery of signature variables which allow for targeted mitigation of undesired conditions.
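The revised objective can be pictured as a standard classification loss plus a term that tightens each class around its own latent center. The sketch below shows that generic pattern in PyTorch; the thesis's exact term is not reproduced here, and the network, weighting, and data are illustrative assumptions.

```python
# Generic sketch, assuming a center-loss-style separation term: a revised
# deep-learning objective that sharpens class identity in the latent space.
import torch
import torch.nn.functional as F

def separation_loss(z, labels, centers):
    """Pull latent codes z toward their class centers to enhance class identity."""
    return ((z - centers[labels]) ** 2).sum(dim=1).mean()

n_classes, latent_dim = 3, 16
encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, latent_dim))
head = torch.nn.Linear(latent_dim, n_classes)
centers = torch.nn.Parameter(torch.randn(n_classes, latent_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()) + [centers])

x, y = torch.randn(128, 32), torch.randint(0, n_classes, (128,))   # placeholder batch
z = encoder(x)
loss = F.cross_entropy(head(z), y) + 0.1 * separation_loss(z, y, centers)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```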


Pelletization strategies to reduce costs of wildfire mitigation (2022)

The Prince George Timber Supply Area (TSA) produces 43,000 dry tonnes of easily accessible forest residues in the area around Prince George, with an average of 279 m³ ha⁻¹ or roughly 36 tonnes ha⁻¹. These forest residues are excellent feedstock for high-moisture mobile pelletization, using a self-loading mobile woodchipper to comminute and forward the material to the trailer-mounted pelletization system. A high-moisture pellet mill can accept feedstock with moisture contents up to 36% wet basis and reduce the needed drying energy by 4-7% by extruding water through compression and frictional heating during the pelletization process. Wood pellets produced in the mobile wood pellet system cost $402.71 tonne⁻¹. Feedstock costs were $64.89 tonne⁻¹, labor costs were $74.63 tonne⁻¹ and energy costs were $30.17 tonne⁻¹. Transportation costs in the mobile system are minimized by limiting the distance the chipped forest residues travel and transporting the much denser wood pellets the longest distances. Drying in the mobile system can also use forest residues to generate heat in place of natural gas or propane. In the traditional pellet mill assessed using the same parameters, wood pellets cost $181.98 tonne⁻¹ to produce. The mobile pellet mill employs five people to operate the system, a manager to oversee all the operations, and an additional five to operate the self-loading mobile woodchipper used to collect feedstock. If the province provided a subsidy to cover the difference of $222.71 tonne⁻¹ between the market selling price of wood pellets and the production costs of the mobile system, the province would receive $1.62 in benefits for every dollar invested. The mobile system provided the province with $382.83 tonne⁻¹ in reduced fuel treatment costs, $69.76 tonne⁻¹ in avoided Employment Insurance payments, $12.56 tonne⁻¹ in business taxes, and $6.98 tonne⁻¹ in income taxes while producing 89,232 tonnes of wood pellets from 22 mobile systems.


Stochastic multi-objective economic model predictive control of two-stage high consistency mechanical pulping processes (2020)

Model predictive control (MPC) has attracted considerable research effort and has been widely applied in various industrial processes. This thesis aims to develop economic MPC (econ MPC) strategies to optimize and control the nonlinear mechanical pulping (MP) process with two high consistency (HC) refiners, which is one of the most energy intensive processes in the pulp and paper industry. There are substantial economic and environmental incentives to develop advanced control techniques that reduce the energy consumption of MP processes. We propose four econ MPC schemes for nonlinear MP processes. Firstly, assuming that all the state variables are directly measurable, two different econ MPC schemes are proposed by adding different penalties on the state and input to ensure closed-loop stability and convergence. Secondly, to address the issue of state variable offsets from the steady-state target induced by the above schemes, we further propose a multi-objective economic MPC (m-econ MPC) strategy. An auxiliary MPC controller and a stabilizing constraint are incorporated into the econ MPC. The stability of the econ MPC is then achieved by preserving the inherent stability of the auxiliary MPC controller. Thirdly, to remove the assumption that all state variables are measurable, a moving horizon estimator (MHE) is employed to estimate the unmeasurable states. We then propose a practical framework integrating the m-econ MPC and MHE. Finally, we develop a tractable approximation for stochastic MPC (SMPC) to handle uncertainties associated with state variables. It can largely reduce the conservativeness or numerical instability incurred by robust or chance constraints in traditional SMPC. The effectiveness of the proposed algorithms is validated by simulation examples of a nonlinear MP process consisting of a primary and a secondary HC refiner. It is shown that the proposed m-econ MPC schemes can significantly reduce the energy consumption (approximately 10%-27%) and guarantee closed-loop stability and convergence. Therefore, the proposed methodology shows great promise for practical implementation of m-econ MPC to reduce the operating costs of MP processes.
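The m-econ MPC objective structure, an economic cost traded off against a stabilizing penalty toward a steady-state target, can be sketched on a toy linear model as below. The thesis treats a nonlinear two-refiner process, so this linear cvxpy example only shows how the two objectives are combined.

```python
# Stylized sketch, assuming a toy linear model: an economic term (total
# input/energy use) plus a stabilizing tracking penalty toward a
# steady-state target, echoing the m-econ MPC objective structure.
import cvxpy as cp
import numpy as np

A, B = np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.0], [0.5]])
x_ss, u_ss = np.array([1.0, 0.5]), np.array([0.8])   # steady-state target
N, x0 = 20, np.array([0.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
econ = cp.sum(cp.abs(u))                       # economic cost: total (energy) input
stab = sum(cp.sum_squares(x[:, k] - x_ss) + cp.sum_squares(u[:, k] - u_ss)
           for k in range(N))                  # stabilizing tracking penalty
cons = [x[:, 0] == x0]
cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
cons += [u >= 0, u <= 2]

cp.Problem(cp.Minimize(econ + 0.5 * stab), cons).solve()
print("first move:", u.value[:, 0])
```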


Adaptive model-predictive control and its applications in paper-making processes (2018)

Model-based controllers such as model predictive control (MPC) have become the dominant control strategies for various industrial applications, including sheet and film processes such as the machine-directional (MD) and cross-directional (CD) processes of paper machines. However, many industrial processes may have varying dynamics over time, and consequently model-based controllers may experience significant performance loss under such circumstances due to the presence of model-plant mismatch (MPM). We propose an adaptive control scheme for sheet and film processes, consisting of performance assessment, MPM detection, optimal input design, closed-loop identification and controller adaptive tuning. In this work, four problems are addressed for the above adaptive control strategy. First, we extend conventional performance assessment techniques based on minimum-variance control (MVC) to the CD process, accounting for both spatial and temporal performance limitations. A computationally efficient algorithm is provided for large-scale CD processes. Second, we propose a novel closed-loop identification algorithm for the MD process and then extend it to the CD process. This identification algorithm can give consistent parameter estimates asymptotically even when the true noise model structure is not known. Third, we propose a novel MPM detection method for MD processes and then further extend it to the CD process. This approach is based on routine closed-loop identification with moving windows and process model classification. A one-class support vector machine (SVM) is used to characterize normal process models from training data and detect MPM by predicting the classification of models from test data. Fourth, an optimal closed-loop input design is proposed for the CD process based on noncausal modeling to address the complexity from high-dimensional inputs and outputs. Causal-equivalent models can be obtained for the CD noncausal models, and thus closed-loop optimal input design can be performed based on the causal-equivalent models. The effectiveness of the proposed algorithms is verified by industrial examples from paper machines. It is shown that the developed adaptive controllers can automatically tune controller parameters to account for process dynamic changes, without intervention from users or recommissioning of the process. Therefore, the proposed methodology can greatly reduce the cost of controller maintenance in the process industry.
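The MPM detection step can be pictured as follows: parameter vectors identified on moving windows of closed-loop data are scored by a one-class SVM trained only on "normal" models. The sketch below uses synthetic (gain, pole) parameter vectors as placeholders.

```python
# Sketch, assuming synthetic identified-model parameters: a one-class SVM
# trained on "normal" models flags drifted dynamics as model-plant mismatch.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
normal_models = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(200, 2))  # (gain, pole)
detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_models)

test_models = np.array([[1.02, 0.51],    # close to the training distribution
                        [1.40, 0.70]])   # drifted dynamics -> mismatch expected
print(detector.predict(test_models))     # +1 = normal model, -1 = mismatch flagged
```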


Bioenergy supply chain optimization - decision making under uncertainty (2018)

In an age of dwindling fossil fuels, increased air pollution, and toxic groundwater, it is time we embrace renewable energy sources and commit to global green initiatives. In principle, biomass could be used to manufacture all the fuels and chemicals currently being manufactured from fossil fuels. Unlike fossil fuels, which take millions of years to reach a usable form, biomass is an energy source that can close the loop on many of our recycling and hazardous waste problems. The goal of this research is to develop flexible and easy-to-use mathematical frameworks suitable for the design and planning of biomass supply chains. This thesis deals with the development of discrete-continuous decision support methodology and algorithms for solving complex optimization problems frequently encountered in the procurement of biomass for bioenergy production. Uncertainty and randomness are pervasive, although often ignored, throughout the biomass supply chain. Uncertainty in the biomass supply chain may be classified as upstream (supply) uncertainty, internal (process) uncertainty, and downstream (demand) uncertainty. This thesis endeavors to incorporate uncertainty in the modeling of biomass supply chains. For this purpose, stochastic modeling and scenario analysis methodologies are utilized. The main contributions of this thesis are: (i) the development of a novel stochastic optimization methodology, called quantile-based scenario analysis (QSA); and (ii) the development of optimization algorithms, namely constrained cluster analysis (CCA) and the min-min min-max optimization algorithm (MMROA), for the collection of bales across multiple adjoining fields. These methodologies are applied to three distinct biomass procurement case studies. Results show that QSA achieves more favorable solutions than those obtained using existing stochastic or deterministic approaches. In addition, QSA is found to be computationally more efficient. In a case study involving the collection of forest harvest residues for several competing power plants, QSA achieved an average cost reduction of 11%. In a case study involving the collection of sawmill residues, QSA obtained a 6% gain in performance by accounting for uncertainty in the model parameters. In a case study involving the collection of bales, an 8.7% reduction in the total travel distance was obtained by the MMROA.


Assessment of type II diabetes mellitus (2017)

Several methods have been proposed to evaluate a person's insulin sensitivity index (ISI). However, none are easy or inexpensive to implement. Therefore, the purpose of this research is to develop a new ISI that can be easily and accurately obtained by patients themselves without costly, time-consuming and inconvenient testing methods. In this thesis, the proposed testing method has been simulated on a computerized model of type II diabetic patients to estimate the ISI. The proposed new ISI correlates well with the ISI called the M-value obtained from the gold-standard but elaborate euglycemic hyperinsulinemic clamp (r = 0.927, p = 0.0045). In this research, the insulin-glucose dynamics in type II diabetes mellitus are modeled using a stochastic nonlinear state-space model. If only a few blood glucose and insulin measurements per day are available in a non-clinical setting, estimating the parameters of such a model is difficult. Therefore, when the glucose and insulin concentrations are only available at irregular intervals, developing a predictive model of the blood glucose of a person with type II diabetes mellitus is important. To overcome these difficulties, under various levels of randomly missing clinical data, we resort to online Sequential Monte Carlo estimation of the states and parameters of the state-space model for type II diabetic patients. This method is efficient in monitoring and estimating the dynamics of the peripheral glucose, insulin and incretin concentrations when 10%, 25% and 50% of the simulated clinical data were randomly removed. Variabilities such as insulin sensitivity, carbohydrate intake, exercise, and more make controlling the blood glucose level a complex problem. In patients with advanced TIIDM, the control of blood glucose level may fail even under insulin pump therapy. Therefore, building a reliable model-based fault detection (FD) system to detect failures in controlling the blood glucose level is critical. In this thesis, we propose utilizing a validated robust model-based FD technique for detecting faults in the insulin infusion system and detecting patients' organ dysfunction. Our results show that the proposed technique is capable of detecting disconnection in insulin infusion systems and detecting peripheral and hepatic insulin resistance.


Blood Glucose Regulation in Type II Diabetic Patients (2016)

Type II diabetes is the most pervasive diabetic disorder, characterized by insulin resistance, β-cell failure in secreting insulin and impaired regulatory effects of the liver on glucose concentration. Although the disease can be controlled by lifestyle management in its initial stages, most patients eventually require oral diabetic drugs and insulin therapy. The target for blood glucose regulation is a range rather than a single value, and even within this range it is more desirable to keep the blood glucose close to the lower bound. Due to ethical issues and physiological restrictions, the number of experiments that can be performed on a real subject is limited. Mathematical modeling of glucose metabolism in the diabetic patient is a safe alternative to provide sufficient and reliable information on the medical status of the patient. In this thesis, a dynamic model of type II diabetes has been expanded by incorporating pharmacokinetic-pharmacodynamic models of different types of insulin and oral drugs to study the impact of several treatment regimens. The most efficient treatment has then been selected amongst all possible multiple daily injection regimens according to the patient's individualized response. A feedback control strategy is also applied to determine the proper insulin dosage continuously infused through an insulin pump to regulate the blood glucose level. The logarithm of blood glucose concentration has been used as the controlled variable to reduce the nonlinearity of the glucose-insulin interactions. Also, the proportional-integral controller has been modified by scheduling gains calculated by a fuzzy inference system. A model predictive control strategy has been proposed in this research for when sufficient measurements of the blood glucose are available. Multiple linear models have been considered to address the nonlinearity of glucose homeostasis. In addition, the optimization objective function has been adjusted to better fulfill the objectives of blood glucose regulation by considering an asymmetric cost function and soft constraints. The optimization problem has been solved by the application of a multi-parametric quadratic programming approach, which reduces the on-line optimization problem to off-line function evaluation.


Fault Isolation and Alarm Design in Non-linear Stochastic Systems (2015)

In this project, we first propose a novel model-based algorithm for fault detection and isolation (FDI) in stochastic non-linear systems. The algorithm is established based on parameter estimation, monitoring any changes in the behaviour of the process and identifying the faulty model using a bank of particle filters running in parallel with the process model. The particle filters are used to generate a sequence of hidden states, which are then used in a log-likelihood ratio to detect and isolate the faults. The newly developed scheme is demonstrated through implementation in two highly non-linear case studies. Finally, the effectiveness and robustness of the proposed diagnostic algorithm are illustrated by comparing the results obtained by applying the algorithm to the multi-unit chemical reactor system with other FDI techniques based on EKF and UKF state estimators. Second, we propose an approach based on the particle filter algorithm to isolate actuator and sensor faults in stochastic non-linear and non-Gaussian systems. The proposed FDI approach is based on a state estimation approach using a general observer scheme (GOS), whereby a bank of particle filters is used to generate a set of residuals, each sensitive to all but one fault. The faults are then isolated by monitoring the behaviour of the residuals, where the residuals of the faulty sensors or actuators behave differently than the faultless residuals. The approach is demonstrated through implementing two highly non-linear case studies. Non-linear stochastic systems pose two important challenges for designing alarms: (1) measurements are not necessarily Gaussian distributed and (2) measurements are correlated, in particular for closed-loop systems. We therefore present an algorithm for designing alarms based on delay timers and deadband techniques for such systems, with unknown and known models. In the case of unknown models, our approach is based on Monte Carlo simulations. In the case of known models, it makes use of a probability density function approximation algorithm called particle filtering. The alarm design algorithm is illustrated through two simulation examples. We show that the proposed alarm design is effective in detecting the fault, even though the measurements are non-Gaussian.
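A minimal version of the first scheme, a bank of particle filters scored by accumulated log-likelihood, is sketched below on a toy scalar nonlinear system where the "fault" is a gain change. The system, noise levels, and fault are placeholder assumptions.

```python
# Toy sketch, assuming a scalar nonlinear system: a bank of particle filters,
# one per model hypothesis, with a running log-likelihood identifying which
# model (nominal or faulty) best explains the measurements.
import numpy as np

rng = np.random.default_rng(3)

def step(x, gain):
    """One step of a toy nonlinear state transition with process noise."""
    return gain * np.sin(x) + 0.1 * rng.normal(size=np.shape(x))

def logw_fn(y, x, r=0.05):
    """Gaussian measurement log-likelihood (up to a constant)."""
    return -0.5 * (y - x) ** 2 / r**2

# simulate measurements from the "faulty" model (gain = 1.5)
x_true, ys = 0.5, []
for _ in range(100):
    x_true = step(x_true, gain=1.5)
    ys.append(float(x_true) + 0.05 * rng.normal())

Np = 500
scores = {}
for name, gain in {"nominal": 1.0, "fault": 1.5}.items():
    parts, total = rng.normal(0.5, 0.1, Np), 0.0
    for y in ys:
        parts = step(parts, gain)
        logw = logw_fn(y, parts)
        m = logw.max()
        total += m + np.log(np.mean(np.exp(logw - m)))  # running log-evidence
        w = np.exp(logw - m); w /= w.sum()
        parts = parts[rng.choice(Np, Np, p=w)]          # resample particles
    scores[name] = total
print(scores)  # the model matching the data accumulates the higher log-likelihood
```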


Dynamic modeling of glucose metabolism for the assessment of type II diabetes mellitus (2013)

Diabetes mellitus is one of the deadliest diseases, affecting millions of people worldwide. Due to ethical issues, physiological restrictions and the high expense of human experimentation, mathematical modeling is a popular alternative approach for obtaining reliable information on a disease in a safe and cost-effective way. In this thesis, I have developed and expanded a compartmental model of blood glucose regulation for type II diabetes mellitus based on a former detailed physiological model for healthy human subjects. The original model considers the interactions of glucose, insulin and glucagon in regulating the blood sugar. I have expanded the model by eliminating the main drawback of the original model, which restricted the route of glucose entry into the body to intravenous glucose injection. I have added a model of glucose absorption in the gastrointestinal tract and incorporated the stimulatory hormonal effects of incretins on pancreatic insulin secretion following oral glucose intake. The parameters of the expanded model are estimated and the results of the model are validated using available clinical data sets taken from diabetic and healthy subjects. The estimation of model parameters is accomplished by solving nonlinear optimization problems. To obtain more information about the medical status of the subjects, I have designed some in silico tests based on the existing clinical tests, applied them to the model, and analyzed the model results. To accommodate model uncertainties and measurement noise, noise effects are included in the states and outputs of the model, and a filtering method, the particle filter, is employed to estimate the hidden states of the model. The estimated model states are used to calculate the glucose metabolic rates, which in turn provide more information about the medical condition of the patients. Another contribution of the type II diabetes model is the development of a pharmacokinetic-pharmacodynamic model to study the pharmaceutical impact of different medications on diabetes treatment. A preliminary study of metformin treatment in diabetic patients is performed using the developed type II diabetes model.


Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

Meta-reinforcement learning approaches to process control (2022)

Meta-learning is a branch of machine learning which trains neural network models to synthesize a wide variety of data in order to rapidly solve new problems. Many industrial processes have similar and well-understood dynamics, which suggests it is feasible to create a generalizable controller through meta-learning. In this work, two meta-reinforcement learning-based control strategies are formulated. First, a deep reinforcement learning-based controller which uses accumulated process data to adapt to different systems or control objectives is introduced. Next, a meta-reinforcement learning-based controller tuning strategy is introduced. This tuning strategy takes advantage of known, offline information for training, such as system gains or time constants, yet efficiently tunes fixed-structure controllers for novel systems in a completely model-free fashion. The meta-RL tuning strategy has a recurrent structure that accumulates "context" for its current process dynamics through a hidden state variable. This end-to-end architecture enables the agent to automatically adapt to changes in the process dynamics. Moreover, the same agent can be deployed on systems with previously unseen nonlinearities and timescales. In tests reported here, the meta-RL tuning strategy was trained entirely offline, yet produced good control results in novel settings.
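The recurrent "context" mechanism can be sketched as a small GRU policy whose hidden state summarizes recent closed-loop behaviour, so that the same network can propose tuning actions for different processes. This is a structural illustration only; the observation contents, sizes, and output parameterization are assumptions, and no training loop is shown.

```python
# Structural sketch, assuming placeholder dimensions: a GRU policy whose
# hidden state accumulates "context" about the current process dynamics.
import torch

class MetaTuner(torch.nn.Module):
    def __init__(self, obs_dim=3, hidden=32, n_gains=2):
        super().__init__()
        self.gru = torch.nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, n_gains)   # outputs controller gains

    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)       # h carries the accumulated context
        return self.head(out[:, -1]), h

tuner = MetaTuner()
obs = torch.randn(1, 50, 3)                 # (setpoint error, output, input) history
gains, h = tuner(obs)
print(gains)                                # proposed (kp, ki) for this process
```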


Modeling and simulation of a photovoltaic assisted single-slope solar still (2021)

Water is crucial to human, industrial and agricultural needs. Seawater desalination plays a primary role in meeting the demand for fresh water in industrial applications. The energy for desalination can be obtained from fossil fuels or from renewable sources such as solar, wind and geothermal energy. Solar energy can be utilized in water production by evaporating saline water to produce fresh water. Solar still desalination is one of the emerging processes that employ a renewable source of energy. This technology has multiple advantages, including simplicity, ease of maintenance, low cost and low environmental impact. The solar still is a well-known technology for water desalination, removal of impurities and contaminants, and production of high-quality water. Applications of solar desalination systems should be analyzed in terms of energy, exergy, thermodynamic properties and cost, and the design analysis should consider the technique and type of the desalination system. It is therefore useful to build a flexible, visual computer program for designing and reliably analyzing a wide range of solar desalination processes with different structures. This work concerns the modeling and simulation of an integrated photovoltaic heating element combined with a single-slope solar still for performance improvement, process optimization and efficiency enhancement. The main objective of this study is to develop software in SIMULINK to design and simulate single-slope solar still desalination systems with a photovoltaic cell. The study results reveal that the solar desalination technique without a photovoltaic cell has lower efficiency and performance, while assisting it with a photovoltaic cell (heating coil) improves the efficiency by 45% and enhances the performance of the entire system.


Deep reinforcement learning approaches for process control (2018)

Conventional and optimization-based controllers have been used in the process industries for more than two decades. The application of such controllers on complex systems can be computationally demanding and may require estimation of hidden states. They also require constant tuning, development of a mathematical model (first-principles or empirical), and design of a control law, all of which are tedious. Moreover, they are not adaptive in nature. On the other hand, in recent years there has been significant progress in the fields of computer vision and natural language processing following the success of deep learning. Human-level control has been attained in games and physical tasks by combining deep learning with reinforcement learning. Deep RL agents have also mastered the complex game of Go, which has more states than there are atoms in the universe. Self-driving cars, machine translation, speech recognition and other applications have started to take advantage of these powerful models. In each case, the approach involved formulating the task as a learning problem. Inspired by these applications, in this work we pose the process control problem as a learning problem and build controllers that address the limitations of current controllers.


Reconstruction of process topology using historical data and process models (2017)

Modern process industries are large and complex, and their units are highly interconnected. If there is an abnormal situation in the process, faults might propagate from one part of the process to another. To keep the process safe, it is vital to know the causality and connectivity relationships of the process. Alarms monitor all process variables and let operators know if there is a fault in the process. During a process malfunction, alarms start from a single process variable and quickly propagate to other variables. This leads to alarm flooding, the continuous appearance of alarms in the monitoring panels. During alarm flooding, it is difficult for operators to find the root cause and solve the problem on time. Causality analysis between different variables is one of the methods to avoid alarm flooding. The method helps to provide a process topology based on process models and data. A process topology is a map that shows how all units and parts of the process are connected; it helps to find root causes of faults and to predict future abnormalities. There are many techniques for causality detection. Transfer entropy is a popular method of causality detection that is used for both linear and nonlinear systems. The method estimates the variables' entropy using their probabilities. This thesis focuses on transfer entropy based on historical data from the Tennessee-Eastman process, a widely used benchmark in process control studies. The thesis aims to detect the causality and connectivity map of the continuous process measurements. Particle filters, or Sequential Monte Carlo methods, are also considered to approximate density functions of the filtering problem by propagating particles.
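A minimal histogram-based transfer entropy estimate is sketched below on a synthetic pair of series where x drives y; the bin count, one-sample lag, and toy data are illustrative choices, not the thesis's Tennessee-Eastman setup.

```python
# Minimal sketch, assuming synthetic data: histogram estimate of transfer
# entropy T(x->y) = sum p(y+,y,x) * log[ p(y+|y,x) / p(y+|y) ].
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of T(x->y) with a one-sample lag."""
    yp, yc, xc = y[1:], y[:-1], x[:-1]
    p, _ = np.histogramdd((yp, yc, xc), bins=bins)
    p /= p.sum()
    p_yy = p.sum(axis=2, keepdims=True)          # p(y+, y)
    p_yx = p.sum(axis=0, keepdims=True)          # p(y, x)
    p_y = p.sum(axis=(0, 2)).reshape(1, -1, 1)   # p(y)
    mask = p > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        logterm = np.log(p * p_y / (p_yy * p_yx))
    return float((p[mask] * logterm[mask]).sum())

rng = np.random.default_rng(4)
x = rng.normal(size=5000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print("T(x->y):", round(transfer_entropy(x, y), 3))   # clearly positive: x drives y
print("T(y->x):", round(transfer_entropy(y, x), 3))   # near zero: no feedback path
```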


Sheet profile estimation and machine direction adaptive control (2017)

Sheet and film process control is often structured such that separate controllers and actuators are dedicated to either the temporal (i.e., machine direction) variations or the spatial (i.e., cross direction) variations. The dedicated machine direction (MD) and cross direction (CD) controllers require separate measurements of the MD and CD sheet property profiles, respectively. The current industrial standard involves a traversing sensor that acquires a signal containing both MD and CD property variations. The challenge then becomes how to extract separate MD and CD profiles from the mixed signal. Numerous techniques have been proposed, but ultimately the traditional exponential filtering method continues to be the industrial standard. A more recent technique, compressive sensing, appears promising, but previous developments do not address the industrial constraints. In the first part of this thesis the compressive sensing technique is developed further, specifically with regard to feasibility of implementation. A comparative analysis is performed to determine the benefits and drawbacks of the proposed method. Model-based control has gained widespread acceptance in a variety of industrial processes. To ensure adequate performance, these model-based controllers require a model that accurately represents the true process. However, the true process changes over time as a result of the various operating conditions and physical characteristics of the process. In part two of this thesis an integrated adaptive control strategy is introduced for the multi-input multi-output MD process of a paper machine. This integrated framework consists of process monitoring, input design and system identification techniques developed in collaboration with multiple colleagues. The goal of this work is to unify these efforts and exhibit the integrated functionality on an industrial paper machine simulator.
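The industrial-standard baseline mentioned above can be sketched in a few lines: an exponential filter tracks the MD trend of the scanned signal, and the residual averaged across scans serves as the CD profile estimate. The signal construction and smoothing constant below are illustrative assumptions.

```python
# Baseline sketch, assuming a synthetic scanner signal: exponential filtering
# separates the MD trend from the mixed signal; the residual gives the CD profile.
import numpy as np

rng = np.random.default_rng(5)
n_scans, n_boxes = 50, 100
md = 5.0 + 0.3 * np.sin(np.linspace(0, 6, n_scans))       # slow MD variation
cd = 0.2 * np.sin(np.linspace(0, 12 * np.pi, n_boxes))    # fixed CD profile
z = md[:, None] + cd[None, :] + 0.05 * rng.normal(size=(n_scans, n_boxes))

alpha, md_hat = 0.3, np.zeros(n_scans)
md_hat[0] = z[0].mean()
for k in range(1, n_scans):
    md_hat[k] = (1 - alpha) * md_hat[k - 1] + alpha * z[k].mean()  # MD estimate
cd_hat = (z - md_hat[:, None]).mean(axis=0)                        # CD residual
print("CD profile RMSE:", np.sqrt(np.mean((cd_hat - cd) ** 2)).round(3))
```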


Developing Mixture Rules for Non-Conservative Properties for Pulp Suspensions (2016)

New technologies emerge constantly, and the pulp and paper industry has faced many changes in recent years, one of which is diversifying the fiber basket to produce a wide range of products. To help papermakers accommodate this transition from a single pulp component to a multi-component furnish, this thesis first develops a sound and effective methodology to characterize mixture rules that predict properties such as tensile strength and pulp freeness. Using an expansion of a higher-order Taylor series as the backbone of model development, and removing model parameters based on the limitations of the separately refined system and statistical analysis, the tensile strength and pulp freeness models give predictions close to the observed measurements within 10% variance. Furthermore, two methods are established to determine the operating conditions required to satisfy multiple target properties: a minimization approach using least squares, and a one-variable approach for when more emphasis on one particular mixture parameter than the other is preferred. Lastly, a graphical user interface built on the defined mixture models is constructed to recommend optimized conditions for generating a mixture that achieves both target properties at minimum cost.
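The flavour of the mixture-rule fit can be sketched as a truncated Taylor-series blending model estimated by least squares, as below for a two-component mixture; the basis terms and synthetic data are placeholders, not the thesis's final reduced models.

```python
# Illustrative sketch, assuming synthetic blend data: a truncated
# Taylor-series mixture model for one pulp property, fit by least squares.
import numpy as np

rng = np.random.default_rng(6)
f = rng.uniform(0, 1, 40)                    # mass fraction of pulp A
y1, y2 = 30.0, 55.0                          # pure-component property values
y = f * y1 + (1 - f) * y2 + 6.0 * f * (1 - f) + 0.5 * rng.normal(size=40)

# basis: linear blend terms plus a second-order interaction term
Phi = np.column_stack([f, 1 - f, f * (1 - f)])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fitted [y1, y2, interaction]:", coef.round(2))
```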


Algorithm for nonlinear process monitoring and controller performance recovery with an application to semi-autogenous grinding (2013)

Chemical and mineral processing industries commonly commission linear feedback controllers to control unit processes over a narrow, linear operating region where the economy of the process is maximized. However, most of these processes are nonlinear outside of this narrow operating region. In the event of a large unmeasured disturbance, a process can shift away from nominal and into an abnormal operating region. Owing to the nonlinearity of these processes, a linear controller tuned for the nominal operating region will perform poorly and possibly lead to an unstable closed-loop system in an abnormal operating region. Moreover, it is often difficult to determine whether a process has shifted to an abnormal operating region if none of the constraints on the measured process outputs are violated. In these events, it is the operator who must detect and recover the process, and this manual response results in a sub-optimal recovery. This thesis develops and demonstrates a control strategy that monitors several process variables simultaneously and provides an estimate of the process shift to a nonlinear abnormal operating region, where a linear recovery controller is implemented to recover the process back to nominal. To monitor the process, principal component analysis is proposed for process shifts that can be detected by linear variable transformations. Alternatively, for nonlinear or high-dimensional processes, locally linear embedding is proposed. Once a process shift to an abnormal operating region is detected, the control strategy uses the estimate of the process shift in a recovery controller to recover the process. In the event the linear recovery controller is unable to recover the process, an expert system overrides the recovery controller to return the process to a recoverable region. A case study on a semi-autogenous grinding mill at a processing facility in British Columbia presents the successful application of the control strategy to detect and recover the mill from overloading. Portions of this control strategy have been implemented at this facility, and it provides the operators with a real-time estimate of the magnitude of the mill overload.
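The monitoring step based on principal component analysis can be sketched as follows: PCA is fit on nominal operating data, and a Hotelling T² statistic flags samples that drift out of the nominal region. The data and the empirical control limit below are illustrative, not the SAG-mill application itself.

```python
# Monitoring sketch, assuming synthetic nominal data: PCA plus a Hotelling T^2
# statistic to flag shifts toward an abnormal operating region.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
nominal = rng.normal(size=(500, 6))                       # nominal operating data
pca = PCA(n_components=2).fit(nominal)

def t2(sample):
    """Hotelling T^2 of one sample in the retained principal subspace."""
    score = pca.transform(sample.reshape(1, -1))[0]
    return float(np.sum(score ** 2 / pca.explained_variance_))

limit = np.quantile([t2(s) for s in nominal], 0.99)       # empirical 99% limit
shifted = nominal[0] + np.array([4, 0, 0, 0, 0, 0])       # simulated overload shift
print(t2(nominal[0]) < limit, t2(shifted) > limit)        # no false alarm; detected
```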


Identification of essential metabolites in metabolite networks (2013)

Metabolite essentiality is an important topic in systems biology, and as such there has been increased focus on its prediction in metabolic networks. Specifically, two related questions have become the focus of this field: how do we decrease the amount of gene knock-out workload, and is it possible to predict essential metabolites in different growth conditions? Two different approaches to these questions, an interaction-based method and a constraints-based method, are pursued in this study to gain an in-depth understanding of metabolite essentiality in complex metabolic networks. In the interaction-based approach, the correlations between metabolite essentiality and the metabolite network topology are studied. With the aim of predicting essential metabolites, the topological properties of the metabolite network are studied for the Mycobacterium tuberculosis model. It is found that there is strong correlation between metabolite essentiality and both the degree and the number of shortest paths through the metabolite. Welch's two-sample t-test is performed to help identify the statistical significance of the differences between groups of essential and non-essential metabolites. In the constraints-based approach, essential metabolites are identified in silico. Flux balance analysis (FBA) is implemented with the most advanced in silico model of Chlamydomonas reinhardtii, which contains light usage information in three different growth environments: autotrophic, mixotrophic, and heterotrophic. Essential metabolites are predicted by metabolite knock-out analysis, in which the flux of a certain metabolite is set to zero, and categorized into three types through flux-sum analysis. The basal flux-sum for metabolites is found to follow an exponential distribution, and essential metabolites tend to have larger basal flux-sums.
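Flux balance analysis itself reduces to a linear program: maximize a biomass flux subject to the steady-state balance S v = 0 and flux bounds. The sketch below solves a three-reaction placeholder network, not the C. reinhardtii model; metabolite knock-out then amounts to forcing the fluxes through a metabolite to zero and re-solving.

```python
# Toy sketch, assuming a three-reaction placeholder network: flux balance
# analysis as a linear program, maximizing a "biomass" flux under S v = 0.
import numpy as np
from scipy.optimize import linprog

# reactions: v0 uptake -> A, v1 A -> B, v2 B -> biomass
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, 10), (0, 10)]
c = np.array([0, 0, -1.0])     # maximize v2 (linprog minimizes, hence the sign)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", res.x[2])   # knock out a metabolite by forcing
                                           # its fluxes to zero and re-solving
```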


Energy optimization and controller performance assessment in a pulp mill cogeneration facility (2010)

Over the past few decades, the production and sale of "green" electricity from cogeneration has become a critical component of economic and environmental sustainability for the pulp and paper industry. As with almost every complex industrial process, the true value of a cogeneration facility is highly dependent on how efficiently and effectively it is utilized. This thesis develops and demonstrates two optimization-based process management tools that maximize the economic outputs from cogeneration: a high-level unit economic performance assessment method, and an energy management strategy for optimal real-time cogeneration facility management. The economic performance assessment tool simultaneously optimizes the steady-state operating setpoints and process variability loads according to an economic objective function. Setpoints are optimized based on a back-off approach to constraint handling, and variability loads are optimized based on the comparison of current control with LQG control strategies. The result is a realistic quantification of potential process performance. Additionally, the convex form of the optimization problem results in quick solution times. Results are presented in the form of two case studies. The energy management system maximizes cogeneration profitability in real time by effectively coordinating key process parameters and various external influences according to an economic objective function. Potential process configurations are constrained using a cogeneration plant model. The optimization procedure is carried out using a flexible forecast horizon that predicts such time-dependent influences as electricity sale prices, limited fuel costs and supplies, and special cases of dynamic operational safety constraints. By constructing such a complete optimization problem based on the complex operation of a cogeneration facility, a sustainable and economically optimal plant management strategy is achieved. Additionally, the convex form of the optimization problem results in quick solution times, which is critical to effective online implementation. Results are presented in the form of three case studies.

