The Effect of Driver Variables on the Estimation of Bivariate Probability Density of Peak Loads for Robust Design of Multi-Energy Systems

It stands to reason that developing more accurate forecasting methods is a pillar of building robust multi-energy systems (MES). In this context, long-term forecasting is also indispensable for a robust expansion planning program for modern power systems. While very short-term and short-term forecasts are usually represented as point estimates, this approach is highly unreliable in medium-term and long-term forecasting due to the inherent uncertainty of predictors, such as weather variables, over long horizons. Accordingly, long-term forecasts are usually represented as probabilistic values based on probability functions. In this paper, a self-organizing mixture network (SOMN) is developed to estimate the probability density function (PDF) of peak load over long-term horizons, considering the most important drivers: seasonal similarity, population, gross domestic product (GDP), and electricity price. The proposed methodology is applied to forecast the PDF of annual and seasonal peak load in Queensland, Australia.


Introduction
A new paradigm in the energy sector is the MES, which captures the interactions among various energy carriers, e.g., electricity, heating, and cooling, to improve the performance of the system [1,2]. To design robust multi-energy systems, forecasting is of paramount importance; therefore, it is important to develop novel and accurate forecasting methods for multi-energy systems in order to schedule the operation of the integrated energy system efficiently and economically [3].
Load forecasting, as a dominant field of study in designing multi-energy systems, draws a lot of interest [3][4][5]. Conventional load forecasting approaches are mainly concerned with only one type of load, such as power loads, cooling loads, or heating loads. However, multi-energy load forecasting, as an ensemble forecasting approach, considers the aggregated load, which has the intrinsic characteristics of each single load type as well as the relevance among the series [5].
Long-term load forecasting is an indispensable tool for effective planning of power systems. In long-term forecasting, inaccurate forecasts result in excessive investment and underutilized generating facilities, or in insufficient generation and unfulfilled demand [6,7]. Nevertheless, few researchers have proposed new methods for long-term load forecasting in comparison with short-term forecasting [7].
The load forecasting literature has mainly focused on point forecasting, in which the expected value of the future load is forecasted through different techniques. These techniques can be categorized as 1) statistical techniques, such as regression models and time series models; 2) artificial intelligence techniques, such as neural networks and support vector machines; or 3) hybrid methods, which combine statistical and artificial intelligence techniques. Point forecasting is mainly applied for very short-term and short-term forecasting; however, in medium-term and long-term forecasting, point forecasts are not reliable since the inputs of the forecasting models, which are mainly weather data, suffer from high uncertainty over long horizons. Instead, probabilistic forecasting is applied for long-term forecasting, where the possibility of a given demand level is presented as a probabilistic value.
In spite of the importance of medium-term and long-term forecasting in the operation and planning of power systems, most studies have focused on point forecasting in the short-term horizon, and only a few studies have been conducted on probabilistic forecasting. Moreover, among these few studies on probabilistic load forecasting, most have focused on short-term forecasting. In [8], a review of probabilistic load forecasting is presented. Table 1 provides an overview of studies carried out in the forecasting literature, taking into account inherent uncertainties in different contexts.

Table 1. Taxonomy of recent studies in probabilistic forecasting

Ref. | Forecasting method | Context
[9,10] | Probabilistic time series | Medium-term load forecasting
[11] | Regression | Aesthetic quality assessments
[12] | Fuzzy regression | Short-term load forecasting
[13] | Quantile regression | Probabilistic forecasting of solar power generation
[14] | Probabilistic SVR | General applications
[15] | Artificial intelligence methods | Very short-term load forecasting
[16] | Ensemble BMA package in R | Weather forecasting
[17] | PDF estimation using machine learning techniques | Wind power ramp forecasting
[18] | Group method of data handling (GMDH) | Day-ahead electricity peak load interval forecasting

Fuzzy intervals are defined based on the covariance of data at different operating points, which are characterized by linear regression models. In this context, a fuzzy regression method is presented in [11] to predict the aesthetic quality of a new product or service considering all uncertain objective drivers. In this method, genetic programming is used to develop the nonlinear structures of the models, while the model coefficients are determined by optimizing fuzzy criteria. In the short-term and medium-term load forecasting context, a fuzzy interaction regression is applied in [12] to forecast electric load in the short-term horizon with the help of fuzzy intervals. Moreover, a prediction interval construction model based on linear programming is presented in [13] to quantify the variability and uncertainty of the output of photovoltaic generating units for very short-term forecasting purposes (i.e., 5-min); this model is based on an extreme learning machine and quantile regression. Apart from the considerations concerning the methods and applications of probabilistic forecasting provided in Table 1, a fuzzy interval model, which is suitable for forecasting electric demand and the output power of weather-dependent renewable energy sources with limited dispatchability, is also presented in [19].
The authors of [20] present a practical methodology for probabilistic load forecasting based on a set of predictions, called sister point forecasts, generated from the same family of models. This approach performs the quantile regression on the average of sister point forecasts and generates prediction intervals of future electric loads. Reference [21] also presents a data-driven framework for probabilistic peak demand estimation using smart meter data of the consumers. This approach proposes four main steps including load modeling, customer grouping, maximum diversified demand estimation, and peak load estimation, to finally address both challenges of the unknown data of future loads and the influence of demand diversity among different customers. References [9,10], among others, introduce a comprehensive class of time series models to precisely forecast the electric demand of industrial corporations. A simple procedure is also proposed to classify load profiles and present a probabilistic medium-term load forecasting tool for special types of industrial loads.
Among recent research works, the authors of [18] have presented a day-ahead electricity peak load interval forecasting method that converts an interval forecasting problem into a classification problem. The authors applied a semi-supervised feature selection algorithm based on the group method of data handling (GMDH) to address the electricity load classification problem. From a computational point of view, [15] proposed a hybrid method for probabilistic load forecasting, including a generalized learning machine to train an improved wavelet neural network, together with wavelet preprocessing and bootstrapping. This hybrid method provides load forecasts with high reliability, accuracy, and speed, making it attractive for practical applications in the electricity market. However, this method has not been used for long-term forecasting purposes.
In a long-term context, [7] has presented a practical methodology for density forecasting of the long-term peak electricity demand instead of common point-forecast approaches. The solution proposed by this methodology can hedge the financial risk incurred by uncertain demand. In the first stage, the authors used semi-parametric additive models to estimate the relationships between demand and the most influential driver variables, such as temperature, calendar effects, and some economic variables. Then, they forecasted the probability distribution of annual and weekly peak electricity demand up to ten years ahead using a mixture of temperature simulation, future economic scenarios, and residual bootstrapping. This methodology captures the complex nonlinear effect of temperature as well as other possible drivers such as calendar effects, price changes, and economic growth.
Another recently proposed method in this area is probabilistic wind power ramp forecasting, presented in [17]. The authors applied an ensemble machine learning technique to generate wind power scenarios and calculate historical forecasting errors. Then, they used a Gaussian mixture model to fit the PDF of the forecasting errors. This method has not been used for demand forecasting purposes, although it can predict with a high level of accuracy.
In this paper, we extend the method proposed in [22] to long-term peak load forecasting considering different driver variables. Specifically, we estimate the PDF of peak load over long-term horizons taking into account the most important drivers, namely the peak load in similar seasons of past years, the peak load in the last season, population, and GDP. We apply an SOMN to estimate the PDF because, as explained in [22], much more accurate estimates with rapid convergence can be obtained through this method. The results show the good capability of the proposed methodology in forecasting the PDF of peak load.
The paper is organized as follows. Section 2 presents the model and its concepts as well as the SOMN for estimating the PDF. The application and high performance of the proposed approach for a real case study are demonstrated in Section 3. Finally, the conclusions are drawn in Section 4.

The Concept of the bivariate distribution
Let the random variable $Y$ denote the randomly selected peak load in a period of time, in MW. Suppose we are interested in determining the probability that $Y$ is between 9000 and 10000 MW, i.e., $P(9000 \le Y \le 10000)$. It is clear that the peak load increases as the population or GDP increases. So, for the purpose of calculating the probability that $Y$ is between 9000 and 10000 MW, it is more informative to first take into account a population or GDP value, say $X$. That is, to calculate such a conditional probability, we need to find the conditional distribution of $Y$ given $X = x$.
Based on three assumptions, we can find the conditional distribution of peak load ($Y$) given population, GDP, or another driver ($x$). The required assumptions are stated below:
- Peak load ($Y$) given $x$ follows a normal distribution (the data should also be normalized).
- $E[Y \mid x]$, the conditional mean of $Y$ given $x$, is a linear function of $x$.
- $\mathrm{Var}[Y \mid x]$, the conditional variance of $Y$ given $x$, is constant.

Multivariate PDF and conditional density
In the case of a univariate normal distribution, the probability density function of a variable $y$ is represented as (1):

$$f(y) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right) \qquad (1)$$

where $y$ is the peak load (a random variable), $\mu$ is the mean, and $\sigma$ is the standard deviation.
In pattern estimation applications, each sample observation is assigned to a pattern component that has a prior probability; such situations are modeled by mixture distributions. The stated assumptions indicate that the conditional distribution of $Y$ given $X = x$ is (2):

$$Y \mid X = x \;\sim\; N\!\left(\mu_Y + \rho\,\frac{\sigma_Y}{\sigma_X}(x-\mu_X),\; \sigma_Y^2(1-\rho^2)\right) \qquad (2)$$

where $\rho$ is the correlation coefficient of $X$ and $Y$.
Based on the three stated assumptions, we find the conditional distribution of $Y$ given $X = x$. In this vein, a fourth assumption is added: $X$ follows a normal distribution for $-\infty < x < \infty$, as in (3):

$$f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}} \exp\left(-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right), \qquad -\infty < x < \infty \qquad (3)$$
Based on the four stated assumptions, the joint probability density function of $X$ and $Y$ is defined as (4):

$$f(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right\} \qquad (4)$$
This joint PDF is called the bivariate normal distribution. In fact, the bivariate distribution represents the joint distribution of two random variables [23]. The two random variables $x$ and $y$ are related to each other in the sense that they are not independent of each other. This dependency is reflected by the correlation $\rho$ between the two variables.
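To make the conditional-probability idea above concrete, the following Python sketch computes the conditional distribution of $Y$ given $X = x$ under a bivariate normal and evaluates the probability that the peak load falls between 9000 and 10000 MW. The parameter values are purely hypothetical illustrations, not estimates from the Queensland data.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_peak_prob(x, mu_x, sigma_x, mu_y, sigma_y, rho,
                          lo=9000.0, hi=10000.0):
    """P(lo <= Y <= hi | X = x) under a bivariate normal (X, Y)."""
    # Conditional mean and standard deviation of Y given X = x
    mu_c = mu_y + rho * (sigma_y / sigma_x) * (x - mu_x)
    sigma_c = sigma_y * math.sqrt(1.0 - rho ** 2)
    return normal_cdf((hi - mu_c) / sigma_c) - normal_cdf((lo - mu_c) / sigma_c)

# Hypothetical parameters: population (millions) vs peak load (MW)
p = conditional_peak_prob(x=4.8, mu_x=4.5, sigma_x=0.4,
                          mu_y=9200.0, sigma_y=600.0, rho=0.9)
```

Conditioning on a driver value shifts the mean and shrinks the variance of the peak-load distribution, which is exactly why the conditional probability is more informative than the marginal one.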

Self-organizing mixture network
According to [22], the self-organizing mixture network (SOMN) is a powerful unsupervised learning method. This network contains two layers of nodes: an input layer and an output layer. A weight vector and a position are associated with each node. The objective of the SOMN is to maximize the similarity of patterns within a cluster and to minimize the similarity of patterns belonging to different clusters. In addition, the SOMN transforms high-dimensional input patterns into the responses of a two-dimensional array of neurons, thereby facilitating the detection of the innate structure and the interrelationships of the data [24][25][26][27].
The learning process of the SOMN is summarized as follows:
Step 1. Initialize the weights associated with the input patterns to random values.
Step 2. Find the winning node as one whose weights are very similar to the input vector considering the minimum distance Euclidean criterion.
Step 3. Update the weights of the winner and its neighboring neurons by strengthening them, so that this area is more likely to fire when a similar input pattern is presented next time. The magnitude of the strengthening decreases with distance from the winner.
Step 4. Repeat the weight-updating process for a specified number of iterations. If the map does not unfold, the algorithm must restart the training process with a different set of initial weights.
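The four steps above can be sketched as a generic one-dimensional SOM in Python. This is a minimal illustration on synthetic 2-D data, not the exact implementation used in this paper; the map size, decay schedules, and data are assumptions.

```python
import math
import random

def train_som(data, n_nodes=5, n_iters=500, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D self-organizing map on 2-D input vectors."""
    rng = random.Random(seed)
    # Step 1: random initial weights
    weights = [[rng.random(), rng.random()] for _ in range(n_nodes)]
    for t in range(n_iters):
        x = rng.choice(data)
        # Step 2: winner = node with minimum Euclidean distance to the input
        winner = min(range(n_nodes),
                     key=lambda i: (x[0] - weights[i][0]) ** 2
                                   + (x[1] - weights[i][1]) ** 2)
        # Decaying learning rate and neighborhood radius
        lr = lr0 * (1.0 - t / n_iters)
        radius = max(radius0 * (1.0 - t / n_iters), 0.5)
        # Step 3: update the winner and its neighbors; the strength of the
        # update decays with map distance from the winner
        for i in range(n_nodes):
            h = math.exp(-((i - winner) ** 2) / (2.0 * radius ** 2))
            weights[i][0] += lr * h * (x[0] - weights[i][0])
            weights[i][1] += lr * h * (x[1] - weights[i][1])
    # Step 4: iteration count is fixed here; a restart check is omitted
    return weights

# Two synthetic clusters; after training, nodes should spread across them
data = [[0.1, 0.1], [0.15, 0.12], [0.9, 0.85], [0.88, 0.9]]
weights = train_som(data)
```

The shrinking neighborhood is what lets the map first unfold globally and then fine-tune locally, mirroring the description in Step 3.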

SOMN Structure for PDF estimation
The SOM structure for the PDF estimation problem is illustrated in Fig. 1. Considering a limited number of conditions, the SOM network places $M$ nodes in the input space. The parameter vector $\theta_i$ includes the mean vectors and covariance matrices of the assumed bivariate normal density functions, which are treated as learning weights. At each iteration, a sample point is randomly drawn from the input space, i.e., a finite data set. A winner is chosen according to its output multiplied by its estimated posterior probability [22].
The number of nodes should be equal to or greater than the number of conditions to avoid the under-representation problem [22]. The Kullback-Leibler information metric (7), also called relative entropy [28], measures the divergence between the density function of the actual data, $p(x)$, and the estimated one, $\hat{p}(x)$:

$$D(p \,\|\, \hat{p}) = \int p(x) \log\frac{p(x)}{\hat{p}(x)}\, dx \qquad (7)$$
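As an illustrative numeric check of the relative-entropy criterion, the Kullback-Leibler divergence can be approximated on a discretization grid. The two normal densities below are hypothetical stand-ins for the actual and estimated densities, not the paper's data.

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def kl_divergence(p, q, dx):
    """Approximate D(p || q) = integral of p(x) log(p(x)/q(x)) dx on a grid with spacing dx."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0) * dx

# Grid wide enough that both densities are essentially fully covered
dx = 0.01
grid = [-10.0 + i * dx for i in range(2001)]
p = [normal_pdf(x, 0.0, 1.0) for x in grid]   # "actual" density
q = [normal_pdf(x, 0.5, 1.0) for x in grid]   # "estimated" density
d = kl_divergence(p, q, dx)
```

For two normals with equal standard deviation, the divergence has the closed form $(\mu_1-\mu_2)^2/(2\sigma^2)$, which provides a convenient sanity check for the grid approximation.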
The optimal estimates of the parameters of the mixture distribution can be calculated by minimizing the partial derivatives with respect to each model parameter using the Lagrangian method, subject to the constraint $\sum_{i=1}^{k} p_i = 1$. According to [22], if the actual distribution function is unknown, the Robbins-Monro stochastic approximation method can be used instead of the direct Lagrangian method. The parameter updates can be limited to a small neighborhood of the winning node, which has the largest posterior probability; therefore, the density can be approximated by a mixture of a small number of nodes at a time. The learning rules for updating the mean vectors and covariance matrices in the SOM algorithm are given in [22]. A large neighborhood at the beginning of the learning process means a large variance for the Gaussians as well as high mobility for the neurons; this helps find the global optimum, or at least a better local optimum, especially early in learning. In contrast, a small neighborhood size means small variances for the Gaussians and low mobility. As learning progresses, the neighborhood is shrunk to adjust the variances of the Gaussians [22].
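The stochastic-approximation idea can be sketched as follows. This is a deliberately simplified 1-D Gaussian-mixture illustration with posterior-weighted updates of all components; the synthetic data, step-size schedule, and update form are assumptions, not the exact neighborhood-limited SOMN rules of [22].

```python
import math
import random

def gauss(x, mu, var):
    """Univariate normal density with variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_mixture(data, k=2, n_iters=4000, seed=1):
    """Robbins-Monro-style stochastic updates for a 1-D Gaussian mixture (sketch)."""
    rng = random.Random(seed)
    lo_v, hi_v = min(data), max(data)
    mus = [lo_v + (hi_v - lo_v) * (i + 0.5) / k for i in range(k)]
    vars_ = [1.0] * k
    mix = [1.0 / k] * k
    for t in range(n_iters):
        x = rng.choice(data)
        lr = 1.0 / (100.0 + t)  # Robbins-Monro decaying step size
        # posterior probabilities P(i | x) for each component
        resp = [mix[i] * gauss(x, mus[i], vars_[i]) for i in range(k)]
        s = sum(resp)
        resp = [r / s for r in resp]
        for i in range(k):
            # move mean, variance, and mixing weight toward the sample statistics
            mus[i] += lr * resp[i] * (x - mus[i])
            vars_[i] += lr * resp[i] * ((x - mus[i]) ** 2 - vars_[i])
            mix[i] += lr * (resp[i] - mix[i])
    return mus, vars_, mix

# Two well-separated synthetic clusters
random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.5) for _ in range(200)])
mus, vars_, mix = fit_mixture(data)
```

Note that the mixing-weight update preserves the constraint that the weights sum to one, since the posteriors themselves sum to one at every step.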

Results
In this section, the proposed approach is applied to a real case study in Queensland, Australia to derive long-term probabilistic forecasts. Half-hourly demand data from 2001 to 2016 were obtained from the Australian Energy Market Operator (AEMO) [29]. The case study is examined from two points of view: annual peak load and seasonal peak load. The yearly peak load data between 2001 and 2016 are illustrated in Fig. 2. The seasonal peak load data between 2007 and 2016 are depicted in Fig. 3. It should be noted that part of the seasonal peak load data is ignored to avoid unnecessary historical data. It is worth mentioning that the SOMN algorithm is implemented in MATLAB© and executed on a Windows-based PC with a Core™ i5 processor clocked at 3.2 GHz and 4 GB of RAM. In addition, all simulations for comparison and statistical tests are implemented in R-Studio© 3.4.2.
To derive long-term probabilistic forecasts, the univariate density estimation for the two cases is studied separately for both training and testing purposes.
The histograms of the annual and seasonal peak loads, along with the normal density function curve, are illustrated in Figs. 4 and 5. As shown, the histogram is a poor method for determining the shape of the empirical data distribution. Therefore, the initial univariate kernel density functions for the annual and seasonal peak loads are depicted in Figs. 6 and 7.

The seasonal and annual peak loads are subject to different social, economic, and calendar drivers, such as population growth, changing technology, changing economic conditions, and so on [7]. To address this, the Pearson correlations between the seasonal peak load and some influential drivers are provided in Table 2. As seen, the highest correlation is between the peak load and that of the similar season in the last year.

In this study, the root-mean-square error (RMSE) in (11) is used to analyze the fitting performance:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\hat{p}(x_n) - p(x_n)\right)^2} \qquad (11)$$

To evaluate the average RMSE, the proposed algorithm is run ten times. The average RMSEs over these ten runs, considering each of the driver variables as the second variable for the seasonal and annual peak loads, are reported in Tables 3 and 4, respectively. The driver variables for the seasonal peak load are population, GDP, peak load in the last season, and peak load in the similar season in the last year, whereas the driver variables for the annual peak load are population and GDP. Furthermore, to evaluate the fitted PDFs, the RMSEs of the proposed PDF estimation method are compared with those of the non-central multivariate t distribution in the "mvtnorm" package of R. According to Table 3, for PDF estimation of the seasonal peak load, considering the driver variables leads to a lower RMSE. The lowest RMSE is obtained when GDP is the driver variable in the case with 150 hidden neurons. However, for some numbers of hidden neurons, other driver variables lead to a lower RMSE. For instance, the average RMSEs associated with the PDF estimation of the seasonal peak load for 30 and 100 hidden neurons decrease when the population is the driver variable. Table 4, for PDF estimation of the annual peak load, provides similar results.
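The evaluation procedure can be sketched in a few lines of Python: compute the RMSE between an estimated and a reference density on a common grid, then average it over repeated runs of the estimator. The density values below are hypothetical placeholders, not the Queensland results.

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two density estimates on the same grid."""
    n = len(estimated)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)

def average_rmse(runs, reference):
    """Average RMSE over repeated runs of the estimator."""
    return sum(rmse(run, reference) for run in runs) / len(runs)

# Hypothetical grids: a reference density and two noisy estimates of it
reference = [0.1, 0.2, 0.4, 0.2, 0.1]
runs = [[0.12, 0.18, 0.41, 0.19, 0.10],
        [0.09, 0.21, 0.38, 0.22, 0.10]]
avg = average_rmse(runs, reference)
```

Averaging over runs reduces the influence of the random initialization of the network, which is why the tables report ten-run averages rather than single-run scores.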

Discussion
The key feature of our proposed methodology is providing a full density forecast for the peak demand with quantifiable probabilistic uncertainty, which captures the complex nonlinear effect of possible drivers. The RMSE results illustrate that the proposed method performs well on the historical data.
In light of the results presented in Tables 3 and 4 and the RMSE values compared in Fig. 8, we can conclude that the proposed method for PDF estimation is more effective than the commonly used method in the "mvtnorm" package. In addition, considering GDP as the second variable (see Fig. 8 and Tables 3 and 4) leads to the lowest RMSE. Nevertheless, according to the overall pattern of the RMSEs, there is no robust evidence of a relationship between the correlation of the dependent variable with each driver variable and the RMSE; this constitutes a direction for future research.
Since economic relationships might change over time, there is significant uncertainty associated with the economic forecasts especially over the long term.

Conclusion and further research
In this paper, an unsupervised learning method called the SOMN was proposed for estimating the bivariate density functions of the annual and seasonal peak loads. The major contribution of this paper is a novel systematic methodology for forecasting the density of long-term peak electricity demand in multi-energy systems. Using the RMSE measure, the performance of the proposed method was compared with the non-central multivariate t distribution. The simulation results demonstrated that the proposed method outperforms the non-central multivariate t distribution.
According to the values of RMSE, it can be inferred that a high correlation between two variables does not necessarily lead to a low RMSE. In other words, there is no robust evidence to define a relationship between these concepts.
The results show that establishing a relationship between the correlation of the dependent variable with the driver variables and the RMSE of the bivariate probability density function still needs further research. Furthermore, the method proposed in this paper could be developed in several directions. The most important one is improving the proposed algorithm by introducing an ensemble method that combines several artificial intelligence algorithms.

Availability of data and materials: The datasets analyzed during the current study are available from the corresponding author on request.