The effect of driver variables on the estimation of bivariate probability density of peak loads in long-term horizon
Journal of Big Data volume 8, Article number: 15 (2021)
Abstract
It is evident that developing more accurate forecasting methods is the pillar of building robust multi-energy systems (MES). In this context, long-term forecasting is also indispensable to have a robust expansion planning program for modern power systems. While very short-term and short-term forecasting are usually represented with point estimation, this approach is highly unreliable in medium-term and long-term forecasting due to the inherent uncertainty in predictors like weather variables over long terms. Accordingly, long-term forecasting is usually represented by probabilistic forecasting values which are based on probabilistic functions. In this paper, a self-organizing mixture network (SOMN) is developed to estimate the probability density function (PDF) of peak load in long-term horizons considering the most important drivers of seasonal similarity, population, gross domestic product (GDP), and electricity price. The proposed methodology is applied to forecast the PDF of annual and seasonal peak load in Queensland, Australia.
Introduction
A new paradigm in the energy sector is MES, which captures the interactions among various energy carriers, e.g., electricity, heating, and cooling, to improve the performance of the system [1, 2]. To design robust multi-energy systems, forecasting is of paramount importance; it is therefore essential to develop novel and accurate forecasting methods that arrange the operation mode of integrated energy systems efficiently and economically [3].
Load forecasting, as a dominant field of study in designing multi-energy systems, draws a lot of interest [2,3,4,5]. Conventional load forecasting approaches are mainly concerned with only one type of load, such as power loads, cooling loads, or heating loads. However, multi-energy load forecasting, as an ensemble forecasting approach, considers the aggregated load, which has the intrinsic characteristics of each single load type as well as the relevance among the series [4].
Long-term load forecasting is an indispensable tool for effective planning of power systems. In long-term forecasting, inaccurate forecasts result in excessive investment and under-utilized generating facilities, or in insufficient generation and unfulfilled demand [6, 7]. Nevertheless, only a few researchers have proposed new methods for long-term load forecasting in comparison with short-term forecasting [7].
The load forecasting literature has mainly focused on point forecasting, in which the expected value of the future load is forecasted through different techniques. These techniques can be categorized as (1) statistical techniques, such as regression models and time series models; (2) artificial intelligence techniques, such as neural networks and support vector machines; or (3) hybrid methods, which combine statistical and artificial intelligence techniques. Point forecasting is mainly applied for very short-term and short-term forecasting; however, in medium-term and long-term forecasting, a point forecast is not reliable, since the inputs of the forecasting models, which are mainly weather data, suffer from high uncertainty over long terms. Instead, probabilistic forecasting is applied for long-term forecasting, where the possibility of having a demand is presented by a probabilistic value [7].
In spite of the importance of medium-term and long-term forecasting in the operation and planning of power systems, most studies have focused on point forecasting in the short-term horizon, and only a few studies have been conducted on probabilistic forecasting. Moreover, among these few studies on probabilistic load forecasting, most have focused on short-term forecasting. A review of probabilistic load forecasting is presented in [8]. Table 1 provides an overview of studies carried out in the forecasting literature that take into account inherent uncertainties in different contexts.
Fuzzy intervals are defined based on the covariance of data at different operating points, which are characterized by linear regression models. In this context, a fuzzy regression method is presented in [12] to predict the aesthetic quality of a new product or service considering all uncertain objective drivers. In this method, genetic programming is used to develop nonlinear model structures, while the model coefficients are determined by optimizing fuzzy criteria. In the short-term and medium-term load forecasting context, a fuzzy interaction regression is applied in [13] to forecast electric load in the short-term horizon with the help of fuzzy intervals. Moreover, a prediction interval construction model based on linear programming is presented in [14] to quantify the variability and uncertainty of the output of photovoltaic generating units for very short-term forecasting purposes (i.e., 5-min). This model is based on an extreme learning machine and quantile regression. Apart from the considerations concerning the methods and applications of probabilistic forecasting provided in Table 1, a fuzzy interval model, which is suitable for forecasting electric demand and the output power of weather-dependent renewable energy sources with limited dispatchability, is also presented in [22].
The authors of [23] present a practical methodology for probabilistic load forecasting based on a set of predictions, called sister point forecasts, generated from the same family of models. This approach performs quantile regression on the average of the sister point forecasts and generates prediction intervals of future electric loads. Ref. [24] also presents a data-driven framework for probabilistic peak demand estimation using consumers' smart meter data. This approach proposes four main steps, including load modeling, customer grouping, maximum diversified demand estimation, and peak load estimation, to address both the challenge of unknown future load data and the influence of demand diversity among different customers. References [9, 10], among others, introduce a comprehensive class of time series models to precisely forecast the electric demand of industrial corporations. A simple procedure is also proposed to classify load profiles and provide a probabilistic medium-term load forecasting tool for special types of industrial loads.
Among recent research works, the authors of [21] have presented a day-ahead electricity peak load interval forecasting method that can easily convert an interval forecasting problem into a classification problem. The authors have applied a semi-supervised feature selection algorithm based on the group method of data handling (GMDH) to address the electricity load classification forecasting issue. From a computational point of view, [17] has proposed a hybrid method for probabilistic load forecasting, including a generalized extreme learning machine to train an improved wavelet neural network, together with wavelet preprocessing and bootstrapping. This hybrid method provides load forecasting with high reliability, accuracy, and speed, making it profitable for practical applications in the electricity market. However, to the best of the authors' knowledge, this method has not been used for long-term forecasting purposes.
In a long-term context, [7] has presented a practical methodology for density forecasting of long-term peak electricity demand instead of common point-forecast approaches. The solution proposed by this methodology can hedge the financial risk caused by uncertain demand. At the first stage, the authors have used semi-parametric additive models to estimate the relationships between demand and the most influential driver variables, such as temperature, calendar effects, and some economic variables. Then, they have forecasted the probability distribution of annual and weekly peak electricity demand up to 10 years ahead by using a mixture of temperature simulation, future economic scenarios, and residual bootstrapping. This methodology captures the complex nonlinear effect of temperature as well as other possible drivers such as calendar effects, price changes, and economic growth.
Another method recently proposed in this area is probabilistic wind power ramp forecasting, presented in [19]. The authors have applied an ensemble machine learning technique to generate wind power scenarios and calculate the historical forecasting errors. Then, they used a Gaussian mixture model to fit the PDF of the forecasting errors. This method has not been used for demand forecasting purposes, although it is able to predict with a high level of accuracy.
In this paper, we extend the method proposed in [25] to long-term peak load forecasting considering different driver variables. In fact, we estimate the PDF of peak load in long-term horizons taking into account the most important drivers, such as the peak load in similar seasons of past years, the peak load in the last season, population, and GDP. We apply a SOMN to estimate the PDF for the reason explained in [25]: it allows much more accurate estimates to be obtained with rapid convergence. The results show the good forecasting capability of the proposed methodology at predicting the forecast PDF.
The paper is organized as follows. “Proposed method” section presents the model and its concepts as well as the SOMN for estimating the PDF. The application and high performance of the proposed approach for a real case study are demonstrated in “Results” section. Finally, the conclusions are drawn in “Conclusion and further research” section.
Proposed method
The concept of the bivariate distribution
Let the random variable \(Y\) denote the randomly selected peak load in a period of time, in MW. Then, suppose we are interested in determining the probability that \(Y\) would be between 9000 and 10,000 MW, i.e., \(P\left( {9000 < Y < 10000} \right)\). It is clear that the peak load increases as the population or GDP increases. So, for the purpose of calculating the probability that \(Y\) is between 9000 and 10,000 MW, we will find it more informative to first take into account a population or GDP value, say \(X\). That is, we may want to find \(P\left( {9000 < Y < 10000 \mid X = x} \right)\). To calculate such a conditional probability, we need to find the conditional distribution of \(Y\) given \(X = x\). Based on three assumptions, we can easily find the conditional distribution of the electric peak load (\(Y\)) given the electricity price, population, GDP, or other drivers (\(x\)). The required assumptions are stated below [26]:

1. Peak load (\(Y\)) follows a normal distribution (or can easily be transformed to a normal distribution [27]).

2. \(E(Y \mid x)\), the conditional mean of \(Y\) given \(x\), is linear with respect to \(x\).

3. \(Var(Y \mid x)\), the conditional variance of \(Y\) given \(x\), is constant.
The first assumption is imposed to facilitate using the proposed method, and is easily satisfied by transforming an unknown distribution to a normal distribution (Fig. 4). The expected value and conditional variance associated with the second and third assumptions are as follows, respectively [26].
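The expected value and conditional variance referenced here are presumably the standard bivariate-normal expressions; the following is a reconstruction from the surrounding definitions, since the original displays did not survive extraction:

```latex
E(Y \mid x) = \mu_Y + \rho \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),
\qquad
\operatorname{Var}(Y \mid x) = \sigma_Y^{2}\left(1 - \rho^{2}\right)
```

Here \(\mu_X, \mu_Y, \sigma_X, \sigma_Y\) are the marginal means and standard deviations, and \(\rho\) is the correlation coefficient.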
It should be mentioned that in machine-learning approaches (e.g., the Bayesian method), it is common to select a prior distribution. Then, after observing data \(X_1, \ldots, X_n\), we can update our beliefs and calculate the posterior distribution \(f(\theta \mid X_1, \ldots, X_n)\) [28].
In the next section, the multivariate PDF and conditional density for SOMN will be discussed.
Multivariate PDF and conditional density
In the case of a univariate normal distribution, the probability distribution or density function of variable \(y\) is represented as (1):
where \(y\) is the peak load (a random variable), \(\mu\) is the mean, and \(\sigma\) is the standard deviation.
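Equation (1) is presumably the standard univariate normal density; the following is a reconstruction consistent with the symbols just defined, since the original display did not survive extraction:

```latex
f(y) = \frac{1}{\sigma\sqrt{2\pi}}
\exp\!\left(-\frac{(y-\mu)^{2}}{2\sigma^{2}}\right)
\tag{1}
```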
In pattern estimation applications, each sample observation is assigned to a pattern component which has a prior probability. These situations are modeled by mixture distributions. The assumptions indicate that the conditional distribution of \(Y\) given \(X = x\) is:
where \(\rho\) is the correlation coefficient of \(X\) and \(Y\).
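The display referenced above is presumably the standard conditional normal distribution, reconstructed here so as to be consistent with the stated assumptions:

```latex
Y \mid X = x \;\sim\;
N\!\left(\mu_Y + \rho\frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),\;
\sigma_Y^{2}\left(1-\rho^{2}\right)\right)
```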
Based on the three assumptions stated above, we found the conditional distribution of \(Y\) given \(X = x\). In this vein, a fourth assumption is added: \(X\) follows a normal distribution for \( -\infty < x < \infty \).
Based on the four stated assumptions, the joint probability density function of \(X\) and \(Y\) is defined as (4).
This joint PDF is called the bivariate normal distribution. In fact, the bivariate distribution represents the joint distribution of two random variables [29]. The two random variables \(X\) and \(Y\) are related to each other in the sense that they are not independent of each other. This dependency is reflected by the correlation \(\rho\) between the two variables \(X\) and \(Y\).
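Equation (4) is presumably the standard bivariate normal density; the following is a reconstruction from the surrounding definitions, not recovered from the original typesetting:

```latex
f(x,y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^{2}}}
\exp\!\left\{ -\frac{1}{2\left(1-\rho^{2}\right)}
\left[ \frac{(x-\mu_X)^{2}}{\sigma_X^{2}}
- \frac{2\rho\,(x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y}
+ \frac{(y-\mu_Y)^{2}}{\sigma_Y^{2}} \right] \right\}
\tag{4}
```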
Self-organizing mixture network
According to [25], the self-organizing mixture network (SOMN) is a powerful unsupervised learning method. This network contains two layers of nodes: an input layer and an output layer. Each node in the output layer has an associated weight vector and a position on the map. The objective of the SOMN is to maximize the degree of similarity of patterns within a cluster, as well as to minimize the similarity of patterns belonging to different clusters. In addition, the SOMN transforms high-dimensional input patterns into the responses of two-dimensional arrays of neurons, and thus it can facilitate the detection of the innate structure and the interrelationships of the data [30,31,32,33].
The learning process of SOMN is summarized as follows:
Step 1. Initialize random values for the weights associated with the input pattern.
Step 2. Find the winning node as the one whose weights are most similar to the input vector according to the minimum Euclidean distance criterion.
Step 3. Update the weights of the winner and its neighborhood neurons in such a way that by strengthening them, this area would be more likely to fire up when a similar input pattern is presented next time. The significance of the strengthening decreases with the distance from the winner.
Step 4. The process of weight updating will be performed for a specified number of iterations. If the map is not unfolded, the algorithm must restart the training process with a different set of initial weights.
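The four steps above can be sketched as a minimal Kohonen SOM training loop. This is an illustrative reconstruction only: the grid size, decay schedules, and Gaussian neighborhood are assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM implementing Steps 1-3 (a sketch, not the authors' code)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # Step 1: random initial weights for each node on a 2-D grid
    weights = rng.normal(size=(grid_h, grid_w, dim))
    # grid coordinates, used to measure neighbourhood distance on the map
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]          # random sample from the input space
        # Step 2: winner = node with minimum Euclidean distance to x
        d = np.linalg.norm(weights - x, axis=2)
        wy, wx = np.unravel_index(np.argmin(d), d.shape)
        # Step 3: strengthen the winner and its neighbours; the strengthening
        # decays with distance from the winner on the map
        lr = lr0 * (1 - t / n_iter)                # decaying learning rate
        sigma = sigma0 * (1 - t / n_iter) + 1e-3   # shrinking neighbourhood
        h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights
```

For the restart in Step 4, a practical check is to inspect the trained map for topological folds and rerun `train_som` with a different `seed` if any are found.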
SOMN structure for PDF estimation
The SOMN structure for the PDF estimation problem is illustrated in Fig. 1, where \(\left\{ {\mu_{i} ,\varSigma_{i} } \right\}\) are the mean vector and covariance matrix of the ith assumed normal density component, respectively. Also, \(\eta_{c}\) is the neighborhood of the winner whose weights must be updated. According to the previous section, given \(\theta_{i} = \left\{ {\theta_{i1} ,\theta_{i2} } \right\} = \left\{ {\mu_{i} ,\varSigma_{i} } \right\}\), the conditional probability density of a data sample is derived by (5), where \(p_{i} (y \mid \theta_{i} )\) is the ith component-conditional density and \(P_i\) is the prior probability of the ith component.
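Equation (5) is presumably the usual finite mixture form; the following is a reconstruction consistent with the nomenclature, since the original display did not survive extraction:

```latex
p(y) = \sum_{i=1}^{K} P_i \, p_i\!\left(y \mid \theta_i\right),
\qquad \sum_{i=1}^{K} P_i = 1
\tag{5}
```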
Considering a limited number of conditions, the SOM network places M nodes in the input space. The parameter vector θ_{i} includes the mean vectors and covariance matrices of the assumed bivariate normal density function, which are considered as learning weights. At each iteration, a sample point is randomly taken from the input space, i.e., a finite data set. A winner is chosen according to its output multiplied by its estimated posterior probability [25].
The number of nodes should be equal to or greater than the number of conditions to avoid the under-representation problem [25]. The Kullback–Leibler information metric (7), also called relative entropy [34], measures the divergence between \(p(x)\) and \(\hat{p}(x)\). In (7), the density function of the actual data and the estimated one are denoted by \(p(x)\) and \(\hat{p}(x)\), respectively.
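Equation (7) is presumably the usual Kullback–Leibler divergence; the following is a reconstruction from the description in the text:

```latex
I\!\left(p \,\|\, \hat{p}\right) =
\int p(x) \log \frac{p(x)}{\hat{p}(x)} \, dx
\tag{7}
```

which is non-negative and equals zero only when \(\hat{p}(x) = p(x)\).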
The optimal estimate of the parameters of the mixture distribution can be calculated by minimizing (7), setting its partial derivatives with respect to each model parameter to zero via the Lagrangian method, considering the constraint \(\sum\nolimits_{i = 1}^{k} {\hat{p}_{i} = 1}\). Also, according to [25], if the actual distribution function is unknown, the Robbins–Monro stochastic approximation method can be used instead of the direct Lagrangian method. The parameter updating can be limited to a small neighborhood of the winning node, which has the largest posterior probability. Therefore, the density can be approximated by a mixture of a small number of nodes at one time:
The learning rules for updating the mean vector and covariance matrix in the SOM algorithm are as follows:
A large neighborhood at the beginning of the learning process means a large variance of the Gaussians as well as high mobility for the neurons. This is helpful for finding the global optimum, or at least a better local optimum, especially at the beginning of learning. In contrast, small neighborhood sizes mean small variances for the Gaussians as well as low mobility. As the learning progresses, the neighborhood is shrunk to adjust the variance of the Gaussians [25].
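The stochastic update described above (winner selection by posterior probability, Robbins–Monro-style adaptation restricted to the winner's neighborhood) can be sketched for a one-dimensional Gaussian mixture. This follows the formulation of [25] only in spirit: every name, constant, and the 1-D simplification here is an assumption, not the authors' exact rules.

```python
import numpy as np

def somn_update(x, mus, sigmas, priors, neigh, lr):
    """One stochastic update of a 1-D Gaussian-mixture SOMN (illustrative sketch).

    x: scalar sample; mus, sigmas, priors: arrays of length K (updated in place);
    neigh: indices of the winner's neighbourhood; lr: learning rate alpha(n).
    Returns the index of the winning node.
    """
    # component-conditional densities p_i(x | theta_i)
    dens = np.exp(-0.5 * ((x - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    post = priors * dens
    post /= post.sum()                      # posterior P(i | x)
    winner = int(np.argmax(post))           # node with the largest posterior
    for i in neigh:
        # means and variances move toward the sample, weighted by the posterior
        mus[i] += lr * post[i] * (x - mus[i])
        sigmas[i] = np.sqrt(sigmas[i] ** 2
                            + lr * post[i] * ((x - mus[i]) ** 2 - sigmas[i] ** 2))
        # mixing priors updated toward the posterior, then renormalised
        priors[i] += lr * (post[i] - priors[i])
    priors /= priors.sum()
    return winner
```

In practice `neigh` starts as the whole node set and shrinks to the winner alone as learning progresses, mirroring the shrinking neighborhood discussed above.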
Numerical studies and results
In this section, the proposed approach is applied to real data from Queensland, Australia to derive a long-term probabilistic forecast. Half-hourly demand data from 2001 to 2016 were obtained from the Australian Energy Market Operator (AEMO) [35]. The studied case is examined from two points of view: annual peak load and seasonal peak load. The yearly peak load data between 2001 and 2016 are illustrated in Fig. 2. The seasonal peak load data between 2007 and 2016 are depicted in Fig. 3. It should be noted that part of the seasonal peak load data is ignored to avoid unnecessary historical data. The rest of the seasonal data are presented in Table 2.
It is worth mentioning that the SOMN algorithm is implemented in MATLAB and executed on a Windows-based PC with a Core i5 processor clocked at 3.2 GHz and 4 GB of RAM. In addition, all simulations for comparison and statistical tests are implemented in RStudio 3.4.2.
To derive the long-term probabilistic forecast, the univariate density estimation for the two cases is studied separately for both training and learning purposes. Although the initial data do not follow a normal distribution, they can easily be transformed to a normal distribution. Histograms of the initial and normalized annual and seasonal peak loads, along with the normal density curve, are illustrated in Fig. 4.
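A Box-Cox transformation [27] is one standard way to carry out such a normalization. The sketch below uses synthetic stand-in data, since the AEMO series itself is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical peak-load series (MW); a skewed, non-normal stand-in for the data
rng = np.random.default_rng(1)
peak_load = rng.lognormal(mean=9.0, sigma=0.2, size=120)

# Box-Cox transformation toward normality; lambda is chosen by maximum likelihood
transformed, lam = stats.boxcox(peak_load)
```

Normality before and after the transformation can then be checked visually (as in Fig. 4) or with a test such as `scipy.stats.shapiro`.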
The seasonal and annual peak loads are subjected to different social, economic, and calendar drivers, such as population growth, changing technology, changing the economic condition, and so on [7].
The values of the Pearson correlation between the seasonal peak load and some influential drivers are provided in Table 3. As seen, the highest correlation is between the peak load and that of the similar season in the last year.
Besides, we conduct a principal components analysis (PCA) for different driver variables considering the data provided in [36]. PCA aims to maximize the variance of a linear combination of the variables; it forms new variables which are linear composites of the original variables and are uncorrelated among themselves [37, 38]. The results for the data provided in [36] are illustrated in Figs. 5 and 6. Figure 5 illustrates the coefficients of each variable in the principal components. In Fig. 6, the first two eigenvalues form a steep curve with a bend at the beginning, followed by a straight-line trend with a shallow slope. Accordingly, we need to keep those eigenvalues on the steep curve before the first one on the straight line; hence, two components can be retained.
However, due to the lack of data for a wide array of variables in the case of Australia, we inevitably avoided conducting PCA and only relied on the available data.
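For completeness, the PCA described above (standardize, eigendecompose the correlation matrix, keep the components before the scree-plot bend) can be sketched as follows. The data here are random stand-ins, not the driver variables of [36].

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via eigendecomposition of the correlation matrix (illustrative sketch)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise the drivers
    corr = np.corrcoef(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]              # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = Z @ eigvec[:, :n_components]         # new, uncorrelated variables
    return eigval, eigvec[:, :n_components], scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # stand-in for 5 driver variables
eigval, loadings, scores = pca(X)
```

The sorted `eigval` gives the scree plot of Fig. 6, and `loadings` corresponds to the component coefficients of Fig. 5; the retained score columns are mutually uncorrelated by construction.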
The parameters of the component densities, including the mean vectors and variance–covariance matrices, as well as the prior probabilities, are the learning weights. Hence, the initial mean vectors of a four-node Gaussian SOMN are set to small random vectors around the mean of the standard normal distribution [0, 0]. Besides, the initial variance–covariance matrices are defined as the initial sample variance plus a random multiple (a random value between 1 and 30) of the initial sample variance, and the initial probabilities are equally set to 1/3.
At each iteration, one data point is randomly taken from the 120-point training set. The learning rates for the means, and for the variances and mixing priors, decrease from initial values of 0.5 and 0.05, respectively.
Three possible scenarios for seasonal peak load forecasting in the most probable range of values are illustrated in Fig. 7. To analyze the fitting performance, the root-mean-square error (RMSE) metric in (13) is applied.
RMSE is preferable in comparison with other measures such as the Mean Absolute Error (MAE) and the Mean Absolute Percentage Error (MAPE). For example, MAPE is a poor accuracy indicator, although it is a well-known measure among business managers. By its mathematical formulation, MAPE divides each error individually by the demand, so it is clearly skewed: high errors during low-demand periods have a significant impact on MAPE. For this reason, optimizing MAPE will result in a strange forecast that will most likely undershoot the demand profile [39]. On the other hand, compared to MAE and MAPE, RMSE does not treat each error the same. It gives more importance (i.e., weight) to the most significant errors, which means that one big error is enough to produce a very bad RMSE. Because the errors are squared before they are averaged, the RMSE gives a relatively high weight to large errors. This makes RMSE more useful when large errors are particularly undesirable [40].
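The RMSE of (13) and the weighting behavior just discussed can be illustrated directly; this is a generic sketch with illustrative values, not the authors' evaluation code.

```python
import numpy as np

def rmse(actual, predicted):
    """Root-mean-square error: sqrt(mean((actual - predicted)^2))."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Four errors of 1 vs. a single error of 4: the total absolute error is the
# same, but RMSE penalizes the one large error more heavily.
uniform_errors = rmse([10, 10, 10, 10], [9, 11, 9, 11])   # -> 1.0
one_big_error = rmse([10, 10, 10, 10], [10, 10, 10, 6])   # -> 2.0
```

Both cases have MAE = 1, yet the RMSE doubles in the second, which is exactly the sensitivity to large errors described above.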
To evaluate the average of the obtained RMSEs, the proposed algorithm is carried out ten times. The averages of the RMSEs over these ten runs, considering each of the driver variables as the second variable for the seasonal and annual peak load, are presented in Tables 4 and 5, respectively. The driver variables for the seasonal peak load are population, GDP, peak load in the last season, and peak load in the similar season of the last year, whereas the driver variables for the annual peak load are population and GDP. Furthermore, to evaluate the fitted PDFs, the RMSEs of our proposed PDF estimation method are compared with those of the non-central multivariate ‘t’ distribution in the “mvtnorm” package of R.
According to Table 4, for PDF estimation of the seasonal peak load, considering the driver variables leads to a lower RMSE. The lowest RMSE is obtained when GDP is the driver variable in the case with 150 hidden neurons. However, for some numbers of hidden neurons, other driver variables lead to a lower RMSE. For instance, the average RMSE of the seasonal peak load PDF estimation with 30 and 100 hidden neurons decreases when the population is considered as the driver variable. Table 5, for PDF estimation of the annual peak load, provides similar results.
The key feature of our proposed methodology is that it provides a full density forecast for the peak demand with quantifiable probabilistic uncertainty, capturing the complex nonlinear effect of possible drivers. The RMSE results illustrate that the proposed method performs well on the historical data.
In light of the results presented in Tables 4 and 5 and the RMSE values in Fig. 8, we can conclude that the proposed method for PDF estimation is more effective than the commonly used method in the “mvtnorm” package. In addition, if we consider GDP as the second variable (see Fig. 8 and the last row in Tables 4 and 5), we obtain the lowest RMSE.
Nevertheless, according to the overall pattern of RMSEs, there is no hard evidence to define a relationship between “correlation between the dependent variable and each of driver variables” and “RMSE”; this constitutes the ground for future research work.
Discussion
In this paper, as a preliminary study, we aimed to find an appropriate forecasting approach for multi-energy systems, and we focused on the relevance of our proposed method to such systems. According to [1], the aim of multi-energy systems is to consider the interaction among electricity, heat, cooling, fuels, and transport at various levels, in order to improve technical, economical, and environmental performance at the operational and planning stages in comparison with classical energy systems, whose sectors are treated separately.
Multi-energy systems have two aspects. First, as integrated energy systems, they are known as robust systems due to their ability to withstand various types of disturbances, increasing the system's responsiveness and decreasing its volatility by providing various alternatives.
On the other hand, the main issues in integrated energy systems are uncertainty and scalability [41]. To practically implement multi-energy systems, their uncertain parameters and uncertainty sets should first be defined. For instance, according to the robust design methodology proposed in [41, 42], noise factors beyond the control of the designer should be considered in the multi-energy system design.
Therefore, it stands to reason that considering different driver variables via a comprehensive forecasting method that deals with the uncertainties in multi-energy systems is of paramount importance. However, due to the inherent randomness of the underlying energy resources (e.g., wind speed, solar radiation), alongside economic and social impacts, there will definitely be high uncertainty associated with load forecasts, especially over the long term.
In addition, to cope with this bottleneck in performance improvement, a practical methodology for density forecasting of long-term peak electricity demand, instead of common point-forecast approaches, is highly needed. Applying such approaches can hedge the financial risk imposed by uncertain demand, and they also capture the complex nonlinear effects of different possible drivers. The choice of PDF estimation method also matters; here, we have applied a SOMN algorithm to estimate the PDF, which produces accurate estimates with rapid convergence.
Conclusion and further research
In this paper, an unsupervised learning method called SOMN was proposed for estimating the bivariate density functions of the annual and seasonal peak load. The major contribution of this paper is a novel systematic methodology for forecasting the density of long-term peak electricity demand in multi-energy systems. Using the RMSE measure, the performance of the proposed method was compared with the non-central multivariate ‘t’ distribution, and the simulation results demonstrated that the proposed method outperforms it.
According to the values of RMSE, it can be inferred that a high correlation between two variables does not necessarily lead to a low RMSE. In other words, there is no hard evidence to define a relationship between these concepts.
The results show that establishing a relationship between the “correlation between the dependent variable and the driver variables” and the “RMSE” of the bivariate probability density function still needs further research. Furthermore, the method proposed in this paper could be extended in several directions, the most important of which is improving the proposed algorithm by introducing an ensemble method combining several artificial intelligence algorithms.
Availability of data and materials
The datasets analyzed during the current study are available from the corresponding author on request.
Abbreviations
MES: Multi-energy systems
SOMN: Self-organizing mixture network
PDF: Probability density function
GDP: Gross domestic product
GMDH: Group method of data handling
RMSE: Root-mean-square error
PCA: Principal components analysis
i: Index for pattern components
c: Index for the winning node
E: Expected value
Var: Variance
µ: Mean vector
σ: Standard deviation
n: Time step
K: Number of pattern components
ρ: Correlation coefficient
Σ: Covariance matrix
p_i(x | θ_i): ith component-conditional density
θ_i: Parameter vector for the ith conditional density, i = 1, 2, …, K
P_i: Prior probability of the ith component
η_c: Neighborhood of the winner node in the SOMN algorithm
α(n): Learning rate at time step n
[.]^T: Transpose operator
References
 1.
Mancarella P. MES (multienergy systems): an overview of concepts and evaluation models. Energy. 2014;65:1–17.
 2.
Gabrielli P, Gazzani M, Martelli E, Mazzotti M. Optimal design of multienergy systems with seasonal storage. Appl Energy. 2018;219:408–24.
 3.
Tieyan Z, Hening L, Qian H, Xuan K, Shengyu G, Xiaochen Y, Huan H (2019) Integrated Load Forecasting Model of MultiEnergy System Based on Markov Chain Improved Neural Network. In: 2019 11th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA) (p. 454–457). IEEE.
 4.
Wang S, Wang S, Chen H, Gu Q. Multienergy load forecasting for regional integrated energy systems considering temporal dynamic and coupling characteristics. Energy. 2020;195:116964.
 5.
Gabrielli P, Fürer F, Murray P, Orehounig K, Carmeliet J, Gazzani M, Mazzotti M. A timeseriesbased approach for robust design of multienergy systems with energy storage. Computer Aided Chemical Engineering, vol. 43. Amsterdam: Elsevier; 2018. p. 525–30.
 6.
Avdaković S, Bećirović E, Hasanspahić N, Musić M, Merzić A, Tuhčić A, Lončarević AK. Longterm forecasting of energy, electricity and active power demand–Bosnia and Herzegovina case study. Balkan J Electr Comput Eng. 2015;3(1):11–6.
 7.
Hyndman RJ, Fan S. Density forecasting for longterm peak electricity demand. IEEE Trans Power Syst. 2010;25(2):1142–53.
 8.
Hong T, Fan S. Probabilistic electric load forecasting: a tutorial review. Int J Forecast. 2016;32(3):914–38.
 9.
Berk K, Probabilistic Forecasting of Electricity Load for Industrial Enterprises. Siegen, 2016.
 10.
Berk K, Müller A. Probabilistic forecasting of mediumterm electricity demand: a comparison of time series models. J Energy Markets. 2016;9(2):1–20.
 11.
Sangrody H, Zhou N, Qiao X. Probabilistic models for daily peak loads at distribution feeder. In: 2017 IEEE Power & Energy Society General Meeting, 2017, p. 1–5: IEEE.
 12.
Chan KY, Lam HK, Yiu CKF, Dillon TS. A flexible fuzzy regression method for addressing nonlinear uncertainty on aesthetic quality assessments. IEEE Trans Syst Man Cybern. 2017;47(8):2363–77.
 13.
Hong T, Wang P. Fuzzy interaction regression for short term load forecasting. Fuzzy Optim Decis Making. 2014;13(1):91–103.
 14.
Wan C, Lin J, Song Y, Xu Z, Yang G. Probabilistic forecasting of photovoltaic generation: an efficient statistical approach. IEEE Trans Power Syst. 2017;32(3):2471–2.
 15.
Sangrody H, Zhou N, An initial study on load forecasting considering economic factors. In: 2016 IEEE Power and Energy Society General Meeting (PESGM), 2016, p. 1–5: IEEE.
 16.
Lin CJ, Weng RC, Simple probabilistic predictions for support vector regression, National Taiwan University, Taipei, 2004.
17. Rafiei M, Niknam T, Aghaei J, Shafie-Khah M, Catalão JP. Probabilistic load forecasting using an improved wavelet neural network trained by generalized extreme learning machine. IEEE Trans Smart Grid. 2018.
18. Fraley C, Raftery A, Gneiting T, Sloughter M, Berrocal V. Probabilistic weather forecasting in R. R J. 2011;3(1):55–63.
19. Cui M, Feng C, Wang Z, Zhang J, Wang Q, Florita A, Krishnan V, Hodge BM. Probabilistic wind power ramp forecasting based on a scenario generation method. In: 2017 IEEE Power & Energy Society General Meeting. IEEE; 2017. p. 1–1.
20. Khorramdel B, Khorramdel H, Zare A, Safari N, Sangrody H, Chung C. A nonparametric probability distribution model for short-term wind power prediction error. In: 2018 IEEE Canadian Conference on Electrical & Computer Engineering (CCECE). IEEE; 2018. p. 1–5.
21. Yang L, Yang H, Yang H, Liu H. GMDH-based semi-supervised feature selection for electricity load classification forecasting. Sustainability. 2018;10(1):217.
22. Sáez D, Ávila F, Olivares D, Cañizares C, Marín L. Fuzzy prediction interval models for forecasting renewable resources and loads in microgrids. IEEE Trans Smart Grid. 2015;6(2):548–56.
23. Liu B, Nowotarski J, Hong T, Weron R. Probabilistic load forecasting via quantile regression averaging on sister forecasts. IEEE Trans Smart Grid. 2017;8(2):730–7.
24. Sun M, Wang Y, Strbac G, Kang C. Probabilistic peak load estimation in smart cities using smart meter data. IEEE Trans Ind Electron. 2018;66(2):1608–18.
25. Yin H, Allinson NM. Self-organizing mixture networks for probability density estimation. IEEE Trans Neural Networks. 2001;12(2):405–11.
26. Tong YL. The multivariate normal distribution. Berlin: Springer Science & Business Media; 2012.
27. Osborne J. Improving your data transformations: applying the Box-Cox transformation. Pract Assess Res Eval. 2010;15(1):12.
28. Wasserman L. All of statistics: a concise course in statistical inference. Berlin: Springer Science & Business Media; 2013.
29. Bertsekas DP, Tsitsiklis JN. Introduction to probability. Belmont: Athena Scientific; 2002.
30. Hsu SH, Hsieh JPA, Chih TC, Hsu KC. A two-stage architecture for stock price forecasting by integrating self-organizing map and support vector regression. Expert Syst Appl. 2009;36(4):7947–51.
31. Chang FJ, Chang LC, Kao HS, Wu GR. Assessing the effort of meteorological variables for evaporation estimation by self-organizing map neural network. J Hydrol. 2010;384(1–2):118–29.
32. Verbeek JJ, Vlassis N, Kröse BJ. Self-organizing mixture models. Neurocomputing. 2005;63:99–123.
33. Lin GF, Chen LH. Time series forecasting by combining the radial basis function network and the self-organizing map. Hydrol Process. 2005;19(10):1925–37.
34. Kullback S, Leibler RA. On information and sufficiency. Ann Math Stat. 1951;22(1):79–86.
35. http://www.aemo.com.au/Electricity/National-Electricity-Market-NEM/Data-dashboard#price-demand.
36. Soliman SA, Al-Kandari AM. Electrical load forecasting: modeling and model construction. Amsterdam: Elsevier; 2010.
37. Rencher AC. Methods of multivariate analysis. Hoboken: Wiley; 2003.
38. Sharma S. Applied multivariate techniques. New York: Wiley; 1996. p. 512.
39. https://towardsdatascience.com/forecast-kpi-rmse-mae-mape-bias-cdc5703d242d.
40. https://medium.com/human-in-a-machine-world/mae-and-rmse-which-metric-is-better-e60ac3bde13d.
41. Arteconi A. An overview about criticalities in the modelling of multi-sector and multi-energy systems. Environments. 2018;5(12):130.
42. Jaddi NS, Abdullah S, Hamdan AR. Taguchi-based parameter designing of genetic algorithm for artificial neural network training. In: 2013 International Conference on Informatics and Creative Multimedia. IEEE; 2013. p. 278–81.
Funding
This work was supported by Niroo Research Institute (NRI) under Contract No. PONPN06.
Author information
Contributions
ZK: Developed the theory, collected the data, performed the numerical simulations, discussed the results, and wrote the manuscript. MS: Supervised the project, developed the theory, collected the data, performed the numerical simulations, discussed the results, commented on the manuscript, and contributed to the final version of the manuscript. Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kaheh, Z., Shabanzadeh, M. The effect of driver variables on the estimation of bivariate probability density of peak loads in long-term horizon. J Big Data 8, 15 (2021). https://doi.org/10.1186/s40537-020-00404-8
Keywords
Long-term forecasting
Robust multi-energy systems
Annual and seasonal peak load
Self-organizing mixture network
Probability density function
Driver variables