evaluate_dnn_in_test_dataset(experiment_id, path_datasets_folder='./datasets', path_hyperparameter_folder='./experimental_files', path_recalibration_folder='./experimental_files', nlayers=2, dataset='PJM', years_test=2, shuffle_train=True, data_augmentation=0, calibration_window=4, new_recalibration=False, begin_test_date=None, end_test_date=None)
Function for easy evaluation of the DNN model in a test dataset using daily recalibration.
The test dataset is defined by a market name and a pair of test dates. The function generates the training and test datasets, and evaluates a DNN model using daily recalibration and an optimal set of hyperparameters.
Note that before using this function, a hyperparameter optimization run must be done using the hyperparameter_optimizer function. Moreover, the hyperparameter optimization must be done using the same parameters: the same calibration_window, and either the same years_test or the same begin_test_date and end_test_date.
An example of how to use this function is provided here.
- experiment_id (str) – Unique identifier to read the trials file. In particular, every hyperparameter optimization set has a unique identifier associated with it. See the hyperparameter_optimizer function for further details
- path_datasets_folder (str, optional) – Path where the datasets are stored or, if they do not exist yet, the path where the datasets are to be stored
- path_hyperparameter_folder (str, optional) – Path of the folder containing the trials file with the optimal hyperparameters
- path_recalibration_folder (str, optional) – Path to save the forecast of the test dataset
- nlayers (int, optional) – Number of hidden layers in the neural network
- dataset (str, optional) – Name of the dataset/market under study. If it is one of the standard markets, e.g. "DE", the dataset is automatically downloaded. If the name is different, a dataset in csv format should be placed in the path_datasets_folder
- years_test (int, optional) – Number of years (a year is 364 days) in the test dataset. It is only used if the arguments begin_test_date and end_test_date are not provided.
- begin_test_date (datetime/str, optional) – Optional parameter to select the test dataset. Used in combination with the argument end_test_date. If either of them is not provided, the test dataset is built using the years_test argument. begin_test_date should either be a string with the format "%d/%m/%Y %H:%M", or a datetime object
- end_test_date (datetime/str, optional) – Optional parameter to select the test dataset. Used in combination with the argument begin_test_date. If either of them is not provided, the test dataset is built using the years_test argument. end_test_date should either be a string with the format "%d/%m/%Y %H:%M", or a datetime object
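The date arguments can be sketched as follows; this snippet only illustrates the documented string format and the 364-day year convention used by years_test, not how the library itself anchors the test window:

```python
from datetime import datetime

# Format expected for begin_test_date / end_test_date when given as strings
DATE_FORMAT = '%d/%m/%Y %H:%M'

# A string in the documented format parses cleanly (example dates are arbitrary):
begin_test_date = datetime.strptime('01/01/2016 00:00', DATE_FORMAT)
end_test_date = datetime.strptime('31/12/2016 23:00', DATE_FORMAT)

# When begin_test_date/end_test_date are omitted, the test window spans
# years_test years, where one year counts as 364 days:
years_test = 2
window_days = 364 * years_test  # 728 days
```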
- shuffle_train (bool, optional) – Boolean that selects whether the validation and training datasets were shuffled when performing the hyperparameter optimization. Note that it does not select whether shuffling is used for recalibration; for recalibration, the validation and training datasets are always shuffled.
- data_augmentation (bool, optional) – Boolean that selects whether a data augmentation technique for electricity price forecasting is employed
- calibration_window (int, optional) – Number of days used in the training/validation dataset for recalibration
- new_recalibration (bool, optional) – Boolean that selects whether a new recalibration is performed or the function restarts an old one. To restart an old one, the .csv file with the forecast must exist in the path_recalibration_folder
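The restart precondition can be sketched as a simple file check; the helper name and the forecast filename below are hypothetical, and the point is only that the forecast CSV from the previous run must already exist in path_recalibration_folder:

```python
from pathlib import Path

def can_restart_recalibration(path_recalibration_folder: str,
                              forecast_filename: str) -> bool:
    """Hypothetical helper: restarting with new_recalibration=False only
    makes sense if the forecast CSV from the previous run still exists."""
    return (Path(path_recalibration_folder) / forecast_filename).is_file()
```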
Returns: A dataframe with all the predictions in the test dataset. The dataframe is also written to the path_recalibration_folder
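A full call can be sketched as below. This is a hedged usage sketch: the argument values mirror the signature's defaults, the experiment_id value is hypothetical, and the import path from epftoolbox.models is an assumption that may differ between versions. The call itself requires the epftoolbox package and a finished hyperparameter optimization run with its trials file.

```python
# Argument names mirror the signature above; experiment_id is a placeholder.
kwargs = dict(
    experiment_id='1',
    path_datasets_folder='./datasets',
    path_hyperparameter_folder='./experimental_files',
    path_recalibration_folder='./experimental_files',
    nlayers=2,
    dataset='PJM',
    years_test=2,
    shuffle_train=True,
    data_augmentation=0,
    calibration_window=4,
    new_recalibration=False,
)

try:
    from epftoolbox.models import evaluate_dnn_in_test_dataset  # assumed import path
except ImportError:  # package not installed in this environment
    evaluate_dnn_in_test_dataset = None

if evaluate_dnn_in_test_dataset is not None:
    # Returns a dataframe of predictions; also written to path_recalibration_folder
    forecast = evaluate_dnn_in_test_dataset(**kwargs)
```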