GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation

This paper introduces a new transformer-based model for the problem of travel time estimation. The key feature of the proposed GCT-TTE architecture is the utilization of different data modalities capturing different properties of an input path. Along with an extensive study of the model configuration, we implemented and evaluated a representative set of baselines for path-aware and path-blind settings. The conducted computational experiments confirmed the viability of our pipeline, which outperformed state-of-the-art models on both considered datasets. Additionally, GCT-TTE was deployed as a web service accessible for further experiments with user-defined routes.


Introduction
Travel time estimation (TTE) is an actively developing branch of computational logistics that considers the prediction of potential time expenditures for specific types of trips [1,2]. With the recent growth of urban environment complexity, such algorithms have become highly demanded both in commercial services and general traffic management [3]. Following this line, better TTE decreases logistic costs for different kinds of delivery [4], improves end-user experience for taxi services [5], and ensures the quality of adaptive traffic control [6].
Despite the applied significance of travel time estimation, it still remains a challenging task in the case of ground vehicles. This situation arises from the influence of different patterns of road network topology, nonlinear traffic dynamics, changing weather conditions, and other types of unexpected temporal events. The majority of the currently established algorithms [7,8] tend to utilize specific data modalities in order to capture complex spatio-temporal dependencies influencing the traffic flow. With the recent success of multimodal approaches in the adjacent areas of travel demand prediction [9] and journey planning [10], fusing the features from different sources is expected to be the next step towards better performance in TTE.
In this paper, we explored the predictive capabilities of TTE algorithms with different temporal encoders and proposed a new transformer-based model, GCT-TTE. The main contributions of this study are the following:
1. In order to perform experiments with the image modality, we extended the graph-based datasets for Abakan and Omsk [11] by map patches (image modality) in accordance with the provided trajectories. Currently, the extended datasets are the only publicly available option for experiments with multimodal TTE algorithms.
2. In order to boost further research in the TTE area, we reimplemented and published the considered baselines in a unified format, as well as the corresponding weights and data preprocessing code. This contribution will enable the community to enhance evaluation quality in the future, as most TTE methods lack official implementations.
3. We proposed the GCT-TTE neural network for travel time estimation and extensively studied its generalization ability under various conditions. The obtained results allowed us to conclude that our pipeline achieved better performance than the baselines in terms of several metrics. The conducted experiments explicitly indicate that the performance of transformer-based models is less prone to decrease (in the sense of the considered metrics) with the scaling of road network size. This property remains crucial from an industrial perspective, as classic recurrent models undergo considerably larger performance dropdowns.
4. For demonstration purposes, we deployed the inference of the GCT-TTE model as a web application accessible for manual experiments.
The web application is available at http://gctte.online and the code is published in the GitHub repository of the project: https://github.com/Eighonet/GCT-TTE.

Related work
Travel time estimation methods can be divided into two main types of approaches corresponding to path-blind and path-aware estimation, Table 1. Path-blind estimation refers to algorithms relying only on data about the start and end points of a route [12]. Path-aware models use intermediate positions of a moving object represented in the form of GPS sequences [13], map patches [14], or a road subgraph [7]. Despite the increase in computational complexity, such approaches provide significantly better results, which justifies the attention paid to them in recent studies [8,15,16].
One of the earliest path-aware models was the wide-deep-recurrent (WDR) architecture [17], which mostly inherited the concept of joint learning from recommender systems [18]. In further studies, this approach was extended regarding the usage of different data modalities. In particular, the DeepIST [14] model utilizes rectangular fragments of a general reference map corresponding to elements of a route's GPS sequence. Extracted images are fed into a convolutional neural network (CNN) that captures spatial patterns of the depicted infrastructure. These feature representations are further concatenated into a matrix processed by a long short-term memory (LSTM) layer [19].
In contrast with the other approaches, DeepTTE [20] is designed to operate directly on GPS coordinates via geospatial convolutions paired with a recurrent neural network. The first part of this pipeline transforms raw GPS sequences into a series of feature maps capturing the local spatial correlation between consecutive coordinates. The final block learns the temporal relations of the obtained feature maps and produces predictions for the entire route along with its separate segments.
The concept of modality fusing was first introduced in TTE as part of the DeepI2T [21] model. This architecture uses large-scale information network embedding [22] to produce grid representations and a 3-layer CNN with pooling for image processing. Like DeepTTE, DeepI2T includes a segment-based prediction component implemented in the form of residual blocks on top of a Bi-LSTM encoder.
In addition to the extensively studied recurrent TTE methods, it is also important to mention recently emerged transformer models [23,24]. Despite the limited comparison with classic LSTM-based methods, they have already demonstrated promising prediction quality, preserving the potential for further major improvements [25,26]. As most transformer models lack a comprehensive evaluation, we intend to explore GCT-TTE performance with respect to a sufficient number of state-of-the-art solutions to reveal its capabilities explicitly.

Preliminaries
In this section, we introduce the main concepts required to operate with the proposed model, Fig. 1.
Fig. 1 Example of data modalities of an arbitrary route r: for each GPS coordinate c_i^r from c^r, there is a corresponding node g_i^r with associated features and a map patch p_i^r

Route A route r is defined as the set {c^r, a^r, t^r}, where c^r is the sequence of GPS coordinates of a moving object, a^r is the vector of temporal and weather data, and t^r is the travel time.
Image modality As the image modality p^r of a route r, we use geographical map patches corresponding to each coordinate c_i^r ∈ c^r. Each image has a fixed size of 256 × 256 × 3 across all of the GPS sequences in a specific dataset.
Road network A road network is represented in the form of a graph G = {V, E, X}, where V = {v_1, ..., v_n} is the set of nodes corresponding to the segments of city roads, E is the set of edges connecting adjacent road segments, and X is a feature matrix of nodes describing properties of the road segments (additional information regarding available graph features is provided in Additional file 1: S1).
Graph modality The description of a route r can be further extended by the graph modality g^r = (g_1^r, ..., g_k^r), where g_i^r = argmin_{v_j ∈ V} ρ(c_i^r, v_j) and ρ(c_i^r, v_j) is the minimum Euclidean distance between the coordinates associated with v_j and c_i^r. Following the same concept as in the case of p^r, the graph modality represents a sequence of nodes and their features aggregated with respect to the initial GPS coordinates c^r.
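As a toy illustration of this mapping, the following sketch assigns each GPS coordinate to the node minimizing ρ. The helper names and the planar-coordinate simplification are ours; the paper does not publish this routine.

```python
import math

def rho(coord, node_coords):
    """Minimum Euclidean distance between a GPS coordinate and the
    coordinates associated with a road-segment node."""
    return min(math.dist(coord, nc) for nc in node_coords)

def map_to_graph(route_coords, nodes):
    """Map each route coordinate c_i^r to the nearest node, yielding the
    graph modality g^r as a sequence of node ids.

    `nodes` is a dict: node_id -> list of (x, y) coordinates of the segment.
    """
    g_r = []
    for c in route_coords:
        nearest = min(nodes, key=lambda v: rho(c, nodes[v]))
        g_r.append(nearest)
    return g_r

# Toy example: two road segments and a three-point route.
nodes = {"v1": [(0.0, 0.0), (1.0, 0.0)], "v2": [(5.0, 5.0), (6.0, 5.0)]}
route = [(0.1, 0.2), (4.9, 5.1), (1.2, 0.0)]
print(map_to_graph(route, nodes))  # ['v1', 'v2', 'v1']
```

A production variant would use a spatial index (e.g., a k-d tree) instead of the linear scan, but the mapping itself is the same.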
Travel time estimation For each entry r, it is required to estimate the travel time t^r using the elements of the feature description {c^r, p^r, g^r, a^r}.

The map patches were extracted with centers corresponding to grid-cell centroids and the geographical coordinates of graph vertexes. Due to the limitations of the API throughput, the procedure of image extraction was distributed between several machines with a total execution time exceeding 1 week. The provided extension consists of images dated July 2022: due to the absence of significant changes in the road network topology since 2020, the image modality for Abakan and Omsk remains actual with respect to the original graph-based data. The content of the patches includes a full range of geographic objects useful for travel time estimation (e.g., road networks, landscape groups, buildings and associated infrastructural objects) and covers all of the routes provided in the initial datasets.

Data
Depending on the requirements of the considered learning model, image datasets had to be organized regarding fixed grid partitions or centered around the elements of GPS sequences. In the first case, a geographical map of a city was divided into equal disjoint patches, which were further mapped to the GPS data in accordance with the presence of coordinates in a specific partition. The trajectory-based approach to dataset construction does not require the disjoint property of images and relies on the extraction of patches with the center in the specified coordinate, Algorithm 1 (collect and split functions can be accessed in Additional file 1: S2, S3). The obtained grid-based image dataset consists of 96,101 instances for Abakan and 838,865 for Omsk, while the trajectory-based dataset has 544,502 and 3,376,294 images, respectively.
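The two dataset layouts can be sketched as follows. The helper names and the planar-coordinate simplification are our assumptions, not the paper's Algorithm 1.

```python
def grid_cell(coord, origin, patch_size):
    """Grid-based partitioning: index of the disjoint patch containing
    `coord`. `origin` is one corner of the map; `patch_size` is the patch
    edge length in the same units as the coordinates."""
    x, y = coord
    ox, oy = origin
    return int((x - ox) // patch_size), int((y - oy) // patch_size)

def trajectory_patch_bbox(coord, patch_size):
    """Trajectory-based extraction: bounding box of a patch centered at
    `coord`. Patches may overlap, so the disjoint property is not required."""
    x, y = coord
    half = patch_size / 2
    return (x - half, y - half, x + half, y + half)

print(grid_cell((250.0, 90.0), origin=(0.0, 0.0), patch_size=100.0))  # (2, 0)
print(trajectory_patch_bbox((250.0, 90.0), patch_size=100.0))
# (200.0, 40.0, 300.0, 140.0)
```

The grid-based variant maps many coordinates to one shared image, which explains why the grid-based datasets above are an order of magnitude smaller than the trajectory-based ones.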
One of the crucial features of the considered datasets is the absence of traffic flow properties. The availability of such data is directly related to specialized tracking systems (based on loop detectors or observation cameras), which are not present in the majority of cities. In order to make GCT-TTE suitable for the greatest number of urban environments, we decided not to restrict the study to such rarely accessible data.

Method
In this section, we provide an extensive description of the GCT-TTE main components: pointwise and sequence representation blocks, Fig. 3.

Patches encoder
In order to extract features from the image modality, we utilized the RegNetY [27] architecture from the SEER model family. The key component of this architecture is the convolutional recurrent neural network (ConvRNN), which controls the spatiotemporal information flow between the building blocks of the neural network.
Each RegNetY block consists of three operators. The initial convolution layer of the t'th block processes the input tensor X_1^t and returns the feature map X_2^t. Next, the obtained representation X_2^t is fed to ConvRNN:

H_t = tanh(C_x(X_2^t) + C_h(H_{t-1}) + b_h),

where H_{t-1} is the hidden state of the previous RegNetY block, b_h is a bias tensor, and C_x and C_h correspond to convolutional layers. In the following stage, X_2^t and H_t are fed as input to the last convolution layer, which is further extended by a residual connection.
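A single-channel NumPy sketch of one ConvRNN update follows. The tanh nonlinearity and the naive 'same'-padded convolution are our assumptions; real blocks use learned multi-channel convolutions.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2D correlation (single channel), enough to
    illustrate the recurrence."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def convrnn_step(X2, H_prev, Cx, Ch, b):
    """One ConvRNN update: H_t = tanh(C_x(X_2^t) + C_h(H_{t-1}) + b_h)."""
    return np.tanh(conv2d_same(X2, Cx) + conv2d_same(H_prev, Ch) + b)

X2 = np.ones((4, 4))            # feature map from the block's first conv
H0 = np.zeros((4, 4))           # hidden state of the previous block
Cx = np.full((3, 3), 0.1)       # toy convolution kernels
Ch = np.full((3, 3), 0.1)
H1 = convrnn_step(X2, H0, Cx, Ch, b=0.0)
print(H1.shape)  # (4, 4)
```

Because H0 is zero here, interior cells of H1 equal tanh(9 · 0.1); the hidden state then carries spatial context into the next block's update.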
As the SEER models are capable of producing robust features that are well-suited for out-of-distribution generalization [28], we pre-trained RegNetY with the following autoencoder loss:

L_AE = L(W z_r, f(x_r)),

where L is the binary cross-entropy function, f is an image flattening operator, x_r is an input image with the corresponding model output z_r, and W is the projection matrix of learning parameters that maps the model output to the flattened image.
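A minimal sketch of this objective, assuming a sigmoid squashes the projection W z into (0, 1) before the binary cross-entropy (the paper does not state the squashing explicitly):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy L between a reconstruction and the target."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def autoencoder_loss(z, image, W):
    """Pre-training objective sketch: project encoder output z with W,
    squash to (0, 1) with a sigmoid (our assumption), and compare against
    the flattened image f(x) via binary cross-entropy."""
    recon = 1.0 / (1.0 + np.exp(-(W @ z)))  # W maps model output -> flattened image
    return bce(recon, image.reshape(-1))     # f: image flattening operator

rng = np.random.default_rng(0)
z = rng.normal(size=16)              # encoder output for one patch
image = rng.uniform(size=(4, 4))     # toy "image", values in [0, 1]
W = rng.normal(size=(16, 16)) * 0.1
loss = autoencoder_loss(z, image, W)
print(float(loss))
```

In the actual pipeline, z would be the flattened RegNetY output for a 256 × 256 × 3 patch and the loss would be minimized over W and the encoder weights jointly.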

Auxiliary encoder
Along with the map patches and graph elements, we apply additional features a^r corresponding to temporal and weather data (e.g., trip hour, type of day, precipitation). The GCT-TTE model processes this part of the input with the help of a linear layer:

A^r = W a^r,    (1)

where W is a matrix of learning parameters.

Graph encoder
The graph data is handled with the help of graph convolutional layers defined as follows:

h_u^(k) = σ(W^(k) AGG_{v ∈ N(u)} h_v^(k−1)),

where W^(k) is a matrix of learning parameters of the k'th convolutional layer, N(u) is the set of neighbour nodes of u, AGG_{v ∈ N(u)} is a sum aggregation function, and σ is a nonlinear activation. To accelerate the convergence of the GCT-TTE model, we pre-trained the weights of the graph convolutions by the Deep Graph InfoMax algorithm [30]. This approach optimizes a loss function that allows learning the difference between initial and corrupted embeddings of nodes:

L_DGI = (1/(N + M)) (Σ_{u=1}^{N} E_G[log D(h_u, s)] + Σ_{u=1}^{M} E_{G̃}[log(1 − D(h̃_u, s))]),

where h_u is an embedding of node u based on the initial graph G, h̃_u is an embedding of node u from the corrupted version G̃ of the graph G, s is a graph-level summary vector, and D corresponds to the discriminator function.
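The layer above (sum aggregation over neighbours followed by a learned transform) can be sketched in a few lines; the ReLU choice is our assumption for σ:

```python
import numpy as np

def gcn_layer(H, adj, W):
    """One graph convolution with sum aggregation:
    h_u = relu(W applied to sum over v in N(u) of h_v).
    H: (n, d) node features, adj: (n, n) 0/1 adjacency, W: (d, d') weights."""
    agg = adj @ H                  # AGG: sum over neighbours N(u)
    return np.maximum(agg @ W, 0)  # relu nonlinearity (our assumption)

# Toy graph: 3 road segments in a path 0-1-2, 2-dim features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
out = gcn_layer(H, adj, W)
print(out[1])  # node 1 aggregates nodes 0 and 2 -> [2. 1.]
```

Stacking two such layers (the depth chosen in the sensitivity analysis below) lets each segment embedding absorb information from its 2-hop road neighbourhood.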
The final output of the pointwise block constitutes a concatenation of the weighted representations and auxiliary data for each route r with k segments:

P^r = [α H^r; (1 − α) I^r; β A^r],

where H^r is the matrix of size k × e_g of graph-based segment embeddings, I^r is the matrix of size k × e_i obtained from a flattened RegNet output, and α, (1 − α), and β correspond to the weight coefficients of the specific modalities.
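A minimal sketch of this fusion step follows; broadcasting the route-level auxiliary vector A^r to all k segments is our assumption about how the shapes are reconciled.

```python
import numpy as np

def pointwise_fusion(H_r, I_r, A_r, alpha=0.9, beta=1.0):
    """Weight the graph embeddings by alpha, image embeddings by
    (1 - alpha), auxiliary features by beta, and concatenate along the
    feature axis for each of the k route segments."""
    k = H_r.shape[0]
    aux = np.tile(beta * A_r, (k, 1))  # broadcast route-level aux data to segments
    return np.concatenate([alpha * H_r, (1 - alpha) * I_r, aux], axis=1)

H_r = np.ones((4, 3))           # k=4 segments, e_g=3 graph embedding dims
I_r = np.ones((4, 5))           # e_i=5 image embedding dims
A_r = np.array([0.5, 2.0])      # encoded auxiliary (time/weather) features
P_r = pointwise_fusion(H_r, I_r, A_r, alpha=0.9, beta=1.0)
print(P_r.shape)  # (4, 10)
```

With α = 0.9 (the best value found in the sensitivity analysis), the graph columns dominate the image columns by a factor of nine before the sequence encoder sees them.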

Sequence representation block
To extract sequential features from the output of the pointwise representation block, it is fed to a transformer encoder [31]. The encoder consists of two attention layers with a residual connection followed by a normalization operator. The multi-head attention coefficients are defined as follows:

a_{ij}^h = softmax_j((W_{h,q}^T x_i)^T (W_{h,k}^T x_j) / √d_k),

where x_i, x_j ∈ P^r, h is an attention head, d_k is a scale coefficient, and W_{h,q}^T and W_{h,k}^T are the query and key weight matrices. The output of the attention layer will be:

z_i = Σ_{h=1}^{H} Σ_j a_{ij}^h W_{h,v}^T x_j,

where W_{h,v}^T is the value weight matrix and H is the number of attention heads. The final part of the sequence representation block corresponds to a flattening operator and several linear layers with ReLU activation, which predict the travel time of a route.
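The attention computation above can be sketched as follows; this simplification concatenates the per-head outputs and omits the residual connections, normalization, and any output projection of the full encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    """Per head h: coefficients a_ij = softmax_j(q_i . k_j / sqrt(d_k));
    each output row is the attention-weighted sum of value vectors."""
    k, d = X.shape
    d_h = d // n_heads
    outs = []
    for h in range(n_heads):
        s = slice(h * d_h, (h + 1) * d_h)
        Q, K, V = X @ Wq[:, s], X @ Wk[:, s], X @ Wv[:, s]
        A = softmax(Q @ K.T / np.sqrt(d_h), axis=-1)  # (k, k) attention weights
        outs.append(A @ V)
    return np.concatenate(outs, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))           # k=6 route segments, d=8 fused features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Z = multi_head_attention(X, Wq, Wk, Wv, n_heads=2)
print(Z.shape)  # (6, 8)
```

Unlike an LSTM pass, every segment attends to every other segment in one step, which is one intuition for why the transformer encoder degrades less as routes and road networks grow.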

Results
In this section, we reveal the parameter dependencies of the model and compare the results of the considered baselines.

Experimental setup
The experiments were conducted on 16 Tesla V100 GPUs. For GCT-TTE training, the Adam optimizer [32] was chosen with a learning rate of 5 × 10^−5 and a batch size of 16. For better convergence, we applied a scheduler with patience equal to 10 epochs and a 0.1 scaling factor. The training time for the final configuration of the GCT-TTE model is 6 h in the case of Abakan and 30 h for Omsk.
The established values of the quality metrics were obtained from a 5-fold cross-validation procedure. As measures of model performance, we use the mean absolute error (MAE), the root mean squared error (RMSE), and the 10% satisfaction rate (SR). Additionally, we compute the mean absolute percentage error (MAPE), as it is frequently applied in related studies.
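The four metrics can be computed as below; interpreting SR as the share of trips whose relative error is within the 10% threshold is our assumption, as the paper does not spell out its formula.

```python
import numpy as np

def tte_metrics(y_true, y_pred, sr_threshold=0.10):
    """MAE, RMSE, MAPE, and the satisfaction rate (assumed here to be the
    share of trips whose relative error is within `sr_threshold`)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rel = np.abs(err) / y_true
    return {
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAPE": float(np.mean(rel) * 100),
        "SR": float(np.mean(rel <= sr_threshold) * 100),
    }

# Toy check: travel times in seconds.
m = tte_metrics([100, 200, 400], [110, 150, 400])
print(m)  # MAE = 20.0; two of three trips within 10% -> SR ~ 66.67
```

Note that RMSE penalizes large errors quadratically, which is why a small MAE/RMSE gap (as reported for GCT-TTE below) indicates few severe mispredictions.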

Models comparison and evaluation
The results regarding path-blind evaluation are depicted in Table 3. Neighbor average (AVG) and linear regression (LR) demonstrated the worst results among the trivial baselines, while gradient boosted decision trees (GBDT) explicitly outperformed more complex models in the case of the largest city. The MURAT model achieved the best score for Abakan in terms of MAE and RMSE, while GCT-TTE has the minimum MAPE among all of the considered architectures.
The demonstrated variability of metric values makes the identification of the best model a rather hard task in the path-blind setting. The simplest models are still capable of competing with architectures such as MURAT, which was expected to perform tangibly better on both considered datasets. The results regarding GCT-TTE can be partially explained by its structure, as it was not initially designed for path-blind evaluation.
As can be seen in Table 4, the proposed solution outperformed the baselines in terms of the RMSE value, which confirms the robustness of GCT-TTE with respect to the prevention of large errors. The comparison of MAE and RMSE for the considered methods has shown a minimal gap between these metrics in the case of GCT-TTE for both cities, signifying the efficiency of the technique with respect to dataset size. Overall, the results have confirmed that GCT-TTE is a more reliable approach than the LSTM-based models: while MAPE remains approximately the same across the top-performing architectures, GCT-TTE achieves significantly better MAE and RMSE values. The conducted computational experiments also indicated that DeepI2T and WDR have intrinsic problems with convergence, while GCT-TTE demonstrates smoother training dynamics.

Performance analysis
In the case of both datasets, the dependencies between the travelled distance and the MAE obtained on the corresponding trips reveal similar dynamics: as the path length increases, the error rate continues to grow, Fig. 4b, d. The prediction variance is inversely proportional to the number of routes in a particular length interval, except for the small percentage of the shortest routes. The main difference between the MAE curves is reflected in the higher magnitude of performance fluctuations in Abakan compared to Omsk.
The temporal dynamics of GCT-TTE errors exhibit rich nonlinear properties during a 24-hour period. The shape of the error curves demonstrates that our model tends to accumulate the majority of errors in the period between 16:00 and 18:00, Fig. 4a, c. This time interval corresponds to the end of the working day, which has a crucial impact on the foreseeability of the traffic flow.
Despite the mentioned performance outlier, the general behaviour of the temporal dependencies allows concluding that GCT-TTE successfully captures the factors influencing the target value in the daytime. With the growing sparsity of data during night hours, it is still capable of producing relevant predictions for Omsk. In the case of Abakan, the GCT-TTE performance drop can be associated with a substantial reduction in the number of intercity trips (which emerged to be an easier target for the model). Focusing on higher levels of seasonality, the day- and week-based temporal dependencies of the error demonstrate explicit periodical behaviour, Fig. 5. The GCT-TTE model performs better at the end of the week for both considered cities, with a pronounced error decrease in the case of Omsk. In contrast, the middle of the week (i.e., Wednesday for Abakan and Tuesday for Omsk) is the most challenging period, with an MAE on average 12.48% higher compared to Saturday and Sunday.

Sensitivity analysis
In order to achieve better prediction quality, we extensively studied the dependencies between GCT-TTE parameters and model performance in the sense of the MAE metric. The best value for the modality coefficient α was 0.9, which reflects the significant contribution of graph data towards error reduction. For the final model, we utilized 2 graph convolutional layers with a hidden size of 192, Fig. 6a, b. A lack of aggregation depth can significantly reduce the performance of GCT-TTE, while an excessive number of layers has a less pronounced negative impact on MAE. A similar situation can be observed in the case of the hidden size, which approaches a plateau after reaching a certain threshold value.
Along with the graph convolutions, we explored the configuration of the sequence representation part of GCT-TTE. Since the transformer block remains its main component, the computational experiments were focused on the influence of the encoder depth on the quality metrics, Fig. 6c. As can be derived from the U-shaped dependency, the best number of attention layers is 3.

Demonstration
In order to provide access to the inference of GCT-TTE, we deployed a demonstrational application at http://gctte.online in a website format, Fig. 7. The application's interface consists of a user guide, navigation buttons, an erase button, and a comparison button. A potential user can construct and evaluate an arbitrary route by clicking on the map at the desired start and end points: the system's response will contain the shortest path and the corresponding value of the estimated time of arrival.
For additional evaluation of the considered baselines, a limited number of predefined trajectories with known ground truth can also be requested. In this case, the response will contain three random trajectories from the datasets with the associated predictions of the WDR, DeepI2T, and GCT-TTE models along with the real travel time.

Conclusion
In this paper, we introduced a multimodal transformer architecture for travel time estimation and performed an extensive comparison with the other existing approaches. The obtained results allow us to conclude that transformer-based models can be efficiently utilized as sequence encoders in the path-aware setting. Our experiments with different data modalities revealed the superior importance of graphs compared to map patches. Such an outcome can be explained by the inheritance of the main features between modalities, where graph data represents the same properties more explicitly. In further studies, we intend to focus on the design of a more expressive image encoder as well as consider the task of path-blind travel time estimation, which currently remains challenging for the GCT-TTE model.

Fig. 2
Fig. 2 Cumulative frequencies of car activity and distribution of trip durations for Abakan (a) and Omsk (b) in four-hour intervals

Fig. 3
Fig. 3 Demonstration of the GCT-TTE pipeline: feature extraction algorithms applied to considered modalities and extended by transformer encoder capturing the concatenated sequence of embeddings

Fig. 4
Fig. 4 Spatial and temporal (hour) dependencies across the different groups of test entries for Abakan (a, b) and Omsk (c, d): blue and red lines depict mean and median values of MAE, borders of the filled area correspond to the Q1 and Q3 quartiles of the MAE distribution

Fig. 5
Fig. 5 Temporal (day and week) dependencies across the different groups of test entries for Abakan (a, b) and Omsk (c, d): blue and red lines depict mean and median values of MAE, borders of the filled area correspond to the Q1 and Q3 quartiles of the MAE distribution. The weekends are represented by the vertical areas filled with a darker colour

Fig. 6
Fig. 6 Parametric dependencies of GCT-TTE performance for Abakan: number of graph convolutions (a), hidden size of graph convolutions (b), and number of transformer encoder layers (c)

Table 1
Demonstration of utilized modalities in path-blind and path-aware models

Table 2
Description of the Abakan and Omsk datasets

Table 3
Path-blind models comparison

Table 4
Path-aware models comparison