Real-time spatio-temporal event detection on geotagged social media

A key challenge in mining social media data streams is to identify events that are actively discussed by a group of people in a specific local or global area. Such events are useful for early warning of accidents, protests, elections or breaking news. However, neither the list of events nor the resolution of event time and space is fixed or known beforehand. In this work, we propose an online spatio-temporal event detection system using social media that is able to detect events at different time and space resolutions. First, to address the challenge of the unknown spatial resolution of events, a quad-tree method is exploited to split the geographical space into multiscale regions based on the density of social media data. Then, an unsupervised statistical approach based on a Poisson distribution and a smoothing method highlights regions with an unexpected density of social posts. Further, event duration is estimated precisely by merging events happening in the same region at consecutive time intervals. A post-processing stage filters out events that are spam, fake or wrong. Finally, we incorporate simple semantics by using social media entities to assess the integrity and accuracy of detected events. The proposed method is evaluated on different social media datasets, Twitter and Flickr, for different cities: Melbourne, London, Paris and New York. To verify its effectiveness, we compare our results with two baseline algorithms, one based on a fixed split of the geographical space and one based on clustering. For performance evaluation, we manually compute recall and precision. We also propose a new quality measure named strength index, which automatically measures how accurate the reported events are.

Researchers and organizations are relying on social media for obtaining timely news. One emerging use case of significant importance is the use of social media information for real-time event detection. For example, governments and organizations may be interested in events occurring in a particular geographical area, such as a bush fire near residential areas, traffic congestion and accidents on highways, or protests and other security incidents in the city. Being able to promptly detect such events is important, as early detection allows the relevant authorities and organizations to make the necessary responses to address these potentially adverse events.
The traditional approach to detecting events in social streams is to track aggregate trend changes based on the count of geotagged social media data at a given location and time. This approach is closely related to topic detection and tracking, where an event is conventionally represented by a number of keywords, topics or tweets showing bursts in appearance count, i.e. keywords that are mentioned significantly more often during a (not too short) time period than in the period preceding it [4,5]. Most existing approaches detect events at fixed spatial and temporal resolutions, e.g., grids, which do not adequately capture the dynamic changes in tweeting volume across different areas and times [6]. However, real-life events can occur at any spatial or temporal resolution, which is not known a priori, and therefore algorithms with a fixed resolution yield suboptimal performance. While there are some approaches designed for detecting events at multiscale spatio-temporal resolutions [7][8][9], they are essentially batch-based algorithms that are not directly applicable in online real-time event detection scenarios. While a few online spatial event detection algorithms have been proposed in the literature [10][11][12], they are fixed in terms of spatial and temporal resolution. Many of these works also utilize a supervised approach to event detection, which may not work well for new types of events. In this paper, we aim to address the following problem: Given a stream of geotagged social media posts, how can we identify a set of posts that corresponds to a spatio-temporal event based on spatial and temporal proximity?

Research objectives and contributions
Our main objective is to detect spatio-temporal events from social media, i.e. any event that is being discussed frequently in a specific local or global area. There are several challenges with this type of event detection. The first is that there is no consensus among researchers on the definition of an event. The second is that the location, time and scale of events (both in time and space) are not known beforehand. Furthermore, the characteristics of past events may not be indicative of future events. Finally, the event detection algorithms developed need to be single-pass and computationally efficient, as we are interested in detecting events in real time from high-velocity data streams.
To address these challenges, we propose a novel approach to online spatio-temporal event detection that utilizes: (i) a quad-tree and Poisson model variant to dynamically identify events across different spatial scales; and (ii) a smoothing and filtering approach to effectively detect events with different temporal resolutions. The contributions of this paper can be summarized as follows:
• We leverage the quad-tree data structure for multi-scale event detection, to overcome the problem of detecting events with varying spatial coverage.
• We combine a Poisson model with a smoothing function for unsupervised event detection, enabling us to detect previously unseen events with different temporal resolutions.
• We propose a new event validation measure, the strength index (SI), which automatically assesses the accuracy of detected events using social media entities.
• We perform quantitative and comparative evaluations, which confirm the effectiveness of the proposed method in detecting new events correctly and completely.
• We demonstrate the generalizability of the proposed method by evaluating it on two different social media datasets, namely Twitter and Flickr.

Structure and organization
This paper is structured as follows. Section "Related work" reviews related work in the area of event detection, while Section "Problem statement" introduces the formulation of the event detection problem. Section "Proposed algorithm" describes our proposed algorithm for location-based event detection, and Section "Results" shows the experimental results of our proposed algorithm against various baselines. Finally, Section "Conclusion" summarizes and concludes this paper.

Related work
In this section, we discuss various works related to event detection on social media, ranging from general event detection to location-specific event detection.

General event detection
There are various related works that study the general problem of event detection using social media, without an explicit focus on the spatial aspect of events. These works aim to detect events on social media in the form of trending events based on posting patterns, or specific pre-determined events. For example, numerous researchers have examined the problem of identifying trending and bursty events [14][15][16], and detecting controversial events [17,18]. Among these works, many utilize supervised approaches for detecting events, such as [15], which used a Support Vector Machine to classify tweets into one of the four topics of news, ongoing events, memes, and commemoratives. Others like Sakaki et al. [19,20] also use a trained Support Vector Machine to first determine whether tweets are earthquake-related, then apply Kalman filtering and particle filtering on tweets to estimate the centres of the detected earthquakes. Popescu and Pennacchiotti [17] proposed a Gradient Boosted Decision Tree trained on textual, social and news-related features to determine whether a set of tweets is controversy-related. These works typically require a labelled dataset in which social media posts are associated with a specific event type, which works well for those events but may be challenging to generalize to unseen or new types of events.
Another group of these works utilizes largely unsupervised approaches for detecting events, without the need for explicit labels. For example, Weng and Lee [21] utilize (tweet) word signals derived from wavelet analysis, which are clustered together using modularity-based graph partitioning to represent detected events. In a similar spirit, Aggarwal and Subbian [4] proposed an online clustering approach for detecting events based on the textual content of social media posts, their temporal distribution and the interaction network among users. Many of these works utilize clustering techniques or similar approaches for detecting events. However, many of them aim to detect events without considering the spatial aspects of those events. Considering the spatial aspect of events is important for applications such as disaster detection and crisis management.
Similarly, there are also various web and mobile applications for tracking general events or retrieving tweets related to specific events [22][23][24][25]. For a more detailed survey on general event detection, we refer readers to [26][27][28].

Location-specific event detection
Various approaches have also been developed for event detection in the spatial, textual (i.e. semantic) and temporal context, with many of these considering the different aspects separately [6,29]. However, there are only limited works that combine spatio-temporal information for event detection. In [30], spatio-temporal events are detected by clustering geotagged tweets, followed by topic modelling using the summarized words in each estimated cluster. Similarly, [24,31] adopted an approach of identifying topics associated with specific locations by applying Latent Dirichlet Allocation to tweets posted in the same locality. Others like [32] combine clustering techniques with embeddings of tweet location, time and text for event detection. Although Twitter enables users to post tweets with their current locations (longitude and latitude), only around 0.85-3% of tweets are geotagged each day, amounting to roughly 7,000,000 geotagged tweets per day [33]. This characteristic, where only a small proportion of tweets are geotagged, severely restricts the accuracy of spatial-based event detection approaches.
Another key challenge in this research area concerns the method by which the geographical area is partitioned for subsequent event detection. A typical approach is to utilize a uniform grid, which applies an equal-width grid of a specific size over the data domain. However, this approach does not solve the problem, for several reasons. First, a good method for choosing the grid size is required, which has not been adequately covered in the literature [34]. Second, fixed grid cells might not help in finding both local and global events. For example, a low-resolution grid might capture only the global events occurring at the state or country level, while a high-resolution grid will detect events at smaller scales (local events), i.e. within the community or the city, where the grid cell ranges from 1 km to 50 km. Another solution is to manually select a set of points of interest (POIs), where each POI is a fixed-size grid cell. Following this approach, we can control the number of POIs based on the tweet distribution density. For instance, areas in the city centre might have many POIs with small grid cells, while areas far from the city might have few POIs with a large grid size. However, fixed POIs limit the location of detected events to the chosen POIs only. In addition, the manual selection of POIs has to be repeated for each geographical area of analysis.

Differences with earlier works
Our proposed method differs from these earlier works in the following ways. The existing works on general event detection aim to detect events discussed on social media without identifying the locality of those events. In contrast, our proposed method aims to detect location-specific events, i.e. localized events happening within a specific area. Furthermore, the supervised approaches to general event detection require labelled events and may not generalize well to unseen or new events, whereas our work does not require explicit event labels.

Problem statement
In this section, we first introduce some basic notation and definitions used in our work, before formally defining the problem of spatio-temporal event detection. Table 1 summarizes the key notations used in our work, which we elaborate next.

Definition 1 (Social Media Post):
In our work, we aim to utilize geo-tagged social media as an input to our location-based event detection algorithm. As such, each social media post p ∈ S forms the basic component of our algorithm. We represent each social media post as p = ⟨t, l, f⟩, where each post p is associated with a timestamp t, location l and features f. The timestamp t and location l are straightforward representations of date/time and latitude/longitude coordinates, but can easily be modified to other representations, e.g., unix timestamps and landmarks. On the other hand, features f can represent multiple aspects of different types of social media, e.g., the text of a tweet, user tags for a photo, etc.

Definition 2 (Data Stream):
Let S = {p_1, p_2, ..., p_n} denote the first n posts from the data stream, ordered temporally such that for p_i and p_j where i < j, t_i ≤ t_j. Thus, the data stream S represents a series of social media posts ordered in the sequence they arrived in.
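Definition 1 can be sketched in code as follows; the concrete field types (unix seconds, a lat/lon tuple, a frozenset of tags) are illustrative assumptions, not prescribed here:

```python
from dataclasses import dataclass

# A minimal sketch of Definition 1: each post p = <t, l, f> carries a
# timestamp t, a location l and a feature set f. Field types are
# illustrative assumptions.
@dataclass(frozen=True)
class Post:
    t: float       # timestamp (e.g. unix seconds)
    l: tuple       # location (latitude, longitude)
    f: frozenset   # features, e.g. hashtags or photo tags

p = Post(t=1483228800.0, l=(-37.8136, 144.9631), f=frozenset({"#melbourne"}))
```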

Definition 3 (Current/Query Window):
In the context of a data stream S = {p_1, p_2, ..., p_n}, we define a current/query window W = {p_{n−m+1}, ..., p_{n−1}, p_n}, where W ⊆ S. This current/query window represents the current set of social media posts from post p_{n−m+1} to post p_n. For generalizability and flexibility, the window size can be based on either a fixed number of posts, m > 0, or a fixed duration between posts p_{n−m+1} and p_n, i.e. t_n − t_{n−m+1}. This current/query window allows a user to decide on the temporal resolution at which he/she wants to detect an event.
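Both window policies from Definition 3 can be sketched as follows; posts are simplified to (timestamp, lat, lon) tuples and both helper names are illustrative:

```python
from collections import deque

# A sketch of the current/query window W: either the last m posts, or
# all posts within a fixed duration of the newest post.
def query_window_by_count(stream, m):
    """W = {p_{n-m+1}, ..., p_n}: the last m posts."""
    return list(deque(stream, maxlen=m))

def query_window_by_duration(stream, duration):
    """All posts p with t_n - t_p <= duration (stream is time-ordered)."""
    if not stream:
        return []
    t_n = stream[-1][0]
    return [p for p in stream if t_n - p[0] <= duration]

posts = [(0, 0, 0), (10, 0, 0), (50, 0, 0), (55, 0, 0)]
```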

Formal problem definition
The focus of our work is to develop an algorithm for detecting spatio-temporal events from streaming social media, based on a provided set of current social media posts, i.e., the query/current window. We define a spatio-temporal event as a set of social media posts that represents an increase in activity across a period of time within the same locality, based on the current/query window. Given a data stream of social media posts S = {p_1, p_2, ..., p_n} and a query window W = {p_{n−m+1}, p_{n−m+2}, ..., p_n} that represents the currently observed social media posts, we want to identify a set of posts E ⊆ W with the following goals:
• Spatial Proximity: the total pairwise distance Σ_{p_x ∈ E} Σ_{p_y ∈ E} dist(l_x, l_y) should be significantly smaller than that for the same number of posts drawn uniformly at random from W.
• Temporal Proximity: the total gap Σ_{p_x, p_y ∈ E} (t_y − t_x), summed over consecutive p_x and p_y, should be significantly smaller than that for the same number of posts drawn uniformly at random from W.
• Significance: |E| should be as large as possible while maintaining the Spatial and Temporal Proximity goals.
Here dist(l_x, l_y) is the geographical distance between posts p_x and p_y, while (t_y − t_x) is the time difference between consecutive posts p_x and p_y. In short, we are selecting a subset of social media posts that are representative of a spatio-temporal event, based on their spatial and temporal proximity.
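The spatial-proximity goal can be checked numerically; a minimal sketch, where Euclidean distance stands in for dist(l_x, l_y) and the Monte-Carlo baseline (trials, ratio) is an illustrative choice rather than part of the formal definition:

```python
import math
import random

# A candidate event set E satisfies spatial proximity if its total
# pairwise distance is well below that of equally sized subsets drawn
# uniformly at random from the window W.
def pairwise_distance_sum(locations):
    return sum(math.dist(a, b)
               for i, a in enumerate(locations)
               for b in locations[i + 1:])

def spatially_proximate(event_locs, window_locs, trials=200, ratio=0.5, seed=0):
    rng = random.Random(seed)
    baseline = sum(
        pairwise_distance_sum(rng.sample(window_locs, len(event_locs)))
        for _ in range(trials)) / trials
    return pairwise_distance_sum(event_locs) < ratio * baseline

window = [(0, 0), (0.001, 0.001), (0.002, 0.0), (5, 5), (10, 10), (8, 2)]
assert spatially_proximate([(0, 0), (0.001, 0.001), (0.002, 0.0)], window)
assert not spatially_proximate([(0, 0), (5, 5), (10, 10)], window)
```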

Proposed algorithm
In our work, we address spatial proximity by considering windows that are defined in terms of a region, and we address temporal proximity by considering a sliding window and assessing the change in the number of posts in a given region over two consecutive windows. Formally, we consider a number of sliding windows W_ij, identified by an unbounded slide sequence number i = 1, 2, ... and a finite set of regions Ŵ_j, j = 1, 2, ..., γ, where ⋃_j Ŵ_j = Ŵ. All of our sliding windows have a time duration of T and slide increment ΔT, with the head of window i being T_i, i.e. every sliding window i covers the time interval [T_i − T, T_i). In this way, we define

W_ij = {p ∈ S | t_p ∈ [T_i − T, T_i) ∧ l_p ∈ Ŵ_j}   (1)

as the set of posts in the sliding window at interval i for region j. Each region serves as a spatial proximity bound for the posts that it contains, in the sense that we can consider the posts within a given region as satisfying the spatial proximity goal from the problem definition. There are many ways that regions can be selected, e.g. they could be a uniform mesh-based partition of the space, or each region could be associated with a POI in the space (e.g. a region around a park or building). In our work, we consider a multi-scale region selection approach based on a quad-tree division of space; in this case regions overlap, with some regions subsuming others. In previous work, we have also considered the POI region selection approach, and we compare the two in this paper. In order to assess the change in the number of posts from one window to the next, we assume that the number of posts arriving in a given time interval follows a Poisson distribution, and we thereby estimate the arrival rate of posts over a slide increment ΔT for each region from its sliding window:

λ̂_ij = |W_ij| · ΔT / T   (2)

where |x| is the cardinality of set x.
Finally, as the basis for event detection in each region, for each window W_ij we consider the observed number of posts C_ij in the slide increment and make use of the Poisson p.m.f.:

P_ij = (λ̂_ij)^{C_ij} · e^{−λ̂_ij} / C_ij!   (3)

If P_ij is significantly low (below a threshold), then we consider the possibility that region j has exhibited an event, and we consider the posts within the slide increment as potentially comprising that event. Our approach, called the Spatio-temporal Online Event Detection Algorithm (Algorithm 1), includes further aspects that are explained next: (1) building a multiscale spatial resolution grid using the quad-tree method, (2) event detection using the Poisson model and signal smoothing, (3) event merging and (4) event pruning. Overall, our algorithm maintains an unbounded set of detected events E found in the unbounded stream S. Figure 1 provides an overview of our algorithm, and a detailed explanation of each phase is provided in the following subsections. Table 2 provides an overview of the notation used in the algorithm.
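The rate estimate and Poisson test above can be sketched as follows; the rate formula follows the reconstruction λ̂_ij = |W_ij| · ΔT / T, and all numbers are illustrative:

```python
import math

# An observation whose Poisson p.m.f. value falls below tau_1 flags the
# region; this covers both unusually high and unusually low counts.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def flag_region(window_count, observed_count, T, dT, tau_1=0.01):
    lam = window_count * dT / T   # expected posts per slide increment
    return poisson_pmf(observed_count, lam) < tau_1

# 100 posts over a 1-hour window implies ~10 posts per 6-minute slide
# increment; 30 observed posts is a burst, 10 is business as usual:
assert flag_region(window_count=100, observed_count=30, T=3600, dT=360)
assert not flag_region(window_count=100, observed_count=10, T=3600, dT=360)
```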

Phase 1: build quad-tree
In this phase, we use the quad-tree method for spatial decomposition [35,36]. This method has been used in a variety of applications including image processing, computer graphics, geographic information systems and robotics [37,38]. We construct a quad-tree at each time interval i. The quad-tree in two-dimensional space starts with a large rectangular region, in our work Ŵ_1 = Ŵ, which represents the root of the quad-tree. The root region Ŵ_1 is subdivided into four equal-sized regions {Ŵ_11, Ŵ_12, Ŵ_13, Ŵ_14}, and each subregion is recursively subdivided, i.e. creating {Ŵ_111, Ŵ_112, ...}, and so on. Subdivision of a region x only occurs if both |W_ix| > θ_count posts and the area of region x is at least θ_area. These constraints limit the minimum spatial resolution. As the quad-tree is constructed, we also compute λ̂_ij and C_ij for each node, including internal nodes; here, node is synonymous with region in that region j is node j.
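The subdivision rule of Phase 1 can be sketched as follows; posts are (lat, lon) points, regions are (min_lat, min_lon, max_lat, max_lon) boxes in arbitrary units, and the dict-based node layout is an illustrative assumption:

```python
# Recursively split a rectangular region into four equal quadrants
# while it holds more than theta_count posts and its area is at least
# theta_area, mirroring the subdivision condition described above.
def build_quadtree(posts, region, theta_count, theta_area):
    y0, x0, y1, x1 = region
    node = {"region": region, "count": len(posts), "children": []}
    if len(posts) > theta_count and (y1 - y0) * (x1 - x0) >= theta_area:
        ym, xm = (y0 + y1) / 2, (x0 + x1) / 2
        for sub in [(y0, x0, ym, xm), (y0, xm, ym, x1),
                    (ym, x0, y1, xm), (ym, xm, y1, x1)]:
            inside = [p for p in posts
                      if sub[0] <= p[0] < sub[2] and sub[1] <= p[1] < sub[3]]
            node["children"].append(
                build_quadtree(inside, sub, theta_count, theta_area))
    return node

tree = build_quadtree([(0.1, 0.1)] * 5, (0, 0, 1, 1),
                      theta_count=2, theta_area=0.01)
```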

Phase 2: event detection
For a sliding window interval i and all regions j (including those at internal nodes of the quad-tree), we use the Poisson distribution [39][40][41] to measure how likely the observed number of posts, C_ij, is for the slide increment ΔT that immediately follows the sliding window. The estimated arrival rate of posts is computed as in Eq. 2, and the probability P_ij of observing C_ij posts in time ΔT is computed as in Eq. 3. The more unlikely the observation, which may result from a significantly large increase or decrease in posts relative to the mean, the more we consider the posts (or lack thereof) to comprise an event. Therefore, regions with P_ij < τ_1, a constant threshold, could be flagged as potential regions for events. To compensate for sparse and/or incomplete data, where the stream of posts may not have a significantly strong representation of social media posts, we "smooth" the Poisson signal by computing an exponentially decaying average event signal F_ij:

F_ij = α · δ_ij + (1 − α) · F_{i−1,j}   (4)

where δ_ij is the scaled Poisson signal, F_ij and F_{i−1,j} are the event signal values for node (region) j at intervals i and i − 1 respectively, and 0 ≤ α ≤ 1 is a constant decay parameter. Finally, if F_ij ≥ τ_2, an event detection threshold, we flag the posts, or more specifically the interval and region, as comprising an event.
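The smoothing step can be sketched as follows; the update rule follows the reconstructed Eq. 4, while the scaling of δ_ij from P_ij is an assumption (here δ = 1 − P_ij, so unlikely observations produce a strong signal), and α and τ_2 are illustrative values:

```python
# Exponentially decaying average event signal:
# F_ij = alpha * delta_ij + (1 - alpha) * F_{i-1,j}
def smooth_signal(p_ij, f_prev, alpha=0.3):
    delta = 1.0 - p_ij   # scaled Poisson signal (assumed form)
    return alpha * delta + (1 - alpha) * f_prev

# A burst sustained over three intervals pushes the signal past a tau_2
# of 0.6, while a single unlikely interval on its own would not:
f = 0.0
for p in [0.001, 0.002, 0.001]:   # three unlikely intervals in a row
    f = smooth_signal(p, f)
```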

Phase 3: merge events
Each event found in the previous Event Detection phase has a different spatial resolution but a fixed temporal scale (ΔT). In this phase, we construct events with multiscale temporal resolution using a merging method. Events in the same region that occur at consecutive time intervals are merged. This gives an estimate of the period of time during which an event is highlighted, rather than assuming a predefined fixed duration (ΔT). For instance, if two events e_1 and e_2 occur in the same region at time intervals i and i + 1, then both events are combined into one event with a period of 2ΔT. When merging, we combine the posts and average the signal strength for the merged event.
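Phase 3 can be sketched as follows; the event record fields (region, interval, posts, signal) are illustrative stand-ins for the paper's event representation:

```python
# Merge events flagged in the same region at consecutive intervals,
# combining their posts and keeping a running average of the signal.
def merge_events(events):
    """events: dicts with "region", "interval", "posts", "signal"."""
    merged = []
    for e in sorted(events, key=lambda e: (e["region"], e["interval"])):
        last = merged[-1] if merged else None
        if (last and last["region"] == e["region"]
                and e["interval"] == last["interval"] + last["duration"]):
            last["posts"] = last["posts"] + e["posts"]
            last["signal"] = (last["signal"] * last["duration"]
                              + e["signal"]) / (last["duration"] + 1)
            last["duration"] += 1       # event now spans one more dT
        else:
            merged.append({**e, "duration": 1})
    return merged

evts = [{"region": "A", "interval": 1, "posts": ["p1"], "signal": 0.8},
        {"region": "A", "interval": 2, "posts": ["p2"], "signal": 0.6},
        {"region": "B", "interval": 2, "posts": ["p3"], "signal": 0.9}]
out = merge_events(evts)
```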

Phase 4: prune events
To further increase the precision of our event detection and to handle spatio-temporal events that occur over a changing resolution, we prune events after merging them. First, we only select events with duration ≥ θ_duration to be included in the final set of detected events. The idea is that the longer the event duration, the more reliable and accurate it is. In other words, regions/nodes which are flagged for short periods are most likely to be noise (i.e. false positives). Second, the fact that we compute the signal for all quad-tree nodes (i.e. both internal and leaf nodes) leads to the propagation of some flagged events over the different tree levels (i.e. multiple spatial resolutions). So if an event is detected at the same time on different tree levels, we only keep the node with the strongest signal.
In other words, if overlapping tree nodes (i.e. parent, child, grandchild, and so on) are flagged as events from time t_1 to t_2, then we select the node (i.e. region) with the strongest signal to be the spatial resolution of the final detected event. This gives us a set of unique events which happened at different spatial and temporal resolutions. Third, we utilize the entities in the social media posts to detect and eliminate spam or fake events. We extract the set of unique entities (which may be keywords, mentions, hashtags, etc., depending on the type of social media post) across the posts in the event. If the size of the set is less than θ_entity, then we remove the event.
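The three pruning rules can be sketched as follows; the event fields are illustrative, and testing "same time" via interval equality is a simplification of the t_1-to-t_2 overlap described above:

```python
# Drop short events, drop events with too few distinct entities, and
# among overlapping quad-tree levels flagged at the same time keep only
# the strongest signal.
def prune_events(events, theta_duration, theta_entity, overlaps):
    """overlaps(a, b) -> True if regions a and b are nested in the tree."""
    kept = [e for e in events
            if e["duration"] >= theta_duration
            and len(set(e["entities"])) >= theta_entity]
    final = []
    for e in sorted(kept, key=lambda e: -e["signal"]):
        if not any(overlaps(e["region"], f["region"])
                   and e["interval"] == f["interval"] for f in final):
            final.append(e)
    return final

# Quad-tree node ids as path strings: "13" is a descendant of "1".
nested = lambda a, b: a.startswith(b) or b.startswith(a)
events = [
    {"region": "1", "interval": 3, "duration": 2, "signal": 0.9,
     "entities": ["#a", "#b"]},
    {"region": "13", "interval": 3, "duration": 2, "signal": 0.7,
     "entities": ["#a", "#b"]},
    {"region": "2", "interval": 3, "duration": 1, "signal": 0.95,
     "entities": ["#a", "#b"]},
]
```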

Experimental design
In this section, we describe our datasets and give an overview of the evaluation metrics and baseline algorithm used in our experimental methodology.

Dataset and data collection
To demonstrate the generalizability of our proposed algorithm, we perform our experimental evaluation on two datasets based on Twitter and Flickr. For our Twitter dataset, we performed a two-stage collection of tweets, similar to [42]. We first used the Twitter REST API to retrieve all geo-tagged tweets posted by users in Melbourne in 2017. As we focus on geo-tagged tweets, this collection process resulted in 203,519 geotagged tweets by 22,264 different users.
For our Flickr dataset, we focused on geo-tagged photos posted in four large cities, namely Melbourne, London, New York and Paris. These geo-tagged photos were extracted from the Yahoo! Flickr 100M Creative Commons (YFCC100M) dataset [43,44]. The YFCC100M dataset comprises 100M geo-tagged photos and videos along with their meta-data such as latitude/longitude coordinates, date/time taken, photo name, user description, assigned tags, etc.

Evaluation methodology
In our experiments, we evaluate the various algorithms using the standard metrics of precision and recall, and our proposed metric of strength index. Precision and recall are defined based on the common definitions of true/false positives and true/false negatives, as used in a confusion matrix. In the context of our study, they are defined as follows.

Precision
We use precision to measure the ratio of correctly detected events (true positives) to the total number of detected events, as given in Eq. 5:

Precision = tp / (tp + fp)   (5)

where tp and fp are true and false positives respectively.
The absence of ground-truth labels makes the task of computing precision very challenging. As it is impractical to manually label the overly large number of events in the dataset, we propose a semi-automated assessment methodology using Google search results, where each event is assigned 1 if it is a true event and 0 otherwise. To do so, we first query Google using the top k entities as well as the date-time of each detected event, and use the query results to decide whether the event is true or false. If we do not get any useful information about the event from Google, we manually inspect the posts of the event to decide whether it is a personal/private event, spam or a wrong event.

Recall
Recall is calculated using Eq. 6 and reflects the ability of the model to find all actual events within a dataset. In the context of event detection, recall measures the percentage of detected events with respect to important events/news appearing in real-world news headlines:

Recall = tp / (tp + fn)   (6)

where tp and fn are true positives and false negatives respectively.
Similar to precision, we perform a manual assessment for recall due to the absence of ground truth events. This is done by using Google search engine to select the most common events appearing on the news headlines for the days corresponding to the analysis. This includes festivals, public holiday events and international performances occurring in the area of analysis. Each event is represented by a list of entities, which are used to manually decide whether an event is detected by our method or not.

Strength index (SI)
To examine whether the posts assigned to an event e = (region, start, end, period, posts, signal) are relevant, we introduce a metric which we refer to as the event strength index (SI). SI is the ratio of the occurrence counts of the retrieved top entities to the total count of event posts, and we use it as an indicator of how important/precise a reported event is. For an event e with total number of posts C = |posts| and χ_i being the i-th most frequent entity (which could be hashtags and mentions for Twitter, or image tags and descriptions for Flickr), we calculate SI using the following formula:

SI = (1/C) · Σ_{i=1}^{k} C_{χ_i}   (7)

for constant k > 0, where C_{χ_i} is the number of posts that contain χ_i. SI ranges from 0 to k, where k is the number of top entities. We obtain a small value for SI (≪ 1) when the top entities do not match the context of the detected event, or when they are relevant but occur only a small number of times. For example, a value of 0 for SI means that all posts for an event e are irrelevant, while a value of k means that all event posts contain at least one occurrence of each top entity.
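The strength index can be sketched as follows, using the reconstructed Eq. 7; posts are modelled as sets of entities, and k and the toy posts are illustrative:

```python
from collections import Counter

# SI sums, over the k most frequent entities, the fraction of event
# posts containing each entity, so SI ranges from 0 to k.
def strength_index(posts, k=5):
    counts = Counter(ent for p in posts for ent in p)
    top = [ent for ent, _ in counts.most_common(k)]
    return sum(sum(1 for p in posts if ent in p) for ent in top) / len(posts)

posts = [{"#openday", "@unimelb"}, {"#openday"}, {"#openday", "#melb"}]
# "#openday" appears in 3/3 posts and the second-ranked entity in 1/3,
# giving SI = 3/3 + 1/3 = 4/3 for k = 2:
si = strength_index(posts, k=2)
```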

Baseline algorithm
To show the effectiveness of the proposed method, we compare it with two baseline event detection algorithms. The first baseline is based on POIs, and the second uses a clustering approach for real-time event detection. More details about each baseline are provided in the following sections.

Location-based event detection using points of interests (POIs)
POIs have been frequently used in location-based recommendation [45][46][47] and in various POI-based event detection techniques [48]. Building upon these works, we develop a baseline using a similar idea of tagging geo-tagged social media to POIs. As in these earlier works, we obtain a list of known and popular POIs for each city from their respective Wikipedia entries. In this baseline algorithm, we utilize a spatial representation of tweets based on their proximity (<100 m) to known POIs, instead of assigning tweets to dynamically sized grids based on quad-trees. The remaining steps of computing Poisson signals and determining event duration remain the same as previously described in Section "Proposed algorithm".
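The POI-tagging step of this baseline can be sketched as follows; the great-circle haversine distance implements the 100 m proximity test, and the POI coordinates are illustrative:

```python
import math

# Assign a tweet to the first known POI within radius_m metres.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def assign_to_poi(lat, lon, pois, radius_m=100):
    for name, (plat, plon) in pois.items():
        if haversine_m(lat, lon, plat, plon) <= radius_m:
            return name
    return None

pois = {"Federation Square": (-37.8180, 144.9691)}
assert assign_to_poi(-37.8181, 144.9692, pois) == "Federation Square"
assert assign_to_poi(-37.9000, 145.0000, pois) is None
```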

Incremental clustering for real-time event detection
Among the existing event detection techniques and algorithms discussed in the introduction, we select the clustering-based event detection approach proposed in [10] for comparison with our approach, as it is the most closely related to our problem. The approach detects significant clusters that are sufficiently dense and large in streams of spatial events, with the advantage of tracking cluster evolution over time. Given a list of active data points (i.e. spatial events) that occur in the interval [t_c − T, t_c], where t_c is the current time and T is a maximal temporal gap, the algorithm finds the set of significant clusters by repeatedly extracting a set of event circles and unions at every time tick t. Here, an event circle C is a group of active events that fit within a circle of maximal radius R, while a union is a set of event circles that share at least K overlapping events (i.e. K-connected events). Following this, the algorithm finds all significant clusters, where a significant event cluster is a union that includes at least N spatial events, i.e. the minimal cluster size. The values for parameters t, T, R, K and N are user-specified. More details about the algorithm can be found in [10].

Results
In this section, we evaluate the proposed method in four different aspects. Firstly, we present a preliminary analysis of the proposed method in Section "Preliminary analysis". Secondly, we present a detailed comparative analysis against the baseline algorithms in Section "Comparative analysis of baseline algorithms". Thirdly, using the tweets over a period of one year, we evaluate our algorithm based on precision, recall and strength index in Section "Case study: Twitter dataset". Finally, we show a case study of event detection using the Flickr image dataset in Section "Case study: Flickr dataset".

Preliminary analysis
In this section, we use a subset of the collected tweets to evaluate the individual phases of the proposed method. We extract the January 2017 Melbourne tweets, which comprise 23,327 geotagged tweets by 5,427 users. First, a quad-tree is used to construct the multiscale spatial grid. Then, events are detected using the Poisson model. Following this, a smoothing function is applied for accurate estimation of event duration. Finally, a false-positive removal phase eliminates both falsely highlighted events and spam events. Table 3 presents the different parameters used in the proposed method. The parameters were chosen after several experiments, to achieve the best performance. Figure 2 shows an example of the quad-tree results.
The Poisson signal threshold τ_1 has a strong impact on the results of our proposed method, so we explored this parameter; Table 4 shows the number of events detected at different phases for varying values of τ_1. Combined with Table 5, it is clear that reducing τ_1 reduces the total number of detected events while keeping the strongest events. For our case studies, we chose τ_1 = 0.01 to obtain a reliable estimation of events in terms of both precision and recall.

Case study: Twitter dataset
To evaluate the performance and reliability of the proposed method, we experiment with the whole dataset for Melbourne in 2017. Figure 3 visualises some of the detected events on the map. Each event has a start and end time, total tweet count, area, and the top 5 hashtags/mentions. We use the top hashtags/mentions along with the event time to manually evaluate the correctness of each event. Table 6, column "Twitter" shows the total number of flagged events after each phase of the proposed method. In total, we detect 137 events after the removal of all false positives.

Precision and strength index results
We randomly select 45 events as the evaluation set. Table 7 reports the manual evaluation results for sample events from this set. For each event, the algorithm returns the event start and end date-time, the area/region in sqkm, the tweet count, and the top 5 hashtags/mentions with their occurrence counts. In total, 40 out of 45 events are correct according to the manual evaluation, giving a precision of 89%. False positives are highlighted in italics in Table 7. The table shows that the proposed method detects both local and global events (see the "area" column in Table 7), as well as concurrent events such as events #26 and #45 (highlighted in bold italics). The method also finds private events such as event #34 and opinion events such as event #38 about marriage equality. The SI index is also reported for all events (see column "SI" in Table 7). We obtain an average SI of 1.2 across the evaluation set, indicating that the tweets of an event contain, on average, at least one of the relevant hashtags/mentions, which confirms the accuracy of the proposed method.
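As a rough sketch of how such an SI value can be read (the paper's exact definition should be consulted), take SI as the number of top-entity occurrences divided by the event's tweet count, so SI ≥ 1 means each tweet carries at least one relevant entity on average:

```python
def strength_index(tweets, top_entities):
    """Sketch of SI as top-entity occurrences per tweet. `tweets` is a
    list of entity lists, one per tweet; entity names are illustrative."""
    top = set(top_entities)
    occurrences = sum(sum(1 for e in entities if e in top)
                      for entities in tweets)
    return occurrences / len(tweets)
```

Under this reading, three tweets carrying four occurrences of the event's top entities would score SI ≈ 1.33, comparable to the 1.2 average reported above.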

Recall and strength index results
We select 15 events to assess recall; 10 of them are correctly detected according to the manual evaluation, giving a recall of 66.7% and an average SI of 1.034. The reduction in recall is explainable, since social media does not contain information about all actual events, which increases the number of false negatives. Table 8 reports the date-time, top 5 entities, tweet count, event area and SI for sample events from the evaluation set. In the table, false negatives are highlighted in italics.

Case study: Flickr dataset
In this section, we present our second case study for event detection using the Flickr dataset introduced in Section "Dataset and data collection". The dataset was further reduced by keeping only geotagged images from January 2013 for Melbourne and New York, and from January 2012 for London and Paris; these are the years with the largest number of photos taken in January. We evaluated the proposed event detection method using the set of images collected for each city. In our experiments, instead of extracting the top k hashtags/mentions as in the Twitter case study, we use the title, user tags and description attributes of each image to extract the most frequent entities of an event. Table 6, column "Flickr" reports the results of each individual phase of the proposed method using the Flickr dataset for each city. Figure 4 shows a sample of events detected in London and Paris in January 2012 using the Flickr dataset. We manually evaluate the detected events using Google and by inspecting the tags, description and title of the images in each event. The figure includes the description of each event as well as the computed SI index. Similarly, we were able to identify and verify various detected events in Melbourne and New York for January 2013. These results demonstrate the effectiveness of the proposed method for event detection across different social media platforms.

POI-based event detection
We use the January 2017 Melbourne geotagged tweets dataset to run this experiment. We selected 242 POIs (100 m × 100 m) in Melbourne where most events occur, and applied the proposed event detection algorithm to the POIs instead of the multiscale grid generated by the quad-tree method. Every T (i.e. 10 min), we flag all POIs whose smoothed signal is below the threshold τ 2 . Table 9 reports the details of the events detected on 28/01/2017 using both the proposed and baseline methods. The results show that our approach detects all events identified by the baseline method, along with additional events at spatial scales the baseline method was unable to capture. The experiments also show that the POI-based method detects events of shorter duration than the quad-tree based method, because the POI grid cells are small (100 m × 100 m).
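The per-POI flagging rule can be sketched as follows; the exponential form of the smoothing and the factor alpha are assumptions for illustration, not the paper's exact smoothing function:

```python
def smooth_and_flag(pvalues, tau2=0.05, alpha=0.5):
    """Sketch: exponentially smooth a POI's per-interval Poisson
    p-values and flag intervals whose smoothed signal drops below
    tau2. `alpha` weights the newest observation."""
    flags, s = [], None
    for p in pvalues:
        s = p if s is None else alpha * p + (1 - alpha) * s
        flags.append(s < tau2)
    return flags
```

Smoothing means a single low p-value is not enough to trigger a flag; the signal must stay low across consecutive 10-min intervals, which also explains why merged flags yield a more precise event duration.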

Incremental clustering for event detection
We implemented the full algorithm in Python and experimented with the geotagged tweets dataset for Melbourne in 2017. We ran the algorithm with the following parameters: time tick t = 10 min, R = 0.25 km, T = 3 hrs, K = 2 , and N = 5 . The values of the first two parameters are chosen to be close to the values used in our approach for a fair comparison, while the values of the other parameters are the same as those used in [10].
For each time tick, i.e. 10 min, we record the set of significant unions (i.e. clusters). In total, we obtained 154,546 significant events for Melbourne in 2017. To avoid detecting the same event many times across different time ticks, we group the detected events by their union ID and keep only the event with the highest number of tweets for each union ID. This grouping strategy decreased the total number of significant events to 3994, since it removes all redundant events. Figure 5 visualises some of the detected events on the map. Each event has a start and end time, total tweet count, SI index and the top 5 hashtags/mentions.
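The union-ID grouping step above can be sketched as follows (the field names are illustrative, not the actual schema):

```python
def deduplicate_events(events):
    """Keep one event per union ID: the occurrence with the most tweets.
    `events` is a list of dicts with 'union_id' and 'tweet_count' keys."""
    best = {}
    for ev in events:
        uid = ev["union_id"]
        if uid not in best or ev["tweet_count"] > best[uid]["tweet_count"]:
            best[uid] = ev
    return list(best.values())
```

A single pass with a dictionary keyed by union ID suffices, so the deduplication is linear in the number of recorded events.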
For quantitative comparison with our approach, we compute precision and recall for the implemented algorithm. Similar to the evaluation of our approach, we use the top hashtags/mentions along with the event time to manually evaluate the correctness of the event.
Precision evaluation. We randomly select 20 events as the evaluation set. Table 10 shows the manual evaluation results for all events. For each event, we calculate the event start and end date-time using the date-times of all tweets assigned to the cluster. We also report the area/region in sqkm, the tweet count, and the top 5 hashtags/mentions. The manual evaluation shows that the majority of these events are just noise. This can be improved by increasing the values of K and N, which results in a smaller number of detected events and accordingly fewer false positives. Recall evaluation. We use the same 15 common events used for the recall assessment of our approach. A total of 13 events are correctly detected according to the manual evaluation, giving a recall of 86.7%, which is higher than the recall of our approach. The reason is that the parameters used in this experiment produce a large number of detections, including many false positives; the parameters should be tuned to balance the trade-off between recall and precision. Table 11 reports the date-time, top 5 hashtags/mentions with their occurrence counts, tweet count, event area and SI for all events. False negatives are highlighted in italics in the table.
Parameter effects. Different values of T , K, and N were used to study their effect on the total number of detected events as well as on the precision and recall measures. In total, we ran 21 experiments, each with a different set of parameters. For each experiment, we record the total number of significant unions/events and the detected events, as shown in Table 12. From the table, the smallest number of events in Melbourne 2017 is detected using the parameters T = 30 min, K = 10 , and N = 15 , with a total of 59 events (highlighted in bold italics in the table), while the largest number of events is detected using the parameters T = 3 hrs, K = 2 , and N = 5 , with a total of 154,546 events (highlighted in italics in the table). This shows the great impact of parameter selection on the final set of detected events. Figure 6 also visualises the effect of the temporal gap parameter T on the total number of detected events.
We also compute the precision and recall using the parameter set that yields the smallest number of detected events, i.e. 59 events (T = 30 min, K = 10 , and N = 15 ). This confirms that tuning the parameters towards a smaller number of detected events increases precision while decreasing recall. For precision, 15 out of 20 events are correctly detected, giving a precision of 75%. For recall, only 4 out of 15 events are correctly detected, giving a recall of 26.7%. The detected events belong to "Australia Day", "ANZAC Day", "AFL Grand Final Holiday", and "Boxing Day". The evaluation measures for both the clustering-based method and ours are summarized in Table 13. The quantitative comparison shows that our method outperforms the clustering-based method. Note that the clustering-based method may need to be applied to tweets filtered by specific keywords in order to obtain the desired results, as the authors of [10] did in their Twitter case study, where they applied the algorithm to a weather-related tweets dataset.

Conclusion
In this paper, we present a multiscale spatio-temporal event detection approach that is capable of detecting social media events of different spatial and temporal resolutions in real time. The approach utilizes a quad-tree data structure to identify events with varying spatial coverage, and a Poisson model with a smoothing function to detect previously unseen events at different temporal resolutions. Moreover, the proposed method is unsupervised, does not require a list of predefined topics for event detection, and effectively detects both local and global events. The method is evaluated using different social media datasets, Twitter and Flickr, for different cities: Melbourne, London, Paris and New York. The experiments demonstrate that the proposed method achieves better results than the baseline algorithms.
In the future, we plan to improve our method by taking into account the changing structure of the constructed quad-tree over time. More experiments will also be conducted to fine-tune the parameters of the proposed method using different datasets. The proposed method will be extended to use non-geotagged social media data based on textual information [49][50][51][52][53]. Finally, we can also improve our tour recommendation work by planning itineraries that avoid detected events such as accidents [54][55][56][57][58].