In addition to the heading information from the IMU sensor, we propose using course-over-ground as a second sensory input to further refine drone navigation. Course-over-ground is computed as the bearing between two recorded geodetic coordinates while the drone flies along the trajectory produced by the previous algorithm. To refine the course-over-ground value, our method uses the position covariances that the autopilot software sends together with the drone's current geodetic coordinate. We apply the Unscented Transform [19] to compute the bearing between the two coordinates while accounting for their covariances. One complication is that the autopilot software reports the position covariance in meters, which conflicts with the degree-based convention used to define the drone's position. To resolve this, we convert the position variance back to degrees using a Taylor approximation of the Hubeny distance formula, which provides the inverse mapping from meters to degrees. When more than one course-over-ground sample is available, a Kalman Filter [20] can further refine the measurement. Figure 8 shows the algorithm used to obtain course-over-ground using the Unscented Transform.
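As a rough illustration of the meters-to-degrees conversion step, the sketch below uses a simplified spherical approximation in place of the Taylor expansion of the Hubeny formula; the function name and the meters-per-degree constant are ours, not taken from the autopilot software:

```python
import math

def meters_to_degrees_std(sigma_m, lat_deg):
    """Convert a position standard deviation from meters to degrees.

    Simplified spherical approximation standing in for the paper's
    Taylor expansion of the Hubeny distance formula. Returns the
    standard deviation along latitude and longitude, in degrees.
    """
    m_per_deg_lat = 111_320.0  # approximate meters per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat_deg))
    return sigma_m / m_per_deg_lat, sigma_m / m_per_deg_lon
```

The longitude scale shrinks with the cosine of latitude, which is why the conversion cannot be a single constant and motivates the distance-formula inversion used in the paper.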

To use course-over-ground, we can regard it as the output trajectory corresponding to the trajectory input created in the previous iteration of the trajectory calculation. With this view, heading information could in principle be replaced entirely by comparing the input, the output, and the bearing; in our system, however, we compute the new trajectory using both course-over-ground and heading information. To do so, we use the Kalman gain concept [20] to fuse the two calculations into a single trajectory input. Before fusing them, we again apply the Unscented Transform to pass both the heading-based and the course-over-ground-based trajectories through the trajectory calculation function. The calculations are stated below:

$$\begin{array}{*{20}c} {\theta_{{world_{t} }}^{cog} ,\sigma_{{\theta_{{world_{t} }}^{cog} }} = UT\left( {\left[ {\begin{array}{*{20}c} {p_{\varphi }^{d} } \\ {p_{\lambda }^{d} } \\ {\theta_{{cog_{t - 1} }} } \\ \end{array} } \right],\left[ {\begin{array}{*{20}c} {\sigma_{{p_{\varphi }^{d} }} } & 0 & 0 \\ 0 & {\sigma_{{p_{\lambda }^{d} }} } & 0 \\ 0 & 0 & {\sigma_{{\theta_{{cog_{t - 1} }} }} } \\ \end{array} } \right],\,func} \right)} \\ \end{array}$$

(14)

$$\begin{array}{*{20}c} {\theta_{{world_{t} }}^{hdg},\sigma_{{\theta_{{world_{t} }}^{hdg} }} = UT\left( {\left[ {\begin{array}{*{20}c} {p_{\varphi }^{d} } \\ {p_{\lambda }^{d} } \\ {\theta_{{hdg_{t} }} } \\ \end{array} } \right],\left[ {\begin{array}{*{20}c} {\sigma_{{p_{\varphi }^{d} }} } & 0 & 0 \\ 0 & {\sigma_{{p_{\lambda }^{d} }} } & 0 \\ 0 & 0 & {\sigma_{{\theta_{{hdg_{t} }} }} } \\ \end{array} } \right], \,func} \right)} \\ \end{array}$$

(15)

where \(\theta_{{world_{t} }}^{cog}\) denotes the difference between the course-over-ground and the bearing from the current position to the target position, and \(\theta_{{world_{t} }}^{hdg}\) denotes the difference between the heading and that bearing. The symbol \(\sigma\) denotes the variance of the quantity in its subscript. The function UT(v, c, f) is the Unscented Transform, which passes vector v with covariance c through function f to obtain the function's result and its covariance. The function func above computes the difference between the angle provided as a parameter (the third row of the vector) and the bearing between the current position and the target position.
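A minimal sigma-point sketch of the UT(v, c, f) operation for a scalar-valued f is given below. The Merwe scaled parameterization and its default constants are our assumption; the paper does not specify one, and angle wrap-around handling is omitted for brevity:

```python
import numpy as np

def unscented_transform(v, c, func, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean vector v with covariance c through a scalar function func.

    Standard Merwe-style sigma-point construction; returns the transformed
    mean and variance. Angle wrap-around is not handled here.
    """
    n = len(v)
    lam = alpha**2 * (n + kappa) - n
    sqrt_c = np.linalg.cholesky((n + lam) * c)          # matrix square root
    sigma_pts = ([v]
                 + [v + sqrt_c[:, i] for i in range(n)]
                 + [v - sqrt_c[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))    # mean weights
    wc = wm.copy()                                       # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    ys = np.array([func(p) for p in sigma_pts])          # pass sigma points through f
    mean = wm @ ys
    var = wc @ (ys - mean) ** 2
    return mean, var
```

For a linear func the transform is exact, which makes it easy to sanity-check; in Eqs. (14) and (15) func would be the angle-difference function described above.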

After obtaining the two input trajectories, we fuse the two calculations using the following formulas:

$$\begin{array}{*{20}c} {\theta_{{drone_{t} }}^{cog} = \theta_{{drone_{t - 1} }} + \theta_{{world_{t} }}^{cog} } \\ \end{array}$$

(16)

$$\begin{array}{*{20}c} {\theta_{{drone_{t} }}^{hdg} = \theta_{{world_{t} }}^{hdg} } \\ \end{array}$$

(17)

$$\begin{array}{*{20}c} {\theta_{innovation} = \theta_{{drone_{t} }}^{cog} - \theta_{{drone_{t} }}^{hdg} } \\ \end{array}$$

(18)

$$\begin{array}{*{20}c} {K = \frac{{\sigma_{{\theta_{{world_{t} }}^{hdg} }} }}{{\sigma_{{\theta_{{world_{t} }}^{hdg} }} + \,\sigma_{{\theta_{{world_{t} }}^{cog} }} }}} \\ \end{array}$$

(19)

$$\begin{array}{*{20}c} {\theta_{{drone_{t} }} = \theta_{{drone_{t} }}^{hdg} + \left( {K{\cdot}\theta_{innovation} } \right)} \\ \end{array}$$

(20)

where \(\theta_{{drone_{t} }}^{cog}\) denotes the previous trajectory input updated by the difference between course-over-ground and bearing, \(\theta_{{drone_{t} }}^{hdg}\) denotes the difference between the drone heading and the bearing, and \(\theta_{{drone_{t} }}\) denotes the trajectory input for the drone, with subscript t as the time step. Figure 9 shows the full navigation system with course-over-ground.
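The fusion in Eqs. (16)–(20) reduces to a few lines; the sketch below transcribes them directly, with angle wrap-around handling again omitted:

```python
def fuse_trajectory(theta_drone_prev, theta_world_cog, var_cog,
                    theta_world_hdg, var_hdg):
    """Fuse COG- and heading-derived trajectory inputs via a Kalman gain.

    Direct transcription of Eqs. (16)-(20); all angles in consistent units.
    """
    theta_cog = theta_drone_prev + theta_world_cog   # Eq. (16)
    theta_hdg = theta_world_hdg                      # Eq. (17)
    innovation = theta_cog - theta_hdg               # Eq. (18)
    K = var_hdg / (var_hdg + var_cog)                # Eq. (19): gain in [0, 1]
    return theta_hdg + K * innovation                # Eq. (20)
```

Note that the gain weights the two sources by their variances: when the heading variance dominates, K approaches 1 and the result follows the course-over-ground estimate, and vice versa.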