2.1.1. Differential Techniques:
Differential techniques compute optical flow information from spatial and temporal variations of the image brightness, based on the brightness constancy and temporal consistency assumptions [15], [16]. These assumptions result in a motion constraint estimated by the first-order Taylor expansion of

$$I(\mathbf{x}, t) = I(\mathbf{x} - \mathbf{v}t, 0), \tag{1}$$

where $\mathbf{x} = (x, y)^T$ and $\mathbf{v} = (u, v)^T$ is the image velocity. The Taylor expansion of (1), or the intensity conservation assumption

$$\frac{dI(\mathbf{x}, t)}{dt} = 0, \tag{2}$$

implies that

$$\nabla I(\mathbf{x}, t) \cdot \mathbf{v} + I_t(\mathbf{x}, t) = 0, \tag{3}$$

where $\nabla I(\mathbf{x}, t) = (I_x(\mathbf{x}, t), I_y(\mathbf{x}, t))^T$ is the spatial intensity gradient, $I_t(\mathbf{x}, t)$ is the temporal derivative and $\cdot$ denotes the usual dot product. Equation (3) gives the normal component ($\mathbf{u}_n$) of the motion of constant-intensity spatial contours as $\mathbf{u}_n = s\,\mathbf{n}$, where $s$ is the normal speed and $\mathbf{n}$ is the normal direction, given by

$$s(\mathbf{x}, t) = \frac{-I_t(\mathbf{x}, t)}{\lVert \nabla I(\mathbf{x}, t) \rVert}, \qquad \mathbf{n}(\mathbf{x}, t) = \frac{\nabla I(\mathbf{x}, t)}{\lVert \nabla I(\mathbf{x}, t) \rVert}. \tag{4}$$
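The constraint (3) and the normal flow (4) can be evaluated directly from two frames. The following is a minimal sketch, assuming the frames are grayscale NumPy arrays separated by one time step; the function name, the first-order difference scheme and the small epsilon guard are illustrative choices rather than details from the paper.

```python
import numpy as np

def normal_flow(I0, I1, eps=1e-6):
    """Estimate the normal component of optical flow between two grayscale
    frames I0 and I1 (float arrays of equal shape), following equations
    (3)-(4) with simple first-order differences."""
    Ix = np.gradient(I0, axis=1)        # spatial derivative along x (columns)
    Iy = np.gradient(I0, axis=0)        # spatial derivative along y (rows)
    It = I1 - I0                        # temporal derivative for a unit time step
    mag = np.sqrt(Ix**2 + Iy**2) + eps  # gradient magnitude, guarded against division by zero
    s = -It / mag                       # normal speed  s = -I_t / ||grad I||
    n = np.stack((Ix / mag, Iy / mag))  # normal direction  n = grad I / ||grad I||
    return s * n                        # normal velocity u_n = s * n, shape (2, H, W)
```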
Second-order differential methods bound the 2-D velocity using second-order derivatives, based on the conservation of the spatial intensity gradient,

$$\frac{d\,\nabla I(\mathbf{x}, t)}{dt} = 0. \tag{5}$$

The conservation equation (5) implies

$$\begin{pmatrix} I_{xx}(\mathbf{x}, t) & I_{yx}(\mathbf{x}, t) \\ I_{xy}(\mathbf{x}, t) & I_{yy}(\mathbf{x}, t) \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} I_{tx}(\mathbf{x}, t) \\ I_{ty}(\mathbf{x}, t) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

The coefficients of these equations are combinations of spatial and temporal derivatives of the image brightness.
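Because this constraint is a 2×2 linear system per pixel, both velocity components can be recovered wherever the intensity Hessian is well conditioned. A minimal per-pixel sketch, assuming the five second-order derivatives have already been estimated (the function name and the singularity guard are illustrative):

```python
import numpy as np

def second_order_velocity(Ixx, Ixy, Iyy, Itx, Ity):
    """Solve the per-pixel second-order constraint
    [[Ixx, Ixy], [Ixy, Iyy]] v = -[Itx, Ity] for the 2-D velocity v.
    Returns None when the Hessian is close to singular."""
    H = np.array([[Ixx, Ixy], [Ixy, Iyy]], dtype=float)
    b = -np.array([Itx, Ity], dtype=float)
    if abs(np.linalg.det(H)) < 1e-9:  # nearly singular: the constraint is degenerate
        return None
    return np.linalg.solve(H, b)      # velocity (v1, v2)
```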
The Horn-Schunck method uses the assumption that the flow is smooth over the whole image [17]; therefore, solutions which show more smoothness are preferred. Minimum distortion in the flow is ensured by minimizing a global energy function, given for two-dimensional imagery as [15], [18]:

$$E = \iint \left[ (I_x u + I_y v + I_t)^2 + \alpha^2 \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \right] dx\, dy,$$

where $I_x$, $I_y$ and $I_t$ are the derivatives of the pixel intensity values along the $x$, $y$ and time $t$ dimensions respectively, $\mathbf{v} = (u, v)^T$ is the flow vector, and $\alpha$ is a regularization constant. To obtain a smoother flow, the value of $\alpha$ is made larger [19].
The iterative equations (6) are used to minimize the global energy function and obtain the image velocity in 2-D:

$$u^{J+1} = \bar{u}^{J} - \frac{I_x \left( I_x \bar{u}^{J} + I_y \bar{v}^{J} + I_t \right)}{\alpha^2 + I_x^2 + I_y^2}, \qquad v^{J+1} = \bar{v}^{J} - \frac{I_y \left( I_x \bar{u}^{J} + I_y \bar{v}^{J} + I_t \right)}{\alpha^2 + I_x^2 + I_y^2}, \tag{6}$$

where $\bar{u}^{J}$ and $\bar{v}^{J}$ are local neighbourhood averages of $u^{J}$ and $v^{J}$, and $J$ is the iteration number.
The Horn and Schunck method uses first-order differences to estimate the intensity derivatives. The Horn-Schunck algorithm applied to Fig 3(a) and Fig 3(b) gives the images in Fig 3(c) and Fig 3(d). The velocities obtained through this method are (-0.16, -0.38).
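The update rule (6) translates almost directly into code. Below is a minimal sketch, assuming grayscale NumPy frames; the simple 4-neighbour averaging kernel, the gradient estimates and the fixed iteration count are illustrative simplifications rather than the exact discretization used by Horn and Schunck.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I0, I1, alpha=1.0, n_iters=100):
    """Estimate dense optical flow (u, v) between two grayscale frames
    using the Horn-Schunck iteration of equation (6)."""
    Ix = np.gradient(I0, axis=1)   # spatial derivatives
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                   # temporal derivative
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)
    # 4-neighbour kernel used to compute the local mean flow (u_bar, v_bar)
    avg = np.array([[0.0, 0.25, 0.0],
                    [0.25, 0.0, 0.25],
                    [0.0, 0.25, 0.0]])
    for _ in range(n_iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It   # shared numerator of equation (6)
        den = alpha**2 + Ix**2 + Iy**2       # shared denominator of equation (6)
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

A single representative velocity pair, such as the one reported above, can then be obtained by averaging u and v over the region of interest.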
Although the Horn-Schunck algorithm is a complete algorithm for optical flow estimation, its iterative nature results in a high computational cost [17]. The Lucas-Kanade method addresses this by applying a least-squares method to find the velocity that minimizes the constraint errors, squared errors in this case [16], [18]. The major assumption here is that the flow is constant in a local neighbourhood of the pixel under consideration. The velocity is obtained by solving the basic optical flow equations for all the pixels in that neighbourhood, satisfying the least-squares criterion

$$\min_{\mathbf{v}} \sum_{\mathbf{x} \in \Omega} W^2(\mathbf{x}) \left[ \nabla I(\mathbf{x}, t) \cdot \mathbf{v} + I_t(\mathbf{x}, t) \right]^2, \tag{8}$$

where the weights $W(\mathbf{x})$, collected in an $n \times n$ diagonal matrix $W$, ensure that the constraints have more influence at the centre of the neighbourhood $\Omega$ than at the periphery. The solution to (8) is given by
$$A^T W^2 A\, \mathbf{v} = A^T W^2 \mathbf{b}, \tag{9}$$

where, for the $n$ points $\mathbf{x}_i \in \Omega$ at a single time $t$,

$$A = \left[ \nabla I(\mathbf{x}_1), \ldots, \nabla I(\mathbf{x}_n) \right]^T, \qquad W = \mathrm{diag}\left[ W(\mathbf{x}_1), \ldots, W(\mathbf{x}_n) \right], \qquad \mathbf{b} = -\left( I_t(\mathbf{x}_1), \ldots, I_t(\mathbf{x}_n) \right)^T.$$

The solution to (9) is

$$\hat{\mathbf{v}} = \left[ A^T W^2 A \right]^{-1} A^T W^2 \mathbf{b}; \tag{10}$$

the solution exists if $A^T W^2 A$ is a nonsingular matrix,

$$A^T W^2 A = \begin{pmatrix} \sum W^2(\mathbf{x})\, I_x^2(\mathbf{x}) & \sum W^2(\mathbf{x})\, I_x(\mathbf{x}) I_y(\mathbf{x}) \\ \sum W^2(\mathbf{x})\, I_y(\mathbf{x}) I_x(\mathbf{x}) & \sum W^2(\mathbf{x})\, I_y^2(\mathbf{x}) \end{pmatrix},$$

where all sums are taken over points in the neighbourhood.
Equations (8) and (9) represent weighted least-squares estimates of the velocity components from estimates of the normal velocities $\mathbf{v}_n = s\,\mathbf{n}$. Equation (10) is equivalent to the minimizer of

$$\sum_{\mathbf{x} \in \Omega} W^2(\mathbf{x})\, w^2(\mathbf{x}) \left[ \mathbf{v} \cdot \mathbf{n}(\mathbf{x}) - s(\mathbf{x}) \right]^2,$$

where the coefficients $w(\mathbf{x})$ reflect how good the normal velocity estimates are; here $w(\mathbf{x}) = \lVert \nabla I(\mathbf{x}, t) \rVert$.
To solve equation (10), the eigenvalues $\lambda_1 \geq \lambda_2$ of $A^T W^2 A$ should satisfy the constraint $\lambda_2 > \tau$ for a threshold $\tau$ [17]. The threshold $\tau$ should not be too small, in order to avoid noise; conversely, a small value of $\lambda_2$ means that the constraint is not satisfied, which leads to the aperture problem. This condition also works for corner detection [18].
Fig 4(c) and Fig 4(d) are the resultant images after application of the Lucas-Kanade algorithm to Fig 4(a) and Fig 4(b). The velocities obtained through this method are (0.10, -0.63).
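The weighted least-squares solve of equations (9)-(10), together with the eigenvalue check, amounts to a small linear-algebra routine per neighbourhood. A minimal sketch, assuming the derivatives at the n neighbourhood points are given as 1-D arrays; the function name, the form of the weights and the threshold value are illustrative assumptions:

```python
import numpy as np

def lucas_kanade_patch(Ix, Iy, It, weights, tau=1e-2):
    """Solve equations (9)-(10) for a single neighbourhood.
    Ix, Iy, It are 1-D arrays of derivatives at the n points; weights holds
    the diagonal of W. Returns the 2-D velocity, or None when the eigenvalue
    condition lambda_2 > tau fails (aperture problem / noise)."""
    A = np.column_stack((Ix, Iy))             # n x 2 matrix of spatial gradients
    b = -It                                   # right-hand side of equation (9)
    W2 = np.diag(weights**2)                  # W^2 as an n x n diagonal matrix
    M = A.T @ W2 @ A                          # 2 x 2 matrix A^T W^2 A
    lam = np.linalg.eigvalsh(M)               # eigenvalues in ascending order
    if lam[0] <= tau:                         # smallest eigenvalue too small
        return None
    return np.linalg.solve(M, A.T @ W2 @ b)   # velocity (u, v) of equation (10)
```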
Experimentation
Experimentation is performed at around 4 p.m. in the evening at NUST College of E&ME, Rawalpindi, using a 1 m × 1 m grid having four cells. The aerial platform used in the experimentation is an X-configuration quad-rotor. It has an empty weight of 989 grams and a payload carrying capacity of another 900 grams, leading to a total takeoff and flying weight of approximately 2000 grams. The quad-rotor uses four BLDC motors having a thrust of 1280 Nm each, controlled by a PD controller. The BLDC motors have a maximum speed of 3200 rpm. For imaging, the quad-rotor is equipped with an LG G3 mobile phone having a 16-megapixel wide-angle camera.
Figure 2 shows the imaging platform.
A ball is first placed in the center of the grid, and this is taken as the starting point. The quad-rotor is made to hover over the grid at a constant altitude of 1.5 m above the ground. The ball is moved to different locations in the grid and images are acquired. The images are then imported into the MATLAB environment and processed by the developed algorithm to calculate the change in position of the ball across frames.
Figure 3 shows the experimental grid, whereas Figure 4 shows the different positions of the ball in the grid. Figure 5 shows the outcome of the HSV filter, and Figure 6 shows an inverted binarized image produced by the algorithm to calculate the change in position of the ball.
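Although the processing was done in MATLAB with the developed algorithm, the described pipeline (HSV filtering, binarization and differencing the ball's position between frames) can be outlined as follows. This is a sketch only: the hue/saturation/value thresholds, the pixel-to-metre scale and the function names are illustrative assumptions, not values from the experiment.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def ball_displacement(frame0, frame1, h_range=(0.0, 0.1), s_min=0.5, v_min=0.3,
                      metres_per_pixel=1.0):
    """Return the (dx, dy) displacement of the ball between two RGB frames
    (float arrays scaled to [0, 1]) by HSV thresholding, binarization and
    centroid differencing."""
    def centroid(rgb):
        hsv = rgb_to_hsv(rgb)
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        # binary mask of pixels whose colour matches the ball
        mask = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min) & (v >= v_min)
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            raise ValueError("ball not found in frame")
        return xs.mean(), ys.mean()           # ball centroid in pixel coordinates
    x0, y0 = centroid(frame0)
    x1, y1 = centroid(frame1)
    # change in position, converted from pixels to metres using the known grid scale
    return (x1 - x0) * metres_per_pixel, (y1 - y0) * metres_per_pixel
```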
Figure 5. Lucas-Kanade (vector u).
Figure 6. Lucas-Kanade (vector v).