Ground penetrating radar for anti-personnel landmine detection
Presenter: Ir. Luc van Kempen - ETRO-VUB

Abstract

Introduction
Detection of buried targets with a GPR is rather simple: almost anything under the surface of the ground produces a return signal, which may be confused with a valid (lethal) target. In this regard, the characterization of the radar returns in the context of the environment is essential. It is not sufficient to say that "something" is buried; rather, in humanitarian demining, it is mandatory that a lethal target be detected with nearly 100 per cent reliability in any soil type.
The purpose of this thesis is to investigate the processing of GPR data to extract relevant information, with the aim of coming as close as possible to a full understanding of the subsurface. The processing chain is subdivided into several steps, each of which is summarized in the following paragraphs.
Data Pre-Processing
Chapter 3 focuses on the pre-processing of the data. This step is needed to remove all non-essential information from the data before attempting to characterize it. A first operation is the deconvolution of the antenna characteristics from the data. For this, an efficient deconvolution algorithm is proposed and implemented. One has to keep in mind, however, that even the most robust deconvolution algorithm depends on the estimate of the signal to be deconvolved. Indeed, these antenna characteristic signals are very often obtained by performing a measurement in the absence of objects or in the presence of absorbers. The characteristics change when the antenna is near a ground surface, so the deconvolved signal is not exactly equal to the emitted one. Moreover, when creating a forward model it is sometimes easier to model the antenna directly with a simple excitation signal, since several numerical software packages support this. The main conclusion is that deconvolution can be useful in some types of processing but should never be applied blindly.
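The caveat about deconvolution can be illustrated with a minimal frequency-domain sketch. This is not the algorithm proposed in the thesis; it is a generic Wiener-style deconvolution in Python, where `noise_level` is an assumed regularisation constant that keeps the spectral division stable when the antenna spectrum is weak:

```python
import numpy as np

def wiener_deconvolve(trace, antenna_ir, noise_level=1e-2):
    """Remove an estimated antenna impulse response from an A-scan.

    Generic Wiener-style deconvolution: a plain inverse filter blows
    up wherever the antenna spectrum |H| is small, so the division is
    stabilised with a regularisation term.
    """
    n = len(trace)
    T = np.fft.rfft(trace, n)
    H = np.fft.rfft(antenna_ir, n)          # zero-padded to n samples
    W = np.conj(H) / (np.abs(H) ** 2 + noise_level)
    return np.fft.irfft(T * W, n)
```

If `antenna_ir` was measured in free space, the result will still differ from the true emitted signal once the antenna is near the ground, which is exactly the limitation discussed above.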
A second important pre-processing step is the removal of unwanted reflections due to clutter in the subsurface. The most problematic reflection to remove is the air-ground interface reflection. For this, several linear prediction algorithms are proposed and compared, leading to the conclusion that the Linear Predictive Coding (LPC) method gives the best results. However, the removal of the clutter will unavoidably also distort or remove the signals of small scatterers if they lie close to the surface. This is the main reason why GPR should be deployed in combination with other sensors, preferably sensors that perform well in the very shallow subsurface.
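As a point of reference for what such clutter removal does, the simplest baseline (not the LPC method used in the thesis) subtracts the mean trace from a B-scan: the air-ground reflection arrives at nearly the same time in every trace and so dominates the mean, while target hyperbolas, whose arrival time shifts from trace to trace, largely survive. The array layout (time samples as rows, traces as columns) is an assumption of this sketch:

```python
import numpy as np

def remove_background(bscan):
    """Subtract the per-depth mean trace from a B-scan.

    bscan: (n_samples, n_traces) array. The flat air-ground
    reflection dominates the row-wise mean and is removed; signals
    whose position varies across traces are largely preserved.
    """
    return bscan - bscan.mean(axis=1, keepdims=True)
```

This baseline also exhibits the distortion described above: a shallow target present at the same depth in many traces leaks into the mean and is partly subtracted away.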
Anomaly Detection
The next step is the detection of anomalies in the subsurface, in order to focus further processing on those areas. For this, several techniques are proposed for all types of GPR data (one-dimensional A-scans, two-dimensional B-scans and three-dimensional C-scans). It is quite clear that A-scan detection alone is not sufficient to find the areas where an object might be present. The method developed for B-scan detection is based on Gabor filters to enhance the edges of the hyperbola-like structures in the B-scan. This method usually yields an abundance of detected hyperbola edges, but gives little detail about the amount of reflected energy, which indicates whether the reflecting object was a small piece of clutter or a more substantial object. Therefore the C-scan is subjected to the Karhunen-Loève Transform (KLT) in order to obtain a 2D energy map of the subsurface, so that in combination with the hyperbolas detected in the B-scan an estimate of the positions of the objects can be found.
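The KLT energy map can be sketched as follows, assuming the C-scan is stored as an `(nx, ny, nt)` array of A-scans; the number of discarded components (`n_drop`) and the normalisation are assumptions of this illustration, not the thesis's implementation:

```python
import numpy as np

def klt_energy_map(cscan, n_drop=1):
    """2D energy map of a C-scan via the Karhunen-Loève transform.

    cscan: (nx, ny, nt) array, one A-scan per surface cell. The
    A-scans are decorrelated using the eigenvectors of their sample
    covariance (the KLT); the n_drop strongest components, dominated
    by the response common to all cells, are discarded, and the
    remaining energy per cell is returned as an (nx, ny) map.
    """
    nx, ny, nt = cscan.shape
    X = cscan.reshape(-1, nt)
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    w, V = np.linalg.eigh(cov)           # eigenvalues in ascending order
    coeffs = X @ V[:, :nt - n_drop]      # drop the strongest components
    return (coeffs ** 2).sum(axis=1).reshape(nx, ny)
```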
Finally, based on the detected hyperbola edges, hyperbola estimation is done using a generalized Hough transform. Once the hyperbolas are selected, parameter estimation can give a first estimate of the object's depth and position, as well as of the velocity of propagation in the subsurface. These are, however, only very coarse estimates, since the method makes many assumptions which are not always valid: e.g. a point scatterer and a perfect hyperbolic signature.
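Under the same point-scatterer assumption, the parameter estimation step can be illustrated with a least-squares fit. For a point scatterer at depth d below position x0 in a medium with propagation velocity v, the two-way travel time is t(x) = (2/v) * sqrt(d^2 + (x - x0)^2); squaring makes the model linear in the unknowns once x0 is fixed at the apex. This is a sketch of the principle, not the generalized-Hough pipeline of the thesis:

```python
import numpy as np

def fit_hyperbola(x, t):
    """Estimate velocity and depth from a point-scatterer hyperbola.

    Squaring the travel-time model gives
        t^2 = (2d/v)^2 + (4/v^2) * (x - x0)^2,
    linear in the unknowns once x0 is taken as the apex (minimum-t)
    position, so a least-squares fit of t^2 against (x - x0)^2
    recovers v and d.
    """
    x0 = x[np.argmin(t)]
    A = np.column_stack([np.ones_like(x), (x - x0) ** 2])
    (c0, c1), *_ = np.linalg.lstsq(A, t ** 2, rcond=None)
    v = 2.0 / np.sqrt(c1)
    d = v * np.sqrt(c0) / 2.0
    return v, d, x0
```

The fit degrades exactly as the text warns: for extended targets or non-hyperbolic signatures the model error maps directly into the velocity and depth estimates.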
Classification Based on Extracted Features
Once the suspected areas in the data have been detected and cleaned of most external influences, they are put through a classification algorithm in order to decide whether the reflecting object is a mine or a friendly object. For this, features are extracted from the data in the regions of interest and transformed into several domains. Special emphasis is placed on the time-frequency domain, since the time-frequency features showed high discrimination capabilities. Note that all features considered in this step are A-scan based. The addition of other features extracted from B-scans, and of features obtained after modelling and reconstruction, is not yet considered. This is, however, a source of information which should be investigated to see whether such features achieve better discrimination performance than the proposed ones.

Once all features are computed, only the most discriminant ones need to be retained. For this, three feature selection methods are proposed and compared, resulting in the best possible feature set for a given learning set. The final classification can be done in two ways: either with one classifier using the full feature set, or with several classifiers, one per feature type, whose results are combined afterwards. The latter method yields better results.

The main problem with classification algorithms is obtaining a complete learning set. Indeed, when one wants to build a library of mine classes, the features must cover each mine under all possible circumstances in order to recognize it. A partial solution is to investigate features which are independent of external circumstances such as object depth, soil type, etc. However, in order to obtain a usable library, a large number of measurements has to be acquired and processed for each mine type one wants to recognize and for each soil type.
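The "one classifier per feature type" scheme can be sketched with a deliberately simple base learner (a nearest-class-mean classifier, an assumption of this illustration; the thesis does not prescribe it) and score-level fusion:

```python
import numpy as np

class NearestMeanClassifier:
    """Minimal nearest-class-mean classifier for one feature group."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def scores(self, X):
        # Negative distance to each class mean: higher means more likely.
        return -np.linalg.norm(X[:, None, :] - self.means_[None], axis=2)

def combined_predict(classifiers, feature_groups):
    """Late fusion: one classifier per feature type, summed scores.

    Mirrors the finding above that combining per-feature-type
    classifiers can outperform one classifier on the stacked set.
    """
    total = sum(clf.scores(X) for clf, X in zip(classifiers, feature_groups))
    return classifiers[0].classes_[np.argmax(total, axis=1)]
```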
Qualitative Reconstruction
Another method of extracting information from the acquired data is to reconstruct the subsurface properties. The two main subsets of such reconstruction algorithms (backprojection and backpropagation) are introduced in Chapter 5. Synthetic Aperture Radar (SAR) processing is presented as an example of a backpropagation method, and Kirchhoff migration is introduced as a backprojection method. We conclude that backprojection and backpropagation methods are strong algorithms that can produce accurate results. They are, however, sensitive to estimation errors on several parameters such as the antenna height, the electromagnetic properties of the soil, etc. A robust and correct pre-processing is thus required in order to exploit the full potential of the reconstruction algorithms. Another drawback of any full three-dimensional processing is its computation time, due mostly to the cost of the travel time estimation needed in both methods. This problem can be reduced by combining the reconstruction with an intermediate detection step: a coarse reconstruction determines the areas of interest, which are then reconstructed at a finer resolution in a second step. This combination results in a reliable and reasonably fast algorithm. A comparison of the backprojection and backpropagation methods shows that both produce similar results. Estimates of target positions and depths are accurate; target dimensions, however, can be misestimated, especially when reconstructing complex targets.
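The backprojection idea can be sketched as a diffraction summation: for every candidate subsurface pixel, the two-way travel time to each antenna position is computed and the corresponding samples of the traces are summed, so that energy focuses at true scatterer positions. The straight-ray, constant-velocity model below is a simplifying assumption of this sketch; it also makes the cost of travel-time evaluation visible, since it sits in the innermost loop:

```python
import numpy as np

def backproject(bscan, xs, ts, v, grid_x, grid_z):
    """Diffraction-summation (backprojection) migration of a B-scan.

    bscan: (n_traces, n_samples); xs: antenna positions; ts: sample
    times; v: assumed constant propagation velocity. For each pixel
    (x, z), the two-way travel time to every antenna position is
    computed and the matching sample of each trace is accumulated.
    """
    dt = ts[1] - ts[0]
    image = np.zeros((len(grid_x), len(grid_z)))
    for i, x in enumerate(grid_x):
        for j, z in enumerate(grid_z):
            t = 2.0 * np.hypot(xs - x, z) / v        # two-way travel time
            idx = np.round((t - ts[0]) / dt).astype(int)
            valid = (idx >= 0) & (idx < bscan.shape[1])
            image[i, j] = bscan[np.arange(len(xs))[valid], idx[valid]].sum()
    return image
```

Running this first on a coarse grid and then refining only the high-energy cells is exactly the two-stage speed-up described above.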
Quantitative Reconstruction
In the last chapter the full nonlinear GPR inverse problem is approached with a three-step method. The main novelty lies in the combination of these three steps: first, a qualitative solution of the linearized inverse problem using MUltiple SIgnal Classification (MUSIC) or SAR; second, a quantitative solution of the linearized inverse problem using the Algebraic Reconstruction Technique (ART); and finally, the solution of the nonlinear problem using the adjoint method. The three-step method is applied to both real and simulated data. For the simulated data, the results are very similar for clean and noisy data: both yield a reconstructed signal which closely matches the simulated signal in amplitude and shape, apart from a phase shift present in both cases. For the real data, the position and depth were well estimated, and the reconstructed values are close to those initially set. The main disadvantage of this method is its very high computation time. Faster and better numerical methods should be the subject of further research.
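The quantitative middle step can be illustrated with the classic row-action (Kaczmarz) iteration behind ART, applied to a generic linearized system Ax = b; the relaxation factor and sweep count are assumed parameters of this sketch, not values from the thesis:

```python
import numpy as np

def art_solve(A, b, n_sweeps=50, relax=1.0, x0=None):
    """Kaczmarz-style ART solver for a linearized inverse problem Ax = b.

    Each row of A is enforced in turn by projecting the current
    estimate onto that row's hyperplane; a relaxation factor below 1
    makes the iteration more robust to noisy data at the cost of
    slower convergence.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Its row-by-row structure is one reason such quantitative steps are slow on large 3D problems, which motivates the search for faster numerical methods mentioned above.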