
AN ADAPTIVE MODEL FOR RUNWAY DETECTION AND LOCALIZATION IN UNMANNED AERIAL VEHICLE

A DSF Whitepaper
12 November 2021
Dr. Barakkath Nisha U

Dr. Barakkath Nisha U, Senior Member IEEE

ABSTRACT

Unmanned aerial vehicles (UAVs) have gained popularity in recent years because of their ability to perform dangerous tasks that cannot be carried out with manned aircraft. Apart from military purposes, they are also used effectively in urban planning. A great deal of data stored in UAV hardware is destroyed in accidents at landing time, largely because there is no efficient system for detecting landing sites. In this work we provide a mechanism that uses CNN models to detect the runway and to provide its exact location. We also provide methods to detect the runway in bad weather conditions, and augmentation is applied to the dataset to increase the accuracy of the model.

Keywords: runway, augmentation, localization, deep learning.

1. INTRODUCTION

UAVs (unmanned aerial vehicles) are now used in trade and technology for applications such as global research, detection of airports from satellite images, military administration, surveying, weather forecasting and ocean monitoring for sailors. Accuracy and reliability depend on the quality of the images captured from an aerial view, and the system must also ensure safety and stability. Spatial detection of an airport runway with a deep learning algorithm such as a CNN allows runways to be determined efficiently on an embedded platform; the adjoining task is localization within the images. These tasks are central activities for UAVs, since they are required to ensure a safe and stable landing on the carrier.

Vision-based UAVs capture images with an onboard camera. Extraction and classification of these images help identify the runway for the aircraft. Landing a fixed-wing UAV involves vision-based approaches: runway detection, alignment of the UAV to the runway, and a controller to guide the UAV accordingly.

2. LITERATURE REVIEW

Paper [1] mainly focuses on CNN methods for detecting the runway from aerial images collected using a UAV. These methods are very useful in preventing accidents at landing time: a lot of data collected during flight is stored in the hardware of the UAV, so it is important to ensure the safety of that data. [2] provides a sensor fusion architecture for detecting the runway which also makes it possible to obtain information about the surrounding terrain. [3] develops a hard example mining and balanced weight strategy to construct a CNN for airport detection. [4] detects airports from optical satellite images. [5] works with NWPU-RESISC45, a large dataset for remote sensing image classification. [6] provides a structure for object segmentation using Mask R-CNN. [7] gives a runway extraction method based on rotating projection. [8] gives residual learning methods that make deeper networks easier to train for image recognition. From the above analysis it can be deduced that there are many methods for detecting airports, but not runways, and that deep learning methods can be used for accurate detection with advanced hardware.

3. METHODOLOGY

It has been shown that deep learning techniques using CNNs give more precise results than classical machine learning, so we use deep learning models for both detection and localization.

 
  1. DETECTION OF RUNWAY

    Most prior work in computer vision is image classification, and we use CNN classification models for the detection task.

    The dataset used for this purpose is the remote sensing dataset RESISC45, which has 45 classes with 700 satellite images per class. For feature extraction we use four CNN classification models: VGG16, ResNet50, ResNet152 and DenseNet161. Each aerial image is resized to 224x224 for classification and mean normalization is applied. Feature extraction uses Keras models with a TensorFlow backend: VGG16 extracts a 4096-dimensional feature, ResNet50 and ResNet152 extract 2048-dimensional features, and DenseNet161 extracts a 2208-dimensional feature. A softmax classifier is then applied to these features. It works as follows: the biases are initialized to zero and the weights to random values; the extracted features are multiplied by the weight matrix and the biases are added; the training labels are converted into one-hot encoded vectors; and the loss is calculated using cross-entropy. The cross-entropy loss is

    $$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$$

    Here $f_j$ is the $j$th element of the class score vector $f$, $f_{y_i}$ is the score of the true class, and $L_i$ is the loss for sample $i$. Of the candidate models, the one with the minimum loss is found using gradient descent. Once the model with the highest accuracy (ResNet50) has been identified, it is fine-tuned: a Keras implementation of ResNet50 pre-trained on the ImageNet dataset is used, and the data is divided into 80% for training, 10% for validation and 10% for testing.
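    As an illustration of this pipeline, the sketch below extracts deep features with a pre-trained Keras backbone and trains a softmax classifier with cross-entropy loss on top of them. It is a minimal sketch rather than the authors' exact code: the file paths, labels and the use of ResNet50 as the example backbone are assumptions, and the VGG16, ResNet152 and DenseNet161 variants follow the same pattern.

```python
# Minimal sketch (assumed details): deep-feature extraction with a pre-trained
# backbone, followed by a softmax classifier trained with cross-entropy loss.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras import layers, models

# Backbone without its classification head; global average pooling yields a
# 2048-dimensional feature per image (4096 for VGG16, 2208 for DenseNet161).
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_feature(img_path):
    """Resize to 224x224, apply the model's mean normalization, return a feature vector."""
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return backbone.predict(x, verbose=0)[0]

# Hypothetical image paths and integer labels for the 45 RESISC45 classes.
image_paths = ["resisc45/runway/runway_001.jpg", "resisc45/airport/airport_001.jpg"]
class_indices = [30, 2]

features = np.stack([extract_feature(p) for p in image_paths])
labels = np.array(class_indices)

# Softmax classifier: random initial weights, zero biases, cross-entropy loss
# (sparse variant, so one-hot encoding is handled internally).
clf = models.Sequential([
    layers.Dense(45, activation="softmax",
                 kernel_initializer="random_normal", bias_initializer="zeros",
                 input_shape=(features.shape[1],)),
])
clf.compile(optimizer="sgd", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(features, labels, epochs=10)
```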

  2. AUGMENTATION

    Data augmentation is a data-space solution to the problem of limited data. It encompasses a suite of techniques that enhance the size and quality of the training dataset so that better deep learning models can be built from it. The increased dataset size results in a more robust representation of low-level characteristics such as lines and edges. Training on augmented data to learn the initial layers of a deep convolutional neural network is similar to transferring weights trained on other datasets such as ImageNet, with these weights then fine-tuned on the new training data. This increases the overall accuracy and performance of the system, making data augmentation a very useful technique for constructing a better dataset.

  3. LOCALIZATION OF RUNWAY

    Runway localization finds the exact position of the runway in the images. Both line detection algorithms and deep learning CNN models are used. For grouping and localization, the same dataset with the runway class is used.

    1. Line Detection Techniques: The runway structure is framed by straight lines. A line detection algorithm takes a collection of n edge points and finds all the lines on which these edge points lie, so line detection can be used to localize the runway.

    2. Hough Transform (HT): Runway images are taken from the selected dataset and converted to grayscale; the gray values of the runway differ from those of the background. The Canny algorithm with a hysteresis threshold ratio of 1:3 is used to detect edges in the images. Each line returned by the Hough transform in (ρ, θ) form, with a = cos θ, b = sin θ, x0 = a*ρ and y0 = b*ρ, is converted to two endpoints:

      x1 = x0 + n*(-b),    y1 = y0 + n*(a)
      x2 = x0 - n*(-b),    y2 = y0 - n*(a)

      The distance (length) of a detected line between its two endpoints is

      $$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$

      The angle between a line i and the horizontal axis is calculated as

      $$\theta_i = \arctan\left(\frac{y_2 - y_1}{x_2 - x_1}\right)$$

      The sign is checked to verify that the angle values are correct, and two constraints (I and II) are then applied to choose the runway lines.
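      A minimal OpenCV sketch of this HT step is given below. The image file name, the Canny thresholds (50/150, keeping the 1:3 hysteresis ratio) and the value of n are illustrative assumptions; the ρ, θ and vote-threshold values actually evaluated are listed in Tables 5 and 6.

```python
# Minimal sketch (assumed parameters): grayscale -> Canny edges (1:3 hysteresis
# ratio) -> standard Hough transform -> endpoints, length and angle per line.
import cv2
import numpy as np

img = cv2.imread("runway.jpg")                    # hypothetical runway image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                  # low:high threshold = 1:3

lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)   # rho=1, theta=pi/180, votes=100
candidates = []
if lines is not None:
    for rho, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        n = 1000                                  # extent used to materialize the line
        x1, y1 = int(x0 + n * (-b)), int(y0 + n * a)
        x2, y2 = int(x0 - n * (-b)), int(y0 - n * a)
        length = np.hypot(x2 - x1, y2 - y1)
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # angle with horizontal axis
        candidates.append(((x1, y1), (x2, y2), length, angle))
```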

    3. Line Segment Detector (LSD): The runway images from the selected dataset are first converted into grayscale images. After the segments in the image are detected, the length of each is calculated; the runway is an elongated structure with long boundaries, so a threshold is set and lines are discarded based on their length. For two endpoints (x1, y1) and (x2, y2), the length of a line segment is

      $$\text{length} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$

      CNN: The aim is to localize the runway, that is, to extract the runway with its boundaries. Bounding boxes give the subset of the image that includes the required object, but they cannot extract the object exactly as it is, so a segmentation algorithm is needed to extract the object with its boundaries. Each pixel is assigned to a class: for every pixel we decide whether or not it belongs to a particular class, which is why this is also called pixel-level classification.

      The dataset is built by taking the images of the runway class from the selected dataset and labelling them with 'LabelMe'. Only the part of the runway used for landing, marked by the white lanes, has been labelled.

    4. EXPERIMENTS: Experiments were conducted with the train, validation and test splits in the ratio Train:Validation:Test = 70:10:20. To see how the model behaves on runway images taken from a greater height than those in the selected dataset, 100 images downloaded from Google Earth were divided equally, with 16 images in each set, in the same way as the selected dataset. The self-customized dataset was split for training and validation: out of a total of 457 images, 381 were used for training and the remaining 76 for validation.

      The parameters used for training and their values are given in Table 1.

      Parameters Values
      Learning rate 0.0001
      Momentum 0.9
      Decay 0.0001
      Batch Size 1
      Number of Epochs 10

       

      Table 1. Parameter values of the proposed methodology
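      The paper does not state which optimizer these hyperparameters are passed to; assuming the Keras SGD optimizer (a common pairing for momentum and decay), a sketch of wiring the Table 1 values into training could look like this:

```python
# Minimal sketch (assumed optimizer): Table 1 hyperparameters applied to Keras SGD.
# Note: `decay` is the per-step learning-rate decay of the legacy Keras SGD API.
from tensorflow.keras.optimizers import SGD

optimizer = SGD(learning_rate=0.0001, momentum=0.9, decay=0.0001)

# model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_data, batch_size=1, epochs=10, validation_data=val_data)
```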

      1. Land Classification (Detection)

        The correctness of the model can be measured using accuracy because each class in the dataset has an equal number of images; if the classes had unequal numbers of images, accuracy would be biased. Accuracy is calculated by dividing the number of instances where the predicted class equals the true class by the total number of samples:

        $$\text{Accuracy} = \frac{\text{number of correct predictions}}{\text{total number of samples}} \times 100$$

        Feature extraction: Four CNN classification models (ResNet50, ResNet152, DenseNet161 and VGG16) were used to extract features from the images. The graph below compares the four models; accuracy increases as the training set grows. ResNet50 shows almost the same accuracy as ResNet152, but ResNet50 is much faster than the other models, so it was selected for fine-tuning.

        Fig.1. Model comparison graph showing their processing time

        Table 2 gives the processing times of the ResNet models and the image read operation.

        Average time of image read operation:                  GPU 0.75 sec/image    CPU 0.52 sec/image
        Average time of classifying extracted deep features:   ResNet50 0.038 sec    ResNet152 0.038 sec

        Table 3 compares the CNN models used for feature extraction.

         

        Feature extraction      CPU (sec/image)    GPU (sec/image)
        VGG16                   0.56               0.024
        ResNet50                0.27               0.028
        ResNet152               0.76               0.056
        DenseNet161             0.75               0.078

      2. Fine-tuning:

        After experiments on the four models, the CNN model ResNet50 was found to be much faster for fine-tuning. Pre-trained weights from the ImageNet dataset were used to initialize the model. The validation accuracy reached 97.33% when training on 80% of the dataset, and the test accuracy was 96.63%. Precision and recall were then calculated for the runway class.

        Precision and recall on 80% of the training dataset are 94.44% and 97.14% respectively, which shows the capability of the model to classify runway images. The model has been compared with previous research; Table 4 shows the comparison.
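        A hedged Keras sketch of this fine-tuning step is shown below. Which layers are unfrozen, the optimizer settings and the data-generator names are assumptions; the paper only states that ImageNet weights initialize ResNet50 and that an 80/10/10 split is used.

```python
# Minimal sketch (assumed details): fine-tuning an ImageNet-pretrained ResNet50
# with a new 45-way softmax head for the RESISC45 classes.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models, optimizers

base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))
for layer in base.layers[:-10]:       # assumed: freeze all but the last few layers
    layer.trainable = False

model = models.Sequential([base, layers.Dense(45, activation="softmax")])
model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])

# train_gen and val_gen would be the 80% / 10% splits (hypothetical generator names).
# model.fit(train_gen, validation_data=val_gen, epochs=10)
```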

                                          Without fine-tuning            With fine-tuning
        Model used                        10%           20%              10%           20%
        Existing results (VGG16)          76.47±0.18    79.79±0.15       87.15±0.45    90.36±0.18
        Proposed approach (ResNet50)      82.80±0.20    85.33±0.06       88.47         90.06

        Table 4. Efficacy of the proposed method compared with the existing method

      3. Customized dataset result

        Of the 45 classes available, only the runway class is treated as positive and the remaining classes are treated as negative samples. On the self-customized dataset, the evaluation metrics accuracy, precision and recall have been used to evaluate the ResNet50 model; accuracy is unbiased here because there are equal numbers of positive and negative samples. The accuracy of ResNet50 on the customized dataset is 88.88%, and the model predicts the runway with a precision of 86.36% and a recall of 92.34%; the difference tells us the model has a good true positive rate for the runway class. After fine-tuning, the accuracy on the customized dataset rises to 90.73%, and the fine-tuned model predicts the runway class with a precision of 89.03% and a recall of 92.89%. Since a pre-trained model is used, the time taken for feature extraction is reduced, but fine-tuning is needed for better performance; fine-tuning ResNet50 increased accuracy by almost 2% over the non-fine-tuned model.
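        For reference, the binary runway-versus-rest metrics reported here can be computed as in the sketch below; the label arrays are hypothetical and scikit-learn is an assumption (any implementation of accuracy, precision and recall gives the same values).

```python
# Minimal sketch (assumed tooling): accuracy, precision and recall for the binary
# runway (1) vs. non-runway (0) evaluation on the customized dataset.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]    # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]    # hypothetical model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))      # TP / (TP + FN)
```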

      4. Bad Weather condition (Modification)

        The model has been successful in accurately detecting runway masks for the test dataset. For training, the masks generated for runways mostly included the straight parts of the runway; accordingly, only the part of the runway marked as runway in the ground truth image is detected. To evaluate these experiments, all three evaluation metrics (IOU, pixel-wise evaluation and average precision) have been used. Intersection over union is calculated using OR and AND operations on the images, and for each image in the validation/test dataset, pixel-wise accuracy, precision and recall are calculated. For these metrics a binary classification is considered: a pixel belongs either to the runway (class 1) or to the background (class 0). The dataset also includes images of runways in bad weather conditions: during bad weather it is difficult to detect the runway for landing, so images showing bad weather were included and the runway is detected in these images as well.

      5. Data Augmentation

        Deep neural networks have performed well in many computer vision tasks. The performance of image recognition depends not only on the use of appropriate hardware but also on whether the complex vision task can be deployed with sufficient precision. Vision-based image detection in aviation poses demanding problems such as reduced visibility, cluttered environments and limited image storage.

        As described above, data augmentation is a data-space solution to the problem of limited data: it artificially enlarges the training dataset from the existing data, giving a more robust representation of low-level characteristics such as lines and edges and improving the detection results. Training on augmented data to learn the initial layers of a deep convolutional neural network is similar to transferring weights trained on other datasets such as ImageNet and then fine-tuning them on the new training data, and it increases the overall accuracy and performance of the system. Image augmentation is applied as a pre-processing step before the model is trained. The Keras ImageDataGenerator is used here, and the basic augmentation techniques applied are rotation, cropping, zooming, shearing, changing brightness and contrast, and flipping. Combinations of images can also be derived from the learned parameters of a prepended CNN. Data augmentation significantly improves the quality of the data by increasing the effective size of the dataset, and its use here resulted in an increased degree of accuracy.
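        A minimal sketch of the Keras ImageDataGenerator set-up described here is given below; the specific ranges for rotation, shift, zoom, shear and brightness are illustrative assumptions, as the paper does not list them.

```python
# Minimal sketch (assumed ranges): Keras ImageDataGenerator applying rotation,
# shifts, zoom, shear, brightness changes and flips as on-the-fly augmentation.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,             # rotation
    width_shift_range=0.1,         # horizontal shift (crop-like)
    height_shift_range=0.1,        # vertical shift (crop-like)
    zoom_range=0.2,                # zooming
    shear_range=0.2,               # shearing
    brightness_range=(0.7, 1.3),   # brightness change
    horizontal_flip=True,          # flipping
    fill_mode="nearest",
)

# Hypothetical directory layout; augmented batches are generated during training.
train_gen = augmenter.flow_from_directory("data/train", target_size=(224, 224),
                                          batch_size=1, class_mode="categorical")
```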

      6. Runway localization

        For the Hough transform, 481 images with different properties were selected from the 500 runway images of the selected dataset. Whether the runway was successfully localized was evaluated by inspection: if the two detected lines almost coincide with the real boundaries of the runway, the runway is considered successfully localized. All images were counted manually and the accuracy reported. Table 5 gives the accuracy results for the simple Hough transform based approach and Table 6 for the probabilistic Hough transform (PHT) based approach. The stepwise results of the HT based and PHT based approaches are shown in figures 2 and 3 respectively.

        Table 5. Accuracy of HT based approach

        ρ    θ        Vote threshold    Accuracy
        1    π/150    100               74.13%
        1    π/180    100               70%

         

        Table 6. Accuracy of PHT based approach

        ρ    θ        Vote threshold    Min length    Max gap    Accuracy
        1    π/160    100               100           10         70.65%
        1    π/180    100               70            90         74.50%

         

        For the Line Segment Detector, OpenCV is used to detect the lines. The OpenCV-based implementation of LSD with its default parameterization gave satisfactory results except for the number of bins, which was selected using the dataset. The same set of images as in the above method was used, and the runway was accurately localized in nearly 76.5% of the total images. Figure 4 shows the stepwise results of the LSD based approach.
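        A hedged OpenCV sketch of this LSD step follows. Note that cv2.createLineSegmentDetector is missing from some OpenCV releases (it was removed for licensing reasons and restored around version 4.5.1), and the length threshold used here is an illustrative assumption.

```python
# Minimal sketch (assumed threshold): OpenCV Line Segment Detector with default
# parameters, followed by removal of short segments.
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("runway.jpg"), cv2.COLOR_BGR2GRAY)   # hypothetical image

lsd = cv2.createLineSegmentDetector()            # default parameterization
lines, widths, precisions, nfas = lsd.detect(gray)

MIN_LENGTH = 100                                 # assumed length threshold in pixels
long_segments = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if np.hypot(x2 - x1, y2 - y1) >= MIN_LENGTH:
            long_segments.append((x1, y1, x2, y2))
```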

        Fig. 2. Stepwise results of HT based approach: (a) original image, (b) grayscale image, (c) result of Canny edge detection, (d) result of applying HT, (e) result of applying constraints I and II.

        For the CNN experiments, both the selected dataset and the novel customized dataset have been used. In each case the weights were initialized with pre-trained weights from the COCO dataset for fine-tuning, and the parameters were tuned manually based on the evaluation metrics. For evaluating the models, IOU, pixel-wise evaluation and average precision are used as evaluation metrics.
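        The paper does not name the Mask R-CNN implementation it uses. As one illustration of starting from COCO-pretrained weights for a two-class (background/runway) segmentation task, a sketch using torchvision (an assumption, not the authors' toolchain) could look like this; an equivalent set-up is possible with Keras-based Mask R-CNN implementations.

```python
# Minimal sketch (assumed toolchain): torchvision Mask R-CNN with COCO-pretrained
# weights, with box and mask heads replaced for 2 classes before fine-tuning.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + runway

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO weights

# Replace the box classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head as well.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

# Fine-tuning then follows the usual detection training loop over the labelled
# runway masks (dataset and dataloader definitions omitted here).
```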

        Fig. 3. Stepwise results of PHT based approach: (a) original image, (b) grayscale image, (c) result of Canny edge detection, (d) result of applying PHT, (e) result of applying constraints I and II.

        Fig. 4. Stepwise results of LSD based approach: (a) original image, (b) grayscale image, (c) result of applying LSD, (d) result of applying the length constraint, (e) result of applying constraints I and II.

        Intersection over Union (IOU): as the name implies, IOU is a fraction whose numerator is the area of overlap between the predicted and ground truth masks and whose denominator is the area of the union of the predicted and ground truth masks. With P the predicted mask and G the ground truth mask, the mathematical form is

        $$\mathrm{IOU} = \frac{|P \cap G|}{|P \cup G|} \qquad (7)$$

        For pixel-wise evaluation, pixel-wise accuracy, precision and recall are calculated for each image. A binary classification is considered: a pixel belongs either to the runway (class 1) or to the background (class 0).

        For average precision, the IOU threshold is varied in steps of 0.05 over the range 0.05 to 1.0. Precision is calculated at each threshold, and the average precision for a single image is the mean over all thresholds.
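        A minimal NumPy sketch of these mask metrics (IOU via AND/OR operations, plus pixel-wise accuracy, precision and recall) is given below; the two small example masks are hypothetical.

```python
# Minimal sketch: IOU computed with AND/OR operations on binary masks, plus
# pixel-wise accuracy, precision and recall (runway = 1, background = 0).
import numpy as np

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 1:7] = True   # hypothetical predicted mask
true = np.zeros((8, 8), dtype=bool); true[3:7, 2:8] = True   # hypothetical ground-truth mask

intersection = np.logical_and(pred, true).sum()
union        = np.logical_or(pred, true).sum()
iou = intersection / union                                   # equation (7)

tp = intersection
fp = np.logical_and(pred, ~true).sum()
fn = np.logical_and(~pred, true).sum()
tn = np.logical_and(~pred, ~true).sum()

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
print(f"IOU={iou:.2f} acc={accuracy:.2f} prec={precision:.2f} rec={recall:.2f}")
```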

        Selected dataset results: as Fig. 6 shows, the model has been successful in accurately detecting the runway masks for the test dataset. The masks generated for runways mainly contain the linear part of the runway used for training; this property is visible in the figure, where the portion of the runway detected is the one marked as runway in the ground truth image. To evaluate these experiments, all three evaluation metrics discussed above have been used, with intersection over union calculated using OR and AND operations on the images. The mean IOU (masks) for the validation set is found to be 0.80 and the average IOU (masks) for the test set is 0.76. Pixel-wise accuracy, precision and recall over the whole validation/test dataset are reported in Table 7, and Table 8 shows the mean average precision at different thresholds.

        Pixel-wise metric    Validation    Test
        Accuracy             0.93          0.88
        Precision            0.90          0.82
        Recall               0.84          0.79

         

        Table 7. Pixel-wise evaluation

        Fig. 6. Mask R-CNN results on the selected dataset. The upper row shows the true masks and the lower row shows the predicted masks.

        IOU threshold    mAP
        0.5-0.6          0.94
        0.6-0.7          0.90
        0.7-0.8          0.85
        0.8-0.9          0.75
        0.9-1.0          0.37

         

        Table 8. Mean average precision at different IOU thresholds

      7. Customized Dataset Results: The customized dataset differs from the selected dataset in that it has narrower runways, whereas the selected dataset mostly contains broader runway images. This dataset is used to test how the model behaves when there are narrow runways in an image. As Fig. 7 shows, the model is able to correctly detect these narrow runways. Intersection over union for the validation set is found to be 0.73.

        Fig. 7. Mask R-CNN results on the customized dataset. The upper row shows the true masks and the lower row shows the predicted masks.

4. CONCLUSION

This paper presents a method to detect the runway from aerial images collected by UAVs. Non-machine-learning approaches are effective in detecting airports but are rarely used for runway detection because of accuracy-related issues. The first part of the work covers detection of the runway and detection in bad weather conditions, with data augmentation applied to increase the size of the dataset by reusing the available data to prepare new samples. The second part deals with localization. For detection, different models were compared and ResNet50 was found to give the highest accuracy; localization was done using line detection algorithms and CNN methods. The proposed model achieves an IOU of 0.8, which validates its efficiency.

5. REFERENCES

[1] J. Akbar, M. Shahzad, M. I. Malik, "Runway Detection and Localisation in Aerial Images Using Deep Learning", School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad, Pakistan, 2019.

[2] A. F. Fadhil, R. Kanneganti, L. Gupta, R. Vaidyanathan, "Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection", Sensors (Basel), 19(17), 2019.

[3] B. Cai, Z. Jiang, H. Zhang, D. Zhao, Y. Yao, "Airport Detection Using End-to-End Convolutional Neural Network with Hard Example Mining", Remote Sensing, 9, 1198, 2017.

[4] P. Zhang, X. Niu, Y. Dou and F. Xia, "Airport Detection on Optical Satellite Images Using Deep Convolutional Neural Networks", IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 8, pp. 1183-1187, Aug. 2017.

[5] G. Cheng, J. Han, X. Lu, "Remote Sensing Image Scene Classification: Benchmark and State of the Art", Proceedings of the IEEE, vol. 105, issue 10, 2017.

[6] K. He, G. Gkioxari, P. Dollár, R. Girshick, "Mask R-CNN", Proceedings of the IEEE International Conference on Computer Vision, 2017.

[7] Z. Guan, L. Jie, Y. Huan, "Runway Extraction Method Based on Rotating Projection for UAV", Proceedings of the 6th International Asia Conference on Industrial Engineering and Management Innovation, Atlantis Press, 2016.

[8] K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

 
