Browsing by Author "Keçeli, Ali Seydi"
Now showing 1 - 4 of 4
Article: Analysis of transfer learning for deep neural network based plant classification models (2019)
Kaya, Aydın; Keçeli, Ali Seydi; Çatal, Çağatay; Yalıç, Hamdi Yalın; Temuçin, Hüseyin; Tekinerdoğan, Bedir

Plant species classification is crucial for biodiversity protection and conservation. Manual classification is time-consuming, expensive, and requires experienced experts, who are often in short supply. To cope with these issues, various machine learning algorithms have been proposed to support the automated classification of plant species. Among these, Deep Neural Networks (DNNs) have been applied to different datasets. However, DNNs have often been applied in isolation, and no effort has been made to reuse and transfer the knowledge gained across different applications of DNNs. In the context of machine learning, transfer learning refers to reusing the results of earlier applications of DNNs. In this article, the effect of four different transfer learning models on deep neural network-based plant classification is investigated on four public datasets. Our experimental study demonstrates that transfer learning can provide important benefits for automated plant identification and can improve low-performing plant classification models.
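The article above evaluates how pretrained networks can be adapted to plant species classification. As an illustration of the general transfer learning technique, and not the authors' exact models or datasets, the sketch below reuses an ImageNet-pretrained ResNet-50 and retrains only a new classification head; the backbone, class count, and hyperparameters are all assumptions.

```python
# Minimal transfer learning sketch for plant classification.
# Illustrative only: the backbone, class count, and learning rate
# are assumptions, not the configuration used in the article.
import torch
import torch.nn as nn
from torchvision import models

NUM_PLANT_CLASSES = 17  # hypothetical number of species

# Start from a network pretrained on ImageNet and reuse its features.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the pretrained backbone so its weights are transferred as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head sized for the plant task.
model.fc = nn.Linear(model.fc.in_features, NUM_PLANT_CLASSES)

# Train only the new head; the rest of the network stays fixed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

A common variant is to unfreeze the last few backbone layers and fine-tune them with a smaller learning rate once the new head has converged.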
Book Part: Model analytics for defect prediction based on design-level metrics and sampling techniques (Academic Press, 2020)
Kaya, Aydın; Keçeli, Ali Seydi; Çatal, Çağatay; Tekinerdoğan, Bedir

Predicting software defects in the early stages of the software development life cycle, such as the design and requirements analysis phases, provides significant economic advantages for software companies. Model analytics for defect prediction lets quality assurance groups build prediction models earlier and identify defect-prone components before the testing phase, so they can be tested in depth. In this study, we demonstrate that machine learning-based defect prediction models using design-level metrics in conjunction with data sampling techniques are effective in finding software defects. We show that design-level attributes have a strong correlation with the probability of defects and that the SMOTE data sampling approach improves the performance of prediction models. When design-level metrics are applied, the AdaBoost ensemble method provides the best performance for detecting minority-class samples.

Conference Object: Software Vulnerability Prediction using Extreme Learning Machines Algorithm (2019)
Keçeli, Ali Seydi; Kaya, Aydın; Çatal, Çağatay; Tekinerdoğan, Bedir

Software vulnerability prediction aims to detect vulnerabilities in the source code before the software is deployed into the operational environment. Accurate prediction of vulnerabilities helps allocate more testing resources to vulnerability-prone modules. From a machine learning perspective, this problem is a binary classification task that classifies software modules into vulnerability-prone and non-vulnerability-prone categories. Several machine learning models have been built to address the software vulnerability prediction problem, but the performance of the state-of-the-art models is not yet at an acceptable level. In this study, we aim to improve the performance of software vulnerability prediction models by using Extreme Learning Machines (ELM) algorithms, which have not previously been investigated for this problem. Before applying ELM algorithms to three selected public datasets, we use data balancing algorithms to balance the data points belonging to the two classes. We discuss our initial experimental results and the lessons learned. In particular, we observed that ELM algorithms have high potential for addressing the software vulnerability prediction problem.

Article: The impact of feature types, classifiers, and data balancing techniques on software vulnerability prediction models (2019)
Kaya, Aydın; Keçeli, Ali Seydi; Çatal, Çağatay; Tekinerdoğan, Bedir

Software vulnerabilities form an increasing security risk for software systems, as they might be exploited to attack and harm the system. Some security vulnerabilities can be detected by static analysis tools and penetration testing, but these usually suffer from relatively high false-positive rates. Software vulnerability prediction (SVP) models can be used to categorize software components into vulnerable and neutral components before the software testing phase, thereby increasing the efficiency and effectiveness of the overall verification process. The performance of a vulnerability prediction model is usually affected by the adopted classification algorithm, the adopted features, and the data balancing approach. In this study, we empirically investigate the effect of these factors on the performance of SVP models. Our experiments cover four data balancing methods, seven classification algorithms, and three feature types. The experimental results show that data balancing methods are effective for highly unbalanced datasets, text-based features are more useful, and ensemble-based classifiers mostly provide better results. For smaller datasets, the Random Forest algorithm provides the best performance, and for larger datasets, RusboostTree achieves better performance.
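The three studies above share a common pipeline: balance an imbalanced defect or vulnerability dataset, then train a classifier. The sketch below illustrates that general pattern, not the papers' exact datasets or settings, using SMOTE oversampling and a Random Forest on synthetic data; every name and parameter here is a placeholder.

```python
# Minimal defect/vulnerability prediction sketch with data balancing.
# Illustrative only: synthetic data stands in for the papers' datasets.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Highly imbalanced toy data: roughly 5% "vulnerable" modules.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=42)

# SMOTE synthesizes minority-class samples before the classifier fits;
# putting it inside the pipeline keeps resampling out of the test folds.
pipeline = Pipeline([
    ("balance", SMOTE(random_state=42)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=42)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print(f"Mean F1 over 5 folds: {scores.mean():.3f}")
```

Swapping the balancing step or the classifier (for instance, a boosted ensemble in place of the Random Forest) is a one-line change in this pipeline, which mirrors how the studies compare these factors.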