Browsing by Author "İnan, Tolga"
Now showing 1 - 3 of 3
Conference Object | Citation - WoS: 9 | Citation - Scopus: 14
A Machine Learning Study to Enhance Project Cost Forecasting (Elsevier, 2022)
İnan, Tolga; Narbaev, Timur; Hazir, Oncu; Elektrik-Elektronik Mühendisliği
In project management, it is critical to obtain accurate cost forecasts using effective methods. This study presents a machine learning model based on Long Short-Term Memory (LSTM) to forecast project cost. The model uses a seven-dimensional feature vector, including schedule and cost performance factors and their moving averages, as the predictor. Based on the cost variation patterns from the training phase, we validate the model using three hundred experiments in the testing phase. Overall, the proposed model produces more accurate cost estimates than the traditional Earned Value Management index-based model.

Conference Object
Artificial Neural Networks Modeling of Uniform Temperature Effects of Symmetric Linear Haunched Beams (2019)
İnan, Tolga; Elektrik-Elektronik Mühendisliği

Article
Ear semantic segmentation in natural images with Tversky loss function supported DeepLabv3+ convolutional neural network (2022)
İnan, Tolga; Kacar, Umit; Elektrik-Elektronik Mühendisliği
Semantic segmentation is a fundamental problem in computer vision, and it is gaining importance in biometrics: many successful biometric recognition systems require a high-performance semantic segmentation algorithm. In this study, we present an effective ear segmentation technique for natural images. A convolutional neural network is trained for pixel-based ear segmentation: a DeepLabv3+ network with ResNet-18 as the backbone and a Tversky loss function layer as the last layer, trained on natural and uncontrolled images. We train the proposed network using only the 750 images in the Annotated Web Ears (AWE) training set. Tests are performed on the AWE test set, the University of Ljubljana test set, and Collection A of the In-The-Wild dataset. On the AWE test set, the intersection over union (IoU) is measured as 86.3%. To the best of our knowledge, this is the highest performance achieved among the algorithms tested on the AWE test set.
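To illustrate the kind of model described in the cost-forecasting record above, here is a minimal PyTorch sketch of an LSTM regressor over a window of seven EVM-style features (for example, schedule and cost performance indices and their moving averages). The class name, window length, hidden size, and training loop are illustrative assumptions and do not reproduce the paper's exact features or configuration.

```python
# Minimal sketch, assuming illustrative feature names, window length and sizes.
import torch
import torch.nn as nn

class CostForecastLSTM(nn.Module):
    """LSTM regressor mapping a window of 7-dimensional EVM-style feature
    vectors (e.g. SPI, CPI and their moving averages) to the next-period cost."""
    def __init__(self, n_features: int = 7, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # forecast from the last time step

# Toy usage: 300 synthetic windows of 12 periods with 7 features each.
x = torch.randn(300, 12, 7)
y = torch.randn(300, 1)
model = CostForecastLSTM()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):  # a few illustrative epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```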
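The ear-segmentation record above attaches a Tversky loss layer to a DeepLabv3+/ResNet-18 network. The following is a minimal sketch of the standard Tversky loss for binary segmentation in PyTorch; the alpha/beta weights and tensor shapes are assumptions, not values taken from the paper (alpha = beta = 0.5 reduces it to the Dice loss).

```python
# Minimal sketch of a binary Tversky loss; alpha/beta values are assumptions.
import torch

def tversky_loss(pred: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.5, beta: float = 0.5,
                 eps: float = 1e-6) -> torch.Tensor:
    """Tversky loss for binary segmentation.

    pred:   (N, 1, H, W) predicted probabilities in [0, 1]
    target: (N, 1, H, W) binary ground-truth masks
    alpha weights false positives, beta weights false negatives.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()

# Toy usage with random tensors standing in for network output and mask.
logits = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = tversky_loss(torch.sigmoid(logits), mask)
```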