Bilgisayar Mühendisliği Bölümü Yayın Koleksiyonu (Department of Computer Engineering Publication Collection)
Permanent URI for this collection: https://hdl.handle.net/20.500.12416/253
Browsing Bilgisayar Mühendisliği Bölümü Yayın Koleksiyonu by Publication Index "WoS"
Now showing 1 - 20 of 156
Article | Citation - WoS: 32 | Citation - Scopus: 43
A 3D Virtual Environment for Training Soccer Referees (Elsevier Science BV, 2019). Isler, Veysi; O'Connor, Rory V.; Clarke, Paul M.; Gulec, Ulas; Yilmaz, Murat.
Emerging digital technologies are being used in many ways, and virtual environments in particular provide new opportunities to gain experience of real-world phenomena without having to live through the actual real-world situations. In this study, a quantitative research approach supported by expert validation interviews was conducted to determine the suitability of virtual environments for the training of soccer referees. The aim is to design a virtual environment for training purposes, representing a real-life soccer stadium, to allow referees to manage matches in an atmosphere similar to a real stadium atmosphere. The referees thus have a chance to reduce the number of errors they make in real life by experiencing, in the virtual stadium, the difficult decisions they encounter during an actual match. In addition, the decisions and reactions of the referees during the virtual match were observed with different numbers of fans in the virtual stadium, to understand whether the virtual stadium created a real stadium atmosphere for the referees. For this evaluation, the Presence Questionnaire (PQ) and Immersive Tendencies Questionnaire (ITQ) were administered to the referees to measure their involvement levels. In addition, a semi-structured interview technique was used to understand the participants' opinions about the system. These interviews show that the referees have a positive attitude towards the system, since they can experience the events of the match in first person instead of watching them from a camera in third person.
The findings of the current study suggest that virtual environments can be used as a training tool to increase the experience levels of soccer referees, since they have an opportunity to decide about positions without facing the real-world risks.

Conference Object | Citation - Scopus: 1
Adaptive Embedded Zero Tree for Scalable Video Coding (Int Assoc Engineers-IAENG, 2011). Choupani, Roya; Tolun, Mehmet Reşit; Wong, Stephan. Bilgisayar Mühendisliği; Yazılım Mühendisliği.
Video streaming over the Internet has gained popularity in recent years, mainly due to the revival of video-conferencing and video-telephony applications and the proliferation of (video) content providers. However, the heterogeneous, dynamic, and best-effort nature of the Internet cannot always guarantee a certain bandwidth to an application using it. Scalability has been introduced to deal with such issues (up to a certain point) by adapting the video quality to the available bandwidth. In addition, wavelet-based scalability combined with representation methods such as embedded zero trees (EZW) makes it possible to reconstruct the video even when only the initial part of the stream has been received. EZW prioritizes the wavelet coefficients based on their energy content. Our experiments, however, indicate that giving more priority to low-frequency content improves the video quality at a given bit rate. In this paper, we propose a method to improve the compression rate of the EZW by prioritizing the coefficients, combining each frequency sub-band with its energy content.
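The prioritization step described in this abstract can be illustrated with a toy sketch (the sub-band data and function names below are hypothetical, not the authors' implementation): sub-bands are ranked by their energy so that high-energy, mostly low-frequency content is emitted first.

```python
# Toy sketch of energy-based sub-band prioritization (hypothetical data;
# the paper's coder operates on real wavelet zerotrees, not flat lists).

def subband_energy(coeffs):
    """Energy of a sub-band: the sum of squared coefficients."""
    return sum(c * c for c in coeffs)

def prioritize(subbands):
    """Order (name, coefficients) pairs so high-energy content comes first."""
    return sorted(subbands, key=lambda sb: subband_energy(sb[1]), reverse=True)

# Hypothetical one-level decomposition: LL holds most of the energy.
bands = [("HH", [1, -1, 0, 2]), ("LL", [40, 38, 41, 39]), ("LH", [3, -2, 1, 0])]
ordered = [name for name, _ in prioritize(bands)]
print(ordered)  # ['LL', 'LH', 'HH']
```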
Initial experiments show that the first two layers of the generated EZW are about 22.6% more concise.

Conference Object | Citation - WoS: 2
Adopting Augmented Reality for the Purpose of Software Development Process Training and Improvement: An Exploration (Springer International Publishing AG, 2018). Oge, Irem; Orkun, Bora; Yilmaz, Murat; Tuzun, Eray; Clarke, Paul; O'Connor, Rory V.; Ohri, Ipek.
Augmented reality (AR) is a technological field of study that bridges the physical and digital worlds with a view to improving user experience. AR holds great potential to change the delivery of software services or software process improvement by utilizing a specific set of components. The purpose of this exploratory study is to propose an integration framework that uses AR to improve the onboarding process, notably by introducing new hires to the development process while they perform their daily tasks. It also aims to enhance the software development workflow using AR. Similar to a GPS device that can guide you from point A to point B, our goal is to create software artifacts, such as navigation components, through which software teams may benefit from digitally enhanced working conditions provided by AR. After conducting a literature review, we confirmed that there is a lack of studies on combining augmented reality with software engineering disciplines for onboarding. In this paper, we formalized our approach based on the benefits of AR.
Ultimately, we propose an AR-based preliminary model for improving the software development process.

Conference Object | Citation - WoS: 11 | Citation - Scopus: 12
Adopting Virtual Reality as a Medium for Software Development Process Education (Assoc Computing Machinery, 2018). Isler, Veysi; O'Connor, Rory V.; Clarke, Paul; Gulec, Ulas; Yilmaz, Murat.
Software development is a complex process of collaborative endeavor that requires hands-on experience, from requirements analysis through software testing, and ultimately demands continuous maintenance to mitigate risks and uncertainty. Training experienced software practitioners is therefore a challenging task. To address this gap, we propose an interactive virtual reality training environment in which software practitioners gain virtual experience of software development tasks. The goal is to transport participants to a virtual software development organization where they experience simulated development-process problems and conflicting situations, and where they interact virtually with distinctive personalities, roles, and characters borrowed from real software development organizations. This PhD-in-progress paper investigates the literature and proposes a novel approach in which participants can acquire important new process knowledge. Our preliminary observations suggest that a complementary VR-based training tool is likely to improve the experience of novice software developers and ultimately has great potential for training activities in software development organizations.

Article | Citation - WoS: 190 | Citation - Scopus: 281
Adoption of E-Government Services in Turkey (Pergamon-Elsevier Science Ltd, 2017). Arifoglu, Ali; Tokdemir, Gul; Pacin, Yudum; Kurfali, Murathan.
This research aims to investigate the underlying factors that play a role in citizens' decisions to use e-government services in Turkey.
The UTAUT model, enriched by introducing Trust of Internet and Trust of Government factors, is used in the study. The model is evaluated through a survey conducted with Turkish citizens from different regions of the country. A total of 529 responses were collected through purposive sampling and evaluated with the Structural Equation Modeling (SEM) technique. According to the results, Performance expectancy, Social influence, Facilitating conditions, and Trust of Internet were found to have a positive effect on behavioral intention to use e-government services. Additionally, both Trust factors were found to have a positive influence on the Performance expectancy of e-government services, a relation which, to the best of our knowledge, has not been tested before in the e-government context. The effects of Effort expectancy and Trust of Government on behavioral intention were found insignificant. We believe that the findings of this study will guide professionals and policy makers in improving and popularizing e-government services by revealing citizens' priorities regarding e-government services in Turkey. (C) 2016 Elsevier Ltd. All rights reserved.

Conference Object | Citation - WoS: 7 | Citation - Scopus: 8
ADS-B Attack Classification Using Machine Learning Techniques (IEEE, 2021). Kacem, Thabet; Kaya, Aydin; Keceli, Ali Seydi; Catal, Cagatay; Wijsekera, Duminda; Costa, Paulo.
Automatic Dependent Surveillance Broadcast (ADS-B) is one of the most prominent protocols in Air Traffic Control (ATC). Its key advantages derive from using GPS as a location provider, resulting in better location accuracy while offering substantially lower deployment and operational costs compared to traditional radar technologies. ADS-B can not only enhance radar coverage but also serve as a standalone solution in areas without radar coverage.
Despite these advantages, wider adoption of the technology is limited by security vulnerabilities, which are rooted in the protocol's open broadcast of clear-text messages. In spite of the seriousness of such concerns, very few researchers have attempted to propose viable approaches to address these vulnerabilities. Beyond detecting ADS-B attacks, classifying them is just as important, since classification enables security experts and ATC controllers to better understand the attack vector and thus enhance future protection mechanisms. Unfortunately, there has been very little research on automatically classifying ADS-B attacks, and even the few approaches that attempted to do so considered just two classification categories, i.e., malicious versus non-malicious messages. In this paper, we propose a new module for our ADS-Bsec framework capable of classifying ADS-B attacks using advanced machine learning techniques including Support Vector Machines (SVM), Decision Tree, and Random Forest (RF). Our module has the advantage of adopting a multi-class classification approach based on the nature of the ADS-B attacks, rather than the traditional two-category classifiers. To illustrate and evaluate our ideas, we designed several experiments using a flight dataset from Lisbon to Paris that includes ADS-B attacks from three categories. Our experimental results demonstrate that machine learning-based models provide high performance in terms of accuracy, sensitivity, and specificity.

Article | Citation - WoS: 4 | Citation - Scopus: 5
Almost Autonomous Training of Mixtures of Principal Component Analyzers (Elsevier Science BV, 2004). Musa, M. E. M.; de Ridder, D.; Duin, R. P. W.; Atalay, V.
In recent years, a number of mixtures of local PCA models have been proposed. Most of these models require the user to set the number of submodels (local models) in the mixture as well as the dimensionality of the submodels (i.e., the number of PCs).
To free the model of these parameters, we propose a greedy expectation-maximization algorithm to find a suboptimal number of submodels. For a given retained-variance ratio, the proposed algorithm estimates, for each submodel, the dimensionality that retains this ratio of the variability. We test the proposed method on two different classification problems: handwritten digit recognition and two-class ionosphere data classification. The results show that the proposed method performs well. (C) 2004 Elsevier B.V. All rights reserved.

Article
Analysing the Iraqi Railways Network by Applying Specific Criteria Using GIS Techniques (Coll Science Women, Univ Baghdad, 2019). Naji, Hayder Fans; Maras, H. Hakan.
The railways network is one of the largest infrastructure projects, so analyzing and developing such projects should be done with appropriate tools, i.e., GIS tools; traditional methods consume resources, time, and money, and their results may not be accurate. In this research, the train stations in all of Iraq's provinces were studied and analyzed using network analysis, one of the most powerful techniques within GIS. A free trial copy of ArcGIS(R) 10.2 software was used to achieve the aim of this study. The analysis of the current train stations was based on the road network, because people use roads to reach the stations. The data layers for this study were collected and prepared to meet the requirements of network analysis within GIS. The current train stations in Iraq were analyzed and studied according to their accessibility values, and the number of people who can reach those stations within a walking time of 20 minutes was determined. This study thus aims to analyze the current train stations according to multiple criteria, using network analysis, in order to find the serviced areas around those stations.
Results are presented as digital map layers with attribute tables that show the beneficiaries of those train stations and the serviced areas around them according to specific criteria, with a view to determining the size of the problem and supporting decision makers in locating new train stations at the best locations.

Conference Object
Analysis of Neurooncological Data to Predict Success of Operation Through Classification (Assoc Computing Machinery, 2016). Tokdemir, Gul; Cagiltay, Nergiz; Maras, H. Hakan; Bagherzadi, Negin; Borcek, Alp Ozgun.
Data mining algorithms have been applied in various fields of medicine to gain insights into the diagnosis and treatment of certain diseases. This gives rise to more research on personalized medicine, as patient data can be used to predict the outcomes of certain treatment procedures. Accordingly, this study aims to create a model that provides decision support for surgeons in neurooncology surgery. For this purpose, we analyzed clinical pathology records of neurooncology patients with several classification algorithms, namely Support Vector Machine, Multilayer Perceptron, and Naive Bayes, and compared their performance in predicting surgery complications. A large number of factors were considered to classify and predict the likelihood of a patient's complication in surgery. Some of the factors found to be predictive were age, sex, clinical presentation, and previous surgery type.
Among the classification models built with Support Vector Machine, Naive Bayes, and Multilayer Perceptron, the Support Vector Machine trials showed 77.47% generalization accuracy, established by 5-fold cross-validation.

Article | Citation - WoS: 17 | Citation - Scopus: 23
Application of BiLSTM-CRF Model With Different Embeddings for Product Name Extraction in Unstructured Turkish Text (Springer London Ltd, 2024). Arslan, Serdar.
Named entity recognition (NER) plays a pivotal role in Natural Language Processing by identifying and classifying entities within textual data. While NER methodologies have seen significant advancements, driven by pretrained word embeddings and deep neural networks, the majority of these studies have focused on text with well-defined grammar and structure. A significant research gap exists concerning NER in informal or unstructured text, where traditional grammar rules and sentence structure are absent. This research addresses this gap by focusing on the detection of product names within unstructured Turkish text. To accomplish this, we propose a deep learning-based NER model that combines a Bidirectional Long Short-Term Memory (BiLSTM) architecture with a Conditional Random Field (CRF) layer, further enhanced by FastText embeddings. To comprehensively evaluate and compare our model's performance, we explore different embedding approaches, including Word2Vec and GloVe, in conjunction with the BiLSTM-CRF model. Furthermore, we conduct comparisons against BERT to assess the efficacy of our approach. Our experimentation uses a Turkish e-commerce dataset gathered from the internet, where traditional grammatical and structural rules may not apply. The BiLSTM-CRF model with FastText embeddings achieved an F1 score of 57.40%, a precision of 55.78%, and a recall of 59.12%.
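As a quick consistency check on the figures above, F1 is the harmonic mean of precision and recall, and the reported precision and recall do reproduce the reported F1:

```python
# F1 = 2PR / (P + R); the reported precision and recall reproduce
# the reported F1 score of 57.40%.
precision = 0.5578
recall = 0.5912
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # 0.5740
```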
These results indicate promising performance, outperforming the other baseline techniques. This research contributes to the field of NER by addressing the unique challenges posed by unstructured Turkish text and opens avenues for improved entity recognition in informal language settings, with potential applications across various domains.

Conference Object | Citation - WoS: 8 | Citation - Scopus: 16
Applying Blockchain to Improve the Integrity of the Software Development Process (Springer International Publishing AG, 2019). Tuzun, Eray; Gulec, Ulas; O'Connor, Rory V.; Clarke, Paul M.; Yilmaz, Murat; Tasel, Serdar.
Software development is a complex endeavor that encompasses application and implementation layers with functional (what is done) and non-functional (how it is done) aspects. Efforts to scale agile software development practices do not fully address issues such as integrity, which is a crucial non-functional aspect of the software development process. Most software failures, however, are Byzantine failures (i.e., failures where components may fail and there is imperfect information about which component has failed) that might impair the operation but do not completely disable the production line. In this paper, we treat software practitioners who cause defects as Byzantine participants and claim that most software failures can be mitigated by viewing software development as the Byzantine Generals Problem. Consequently, we propose a test-driven incentive mechanism based on a blockchain concept to orchestrate the software development process, where production is controlled by an infrastructure based on the working principles of blockchain.
We discuss the model that integrates blockchain with the software development process and provide some recommendations for future work on the issues that arise while orchestrating software production.

Conference Object | Citation - WoS: 30 | Citation - Scopus: 51
An Artificial Neural Network-Based Stock Trading System Using Technical Analysis and Big Data Framework (Assoc Computing Machinery, 2017). Ozbayoglu, A. Murat; Dogdu, Erdogan; Sezer, Omer Berat.
In this paper, a neural network-based stock price prediction and trading system using technical analysis indicators is presented. The model first converts the financial time series data into a series of buy-sell-hold trigger signals using the most commonly preferred technical analysis indicators. Then, a Multilayer Perceptron (MLP) artificial neural network (ANN) model is trained on the daily stock prices between 1997 and 2007 for all of the Dow 30 stocks. The Apache Spark big data framework is used in the training stage. The trained model is then tested with data from 2007 to 2017. The results indicate that, by choosing the most appropriate technical indicators, the neural network model can achieve results comparable to the Buy and Hold strategy in most cases. Furthermore, fine-tuning the technical indicators and/or the optimization strategy can enhance the overall trading performance.

Article | Citation - WoS: 6 | Citation - Scopus: 6
Auction-Based Serious Game for Bug Tracking (Wiley, 2019). Usfekes, Cagdas; Tuzun, Eray; Yilmaz, Murat; Macit, Yagup; Clarke, Paul.
Today, one of the challenges in software engineering is utilising application lifecycle management (ALM) tools effectively in software development. In particular, it is hard for software developers to engage with the work items that are assigned to them in these ALM tools.
In this study, the authors focus on bug tracking in ALM, where one of the most important metrics is mean time to resolution, the average time to fix a reported bug. To improve this metric, they developed a serious game application based on an auction-based reward mechanism. The ultimate aim of this approach is to create an incentive structure for software practitioners to find and resolve auctioned bugs, in which participants are encouraged to solve and test more bugs in less time and to improve the quality of software development in a competitive environment. The authors conduct hypothesis tests by performing a Monte Carlo simulation. The preliminary results of this research support the idea that using a gamification approach in an issue tracking system enhances productivity and decreases mean time to resolution.

Article | Citation - WoS: 41 | Citation - Scopus: 53
Automated Classification of Rheumatoid Arthritis, Osteoarthritis, and Normal Hand Radiographs With Deep Learning Methods (Springer, 2022). Maras, Hadi Hakan; Ureten, Kemal.
Rheumatoid arthritis and hand osteoarthritis are two different forms of arthritis that cause pain, functional limitation, and permanent joint damage in the hands. Plain hand radiographs are the most commonly used imaging method for the diagnosis, differential diagnosis, and monitoring of rheumatoid arthritis and osteoarthritis. In this retrospective study, the You Only Look Once (YOLO) algorithm was used to obtain hand images from the original radiographs without data loss, and classification was performed by applying transfer learning with a pre-trained VGG-16 network. Data augmentation was applied during training. The results of the study were evaluated with performance metrics such as accuracy, sensitivity, specificity, and precision calculated from the confusion matrix, and the AUC (area under the ROC curve) calculated from the ROC (receiver operating characteristic) curve.
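The four threshold metrics mentioned above all derive from the binary confusion matrix; a minimal sketch with illustrative counts (not the study's data) shows the formulas:

```python
# Metrics from a binary confusion matrix (illustrative counts only,
# chosen so the rates resemble those reported in the abstract).
tp, fn, fp, tn = 50, 4, 6, 47  # hypothetical disease vs. normal counts

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
precision   = tp / (tp + fp)   # positive predictive value

print(f"{accuracy:.3f} {sensitivity:.3f} {specificity:.3f} {precision:.3f}")
# 0.907 0.926 0.887 0.893
```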
In the classification of rheumatoid arthritis versus normal hand radiographs, the accuracy, sensitivity, specificity, precision, and AUC were 90.7%, 92.6%, 88.7%, 89.3%, and 0.97, respectively; in the classification of osteoarthritis versus normal hand radiographs, they were 90.8%, 91.4%, 90.2%, 91.4%, and 0.96, respectively. In the three-way classification of rheumatoid arthritis, osteoarthritis, and normal hand radiographs, an accuracy of 80.6% was obtained. In this study, to develop an end-to-end computerized method, the YOLOv4 algorithm was used for object detection and a pre-trained VGG-16 network was used for the classification of hand radiographs. This computer-aided diagnosis method can assist clinicians in interpreting hand radiographs, especially for rheumatoid arthritis and osteoarthritis.

Article | Citation - WoS: 3 | Citation - Scopus: 4
Automatic Coastline Detection Using Image Enhancement and Segmentation Algorithms (Hard, 2016). Caniberk, Mustafa; Maras, Hadi Hakan; Maras, Erdem Emin.
Coastlines have hosted numerous civilizations since the earliest times of mankind due to the advantages they offer, such as natural resources, transportation, arable areas, seafood, trade, and biodiversity. Coastal regions should be monitored vigilantly by planners and control mechanisms; any changes in these regions should be detected, together with their human or natural origin, and future plans and possible interventions should be shaped accordingly to maintain ecological balance, sustainable development, and planned urbanization. Integrated coastal zone management (ICZM) provides an important tool to reach that goal, and one of its important elements is the detection of coastlines. While there are several methods to detect coastlines, remote sensing methods provide the fastest and most efficient solutions.
In this study, color infrared, grayscale, RGB, and fake infrared images were processed with the median filtering and segmentation software developed within the study, and coastlines were detected by the edge detection method. The results show that segmentation with fake infrared images derived from RGB images gives the best results.

Conference Object | Citation - WoS: 1 | Citation - Scopus: 4
Automatic Detection of Mitochondria From Electron Microscope Tomography Images: A Curve Fitting Approach (SPIE-Int Soc Optical Engineering, 2014). Mumcuoglu, Erkan U.; Perkins, Guy; Martone, Maryann; Tasel, Serdar F.; Hassanpour, Reza.
Mitochondria are sub-cellular components which are mainly responsible for the synthesis of adenosine triphosphate (ATP) and are involved in the regulation of several cellular activities, such as apoptosis. The relation between some common diseases of aging and the morphological structure of mitochondria is gaining support from an increasing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and function require special software tools for the manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures and the presence of noise, artifacts, and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for the automatic detection of mitochondria from EMT images. First, a preprocessing step was applied in order to reduce the effect of non-mitochondrial sub-cellular structures.
Then, a curve fitting approach was applied, using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm was employed to detect mitochondria, which are represented by a subset of the detected curves. The results show that the proposed method is more robust in detecting mitochondria in consecutive EMT slices than our previous automatic method.

Article | Citation - WoS: 3 | Citation - Scopus: 3
Binary Background Model With Geometric Mean for Author-Independent Authorship Verification (Sage Publications Ltd, 2023). Sezer, Ebru A.; Sever, Hayri; Canbay, Pelin.
Authorship verification (AV) is one of the main problems of authorship analysis and digital text forensics. The classical AV problem is to decide whether or not a particular author wrote the document in question. However, if there is only one, relatively short document known to be by the author, the verification problem becomes more difficult than classical AV and needs a generalised solution. To decide the AV of two given unlabeled documents (2D-AV), we proposed a system that provides an author-independent solution with the help of a Binary Background Model (BBM). The BBM is a supervised model that provides an informative background for distinguishing document pairs written by the same or by different authors. To evaluate a document pair in a single representation, we also proposed a new, simple, and efficient document combination method based on the geometric mean of the stylometric features. We tested the performance of the proposed system for both author-dependent and author-independent AV cases. In addition, we introduced a new, well-defined, manually labelled Turkish blog corpus to be used in subsequent studies of authorship analysis. Using a publicly available English blog corpus for generating the BBM, the proposed system demonstrated an accuracy of over 90% on test sets from both trained and unseen authors.
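The combination step can be sketched as an element-wise geometric mean of the two documents' stylometric feature vectors (the feature names and values below are hypothetical; the paper's actual feature set is richer):

```python
import math

# Combine two documents into one representation by taking the
# element-wise geometric mean of their stylometric feature vectors
# (hypothetical features, for illustration only).
def combine(doc_a, doc_b):
    return [math.sqrt(a * b) for a, b in zip(doc_a, doc_b)]

# e.g., [average word length, type-token ratio, punctuation rate]
features_a = [4.0, 0.49, 0.04]
features_b = [4.0, 0.64, 0.09]
combined = combine(features_a, features_b)
print([round(v, 2) for v in combined])  # [4.0, 0.56, 0.06]
```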
Furthermore, the proposed combination method and the system using the BBM with the English blog corpus were also evaluated on other genres, used in the international PAN AV competitions, and achieved promising results.

Article | Citation - WoS: 3 | Citation - Scopus: 4
Block Size Analysis for Discrete Wavelet Watermarking and Embedding a Vector Image as a Watermark (Zarka Private Univ, 2019). Senol, Ahmet; Sever, Hayri; Elbasi, Ersin. Bilgisayar Mühendisliği.
As telecommunication and computer technologies proliferate, most data are stored and transferred in digital format, so content owners are searching for new technologies to protect copyrighted products in digital form. Image watermarking emerged as a technique for protecting image copyrights. Early studies on image watermarking used the pixel domain, whereas modern watermarking methods convert a pixel-based image to another domain and embed a watermark in that transform domain. This study uses the Block Discrete Wavelet Transform (BDWT) as the transform domain for embedding and extracting watermarks, and consists of two parts. The first part investigates the effect of dividing an image into non-overlapping blocks and transforming each block to the DWT domain independently; the effect of block size on watermarking success is then analyzed. The second part investigates embedding a vector image logo as a watermark. Vector images consist of geometric objects such as lines, circles, and splines, and unlike pixel-based images, they do not lose quality when scaled. Vector watermarks deteriorate very easily if the watermarked image is processed, for example by compression or filtering. Special care must therefore be taken when the embedded watermark is a vector image, such as adjusting the watermark strength or distributing the watermark data across the image, and the relative importance of the watermark data must be taken into account.
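For background, the block transform underlying BDWT can be sketched with a one-level 2D Haar step on a single block (a textbook Haar transform, not the authors' exact code):

```python
# One-level 2D Haar transform of one image block: rows, then columns,
# are split into pairwise averages (low-pass) and differences (high-pass).
def haar_1d(v):
    half = len(v) // 2
    lo = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
    hi = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)]
    return lo + hi

def haar_2d(block):
    rows = [haar_1d(row) for row in block]             # transform each row
    cols = [haar_1d(list(col)) for col in zip(*rows)]  # then each column
    return [list(row) for row in zip(*cols)]           # back to row-major

block = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [30, 30, 40, 40],
         [30, 30, 40, 40]]
out = haar_2d(block)
print(out[0][0], out[0][2])  # 10.0 0.0  (low-pass average; zero detail)
```

The low-pass (LL) corner holds sub-block averages, while the detail coefficients of this piecewise-constant block are all zero; watermark bits are typically embedded into selected coefficients of such a decomposition.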
To the best of our knowledge, this study is the first to use a vector image as a watermark embedded in a host image.

Article | Citation - WoS: 17 | Citation - Scopus: 18
Boron Doped Graphene Nanostructures (Wiley-VCH Verlag GmbH, 2008). Ozdogan, Cem; Kunstmann, Jens; Fehske, Holger; Quandt, Alexander.
We present results from an ab initio study of metallized semiconducting graphene nanostructures. Our model system consists of an alternating chain of quasi-planar B-7 clusters embedded in a semiconducting armchair nanoribbon. We observe the appearance of overlapping bands around the Fermi level, with crystal momenta pointing in the direction of these boron chains. This observation could be a vantage point for the development of graphene nanodevices and integrated nanocircuits based on existing technologies. (C) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Conference Object
Camera Auto-Calibration Using a Sequence of 2D Images With Small Rotations and Translations (CRC Press-Taylor & Francis Group, 2003). Hassanpour, Reza; Atalay, V. Yazılım Mühendisliği.
3D model generation needs depth information about the object in the input images. This information can be obtained using stereo imaging, which in turn requires the camera parameters. Camera calibration is not possible without some knowledge of the objects in the scene, or without assuming fixed or known values for the camera parameters. When using fixed camera parameters, however, small rotation angles or small translations of the camera position can make the results degenerate. The degeneracy can be avoided by adding new restrictions to the a priori knowledge about the camera parameters. The calibrated data may then be used to reconstruct a 3D model of the scene.
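The role of the intrinsic parameters that calibration recovers can be illustrated with a minimal pinhole-projection sketch (all parameter values below are hypothetical):

```python
# Pinhole camera model: a 3D point in the camera frame projects to
# pixel coordinates through the intrinsics (focal lengths, principal point).
def project(point3d, fx, fy, cx, cy):
    X, Y, Z = point3d
    u = fx * X / Z + cx   # depth Z divides out here, which is why
    v = fy * Y / Z + cy   # recovering depth needs calibrated stereo views
    return u, v

# Hypothetical intrinsics for a 640x480 camera.
u, v = project((0.5, -0.25, 2.0), fx=800, fy=800, cx=320, cy=240)
print(u, v)  # 520.0 140.0
```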

