Browsing by Author "Akagündüz, Erdem"
Now showing 1 - 2 of 2
Conference Object
Comparison of Single Channel Indices for U-Net Based Segmentation of Vegetation in Satellite Images (SPIE, 2020)
Ülkü, İrem; Barmpoutis, P.; Stathaki, T.; Akagündüz, Erdem
Hyper-spectral satellite imagery, consisting of multiple visible or infrared bands, is extremely dense and computationally demanding for deep learning operations. For vegetation-related problems, and tree segmentation in particular, it is difficult to train deep architectures due to the lack of large-scale satellite imagery. In this paper, we compare the success of different single-channel indices, constructed from multiple bands, for tree segmentation in a deep convolutional neural network (CNN) architecture. The utilized indices are either hand-crafted, such as the excess green index (ExG) and the normalized difference vegetation index (NDVI), or reconstructed from the visible bands using feature-space transformation methods such as principal component analysis (PCA). For comparison, these features are fed to an identical CNN architecture, a standard U-Net-based symmetric encoder-decoder design with hierarchical skip connections, and the segmentation success of each single index is recorded. Experimental results show that single bands constructed from the vegetation indices and space transformations can achieve segmentation performance similar to that of the original multi-channel case.

Article
Defining Image Memorability Using the Visual Memory Schema (2020)
Akagündüz, Erdem; Bors, Adrian G.; Evans, Karla K.
Memorability of an image is a characteristic determined by human observers' ability to remember images they have seen. Yet recent work on image memorability defines it as an intrinsic property that can be obtained independently of the observer.
The current study aims to enhance our understanding and prediction of image memorability, improving upon existing approaches by incorporating the properties of cumulative human annotations. We propose a new concept called the Visual Memory Schema (VMS) referring to an organization of image components human observers share when encoding and recognizing images. The concept of VMS is operationalised by asking human observers to define memorable regions of images they were asked to remember during an episodic memory test. We then statistically assess the consistency of VMSs across observers for either correctly or incorrectly recognised images. The associations of the VMSs with eye fixations and saliency are analysed separately as well. Lastly, we adapt various deep learning architectures for the reconstruction and prediction of memorable regions in images and analyse the results when using transfer learning at the outputs of different convolutional network layers.
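The hand-crafted vegetation indices named in the first abstract, ExG and NDVI, are simple per-pixel combinations of spectral bands. A minimal NumPy sketch of the standard formulas follows; the function names, the small epsilon guard, and the sample band values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    `eps` (an assumption here) guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

def exg(red, green, blue, eps=1e-8):
    """Excess green index: 2g - r - b on channel-normalized RGB values."""
    red = np.asarray(red, dtype=np.float64)
    green = np.asarray(green, dtype=np.float64)
    blue = np.asarray(blue, dtype=np.float64)
    total = red + green + blue + eps
    r, g, b = red / total, green / total, blue / total
    return 2.0 * g - r - b

# Illustrative 2x2 reflectance patches in [0, 1]; higher NDVI suggests
# denser vegetation, which is what makes it useful as a single input channel.
nir_band = np.array([[0.80, 0.60], [0.20, 0.10]])
red_band = np.array([[0.20, 0.30], [0.20, 0.10]])
single_channel = ndvi(nir_band, red_band)  # one band fed to the U-Net input
```

Either index collapses the multi-band input to one channel, which is how a single-index image can be fed to the same U-Net architecture in place of the full multi-channel stack.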