
Task-Based Visual Attention for Continually Improving the Performance of Autonomous Game Agents

dc.contributor.author Ulu, Eren
dc.contributor.author Capin, Tolga
dc.contributor.author Celikkale, Bora
dc.contributor.author Celikcan, Ufuk
dc.date.accessioned 2025-05-11T16:41:25Z
dc.date.available 2025-05-11T16:41:25Z
dc.date.issued 2023
dc.description Celikkale, Ismail Bora/0000-0002-2281-8773; Celikcan, Ufuk/0000-0001-6421-185X; Ulu, Eren/0009-0005-0993-2554 en_US
dc.description.abstract Deep Reinforcement Learning (DRL) has been applied effectively in various complex environments, such as playing video games. In many game environments, DeepMind's baseline Deep Q-Network (DQN) game agents performed at a level comparable to that of humans. However, these DRL models require many experience samples to learn, lack adaptability to changes in the environment, and struggle with complexity. In this study, we propose the Attention-Augmented Deep Q-Network (AADQN), which incorporates a combined top-down and bottom-up attention mechanism into the DQN game agent to highlight task-relevant features of the input. Our AADQN model uses particle-filter-based top-down attention that dynamically teaches an agent how to play a game by focusing on the most task-relevant information. Evaluating our agent's performance across eight games of varying complexity in the Atari 2600 domain, we demonstrate that our model surpasses the baseline DQN agent. Notably, our model achieves greater flexibility and higher scores in fewer time steps. Across the eight game environments, AADQN achieved an average relative improvement of 134.93%. The Pong and Breakout games saw improvements of 9.32% and 56.06%, respectively, while the more intricate games SpaceInvaders and Seaquest demonstrated even higher improvements of 130.84% and 149.95%, respectively. This study reveals that AADQN is highly effective in complex environments and yields modest gains in simpler ones. en_US
dc.identifier.doi 10.3390/electronics12214405
dc.identifier.issn 2079-9292
dc.identifier.scopus 2-s2.0-85176558568
dc.identifier.uri https://doi.org/10.3390/electronics12214405
dc.identifier.uri https://hdl.handle.net/20.500.12416/9542
dc.language.iso en en_US
dc.publisher MDPI en_US
dc.relation.ispartof Electronics
dc.rights info:eu-repo/semantics/openAccess en_US
dc.subject Deep Reinforcement Learning en_US
dc.subject Deep Q-Learning en_US
dc.subject Layer-Wise Relevance Propagation en_US
dc.subject Particle Filter en_US
dc.subject Bottom-Up And Top-Down Visual Attention en_US
dc.subject Saliency Map en_US
dc.subject Convolutional Neural Network en_US
dc.title Task-Based Visual Attention for Continually Improving the Performance of Autonomous Game Agents en_US
dc.type Article en_US
dspace.entity.type Publication
gdc.author.id Celikkale, Ismail Bora/0000-0002-2281-8773
gdc.author.id Celikcan, Ufuk/0000-0001-6421-185X
gdc.author.id Ulu, Eren/0009-0005-0993-2554
gdc.author.scopusid 58694122300
gdc.author.scopusid 6603846240
gdc.author.scopusid 55872217900
gdc.author.scopusid 27867506800
gdc.author.wosid Celikkale, Bora/Mds-7170-2025
gdc.author.wosid Celikcan, Ufuk/H-1191-2017
gdc.author.wosid Ulu, Eren/Iyj-5120-2023
gdc.bip.impulseclass C5
gdc.bip.influenceclass C5
gdc.bip.popularityclass C5
gdc.coar.access open access
gdc.coar.type text::journal::journal article
gdc.collaboration.industrial false
gdc.description.department Çankaya University en_US
gdc.description.departmenttemp [Ulu, Eren; Celikcan, Ufuk] Hacettepe Univ, Dept Comp Engn, TR-06570 Ankara, Turkiye; [Ulu, Eren; Capin, Tolga] TED Univ, Dept Comp Engn, TR-06790 Ankara, Turkiye; [Celikkale, Bora] Cankaya Univ, Dept Software Engn, TR-06790 Ankara, Turkiye en_US
gdc.description.issue 21 en_US
gdc.description.publicationcategory Article - International Peer-Reviewed Journal - Institutional Faculty Member en_US
gdc.description.scopusquality Q2
gdc.description.startpage 4405
gdc.description.volume 12 en_US
gdc.description.woscitationindex Science Citation Index Expanded
gdc.description.wosquality Q2
gdc.identifier.openalex W4388267259
gdc.identifier.wos WOS:001100199100001
gdc.index.type WoS
gdc.index.type Scopus
gdc.oaire.accesstype GOLD
gdc.oaire.diamondjournal false
gdc.oaire.impulse 1.0
gdc.oaire.influence 2.6019726E-9
gdc.oaire.isgreen false
gdc.oaire.popularity 2.8397242E-9
gdc.oaire.publicfunded false
gdc.oaire.sciencefields 0202 electrical engineering, electronic engineering, information engineering
gdc.oaire.sciencefields 02 engineering and technology
gdc.openalex.collaboration National
gdc.openalex.fwci 0.51088578
gdc.openalex.normalizedpercentile 0.67
gdc.opencitations.count 0
gdc.plumx.mendeley 4
gdc.plumx.newscount 1
gdc.plumx.scopuscites 2
gdc.scopus.citedcount 2
gdc.virtual.author Çelikkale, İsmail Bora
gdc.wos.citedcount 1
relation.isAuthorOfPublication f5541ef2-5a92-402d-ae8c-0a824f7cad09
relation.isAuthorOfPublication.latestForDiscovery f5541ef2-5a92-402d-ae8c-0a824f7cad09
relation.isOrgUnitOfPublication aef16c1d-5b84-42f9-9dab-8029b2b0befd
relation.isOrgUnitOfPublication 43797d4e-4177-4b74-bd9b-38623b8aeefa
relation.isOrgUnitOfPublication 0b9123e4-4136-493b-9ffd-be856af2cdb1
relation.isOrgUnitOfPublication.latestForDiscovery aef16c1d-5b84-42f9-9dab-8029b2b0befd