Vision-Based Localization in Urban Areas for Mobile Robots

dc.authorwosid: ENE-5696-2022
dc.authorwosid: HHN-2819-2022
dc.authorwosid: AAA-7944-2019
dc.contributor.author: Alimovski, Erdal
dc.contributor.author: Erdemir, Gökhan
dc.contributor.author: Kuzucuoğlu, Ahmet Emin
dc.date.accessioned: 2026-05-05T14:52:22Z
dc.date.issued: 2025
dc.department: Mühendislik ve Doğa Bilimleri Fakültesi
dc.description.abstract: Robust autonomous navigation systems rely on mapping, locomotion, path planning, and localization. Localization is one of the most essential components of navigation and a crucial requirement for a mobile robot, which must be able to determine its own position in the environment. The Global Positioning System (GPS) is commonly used for outdoor mobile robot localization. However, environmental conditions such as high-rise buildings and trees degrade GPS signal quality, leading to reduced precision or complete signal blockage. This study proposes a vision-based localization system for outdoor mobile robots in crowded urban environments. The proposed system comprises three steps. In the first step, text in the urban scene is detected using the Efficient and Accurate Scene Text Detector (EAST) algorithm; EasyOCR is then applied to the regions obtained from EAST to extract the text. The detection and recognition results are further enhanced by post-processing and word-similarity algorithms. In the second step, once text detection and recognition are complete, the recognized word (label/tag) is sent to the Places API, which returns the coordinates of matching places within a specified radius. In parallel, points-of-interest (POI) data are collected for the defined area within a certain radius while the robot has a reliable internet connection. The proposed system was tested in three distinct urban areas across five scenarios under different lighting conditions, such as morning and evening, using the outdoor delivery robot employed in this study. In these case studies, the proposed system achieved a low localization error of approximately 4 m. Compared with existing works, the proposed system consistently outperforms other approaches while using only a single sensor. The results indicate the efficacy of the proposed system for localization tasks in environments where GPS signals are limited or completely blocked.
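The word-similarity step described in the abstract, matching a noisy OCR result against the pre-collected POI labels, can be sketched with standard fuzzy string matching. The POI names, coordinates, and the `match_poi` helper below are illustrative assumptions, not the authors' implementation:

```python
from difflib import get_close_matches

# Hypothetical POI table: label -> (latitude, longitude), standing in for
# data pre-fetched from a places service within a given radius while the
# robot still has a reliable internet connection.
POI_COORDS = {
    "STARBUCKS": (41.0082, 28.9784),
    "PHARMACY": (41.0090, 28.9801),
    "CITY LIBRARY": (41.0075, 28.9770),
}

def match_poi(recognized: str, cutoff: float = 0.6):
    """Map a noisy recognized word to the closest known POI label.

    Returns (label, (lat, lon)), or None when no stored label is
    similar enough -- a stand-in for the paper's word-similarity step.
    """
    candidates = get_close_matches(recognized.upper(), POI_COORDS,
                                   n=1, cutoff=cutoff)
    if not candidates:
        return None
    label = candidates[0]
    return label, POI_COORDS[label]

# OCR often confuses visually similar characters (e.g. S/5, O/0);
# fuzzy matching recovers the intended label despite such errors.
print(match_poi("STARBUCK5"))
```

Once a label is matched, its stored coordinates can be used directly, avoiding an online Places API lookup when connectivity is poor.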
dc.identifier.citation: Alimovski, E., Erdemir, G., & Kuzucuoglu, A. E. (2025). Vision-Based Localization in Urban Areas for Mobile Robots. Sensors (Basel, Switzerland), 25(4), 1178. https://doi.org/10.3390/s25041178
dc.identifier.doi: 10.3390/s25041178
dc.identifier.endpage: 39
dc.identifier.issn: 1424-8220
dc.identifier.issue: 4
dc.identifier.orcid: 0000-0003-0909-2047
dc.identifier.pmid: 40006410
dc.identifier.startpage: 1
dc.identifier.uri: https://doi.org/10.3390/s25041178
dc.identifier.uri: https://hdl.handle.net/20.500.12436/9494
dc.identifier.volume: 25
dc.identifier.wos: 001431769700001
dc.identifier.wosquality: Q2
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: PubMed
dc.language.iso: en
dc.publisher: MDPI
dc.relation.ispartof: Sensors
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Visual localization
dc.subject: Mobile robot
dc.subject: Text recognition
dc.subject: Map
dc.title: Vision-Based Localization in Urban Areas for Mobile Robots
dc.type: Article
dspace.entity.type: Publication
relation.isAuthorOfPublication: cc7c1de3-227c-4ac2-a706-637b14ee45fa
relation.isAuthorOfPublication.latestForDiscovery: cc7c1de3-227c-4ac2-a706-637b14ee45fa

Files

Original bundle

Name: sensors-25-01178.pdf
Size: 22.82 MB
Format: Adobe Portable Document Format
Description: Article file

License bundle

Name: license.txt
Size: 1.17 KB
Format: Item-specific license agreed upon to submission