
Vol. 27, No. 4


Airiti Library

Pages:

193-211

Chinese Title

以深度學習萃取高解析度無人機正射影像之農地重劃區現況資訊

Title

Extracting Terrain Detail Information from High Resolution UAV Orthoimages of Farm Land Readjustment Area Using Deep Learning

Chinese Author

汪知馨、邱式鴻

Author

Chih-Hsin Wang, Shih-Hong Chio

Chinese Abstract (translated)

Detail surveys are currently performed mostly by ground surveying, which consumes considerable labor and time and usually yields only local results, whereas UAVs can produce high-resolution orthoimages quickly and at low cost. This study therefore uses ResU-net to help extract region-wide current-status information of a farm land readjustment area from high-resolution UAV orthoimages, and analyzes the feasibility of applying the post-processed extraction results to cadastral survey work. A DSM was added to examine the benefit of elevation to the model. The results show that when the label data covers areas of elevation change, adding elevation information slightly improves model accuracy; the F Scores of the Yilan and Taichung test data reached 0.73 and 0.86, respectively. In the planar position accuracy check, about 80% of the data met the relevant regulations, indicating that applying deep learning to extract current-status information of farm land readjustment areas is feasible.

Abstract

Currently, detail data are mostly surveyed with theodolites and satellite positioning instruments; however, this is time-consuming and labor-intensive, and the surveying results are usually local. Recently, UAVs have been increasingly used as low-cost, efficient systems for acquiring high-resolution data. This study therefore uses ResU-net to assist in extracting global terrain detail information from high-resolution UAV orthoimages of farm land readjustment areas, and evaluates the feasibility of using the post-processed results in the detail survey. In addition to the high-resolution orthoimages, a digital surface model (DSM) generated by dense matching was also used. The results show that when the label data covered elevation changes, adding the DSM data slightly improved detection accuracy: the F Score was 0.73 for the Yilan testing data and 0.86 for the Taichung testing data. In terms of planar position differences, about 80% of the data met the accuracy requirement, demonstrating the feasibility of using deep learning to assist in extracting global terrain detail information for farm land readjustment areas.
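The F Scores cited for the Yilan and Taichung test data are the harmonic mean of pixel-wise precision and recall. A minimal sketch of how such a score can be computed from a predicted and a reference binary mask (the tiny arrays are illustrative, not the paper's data):

```python
import numpy as np

def f_score(pred, truth):
    """Pixel-wise F score: harmonic mean of precision and recall."""
    tp = np.logical_and(pred, truth).sum()   # correctly detected pixels
    fp = np.logical_and(pred, ~truth).sum()  # false alarms
    fn = np.logical_and(~pred, truth).sum()  # missed pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative 2x3 masks, not data from the study.
pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(f_score(pred, truth), 2))
```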

Chinese Keywords

地籍測量、現況測量、深度學習、影像分割

Keywords

Cadastral Survey, Detail Survey, Deep Learning, Image Segmentation

Attachment

Airiti Library

https://www.airitilibrary.com/Publication/Index/10218661-202212-202212150002-202212150002-193-211/

Remarks

N / A

Pages:

213-236

Chinese Title

Goldstein 濾波參數探討—以SNAP為例

Title

Discussion on Parameters of Goldstein Filtering Function of the SNAP Software

Chinese Author

吳彥誼、任玄

Author

Yen-Yi Wu, Hsuan Ren

Chinese Abstract (translated)

Phase filtering is a necessary step in interferometric synthetic aperture radar processing. This study performs Goldstein filtering with the SNAP software and explores two research questions: (1) how the parameters of the Goldstein filter affect the interferogram; (2) the cause of the cross-like patterns, with a preliminary analysis of their effect on accuracy. The results show that: (1) the adaptive filter exponent and the FFT size have the most significant influence on filtering strength; (2) increasing the filtering strength reduces noise but also increases the cross-like patterns. When there are too many cross-like patterns in the phase image, both the image interpretation of the interferogram and the final accuracy are severely degraded, so users should adjust the filtering strength according to the conditions of the study area to strike a balance between noise reduction and cross-like patterns.

Abstract

Phase filtering is a necessary step in the Interferometric Synthetic Aperture Radar (InSAR) procedure. In this study, the Goldstein Filtering function of the SNAP software was used to explore two research questions: first, the influence of the Goldstein filter's adjustable parameters on the interferogram; second, the cause of the "cross-like patterns" and an initial analysis of their impact on accuracy. The results show, first, that the adaptive filter exponent and the FFT size have the most significant effect on the filtering strength. Second, although increasing the filtering strength reduces speckle noise, it also increases the cross-like patterns. Too many cross-like patterns in the interferogram undermine both image interpretation and the accuracy of the final InSAR products. Users should therefore adjust the filtering parameters according to the conditions of the experimental area, so as to achieve a balance between noise reduction and cross-like patterns.
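The adaptive filter exponent and FFT size discussed above are the knobs of the Goldstein-Werner scheme, which filters each interferogram patch in the frequency domain by weighting the patch's FFT spectrum with its smoothed magnitude raised to the exponent, so a larger exponent means stronger filtering. A single-patch sketch under that description (the box smoothing, patch size, and parameter values are illustrative, not SNAP's implementation):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def goldstein_patch(ifg_patch, alpha=1.0, smooth=3):
    """Goldstein-style adaptive phase filtering of one complex
    interferogram patch: weight the FFT spectrum by its smoothed
    magnitude raised to the exponent alpha."""
    spec = fft2(ifg_patch)
    mag = np.abs(spec)
    # Simple periodic box smoothing of the spectrum magnitude (illustrative).
    pad = smooth // 2
    padded = np.pad(mag, pad, mode="wrap")
    smoothed = np.zeros_like(mag)
    for i in range(mag.shape[0]):
        for j in range(mag.shape[1]):
            smoothed[i, j] = padded[i:i + smooth, j:j + smooth].mean()
    # alpha = 0 leaves the patch unfiltered; larger alpha filters harder.
    return ifft2(spec * smoothed ** alpha)

# Random unit-magnitude phase noise as a stand-in for a real patch.
rng = np.random.default_rng(0)
patch = np.exp(1j * rng.uniform(-np.pi, np.pi, (32, 32)))
filtered = goldstein_patch(patch, alpha=1.0)
print(filtered.shape)
```

In practice the filter is applied to overlapping patches of the FFT size and the results are blended back together, which is where the cross-like patch-boundary artifacts can arise.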

Chinese Keywords

合成孔徑雷達干涉技術、相位濾波、SNAP軟體、十字圖樣

Keywords

Interferometric Synthetic Aperture Radar Technique, Phase Filtering, SNAP Software, Cross-like Pattern

Attachment

Airiti Library

https://www.airitilibrary.com/Publication/Index/10218661-202212-202212150002-202212150002-213-236/

Remarks

N / A

Pages:

237-246

Chinese Title

運用AI技術進行衛星影像的水域辨識

Title

Extraction of Water Bodies from Satellite Imagery Using AI Techniques

Chinese Author

林威昕、張揚暉

Author

Daniel-Wilson Lin, Colo Chang

Chinese Abstract (translated)

Many analysis applications of satellite imagery (e.g., analyzing the urban heat-island effect, dust-raising areas, and river-channel changes, monitoring and protecting water resources, and assessing flood hazards) rely on correctly delineating water bodies before meaningful results can be obtained. The water-detection method most commonly applied to satellite imagery is the Normalized Difference Water Index (NDWI), but we observed that it has problems when applied to FORMOSAT-5 imagery (e.g., roads and buildings are easily misclassified as water). Based on this observation, we made a first attempt to apply deep learning to water-body segmentation of FORMOSAT-5 imagery. The convolutional neural network architecture we adopted is called U-Net, and the experimental results show that, compared with NDWI, U-Net's water-segmentation accuracy is significantly higher.

Abstract

Water body segmentation from remote sensing imagery is essential for monitoring and protecting water resources, as well as for assessing the risks of disasters such as flooding. However, traditional index-based approaches to water body identification have significant limitations. In this study, we trained a convolutional neural network (CNN) called U-Net on FORMOSAT-5 imagery of the greater Tainan City area to identify water bodies. The experimental results were compared with the Normalized Difference Water Index (NDWI) and showed that the U-Net model achieved significantly better water body detection performance.
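The NDWI baseline referred to above is a per-pixel band ratio, NDWI = (Green - NIR) / (Green + NIR), with values above a threshold classed as water. A minimal sketch (the band values and the zero threshold are illustrative, not tuned for FORMOSAT-5):

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR).
    Pixels whose NDWI exceeds the threshold are flagged as water."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-12)  # guard against 0/0
    return ndwi > threshold

# Illustrative 2x2 reflectance values: left column water-like
# (high green, low NIR), right column vegetation/built-up-like.
green = np.array([[0.30, 0.10], [0.25, 0.05]])
nir   = np.array([[0.05, 0.40], [0.04, 0.50]])
print(ndwi_water_mask(green, nir))
```

Water strongly absorbs near-infrared light while vegetation reflects it, which is why the index works at all and also why bright man-made surfaces can fool it.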

Chinese Keywords

福爾摩沙衛星五號、NDWI、U-Net、水域分割、卷積神經網路

Keywords

FORMOSAT-5, NDWI, U-Net, Water Segmentation, Convolutional Neural Network (CNN)

Attachment

Airiti Library

https://www.airitilibrary.com/Publication/Index/10218661-202212-202212150002-202212150002-237-246/

Remarks

N / A

Pages:

247-257

Chinese Title

應用深度學習技術輔助橋梁裂縫辨識

Title

Application of Deep Learning Technology to Assist Bridge Crack Identification

Chinese Author

王姿樺、高書屏、林志憲

Author

Zi-Hua Wang, Szu-Pyng Kao, Jhih-Sian Lin

Chinese Abstract (translated)

Taiwan has about 29,000 bridges, and bridges more than 30 years old account for 31% of the total. Traditional bridge inspection tends to produce overly subjective judgments, is time-consuming and costly, and exposes inspectors to danger. This study therefore replaces traditional inspection with deep learning: using a public crack dataset and self-photographed bridge-crack data, a Faster-RCNN model with a ResNet 50 convolutional neural network backbone was chosen as the crack-identification method. The results confirm that, compared with traditional bridge inspection, efficiency, safety, and flexibility all improve. The average precision of crack identification reached 80.7% and the recall 77%, successfully detecting damaged bridge areas; 87.76% of the test images predicted crack locations completely, and only 12.24% of the crack images with interfering features were misidentified.

Abstract

There are about 29,000 bridges in Taiwan, and 31% of them are over 30 years old. Traditional bridge inspection methods are subjective, time-consuming, and costly, and they expose inspectors to danger. Deep learning was therefore used to replace the traditional inspection method: a Faster-RCNN model with ResNet 50 as the backbone of the convolutional neural network was chosen for crack identification, trained on a public crack dataset and self-photographed bridge crack data. The results confirm that detection efficiency, safety, and flexibility all improved compared with the traditional bridge inspection method. The average precision of crack identification reached 80.7% and the recall 77%, successfully detecting damaged bridge areas. In addition, 87.76% of the test images predicted the crack locations completely, and only 12.24% of the crack images with interfering features were misidentified.
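Detection metrics such as the average precision and recall quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU sketch (the box coordinates are illustrative, in (x1, y1, x2, y2) order):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle, clamped to zero width/height if the boxes are disjoint.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted box is commonly counted as a true positive when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```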

Chinese Keywords

橋梁檢測、深度學習、Faster-RCNN、卷積神經網路、裂縫辨識

Keywords

Bridge Inspection, Deep Learning, Faster-RCNN, Convolutional Neural Network, Crack Identification

Attachment

Airiti Library

https://www.airitilibrary.com/Publication/Index/10218661-202212-202212150002-202212150002-247-257/

Remarks

N / A
