Volume 30, Issue 2

華藝線上圖書館 (Airiti Library)

Pages:

73-83

Title (Chinese)

海岸線自動偵測與AI模型

Title

Coastline Automatic Detection and AI Model

Author (Chinese)

陳聿新、簡浩丞、朱宏杰

Author

Yu-Xin Chen, Hao-Cheng Chien, Hone-Jay Chu

Abstract (Chinese)

在遙測領域中,海岸線偵測是一重要的研究課題,尤其在面對氣候變遷、海平面上升和環境變遷等問題時,快速海岸線偵測有助於環境監測與管理。本研究結合水體指標和深度學習於外傘頂洲海岸線偵測,以指標如多種NDWI (Normalized Difference Water Index)與AWEI (Automated Water Extraction Index)相關公式來進行海岸線偵測研究,在GEE平台套用相關五個水體指標公式來區分出外傘頂洲區域的水體與陸地區域,產製二值化地圖,接著應用Otsu法自動找出閾值,以此閾值作為初始值後續再進行微調,本研究將所有結果來進行比對與觀察,確認整個海岸線偵測的自動化流程,最後,以深度學習模型產出海岸線偵測結果圖,使用深度學習U-Net模型,透過訓練後,U-Net模型可對衛星影像直接進行海岸線自動偵測,模型的準確度(Accuracy)達到99.6%、F1 Score為0.72,能有效的偵測出外傘頂洲主體的海岸線。

Abstract

Coastline detection is an important research topic in remote sensing, particularly in the context of climate change, sea-level rise, and environmental change, where rapid coastline detection plays a crucial role in environmental monitoring and management. This study combines water indices and deep learning for coastline detection of the Waisanding Sandbar, employing several variants of the Normalized Difference Water Index (NDWI) and the Automated Water Extraction Index (AWEI). Five water indices are applied within the Google Earth Engine (GEE) platform to distinguish water from land in the Waisanding Sandbar region and generate binary maps. Otsu's method is then used to automatically determine a threshold, which serves as an initial value for subsequent fine-tuning. All results are compared and analyzed to verify the automated coastline detection workflow. Finally, a deep learning U-Net model is trained to detect coastlines directly from satellite imagery. The model achieves an accuracy of 99.6% and an F1 score of 0.72, effectively detecting the coastline of the main body of the Waisanding Sandbar.
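The workflow described above (compute a water index, then apply Otsu's method to obtain a binary water/land map) can be sketched as follows. This is not the study's code: it assumes McFeeters' NDWI formulation and a generic histogram-based Otsu implementation on synthetic reflectance values, whereas the study applies five index variants to satellite imagery on the GEE platform.

```python
import numpy as np

def ndwi(green, nir):
    # McFeeters' NDWI: (Green - NIR) / (Green + NIR); water yields positive values
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-12)

def otsu_threshold(values, bins=256):
    # Otsu's method: pick the threshold maximizing between-class variance
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    w0 = np.cumsum(p)                # probability of the "below threshold" class
    w1 = 1 - w0                      # probability of the "above threshold" class
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

# Synthetic example: water pixels (high NDWI) vs. land pixels (low NDWI)
rng = np.random.default_rng(0)
green = np.concatenate([rng.normal(0.30, 0.02, 500),   # water: brighter in green
                        rng.normal(0.10, 0.02, 500)])  # land
nir = np.concatenate([rng.normal(0.05, 0.02, 500),     # water: dark in NIR
                      rng.normal(0.35, 0.02, 500)])    # land: bright in NIR
index = ndwi(green, nir)
t = otsu_threshold(index)          # initial threshold (fine-tuned in the study)
water_mask = index > t             # binary water/land map
```

In the study this threshold is only an initial value that is subsequently fine-tuned before the shoreline is extracted.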

Keywords (Chinese)

海岸線偵測、NDWI、AWEI、U-Net

Keywords

Coastline Detection, NDWI, AWEI, U-Net

Attachment Filename

N / A

Remarks

N / A

Pages:

85-107

Title (Chinese)

應用魚眼鏡頭物像對應模式於雨量計顯露度測量

Title

Applying Fisheye Object-Image Correspondence in Rain Gauge Exposure Measurement

Author (Chinese)

彭綿聿、趙鍵哲

Author

Mien-Yu Peng, Jen-Jer Jaw

Abstract (Chinese)

雨量計的運作受環境遮蔽程度影響,過度遮蔽可能導致降雨量測量偏差,為確保測量成果的代表性,雨量計應架設於開闊場域,並持續監測其遮蔽情形,針對此需求,本研究提出基於影像的演算法,以估計雨量計之環境顯露度。透過魚眼鏡頭拍攝雨量計上方的對空影像,套用魚眼鏡頭物像對應模式,並依據顯露度幾何條件定義參數,擬定傾斜誤差改正方法。實驗結果顯示,本研究所研擬之方法應用於顯露度測量,在方便野外的施作程序及考量各類誤差的修正模式下,可提供仰角誤差小於1度之測量品質,滿足顯露度測量之精度需求。

Abstract

The operation of rain gauges is influenced by environmental shielding; excessive obstruction may bias precipitation measurements. To ensure the representativeness of measurement results, rain gauges should be installed in open areas and their obstruction conditions continuously monitored. To address this need, this study proposes an image-based algorithm to estimate rain gauge exposure. Sky images above the rain gauge are captured with a fisheye lens, the fisheye object-image correspondence model is applied, parameters are defined according to the geometric conditions of exposure, and a tilt error correction method is devised. Experimental results demonstrate that, with procedures convenient for field operation and correction models accounting for the various error sources, the proposed method achieves an elevation-angle accuracy within 1 degree, meeting the accuracy requirements of exposure measurement.
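The abstract does not specify which fisheye model the paper adopts. Purely as an illustration, a minimal sketch under the common equidistant-projection assumption (r = f·θ, with θ the zenith angle) shows the kind of object-image correspondence involved: a point's radial distance in the image maps directly to its elevation angle above the horizon. The focal length value below is hypothetical.

```python
import math

def elevation_from_radius(r_px, f_px):
    # Equidistant fisheye model (assumed): r = f * theta,
    # where theta is the zenith angle of the imaged point.
    theta = r_px / f_px                        # zenith angle in radians
    return math.degrees(math.pi / 2 - theta)   # elevation above the horizon

# Hypothetical calibration: focal length of 600 px.
f = 600.0
print(elevation_from_radius(0.0, f))           # image center = zenith: 90 deg
print(elevation_from_radius(f * math.pi / 4, f))  # halfway to the horizon: 45 deg
```

Measuring the elevation angle of the skyline around the gauge in this way, after the paper's tilt error correction, is what yields the exposure estimate.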

Keywords (Chinese)

魚眼鏡頭、物像對應、雨量計、顯露度測量、傾斜誤差改正

Keywords

Fisheye Lens, Object-Image Correspondence, Rain Gauge, Exposure Measurement, Tilt Error Correction

Attachment Filename

N / A

Remarks

N / A

Pages:

109-121

Title (Chinese)

室內三維製圖導入低成本點雲輔助策略與成本評估

Title

Assessments of Indoor 3D Mapping with Low-cost Point Clouds

Author (Chinese)

賴哲儇、孫翊晴、許皓香

Author

Jhe-Syuan Lai, Yi-Cing Sun, Hao-Hsiang Hsu

Abstract (Chinese)

研究使用低成本的RGB-D感測器,獲取室內點雲資料輔助人工三維製圖,並比較純人工測量與建模的做法,探討「測量作業時間」和「建模作業時間」的差異。在「測量作業時間」中,純人工測量花費的時間較RGB-D做法少了1小時25分鐘,原因在於後者因儀器限制必須執行多站掃描和點雲套合任務,使得測量時間倍增。在「建模作業時間」中,點雲輔助策略所花費的時間相較純人工建模少了4小時。綜合上述,本研究策略比純人工測量與建模所花時間少;若提升儀器規格或針對複雜且不規則的目標,「測量作業時間」的改善預期更為顯著。

Abstract

This study applies a low-cost RGB-D sensor to capture point clouds of an indoor environment in support of manual 3D reconstruction. With the goal of reducing cost, the proposed approach is compared with fully manual measurement and modeling in terms of measurement time and modeling time. For measuring the indoor environment, the manual approach is faster than RGB-D scanning by 1 hour and 25 minutes: limited by the instrument, the RGB-D sensor requires multi-station scanning and point cloud registration, which multiplies the measurement time. For model reconstruction, the point-cloud-aided strategy saves 4 hours compared with purely manual modeling. Overall, the proposed strategy takes less total time than purely manual measurement and modeling, demonstrating the feasibility of the low-cost approach. Greater improvement in measurement time is expected with higher-grade sensors or when modeling complex and irregular targets.
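The multi-station point cloud registration mentioned above is, at its core, a rigid alignment problem. A minimal sketch of that alignment step, assuming known point-to-point correspondences (e.g., from matched targets) and using the standard Kabsch algorithm — the study's actual registration pipeline is not specified in the abstract:

```python
import numpy as np

def rigid_align(src, dst):
    # Kabsch algorithm: least-squares rotation R and translation t
    # such that dst ≈ src @ R.T + t, given point-to-point correspondences.
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate/translate a small cloud, then recover the transform
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (100, 3))             # "station 1" cloud
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true                  # "station 2" cloud
R, t = rigid_align(src, dst)
rmse = np.sqrt(((src @ R.T + t - dst) ** 2).sum(axis=1).mean())
```

In practice each additional station adds one such registration (often refined with ICP on the full clouds), which is why the RGB-D workflow's measurement time grows with station count.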

Keywords (Chinese)

三維製圖、室內環境、低成本方案、點雲、RGB-D感測器

Keywords

3D Mapping, Indoor Environment, Low-cost Solution, Point Cloud, RGB-D Sensor

Attachment Filename

N / A

Remarks

N / A

Pages:

123-137

Title (Chinese)

基於人工智慧分類法之無人機影像在精準農業中的應用:以台灣宜蘭縣部分鄉鎮的水稻田坵塊分類為例

Title

Application of UAV Imagery in Precision Agriculture Based on Artificial Intelligence Classification Methods: A Case Study of Paddy Fields in Selected Townships of Yilan County, Taiwan

Author (Chinese)

沈育璋、雷祖強、陳勝義

Author

Yu-Zhang Shen, Tsu-Chiang Lei, Sheng-Yi Chen

Abstract (Chinese)

隨著無人機與影像分析技術進步,精準農業逐漸成為提升農業效率的關鍵,本研究使用高解析度UAV影像並比較三種機器學習與深度學習模型於宜蘭縣水稻田坵塊偵測問題。機器學習模型如倒傳神經網路、羅吉斯迴歸與C5.0決策樹利用原始波段與紋理特徵,最佳精度為倒傳神經網路(總體精度95.62%、Kappa值0.912);而深度學習模型為Alexnet、VGG16、VGG19同樣利用原始波段與影像增揚特徵後,最佳精度為VGG16 (總體精度93.83%、Kappa值0.894)。雖然機器學習部分工具精準度略高,但其需依賴繁瑣特徵工程才能達成目的,反之深度學習只需要原始波段加入簡單影像增揚特徵後就能產生一定程度的判釋結果,這顯示圖像式(2D)CNN在地物判釋上的優越性,其在農業環境調查中具有高度的應用潛力。

Abstract

With the advancement of UAV and image analysis technologies, precision agriculture has become a key strategy for enhancing productivity. This study uses high-resolution UAV imagery to evaluate three machine learning and three deep learning models for detecting rice paddy plots in Yilan County, Taiwan. Among the machine learning models (a back-propagation neural network (BPNN), logistic regression, and the C5.0 decision tree), which use the original bands and texture features, the BPNN performed best, achieving 95.62% overall accuracy and a Kappa coefficient of 0.912. In the deep learning category (AlexNet, VGG16, and VGG19), VGG16 yielded the highest performance, with 93.83% overall accuracy and a Kappa of 0.894, using the original bands and basic enhancement features. Although certain machine learning models demonstrated slightly higher accuracy, they required complex and time-consuming feature engineering. In contrast, the deep learning models produced competitive interpretation results using only the original bands with simple enhancement techniques. These findings demonstrate the superior capability of image-based (2D) convolutional neural networks (CNNs) in land cover interpretation and highlight their strong potential for agricultural environmental surveys.
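The two accuracy measures reported above — overall accuracy and Cohen's Kappa coefficient — are both derived from a classification confusion matrix; the matrix below is hypothetical, not the study's data.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    # cm: confusion matrix, rows = reference classes, cols = predicted classes
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Hypothetical 2-class matrix (paddy vs. non-paddy) for illustration
cm = [[90, 10],
      [ 5, 95]]
acc, kappa = overall_accuracy_and_kappa(cm)
# acc = 0.925, kappa = 0.85
```

Kappa discounts the agreement expected by chance, which is why it is commonly reported alongside overall accuracy in land cover classification studies.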

Keywords (Chinese)

精準農業、水稻田、機器學習、深度學習、UAV影像

Keywords

Precision Agriculture, Paddy Fields, Machine Learning, Deep Learning, UAV Imagery

Attachment Filename

N / A

Remarks

N / A
