31卷/1期

華藝線上圖書館

Pages:

1-19

論文名稱

應用類神經網路整合光學-SAR紋理分析於緊急崩塌測繪

Title

Integrated Optical–SAR Texture Analysis for Neural Network–Based Emergency Landslide Surveying and Mapping

作者

陳以耕、姜壽浩

Author

Yi-Keng Chen, Shou-Hao Chiang

中文摘要

光學遙測影像可提供崩塌目錄製作所需之關鍵光譜資訊,但易受雲霧與天候影響而限制其應用。相較之下,合成孔徑雷達具全天候觀測能力且對地表後向散射變化敏感,可作為崩塌擾動之重要補充。本研究整合光學與雷達資料,發展快速且穩定之崩塌地辨識方法。方法採物件導向影像分析進行地物分割,並由多期影像推導標準化植被指數差值與標準化後向散射指數,計算六項灰階共生矩陣紋理特徵以表徵地物變化。本研究比較光學、雷達、受雲影響光學及光學–雷達融合四種情境。結果顯示,融合模式可提升崩塌地判釋之空間一致性與完整性;於高雲覆條件下,雷達亦能有效辨識大型崩塌地。整體而言,所提方法具良好作業效能,可應用於事件導向之快速製圖與災害評估。

Abstract

Optical remote sensing imagery provides critical spectral information for mapping landslide inventories but is frequently hindered by cloud cover and adverse weather. In contrast, Synthetic Aperture Radar (SAR) penetrates clouds and is highly sensitive to surface backscatter changes, offering complementary insights for detecting landslide disturbances. This study presents an integrated approach combining optical and SAR data to enable rapid and reliable landslide detection under challenging conditions. The proposed framework applies object-based image analysis (OBIA) to segment terrain into meaningful units and derives NDVIdiff and NDSI indices from pre- and post-event imagery, together with six GLCM-based texture features, to characterize surface disturbances. Four classification scenarios, including optical-only, SAR-only, cloud-obstructed optical, and fusion-based models, were systematically compared. Results demonstrate that the fusion approach consistently yields more spatially coherent and complete landslide inventories, while SAR-based mapping alone successfully delineates most large-scale landslides under heavy cloud cover. These findings confirm the operational effectiveness of the proposed optical–SAR framework for rapid, event-driven landslide mapping and disaster risk assessment.
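The change indices and GLCM texture features named in this abstract can be illustrated with a minimal numpy sketch (not the authors' implementation; the toy band values, the 8-level quantization, the horizontal one-pixel offset, and the three example statistics are illustrative assumptions):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-9)

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for a horizontal offset of one pixel.

    `img` holds reflectance-like values in [0, 1]; it is quantized to
    `levels` gray levels before co-occurrence pairs are counted.
    """
    q = np.round(img * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def glcm_features(p):
    """Common GLCM texture statistics (three of the usual six shown)."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + (i - j) ** 2)).sum()),
    }

# Change index between pre- and post-event imagery (toy 2x2 bands)
nir_pre = np.array([[0.8, 0.8], [0.7, 0.8]])
red_pre = np.array([[0.2, 0.2], [0.1, 0.2]])
nir_post = np.array([[0.3, 0.8], [0.7, 0.2]])
red_post = np.array([[0.3, 0.2], [0.1, 0.2]])
ndvi_diff = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
```

Per-object statistics of features like these would then feed the neural-network classifier the study describes.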

關鍵字

崩塌測繪、合成孔徑雷達、影像紋理、影像融合、人工類神經網路

Keywords

Landslide Detection, Synthetic Aperture Radar, Image Texture, Image Fusion, Artificial Neural Network

附件檔名

華藝線上圖書館

https://www.airitilibrary.com/Publication/Index?DocID=10218661-N202604150001-00001

備註說明

N / A

Pages:

21-35

論文名稱

陸海交界處大地起伏建模精度探討:以臺灣東部為例

Title

Accuracy Assessment of Geoid Modeling at the Land-Sea Interface: A Case Study of Eastern Taiwan

作者

楊致逸、蕭宇伸

Author

Zhi-Yi Yang, Yu-Shen Hsiao

中文摘要

本研究以臺灣東部花蓮清水斷崖沿岸為研究區,探討數值地形模型(DEM)、數值海底地形模型(DBM)空間解析度與海岸線定位精度對陸海交界處重力法大地起伏建模之影響。研究採用 GSHHG 與人工數化海岸線,結合不同解析度 DEM-DBM,於去除–回復架構下,以剩餘地形模型及最小二乘配置法建立大地起伏模型,並以 GNSS/水準觀測資料進行精度檢核。結果顯示,GSHHG 海岸線於斷崖海岸與實際海岸線存在明顯差異,易造成海岸附近地形遮罩與改正誤差;人工數化海岸線可提升陸海邊界一致性與模型穩定性。解析度比較顯示,3″×3″ DEM-DBM 之整體精度優於 9″×9″與 1″×1″。研究結果指出,精確海岸線與適當地形解析度為提升陸海交界區大地起伏建模品質之關鍵。

Abstract

This study investigates the effects of Digital Elevation Model (DEM) and Digital Bathymetric Model (DBM) resolutions, as well as coastline positioning accuracy, on gravimetric geoid modeling across the land–sea interface along the Qingshui Cliff coast in eastern Taiwan. Two coastline datasets, namely the Global Self-consistent, Hierarchical, High-resolution Geography (GSHHG) database and manually digitized coastlines, were combined with DEM-DBM datasets of different spatial resolutions. Under the remove–compute–restore framework, gravimetric geoid models were constructed using the Residual Terrain Model and Least-Squares Collocation methods, and their accuracy was evaluated using GNSS/leveling-derived geoid heights. The results show that the GSHHG coastline deviates noticeably from the actual shoreline in cliff-type coastal areas, leading to terrain masking and correction errors near the coast. In contrast, manually digitized coastlines improve boundary consistency and model stability. Resolution comparisons further indicate that the 3″×3″ DEM-DBM provides better overall accuracy than the 9″×9″ and 1″×1″ models. These findings suggest that accurate coastline definition and appropriate terrain resolution are essential for improving geoid modeling in complex land–sea transition zones.
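The Least-Squares Collocation step mentioned here follows the standard predictor form; a minimal sketch (symbol names and the toy covariances are illustrative, not the study's data):

```python
import numpy as np

def lsc_predict(C_st, C_tt, D, l):
    """Generic least-squares collocation predictor: s_hat = C_st (C_tt + D)^-1 l.

    C_st : cross-covariance between prediction and observation points
    C_tt : auto-covariance of the observed residual signal
    D    : observation noise covariance
    l    : residual observations (e.g. residual gravity anomalies)
    """
    return C_st @ np.linalg.solve(C_tt + D, l)
```

In a remove–compute–restore scheme, the global-model and residual-terrain effects are first removed from the observations, the remaining residual signal is predicted by collocation as above, and the removed parts are restored to obtain the final geoid heights.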

關鍵字

數值高程模型、數值海底地形模型、大地起伏建模、陸海交界區

Keywords

DEM, DBM, Geoid Modeling, Land–Sea Transition Zone

附件檔名

華藝線上圖書館

https://www.airitilibrary.com/Publication/Index?DocID=10218661-N202604150001-00002

備註說明

N / A

Pages:

37-45

論文名稱

使用深度學習進行快速無參考標的無人機影像品質評估

Title

Real-time and Reference-free UAV Image Quality Assessment using Deep Learning

作者

林亞立、蘇冠秦、鄒來翰、林昭宏、饒見有、賴威伸、胡智超

Author

Ya-Li Lin, Guan-Chin Su, Lai-Han Zou, Chao-Hung Lin, Jiann-Yeou Rau, Wei-Shen Lai, Chih-Chao Hu

中文摘要

隨著無人飛行載具(UAV)應用於基礎設施監測,影像品質穩定性成為影響深度學習與攝影測量精度的關鍵。然而 UAV 影像常受環境干擾影響,現行品質篩選多仰賴人工檢視,且傳統結構相似度指標(SSIM)需參考影像並受限於空間對齊,難以實務應用。為此,本研究提出一套基於 Swin-Unet 的快速無參考影像品質評估方法。首先設計改良型 CLIP-SSIM 結合 Swin-Transformer,建立高精度影像品質圖(RMSE = 0.0193),再以該品質圖作為標註資料訓練 Swin-Unet 模型,使單張影像推論時間降至 0.3 秒,並維持良好準確度(RMSE = 0.04)。結果顯示,本方法可有效取代人工檢視流程,滿足高頻 UAV 影像應用需求。

Abstract

With the increasing use of unmanned aerial vehicles (UAVs) in infrastructure monitoring and environmental inspection, stable image quality has become critical for deep learning and photogrammetry applications. However, UAV images are often degraded by environmental disturbances, while existing quality filtering still relies on manual inspection, making it unsuitable for high-frequency or real-time deployment. This study proposes a real-time, reference-free image quality assessment (IQA) framework based on a Swin-Unet architecture to improve screening efficiency and ensure data quality stability, while simultaneously generating image quality maps (IQMs) for downstream applications. To overcome limitations of traditional SSIM-based methods, including the requirement for reference images and sensitivity to pixel misalignment, an improved metric, termed CLIP-SSIM (CSSIM), is introduced to construct an image scoring model. A probability-weighted Swin-Transformer is first employed to generate high-accuracy IQMs (RMSE = 0.0193); however, its pixel-wise inference is computationally expensive (532 seconds per image). Therefore, the generated IQMs are used as supervisory labels to train a Swin-Unet model, enabling real-time inference (0.3 s per image) with acceptable accuracy (RMSE = 0.04). The proposed approach provides an efficient, accurate, and scalable solution for UAV image screening, effectively replacing manual inspection in high-frequency UAV applications.
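The SSIM baseline this abstract contrasts against can be sketched in its standard single-window form (a full implementation averages this over local sliding windows; the constants follow the usual Wang et al. defaults, and the single global window is a simplifying assumption):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Structural similarity between two images, one global window."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Note that this metric needs a reference image and exact pixel alignment, the two constraints the reference-free CSSIM/Swin-Unet pipeline described above is designed to remove.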

關鍵字

影像品質評估、深度學習、無人機影像、影像結構相似度指標

Keywords

Image Quality Assessment, Deep Learning, UAV Imagery, Structural Similarity

附件檔名

華藝線上圖書館

https://www.airitilibrary.com/Publication/Index?DocID=10218661-N202604150001-00003

備註說明

N / A

Pages:

47-62

論文名稱

應用深度資訊於混凝土橋梁結構物影像拼接之研究

Title

Research on Depth-Enhanced Image Stitching for Concrete Bridge Structures

作者

林于婷、高書屏、王豐良、林志憲

Author

Yu-Ting Lin, Szu-Pyng Kao, Feng-Liang Wang, Jhih-Sian Lin

中文摘要

影像拼接可擴展視野、消除盲區,但場景深度差異容易導致視差與重影。為此,本研究整合單影像深度估計與語義分割模型,建立橋梁立面影像拼接流程,重建完整結構外觀圖作為損壞分析和管理底圖。透過遷移學習建置橋側影像數據集,沿用預訓練參數訓練 RGB-D 語義分割模型,mIoU 達 86.44%、mAcc 91.24%、召回率 92.11%、F1-score 91.56%,展現穩定性與泛化能力,並藉其成果間接驗證深度估計模型準確性。針對影像傾斜導致的幾何錯位,利用深度圖重建點雲校正。拼接精度比較顯示,結合分割模型與校正影像之平均 SSIM 為 0.6807,高於傳統方法之 0.5081,證實本研究方法在精度與視覺一致性上的優勢。

Abstract

Image stitching can expand the field of view and eliminate blind spots, but differences in scene depth often lead to parallax and ghosting. To address this, this study integrates single-image depth estimation and semantic segmentation models to establish a facade image stitching workflow for bridges, reconstructing a complete structural appearance map for damage analysis and management. Through transfer learning, a bridge-side image dataset was constructed, and a pretrained RGB-D semantic segmentation model was fine-tuned. The model achieved an mIoU of 86.44%, mAcc of 91.24%, recall of 92.11%, and F1-score of 91.56%, demonstrating stability and generalization capability, while indirectly validating the accuracy of the depth estimation model. To correct geometric misalignment caused by image tilt, depth maps were used to reconstruct point clouds for correction. A stitching accuracy comparison shows that the average SSIM of images corrected with the segmentation model (0.6807) was higher than that of the traditional method (0.5081), confirming the advantages of the proposed approach in terms of accuracy and visual consistency.
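The depth-map-to-point-cloud step mentioned here is ordinarily a pinhole back-projection; a minimal sketch (the intrinsic matrix values and constant-depth map are illustrative assumptions, not the study's calibration):

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map to a 3D point cloud with a pinhole model.

    Each pixel (u, v) with depth d maps to X = d * K^-1 [u, v, 1]^T.
    Returns an (N, 3) array of 3D points in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                                      # 3 x N
    return (rays * depth.ravel()).T                                    # N x 3

# Illustrative intrinsics: focal length 500 px, principal point (2, 1.5)
K = np.array([[500.0,   0.0, 2.0],
              [  0.0, 500.0, 1.5],
              [  0.0,   0.0, 1.0]])
cloud = backproject(np.full((3, 4), 2.0), K)
```

A plane fitted to such a point cloud can then supply the rotation needed to rectify a tilted facade image before stitching.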

關鍵字

橋梁立面檢測、影像拼接、深度估計、語義分割、影像幾何校正

Keywords

Bridge Façade Inspection, Image Stitching, Depth Estimation, Semantic Segmentation, Image Geometric Correction

附件檔名

華藝線上圖書館

https://www.airitilibrary.com/Publication/Index?DocID=10218661-N202604150001-00004

備註說明

N / A