Construction and Application of Transportation Knowledge Models Based on TransKG-Chat

HU Haidong, LI Zhaojia, WU Zhaohui

Transport Research ›› 2025, Vol. 11 ›› Issue (4): 79-92. DOI: 10.16503/j.cnki.2095-9931.2025.04.007

Special Issue: Digital Transformation of Transportation


HU Haidong 1, LI Zhaojia 2,*, WU Zhaohui 1


Abstract

To address the limited structured representation and automated extraction of complex knowledge in transportation knowledge graph modeling, this study proposes TransKG, a large language model for transportation knowledge, and TransKG-Chat, an automatic knowledge graph construction method. First, based on domain-specific transportation corpora, instruction fine-tuning, high-level supervised parameter optimization, and a multi-task joint loss were employed to enhance the model's understanding and structured extraction of transportation knowledge. Then, a multi-level quintuple system was designed and combined with automatic parsing and hierarchical normalization algorithms to achieve high-precision batch extraction of knowledge and hierarchical organization of complex semantic attributions. Finally, by integrating automatic quintuple extraction, a knowledge graph-driven intelligent application framework was constructed, enabling knowledge visualization and decision support in scenarios such as freight hub monitoring and multimodal transport. Experimental results show that the TransKG model significantly improves the Pass@1 metric on transportation-domain question-answering datasets compared with mainstream models of the same parameter scale, and achieves a quintuple extraction accuracy of 95%. In terms of automation efficiency, the TransKG-Chat method builds graphs from 500-character and 20,000-character texts 2.98 times and 12.83 times faster than manual construction, respectively. Overall, the results verify the leading performance and industrial application value of the proposed method for automatic transportation knowledge extraction and intelligent applications.
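The abstract describes a multi-level quintuple system whose extracted facts are deduplicated and organized into hierarchy levels before being loaded into the graph. The sketch below illustrates that idea in minimal form; the field names (`head`, `relation`, `tail`, `category`, `level`) and the normalization steps are assumptions for illustration only, since the paper's exact quintuple schema and algorithms are not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quintuple:
    # Hypothetical quintuple schema: the paper's actual five fields
    # are not given in the abstract, so these names are assumptions.
    head: str       # subject entity
    relation: str   # predicate linking head and tail
    tail: str       # object entity or literal value
    category: str   # assumed: semantic category of the fact
    level: int      # assumed: hierarchy level for layered organization

def normalize(quints):
    """Toy hierarchical normalization: deduplicate exact repeats,
    then group the remaining quintuples by hierarchy level."""
    layers = {}
    for q in set(quints):              # frozen dataclass -> hashable, so set() dedups
        layers.setdefault(q.level, []).append(q)
    # Sort levels and entries for a stable, layered view of the graph.
    return {lvl: sorted(qs, key=lambda q: (q.head, q.relation))
            for lvl, qs in sorted(layers.items())}

# Illustrative facts from a freight-hub scenario (invented examples).
quints = [
    Quintuple("Hub A", "handles", "container freight", "operation", 1),
    Quintuple("Hub A", "handles", "container freight", "operation", 1),  # duplicate
    Quintuple("Hub A", "connects_to", "Rail Line 3", "multimodal", 2),
]
layers = normalize(quints)
```

In a real pipeline, `quints` would come from the model's structured extraction output rather than hard-coded examples, and the layered dictionary would feed a graph database for visualization and decision support.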


Key words

Artificial intelligence / transportation / knowledge graph / automated knowledge extraction / deep learning

Cite This Article

HU Haidong, LI Zhaojia, WU Zhaohui. Construction and Application of Transportation Knowledge Models Based on TransKG-Chat[J]. Transport Research, 2025, 11(4): 79-92. https://doi.org/10.16503/j.cnki.2095-9931.2025.04.007
CLC number: U491.1


Funding

2023 Transportation Research Project of the Shaanxi Provincial Department of Transport (23-02X)
Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2022ZD0115602)
