Study of the Influence of Generative Artificial Intelligence on the Analysis of Static Code Scanning Results Using the Bearer CLI Static Analyzer
Abstract
Purpose: to investigate how generative artificial intelligence can improve the interpretation of static code analysis results produced by the Bearer CLI tool.
Method: quantitative and experimental methods, including the use of large language models to process SAST analysis results, followed by the authors' expert evaluation of the accuracy, usefulness, consistency, and validity of the models' risk impact assessments, with statistical analysis on a 5-point scale.
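As a minimal illustration of this workflow, the sketch below runs a Bearer CLI scan and passes the JSON report to a language model for plain-language interpretation. It assumes Bearer CLI's bearer scan command with JSON output and the OpenAI Python SDK; the model name, prompt wording, and file paths are illustrative assumptions rather than the authors' exact setup.

# Sketch: run Bearer CLI, then ask an LLM to interpret the findings.
# Assumes `bearer` is installed and OPENAI_API_KEY is set; model and prompt are illustrative.
import json
import subprocess

from openai import OpenAI

# 1. Run the SAST scan and write the findings as JSON.
subprocess.run(
    ["bearer", "scan", ".", "--format", "json", "--output", "report.json"],
    check=False,  # Bearer may exit non-zero when findings are present
)

with open("report.json", encoding="utf-8") as f:
    report = json.load(f)

# 2. Ask the model to explain the findings in plain language.
client = OpenAI()
findings = json.dumps(report, indent=2)[:4000]  # keep the prompt small
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an application security assistant."},
        {"role": "user", "content": "Explain these Bearer CLI findings, their business impact, "
                                    "and possible fixes:\n" + findings},
    ],
)
print(response.choices[0].message.content)

In the study, the models' responses for three vulnerability types were then scored by the authors on a 5-point scale across the criteria listed above.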
Findings: five popular large language models (GPT-4o, GPT-4.5, GPT-o3-mini-high, Gemini 2.5 Pro, Claude 3.7 Sonnet) were evaluated for their ability to enhance the analysis of Bearer CLI static scanning results for three types of vulnerabilities. Comparative analysis based on criteria of accuracy, usefulness, consistency, and business impact assessment showed that integrating generative AI significantly improves the interpretation of the tool's standard output. The Gemini 2.5 Pro model demonstrated the best overall performance, confirming the significant potential of AI for improving the quality and accessibility of SAST report analysis.
Theoretical implications: the study deepens understanding of how generative AI can analyze the output of SAST analyzers more effectively, demonstrating its potential to provide more accurate explanations and fixes for vulnerabilities and offering a theoretical basis for future research in code security automation.
Practical implications: the results obtained can serve as a basis for improving static analysis tools such as Bearer CLI, helping developers navigate scan results more efficiently and respond to potential threats more quickly.
Value: the study demonstrates how using AI to analyze static scanning results can reduce the number of false positives, make reports more understandable, and identify real vulnerabilities more effectively, contributing to improved code security in the early stages of development.
Future research: deeper integration of LLMs into static analysis, adaptation of models to different programming languages, and evaluation of effectiveness in real-world projects.
Paper type: empirical study.
Copyright (c) 2025 Roman Zhyshko, Ihor Kos, Yelyzaveta Cheremnykh, Danyil Zhuravchak, Vitalii Susukailo

This work is licensed under a Creative Commons Attribution 4.0 International License.