№ 2
2025
Annotation:
In several publications, a theoretical basis for a universal data model has been proposed, but its practical implementation has been considered only at the level of a general preliminary sketch. Many questions remain open, which complicates the creation of real systems implementing this model. In particular, the issue of processing queries to data presented in various traditional data models and stored in a system based on a universal data model has not been studied. The purpose of the study is to develop a method for implementing a system for processing queries to data presented in various traditional models and jointly stored in a universal data model, as well as to develop the architecture of such a query processing system. The article presents the results of an analysis of existing query handlers to assess the possibility of their use, and proposes a method for integrating query handlers for MDX, SQL, and Cypher into a single data management system based on an archigraph DBMS. An architecture is presented that unifies access to and query processing of heterogeneous data, such as relational tables, multidimensional cubes, and the vertices and edges of property graphs. The results obtained were used in developing the first prototype of the system. This opens prospects for further development and implementation of the universal data model and its varieties in various information systems, improving their flexibility and efficiency.
Keywords:
Archigraph, archigraph DBMS, Data Lake, Data Lake Management System, query handler, SQL, MDX, Cypher
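As an illustration of the kind of unified access layer described in this annotation, a minimal sketch of a dispatcher that routes SQL, MDX, and Cypher queries to language-specific handlers is given below; the class names and interfaces are illustrative assumptions and do not reproduce the archigraph DBMS implementation.

# Minimal sketch: routing queries in different languages to dedicated handlers.
# The handler classes are illustrative placeholders, not the archigraph DBMS API.

class QueryHandler:
    def execute(self, query: str):
        raise NotImplementedError

class SQLHandler(QueryHandler):
    def execute(self, query: str):
        return f"relational result for: {query}"

class MDXHandler(QueryHandler):
    def execute(self, query: str):
        return f"multidimensional result for: {query}"

class CypherHandler(QueryHandler):
    def execute(self, query: str):
        return f"graph result for: {query}"

class QueryDispatcher:
    """Routes a query to the handler registered for its language."""
    def __init__(self):
        self._handlers = {"sql": SQLHandler(), "mdx": MDXHandler(), "cypher": CypherHandler()}

    def run(self, language: str, query: str):
        handler = self._handlers.get(language.lower())
        if handler is None:
            raise ValueError(f"No handler registered for language: {language}")
        return handler.execute(query)

dispatcher = QueryDispatcher()
print(dispatcher.run("sql", "SELECT * FROM sensors"))
print(dispatcher.run("cypher", "MATCH (n)-[r]->(m) RETURN n, r, m"))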
Annotation:
The widespread use of various neural networks for detecting cyberattacks is hindered by the difficulty of determining their hyperparameters. Typically, hyperparameter values are established experimentally. This paper presents an approach to selecting perceptron hyperparameters for network attack detection using a genetic algorithm. Experimental results confirm the validity of this approach.
Keywords:
Network attack detection, perceptron, hyperparameters, genetic algorithm
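As an illustration of the general idea, a minimal sketch of a genetic algorithm searching over perceptron hyperparameters is given below; the search space, fitness placeholder, and algorithm settings are assumptions for illustration and do not reproduce the experiments from the paper.

import random

# Illustrative genetic algorithm over perceptron hyperparameters.
# The fitness function is a stand-in; in practice it would train the
# perceptron and return detection accuracy on a validation set.

SEARCH_SPACE = {
    "hidden_units": [16, 32, 64, 128],
    "learning_rate": [0.001, 0.01, 0.1],
    "epochs": [10, 20, 50],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder score: rewards larger networks with learning rates near 0.01.
    return ind["hidden_units"] / 128 - abs(ind["learning_rate"] - 0.01)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

population = [random_individual() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best hyperparameters:", max(population, key=fitness))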
Annotation:
The problem of neural network optimization for large language models, such as ChatGPT, is discussed. One developing direction of large language model optimization is knowledge distillation: transferring knowledge from a large teacher model to a smaller student model without significant loss of accuracy. Currently known knowledge distillation methods have certain disadvantages: inaccurate knowledge transfer, a lengthy training process, and error accumulation in long sequences. A combination of methods that improves the quality of knowledge distillation is considered: selective teacher intervention in the student's learning process and low-rank adaptation. The proposed combination of knowledge distillation methods can be applied in problems with limited computing resources.
Keywords:
Large language models, optimization, knowledge distillation, teacher model, student model, teacher intervention in the student learning process, low-rank adaptation
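For illustration only, a minimal NumPy sketch of the standard temperature-scaled distillation loss is shown below; the selective-intervention rule (falling back to the teacher when the student is uncertain) is an assumed simplification noted in a comment, and all logits are synthetic.

import numpy as np

# Temperature-scaled knowledge distillation: the student is trained to match
# the teacher's softened output distribution (KL divergence) in addition to
# the usual hard-label loss. The logits below are synthetic.

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 as in standard distillation.
    return (T ** 2) * np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))

teacher_logits = [4.0, 1.0, 0.5]
student_logits = [2.5, 1.5, 0.8]
print("distillation loss:", kd_loss(student_logits, teacher_logits))

# Selective teacher intervention (illustrative assumption): during sequence
# generation, if the student's top-token probability falls below a threshold,
# the teacher's token is used instead, limiting error accumulation.
confidence = softmax(student_logits).max()
print("intervene with teacher token:", bool(confidence < 0.6))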
Annotation:
The article discusses the security issues of the three-tier IoT architecture, consisting of the physical, network, and application layers. The emphasis is placed on the importance of protecting IoT systems from cyberattacks, which can have serious financial consequences and can also affect human safety. The possibilities of applying current machine learning algorithms to detect and prevent cyber threats are considered. The study focuses on the two lower levels of the IoT architecture, as the application layer requires separate analysis due to the variety of attacks it faces, including social engineering. The work aims at an in-depth understanding of IoT vulnerabilities and at proposing effective methods of overcoming them using modern technologies.
Keywords:
Machine learning, internet of things, water transportation, information security, neural networks, decision trees, IoT systems architecture
Annotation:
The results of research on the digitalization and automation of geoinformation support for air quality management over natural-industrial territories under climate change are presented. The methodology of natural risk management, as well as technologies for managing geographic information databases, were used in the research. A model has been developed that makes it possible to combine investment goals for the development of natural-industrial territories with the costs of geoinformation support for air quality management over such territories under climate change, including the problem of black carbon. A modular web-based tool has been developed to implement the proposed model. Examples of using the developed approach for St. Petersburg and the Leningrad region are given.
Keywords:
Digitalization, automation, geoinformatics, natural risks, air quality, climate change
Annotation:
The main biometric characteristics reflecting changes in the psychoemotional state of an information system user are considered. They were ranked using the method of paired comparisons, as a result of which voice and keystroke dynamics were identified as the most suitable for further research. Criteria for the preliminary identification of potential internal information security violators based on changes in the considered biometric characteristics are defined. A convolutional neural network model has been developed and tested to solve this problem.
Keywords:
Biometrics, psycho-emotional state, neural network, information security
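A minimal worked example of the method of paired comparisons is given below; the comparison matrix and the list of characteristics are illustrative assumptions, not the data from the study.

import numpy as np

# Method of paired comparisons (illustrative data): entry [i, j] = 1 means
# characteristic i was preferred to characteristic j; row sums give the ranking.
characteristics = ["voice", "keystroke dynamics", "face", "gait"]
preference = np.array([
    [0, 1, 1, 1],   # voice preferred to the other three
    [0, 0, 1, 1],   # keystroke dynamics preferred to face and gait
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

scores = preference.sum(axis=1)
ranking = sorted(zip(characteristics, scores), key=lambda x: x[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score} wins")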
Annotation:
The research focuses on methods for automating security in DevOps pipelines within the DevSecOps framework, emphasizing the integration of tools, processes, and cultural shifts to enhance the security of software products. The research set the following tasks: to analyze modern DevSecOps methodologies and tools; to assess the potential of using artificial intelligence and machine learning to automate information security tasks; to identify the main problems and barriers to integrating DevSecOps into continuous integration and delivery (CI/CD) processes; and to identify promising areas for the development of security automation. The study uses a comparative analytical review method, including an analysis of the scientific literature, industrial practices, and the documentation of modern DevSecOps tools, as well as the Shift-Left Security and Security as Code approaches. Open sources, CI/CD platform documentation, and data on the use of AI in information security were used. The research identifies key principles for integrating security into DevOps: early vulnerability detection, automation of security processes, implementation of Security as Code, and enhanced threat monitoring. Modern DevSecOps tools are reviewed, including static and dynamic code analysis, security policy management systems, secret management solutions, and AI-powered proactive threat detection mechanisms. The study finds that automation minimizes human error, accelerates vulnerability detection and remediation, and ensures compliance with regulatory requirements. However, certain limitations were also identified, including the complexity of tool integration, a shortage of DevSecOps specialists, and resistance to change within development and operations teams. Future trends indicate further advancements in AI-driven solutions and automated frameworks for security management. This research contributes to the field of information security by uncovering methods for automating DevSecOps integration into CI/CD processes and exploring the potential of AI for predictive threat analytics. It highlights key trends in security automation within modern cloud and containerized environments.
Keywords:
Information security, DevSecOps, secure software development, security integration, security process automation, DevOps
Annotation:
The article considers the problem of protecting dynamically changing network infrastructures from cyberattacks, where the key challenge is the exponential growth of the number of potential attack vectors as the network scales. To solve this problem, a model of the defense system based on the principles of multi-criteria optimization is proposed.
Keywords:
Network security, honeypot, multicriteria optimization, dynamic network, cyberattack, graph model
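To illustrate the multi-criteria idea only, a toy sketch is given below in which candidate honeypot deployments are compared by two criteria and the Pareto-optimal ones are kept; the candidates, criteria, and values are assumptions for illustration and are not taken from the article.

# Toy multi-criteria selection: each candidate deployment is scored by
# cost (to minimize) and expected attack coverage (to maximize); the
# Pareto-optimal candidates are reported. All values are illustrative.

candidates = {
    "A": {"cost": 3.0, "coverage": 0.55},
    "B": {"cost": 5.0, "coverage": 0.80},
    "C": {"cost": 4.0, "coverage": 0.60},
    "D": {"cost": 6.0, "coverage": 0.78},
}

def dominates(x, y):
    # x dominates y if it is no worse on both criteria and strictly better on one.
    return (x["cost"] <= y["cost"] and x["coverage"] >= y["coverage"]
            and (x["cost"] < y["cost"] or x["coverage"] > y["coverage"]))

pareto = [name for name, c in candidates.items()
          if not any(dominates(other, c) for o, other in candidates.items() if o != name)]
print("Pareto-optimal deployments:", pareto)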
Annotation:
The article presents a study aimed at developing a model of Portable Executable files containing malicious code. The model is built based on static analysis methods and includes 333 classification features, formed using a training dataset of 34,026 PE files, comprising 17,992 malicious and 16,034 legitimate files. The proposed model introduces an approach for describing features using a differentiated assessment of their importance. Experimental results with binary feature description methods confirmed that incorporating feature importance levels improves classification accuracy. Additionally, it is demonstrated that optimizing the feature space using principal component analysis (PCA) and the isolation forest method allows reducing the number of features to 40 of the most informative ones without significant accuracy loss. The obtained results provide high classification accuracy with lower computational costs. The scientific significance of the work lies in expanding the methodological capabilities of static analysis, ensuring a deeper understanding of threats and enhancing the reliability of mechanisms for counteracting malicious software.
Keywords:
Static analysis, malware detection, machine learning, PE files, feature importance assessment, dimension reduction methods
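A minimal sketch of the feature-space reduction step is given below, using scikit-learn's PCA and IsolationForest on synthetic data; the matrix shape, the order in which the two techniques are combined, and all parameters are assumptions for illustration and do not reproduce the authors' pipeline.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for the PE-file feature matrix: 1000 samples, 333 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 333))

# Step 1 (assumption): drop outlying samples flagged by an isolation forest
# before fitting PCA, so that rare anomalous files do not dominate the components.
mask = IsolationForest(random_state=0).fit_predict(X) == 1
X_clean = X[mask]

# Step 2: project the feature space onto the 40 most informative principal components.
pca = PCA(n_components=40)
X_reduced = pca.fit_transform(X_clean)

print("kept samples:", X_clean.shape[0])
print("reduced feature matrix:", X_reduced.shape)
print("explained variance ratio (sum):", round(float(pca.explained_variance_ratio_.sum()), 3))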
Annotation:
The paper reviews the mining algorithm in smart-city blockchain systems with the Proof-of-Work consensus mechanism. Related studies in the field of detecting selfish mining attacks are reviewed. A method for protecting the blockchain from selfish mining is presented. A plug-in for miner software that detects selfish mining is developed, which analyzes patterns in the data coming from the mining pool. The proposed solution outperforms other selfish mining detectors, as it allows identifying the attacking pool and has lower error rates.
Keywords:
Blockchain, prevention, security, selfish mining, smart city
Annotation:
The rapid evolution of self-driving vehicles (SDVs) has necessitated the development of robust authentication mechanisms to ensure secure and privacy-preserving vehicle communication. Traditional authentication protocols often expose vehicle location information, raising concerns about tracking and unauthorized surveillance. This paper proposes a novel Zero-Knowledge Proof (ZKP)-enhanced Elliptic Curve Decisional Diffie-Hellman (ECDDH) authentication framework that enables SDVs to prove their presence within a geofenced area without revealing their exact location. The proposed protocol leverages 5G-enabled edge computing to optimize computational efficiency and authentication latency while ensuring scalability in high-density vehicular networks. The proposed framework is formally validated using BAN logic, proving its resilience against replay attacks, location spoofing, and unauthorized access. Performance evaluations conducted in MATLAB demonstrate the efficiency of the protocol, with results indicating an authentication latency of approximately 54.7 ms (100 vehicles), a constant communication overhead of 448 bytes per session, and a 100% authentication success rate. Comparative analysis with ECDH and RSA-based authentication schemes highlights the protocol's superior security guarantees and optimized communication overhead. The findings confirm that the proposed authentication mechanism is an effective solution for ensuring privacy-preserving authentication in autonomous vehicular networks, making it a viable approach for securing future intelligent transportation systems.
Keywords:
Self-driving vehicles, authentication protocol, zero-knowledge proof, 5G-enabled edge computing, privacy-preserving authentication, autonomous vehicular network
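As an illustration of the zero-knowledge building block only, a toy Schnorr-style proof of knowledge of a discrete logarithm is sketched below; it does not implement the paper's ECDDH geofencing protocol, and the group parameters are deliberately tiny and insecure.

import random

# Toy Schnorr identification protocol: the prover shows knowledge of x such that
# y = g^x mod p without revealing x. Parameters are toy-sized, for illustration only.
p = 467            # prime modulus
q = 233            # prime order of the subgroup, q divides p - 1
g = 4              # generator of the order-q subgroup

x = random.randrange(1, q)        # prover's secret
y = pow(g, x, p)                  # prover's public value

# Commitment: prover picks a random nonce r and sends t = g^r mod p.
r = random.randrange(1, q)
t = pow(g, r, p)

# Challenge: verifier sends a random c.
c = random.randrange(1, q)

# Response: prover sends s = r + c * x (mod q).
s = (r + c * x) % q

# Verification: g^s must equal t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: prover knows x without revealing it")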
Annotation:
The principles of construction and functioning of honeypot systems are investigated. Existing detection methods are analyzed, and their advantages and disadvantages are highlighted. A detection method based on the analysis of command execution delays is proposed. A universal detection method that combines the results of the individual methods is also proposed. A software prototype of the detection system is developed and tested.
Keywords:
Honeypot, latency analysis, detection, network stack
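A minimal sketch of the timing idea is given below: run a lightweight command repeatedly, compare the observed delays against a baseline threshold, and flag anomalously slow responses; the probe command, threshold, and decision rule are assumptions for illustration and are not the prototype's implementation.

import statistics
import subprocess
import sys
import time

# Illustrative latency probe: repeatedly execute a cheap command and compare the
# median delay against an assumed baseline. Honeypot emulation layers often add
# noticeable overhead to command execution, which such delays can reveal.

BASELINE_THRESHOLD_S = 0.05   # assumed baseline for a local shell (illustrative)
PROBE_COMMAND = (sys.executable, "-c", "pass")   # portable no-op command

def measure_delays(command=PROBE_COMMAND, runs=10):
    delays = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(command, stdout=subprocess.DEVNULL, check=True)
        delays.append(time.perf_counter() - start)
    return delays

delays = measure_delays()
median_delay = statistics.median(delays)
print(f"median delay: {median_delay:.4f} s")
print("suspected honeypot:", median_delay > BASELINE_THRESHOLD_S)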