Information and Control Systems (Информационно-управляющие системы)
Issue feed: https://www.i-us.ru/index.php/ius/issue/feed
Contact: ius.spb@gmail.com

The journal "Информационно-управляющие системы" (Information and Control Systems), ISSN 1684-8853 (print), ISSN 2541-8610 (online), was founded in 2002 by the Federal State Unitary Enterprise "Politekhnika Publishing House". In 2012 the journal was re-registered following a change of founder: OOO "Информационно-управляющие системы", registration certificate ПИ № ФС77-49181 of 30 March 2012. Since 2004 the journal has been published by the Saint Petersburg State University of Aerospace Instrumentation (GUAP). The articles listed below were published on 29 August 2025.

3D medical image segmentation with persistent homology-based constraints
Rinat Ilgizovich Dumaev (dumaevrinat@gmail.com), Sergey Aleksandrovich Molodyakov (samolodyakov@mail.ru), Lev Vladimirovich Utkin (utkin_lv@spbstu.ru)
https://www.i-us.ru/index.php/ius/article/view/16386

Introduction: Medical image segmentation is a widely researched field where neural networks serve as a basis for many medical data analysis and visualization processes. Traditional approaches to training segmentation models often rely on voxel-wise loss functions, which are insufficient for many segmentation tasks and do not consider the topological correctness of the segmentation. Consequently, masks learned by such models may be spatially inconsistent, resulting in unrealistic features such as spurious connected components or holes. Purpose: To develop a training method that improves the quality of 3D medical image segmentation by leveraging persistent homology and comparing persistence diagrams during training. Results: We propose a method for training 3D image segmentation models that includes persistent homology-based constraints and a loss function used to regularize the reconstruction of the shape and edges of the mask. We present a filtration function based on the distance to the centroid of a binary mask to refine the shape and edges of the predicted mask. Experiments on lung computed tomography data for segmentation and target structure extraction tasks show the effectiveness of our approach. The proposed approach not only improves voxel-level accuracy but also preserves essential morphological properties, which is extremely important for subsequent tasks such as nodule volume estimation and clinical decision making. Practical relevance: The presented approach makes it possible to improve the quality of lung nodule segmentation in 3D CT images.
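The abstract does not include implementation details; purely as an illustration, the sketch below shows one way a distance-to-centroid filtration over a binary 3D mask could be computed in NumPy (the function name and normalization are assumptions, not the article's code). In a persistent-homology loss of the kind described, persistence diagrams computed from such a filtration for the predicted and ground-truth masks would then be compared during training.

```python
import numpy as np

def centroid_distance_filtration(mask: np.ndarray) -> np.ndarray:
    """Voxel-wise distance to the centroid of a binary 3D mask.

    Illustrative sketch: returns a scalar field that could serve as a
    filtration function for sublevel-set persistent homology.
    """
    coords = np.argwhere(mask > 0)            # (N, 3) foreground voxel indices
    if coords.size == 0:
        return np.full(mask.shape, np.inf)    # empty mask: no foreground
    centroid = coords.mean(axis=0)            # centroid of the foreground

    # Distance from every voxel of the volume to the centroid
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in mask.shape],
                                indexing="ij"), axis=-1)
    dist = np.linalg.norm(grid - centroid, axis=-1)

    # Normalize so diagrams from prediction and ground truth are comparable
    return dist / (dist.max() + 1e-8)
```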
Management of smart city road network configuration: a scenario based on collaboration patterns of decision-making participants
Tatiyana Viktorovna Levashova (tatiana.levashova@iias.spb.su), Alexander Viktorovich Smirnov (smir@iias.spb.su), Nikolay Nikolaevich Teslya (teslya@iias.spb.su)
https://www.i-us.ru/index.php/ius/article/view/16408

Introduction: In the course of joint activities of the participants of digital communities consisting of people and software agents, repetitive problems often arise. To organize purposeful joint activities of the participants, collaboration patterns that provide reusable solutions to recurring problems can be used. Purpose: To develop models for the effective organization of purposeful activities of the participants of digital communities based on collaboration patterns while jointly solving the problem of managing a sociotechnical system, and to propose a scenario for making recommendations on managing the configuration of the smart city road network as a kind of sociotechnical system. Results: We develop a conceptual model of a collaboration pattern which facilitates contextual information processing and interactions of the participants of a digital community owing to the homogeneous representation of information used in the specifications of patterns of different types. We further develop a generalized decision-making model based on collaboration patterns which supports the choice of patterns in the course of joint activities within digital participation. We propose a scenario for making recommendations on managing the configuration of a smart city street and road network by the participants of a digital community, which confirms the adequacy of the developed models. Practical relevance: The research results contribute to the problem of managing sociotechnical system configurations. They provide models for the effective organization of purposeful joint activities of the participants of digital communities when they collaboratively solve the configuration management problem as a decision-making problem. These results can be used, for instance, to make recommendations on configuration management for systems such as a smart city or an airport.

Hybrid method of time synchronization in distributed systems
Tatiyana Mikhailovna Tatarnikova (tm-tatarn@yandex.ru), Evgeniy Dmitrievich Arkhiptsev (lokargenia@gmail.com)
https://www.i-us.ru/index.php/ius/article/view/16406

Introduction: Solving the problem of data and process consistency in a decentralized asynchronous environment is relevant for distributed systems such as global cloud platforms, the Internet of Things, and blockchain infrastructure. Existing synchronization protocols are based on the assumption of symmetrical time delays during transmission and reception, which does not hold under changing network load; in addition, the stochastic nature of network noise is ignored. These shortcomings of the protocols lead to incorrect time synchronization. Purpose: To develop a hybrid time synchronization method based on the Kalman filter, to smooth out network noise, and on a logical clock, to let the system adapt to changing network load. Results: We demonstrate that existing synchronization protocols for distributed systems become unreliable because time synchronization is affected by network delays, jitter, and hardware errors. We highlight the limitations of modern time synchronization: the NTP protocol for synchronizing clients and servers does not consider channel asymmetry; the fixed synchronization intervals of the PTP protocol for local networks lead to noise and error accumulation; hybridization of the NTP and PTP protocols does not provide dynamic adaptation to changing conditions. We propose a hybrid time synchronization model based on the combination of the Kalman filter and a logical clock. The Kalman filter makes it possible to effectively suppress network jitter and to compensate for physical clock drift, while the logical clock enables faster adaptation to changing network loads. The results of a full-scale experiment demonstrate a more than threefold reduction in average delay. Practical relevance: An implementation of the method based on the combination of the Kalman filter and a logical clock can become a cost-effective alternative to specialized synchronization protocols in environments with unstable loads.
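As a hedged illustration of the combination described above (not the authors' implementation), a one-dimensional Kalman filter over clock-offset measurements can be sketched as follows; the noise constants and variable names are assumptions made for the example.

```python
class OffsetKalmanFilter:
    """1-D Kalman filter over the clock offset (illustrative sketch)."""

    def __init__(self, process_var: float = 1e-6, measurement_var: float = 1e-3):
        self.x = 0.0              # estimated offset, seconds
        self.p = 1.0              # estimate variance
        self.q = process_var      # clock-drift (process) noise
        self.r = measurement_var  # network-jitter (measurement) noise

    def update(self, measured_offset: float) -> float:
        # Predict: the offset drifts slowly, so uncertainty grows by q
        self.p += self.q
        # Correct: blend the prediction with the new noisy measurement
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (measured_offset - self.x)
        self.p *= (1.0 - k)
        return self.x

# Usage: feed raw NTP/PTP-style offset samples and use the smoothed value to
# correct the local clock; a logical (Lamport-style) clock would keep event
# ordering consistent between corrections.
kf = OffsetKalmanFilter()
for raw in (0.012, 0.018, 0.009, 0.015):   # noisy offset samples, seconds
    smoothed = kf.update(raw)
```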
Modification of the YOLO model for a hybrid detection and tracking system in an automatic guidance UAV
Alexander Vladimirovich Satsiuk (alexandrsatsuk@gmail.com), Nikita Vitalevich Volodarets (volodarets.nikita@yandex.ru)
https://www.i-us.ru/index.php/ius/article/view/16415

Introduction: Modern computer vision systems for UAVs face the problem of reliable detection and tracking in real time under the limited resources of embedded platforms, especially when integrating neural network detectors with tracking algorithms. Existing YOLO implementations, despite their popularity, have drawbacks: excessive computational complexity due to the focus on multi-class detection and non-optimal interaction with tracking algorithms. Purpose: To develop an optimized version of YOLOv8 for a hybrid on-board detection and tracking system for a UAV with automatic guidance, aimed at reducing computational complexity while maintaining accuracy and adapting to resource-limited platforms. Results: The study is based on experiments with a modified YOLOv8m, evaluated on an embedded platform (Raspberry Pi 5) and a specialized ONE_OBJECT dataset. We develop a modified version of YOLOv8m with selective replacement of standard convolutional layers by depthwise separable convolutions in the C3CA blocks and some Neck layers. In the experimental studies we achieve a reduction in computational complexity by 32.9% (from 8.5 to 5.7 GFLOPS), in the number of parameters by 37.1% (from 25.9 million to 16.3 million), and in memory requirements by 29.4% (from 102 to 72 MB). The processing speed on the Raspberry Pi 5 increases by 63.6% (from 11 to 18 FPS) while maintaining a high detection accuracy of mAP@0.5 = 93.5% (a drop of only 0.7 percentage points relative to the base model when tested on the ONE_OBJECT dataset). The greatest decrease in accuracy (1.2 percentage points) is observed for small objects (less than 50 pixels). Practical relevance: The developed YOLOv8m modification has been successfully integrated with the CSRT tracking algorithm. Consequently, it becomes possible to create efficient hybrid automatic guidance systems for UAVs. The proposed solution is especially promising for embedded systems with limited resources. In addition, the obtained results open up new possibilities for creating energy-efficient real-time computer vision systems.
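For reference, the depthwise separable convolution named in the abstract is a standard building block; a generic PyTorch sketch is shown below (the channel sizes and activation choice are placeholders, not the authors' modified C3CA or Neck layers).

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: depthwise 3x3 followed by pointwise 1x1.

    Replacing a standard 3x3 Conv2d with this block cuts parameters and FLOPs
    roughly by a factor of 8-9 for typical channel counts.
    """
    def __init__(self, c_in: int, c_out: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, stride=stride,
                                   padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # activation commonly used in YOLOv8-style blocks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: same spatial output shape as a standard 3x3 convolution
x = torch.randn(1, 64, 80, 80)
y = DepthwiseSeparableConv(64, 128)(x)   # -> torch.Size([1, 128, 80, 80])
```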
Multi-criteria analysis of web page optimization methods and their impact on search engine rankings
Stanislav Vladimirovich Zhukov (i@coder-stas.ru), Olga Alexandrovna Kovaleva (solomina-oa@yandex.ru), Sergey Vladimirovich Kovalev (sseedd@mail.ru)
https://www.i-us.ru/index.php/ius/article/view/16405

Introduction: Web page loading speed is one of the critical factors influencing website rankings in search engines. Slow loading negatively impacts user experience and may consequently lead to the loss of potential customers and a decrease in conversions. Moreover, in an increasingly competitive environment, speed optimization becomes an essential part of effective website development, directly affecting a site's success online. Purpose: To perform a multi-criteria analysis identifying the most effective web page speed optimization methods for a specific website. Results: We develop a hierarchical model of criteria to evaluate the effectiveness of various optimization methods, incorporating server-side, network, and client-side indicators of web page loading speed. In addition, we analyze web page optimization methods using a specific website as an example. The results demonstrate that deferred loading of media content (lazy loading) provides the most significant improvement, while certain optimization methods show no meaningful impact on the server-side and network performance criteria. Based on these findings, we have created a second website employing the most effective optimization techniques. In a comparative experiment, the optimized website demonstrates substantial improvements across all key metrics, such as the number of visits, the duration of visits, page views per session, and a decrease in bounce rate. Practical relevance: The findings of this study and the proposed model enable developers to make informed decisions when selecting the most effective methods for optimizing web page loading speed.
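The abstract does not reproduce the hierarchical criteria model itself; the sketch below only illustrates the general idea of weighted multi-criteria aggregation over server-side, network, and client-side indicators. The criteria names, weights, and metric values are invented for the example and are not the authors' data.

```python
# Hypothetical weights for a few loading-speed criteria; the actual hierarchy
# and weights come from the authors' model and are not shown in the abstract.
WEIGHTS = {"ttfb_ms": 0.25, "transfer_kb": 0.20, "lcp_ms": 0.35, "cls": 0.20}

def score(metrics: dict[str, float], baseline: dict[str, float]) -> float:
    """Weighted relative improvement of an optimization method vs. a baseline.

    Lower metric values are better, so improvement = 1 - new / baseline.
    """
    return sum(w * (1.0 - metrics[k] / baseline[k]) for k, w in WEIGHTS.items())

baseline     = {"ttfb_ms": 420, "transfer_kb": 2300, "lcp_ms": 3100, "cls": 0.18}
lazy_loading = {"ttfb_ms": 410, "transfer_kb": 900,  "lcp_ms": 1900, "cls": 0.12}
print(score(lazy_loading, baseline))   # higher score = more effective method
```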
Method for identifying software code vulnerabilities based on cluster analysis and contextual adaptation of large language models
Ruslan Nadirovich Bakeev (bakeev.ruslan@yandex.ru), Vladimir Nikiforovich Kuzmin (vka@mil.ru), Artem Bakitzhanovich Menisov (men.arty@yandex.ru), Timur Rimovich Sabirov (rowing-team@mail.ru)
https://www.i-us.ru/index.php/ius/article/view/16416

Introduction: Detecting vulnerabilities in source code remains one of the priority tasks in the field of cybersecurity. Classical code analysis methods often do not take the execution context into account and do not scale well as program volumes grow. Under complex architectures and incomplete data annotations, a context-adapted approach is required that can identify vulnerabilities based on semantic and structural analysis. Purpose: To develop a method for detecting vulnerabilities in program code using cluster analysis and contextual adaptation of large language models capable of taking into account not only the syntax but also the semantic structure of programs. Methods: The developed approach combines clustering of code segments with the use of pre-trained language models adapted to program code. To improve efficiency, vulnerability features are identified that include both the vulnerable fragment itself and its context: control structures, variables, and function calls. Training and evaluation are carried out on labeled open datasets using pre-trained large language models. Results: The method makes it possible to automatically group code fragments by structural similarity, after which semantic analysis is performed by large language models capable of recognizing vulnerability patterns. Experiments show that including contextual information significantly increases the efficiency of identifying vulnerabilities in source code. On the BigVul and CVEfixes datasets, the proposed method achieves an accuracy of up to 78% and a recall of 82%, which is 9–12% higher than existing solutions. The method is robust to syntactic variations and can be used to analyze previously unlabeled code. Practical relevance: The method is applicable in automatic source code analysis systems and can significantly reduce the cost of manual auditing, especially when analyzing large code bases. It can also be used for educational and research purposes to analyze vulnerability patterns. Discussion: The results confirm the effectiveness of large language models in software security analysis tasks. A promising direction is to extend the method to other programming languages, as well as to study hybrid approaches involving graph neural networks. The question of justifying the model's decisions and automatically explaining why a fragment is classified as vulnerable remains open.
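No code accompanies the abstract; the sketch below only mirrors the pipeline shape described there: embed code fragments, group them by structural similarity, and pass them (together with their context) to a language model for semantic analysis. The helpers embed_fragment and classify_with_llm are hypothetical placeholders, not the authors' components.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_fragment(code: str) -> np.ndarray:
    """Hypothetical placeholder: in the described method a pre-trained code
    language model would produce this embedding."""
    # Toy stand-in so the sketch runs: a fixed-size character histogram.
    vec = np.zeros(128)
    for ch in code:
        vec[min(ord(ch), 127)] += 1
    return vec / max(len(code), 1)

def classify_with_llm(fragment: str, context: str) -> bool:
    """Hypothetical placeholder for LLM-based vulnerability classification
    that takes into account the fragment plus its surrounding context."""
    raise NotImplementedError

fragments = ["strcpy(buf, user_input);", "int n = atoi(argv[1]);", "free(p); free(p);"]
X = np.stack([embed_fragment(f) for f in fragments])

# Group structurally similar fragments, then send cluster members (with their
# control-flow and variable context) to the language model for analysis.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```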