This study investigates the creation of large-scale images through mosaic construction and evaluates the outcomes. The first phase of the study is a comprehensive comparison of algorithms commonly employed in mosaic creation, focusing on their respective strengths and weaknesses in terms of precision and computational efficiency. The experiments are conducted on a dataset of 150 images obtained from SenseFly's aerial surveys. The results reveal that using the SURF algorithm for mosaic creation yields the highest precision, with a peak signal-to-noise ratio (PSNR) of 30.6381 and a processing time of 605.5 seconds, surpassing the other algorithms. However, applying the SURF algorithm to entire images poses challenges in terms of computational complexity, processing time, and memory usage. To address this, a methodology is proposed that applies the algorithm selectively based on segment characteristics, enhancing precision and reducing processing time. Experimental results demonstrate that this approach reduces the processing time to 120.2 seconds and minimizes the error, yielding better outcomes than applying the SURF algorithm to the entire dataset.
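The segment-wise selection idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: a simple gradient-magnitude response stands in for the SURF detector response, and the grid size, threshold, and per-region cap are assumed parameters.

```python
import numpy as np

def select_features_by_region(image, grid=(3, 3), threshold=10.0, per_region=50):
    """Keep only strong feature responses per region instead of the whole image.

    A gradient-magnitude response stands in for the SURF detector response;
    the point is only to illustrate region-wise, threshold-based selection.
    """
    gy, gx = np.gradient(image.astype(float))
    response = np.hypot(gx, gy)

    h, w = image.shape
    rows, cols = grid
    selected = []  # list of (y, x) coordinates
    for i in range(rows):
        for j in range(cols):
            y0, y1 = i * h // rows, (i + 1) * h // rows
            x0, x1 = j * w // cols, (j + 1) * w // cols
            block = response[y0:y1, x0:x1]
            ys, xs = np.nonzero(block > threshold)        # threshold test per region
            order = np.argsort(block[ys, xs])[::-1][:per_region]  # strongest first
            selected.extend((y0 + ys[k], x0 + xs[k]) for k in order)
    return selected
```

Because each region contributes at most a fixed number of features, the matching stage never sees the full feature set of the whole image, which is the source of the runtime and memory savings described above.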
This paper presents a comparative study of two robust estimation approaches for outlier elimination in computer vision applications: homography-matrix-based RANSAC and fundamental-matrix-based RANSAC. The study focuses on the critical task of reliably estimating correspondences across two-view images. The Random Sample Consensus (RANSAC) algorithm is employed to estimate accurate homography and fundamental matrices robustly, even in the presence of outliers. Image datasets including rotations and translations of objects are used for the experimental analysis. The two methods are compared in terms of accuracy and robustness, based on their geometric properties, across different test datasets. Experimental results demonstrate that the homography-matrix-based RANSAC method works well with planar motions of the objects, while the fundamental-matrix-based RANSAC method performs better with 3D motions. The paper concludes by discussing the implications of these findings and highlighting the suitability of each approach.
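As an illustration of the homography-based variant, here is a minimal pure-NumPy sketch of RANSAC over a four-point DLT homography fit. The paper would normally use a full library implementation; the iteration count and inlier threshold below are assumptions for the example.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate H from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=500, thresh=2.0, rng=None):
    """RANSAC loop: repeatedly fit H on 4 random pairs, keep the largest consensus."""
    rng = np.random.default_rng(rng)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full consensus set
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The fundamental-matrix variant follows the same loop but fits an epipolar constraint (e.g., the 8-point algorithm) and scores correspondences by epipolar distance instead of reprojection error, which is why it tolerates non-planar, 3D motion.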
This study presents the implementation of the mosaic method for creating large-scale images and its results. In the first part of the study, we tested and compared the algorithms widely used for feature detection and extraction, which are key steps of the mosaic method. The experiment was performed on 150 crop-field images released by SenseFly, and the results show that using the SURF algorithm for feature detection and extraction in the mosaic method gave the best result among the compared algorithms, with a peak signal-to-noise ratio (PSNR) of 30.6381 and a running time of 120.2 seconds. However, applying the SURF algorithm to the whole image has the drawback that computation, running time, and memory usage all grow as the number of features increases, so we avoided using the whole image: we divided the image into regions and selected, from each region, only the features satisfying a threshold value. Running the experiment on the same dataset, this approach achieved a PSNR of 30.1347 and a running time of 48 seconds, a better result than applying the SURF algorithm to the whole image.
In this work, we tested global thresholding, a well-known image segmentation method for separating objects from the background, on a histogram refined using a distinction neighborhood metric. If the original histogram of an image has a few large bins that hold most of the density of the whole intensity distribution, global methods such as segmentation and contrast enhancement run into problems. We refined the histogram to overcome this big-bin problem by creating sub-bins from the big bins based on the distinction metric. Median and Otsu thresholding methods are used in our work, and experimental results show that they work more effectively on the refined histograms.
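For reference, Otsu's method operates directly on the histogram, so it applies unchanged whether the histogram is raw or refined. A minimal sketch of the Otsu step follows (the histogram refinement itself is specific to the paper and not reproduced here):

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method on an intensity histogram.

    Returns the bin index (cut point) that maximizes the between-class
    variance of the two classes induced by the threshold.
    """
    hist = np.asarray(hist, float)
    p = hist / hist.sum()                # normalize to a probability distribution
    levels = np.arange(len(hist))
    omega = np.cumsum(p)                 # class-0 probability for each cut
    mu = np.cumsum(p * levels)           # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```

On a histogram dominated by one huge bin, `omega` jumps almost to 1 at that bin and the between-class variance degenerates, which is exactly the failure mode the refinement into sub-bins is meant to avoid.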
The shortest-path betweenness value of a node quantifies the amount of information passing through the node when all pairs of nodes in the network exchange information at full capacity, measured by the number of shortest paths between the pairs and assuming that the information travels along the shortest paths. It is calculated as the sum, over all node pairs, of the fraction of the shortest paths between the pair that actually pass through the node of interest. A node can have a zero or underrated betweenness value while sitting just next to a giant flow of information. Such nodes may have a significant influence on the network when the normal flow of information is disrupted. We propose a betweenness centrality measure, called collective betweenness, that takes the surroundings of a node into account. We compare our measure with other centrality metrics and show some of its applications.
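The standard shortest-path betweenness defined above can be computed by brute force on small graphs; a minimal sketch is below. The proposed collective betweenness is defined in the paper itself and is not reproduced here.

```python
from collections import deque
from itertools import combinations

def betweenness(adj):
    """Shortest-path betweenness by brute-force path enumeration.

    adj: dict mapping node -> iterable of neighbours (undirected, unweighted).
    For every node pair, each shortest path contributes 1/(number of shortest
    paths) to every intermediate node it passes through. Exponential in the
    worst case; fine for small illustrative graphs.
    """
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    for s, t in combinations(nodes, 2):
        paths, best = [], None
        queue = deque([(s, [s])])        # BFS over simple paths
        while queue:
            v, path = queue.popleft()
            if best is not None and len(path) > best:
                continue                  # longer than a known shortest path
            if v == t:
                best = len(path)
                paths.append(path)
                continue
            for w in adj[v]:
                if w not in path:
                    queue.append((w, path + [w]))
        for path in paths:
            for v in path[1:-1]:          # intermediate nodes only
                score[v] += 1.0 / len(paths)
    return score
```

In a path graph 0-1-2, only the middle node accumulates betweenness; in a 4-cycle every pair of opposite nodes splits its two shortest paths evenly, so each node ends up with 0.5.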
We present a new algorithm for determining the 3D motion of a moving rigid object relative to a single camera from real-time image sequences. Two-dimensional (2D) features are obtained by projective transformations of the 3D features on the object surface under the perspective model. The perspective model is formulated as a nonlinear least-squares problem that iteratively determines the 3D motion, characterized by a rotation and a translation. In practice, this problem is numerically ill-conditioned and may converge slowly, or even fail to converge, if it starts from a poor initial guess. However, since the para-perspective projection model closely approximates perspective projection for recovering the 3D motion and shape of an object in Euclidean space, we use the result of the para-perspective projection model as the initial value for the nonlinear optimization refinement under the perspective model equations.
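A minimal sketch of the refinement stage, assuming an axis-angle rotation parameterization and unit focal length; the paper's para-perspective estimate is stood in for here by an initial guess near the true pose:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, r, t, f=1.0):
    """Perspective projection of 3D points under rotation r and translation t."""
    P = points @ rodrigues(r).T + t
    return f * P[:, :2] / P[:, 2:3]

def refine_pose(points3d, observed2d, r0, t0):
    """Nonlinear least-squares refinement of (r, t) from an initial guess.

    With a good initial guess (e.g., from a para-perspective solution) the
    reprojection residual is driven to zero; from a poor guess the same
    problem can stall, which is the failure mode discussed above.
    """
    def residual(params):
        return (project(points3d, params[:3], params[3:]) - observed2d).ravel()
    sol = least_squares(residual, np.concatenate([r0, t0]))
    return sol.x[:3], sol.x[3:]
```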
Bone density is one of the factors in the early failure of dental implants, so before dental implant surgery doctors should make a preoperative assessment of jaw bone density from the patient's CT data to find out whether the patient has osteoporosis or osteopenia. The main goal of this study was to propose a method, based on image processing techniques, that provides doctors with accurate information about where to drill and place an abutment screw in the jaw bone, and that reduces the manual work needed to estimate the local cancellous bone density of the mandible from CT data. The experiment was performed on computed tomography data of the jaw bones of two individuals. We assumed that the result of the jaw bone density estimation depends on the drilling angle, and average HU (Hounsfield unit) values were used to evaluate the quality of the local cancellous bone density of the mandible. As a result of this study, we developed a toolbox that estimates jaw bone density automatically and found a positive correlation between the drill angle and time complexity, but a negative correlation between the drill diameter and time complexity.
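The HU-averaging step can be illustrated as follows. This is a hypothetical helper, not the toolbox's actual interface: it averages voxel intensities inside a cylinder along an assumed drill axis, with entry point, direction, radius, and length as illustrative parameters.

```python
import numpy as np

def mean_hu_in_cylinder(volume, entry, direction, radius, length):
    """Average voxel intensity (HU) inside a cylinder drilled into a CT volume.

    volume: 3D array of HU values; entry: cylinder start point (z, y, x);
    direction: drill axis vector; radius and length in voxel units.
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    zz, yy, xx = np.indices(volume.shape)
    rel = np.stack([zz, yy, xx], axis=-1) - np.asarray(entry, float)
    along = rel @ d                                   # distance along the drill axis
    radial = np.linalg.norm(rel - along[..., None] * d, axis=-1)
    mask = (along >= 0) & (along <= length) & (radial <= radius)
    return float(volume[mask].mean()), mask
```

Tilting the `direction` vector models the varying drill angle: the cylinder sweeps through different voxels, so the average HU (and the size of the mask to evaluate) changes with the angle, consistent with the correlation between drill angle and computation time noted above.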
The main goal of this paper is to propose an image processing method that determines mandibular density automatically from computed tomography (CT) data, in order to provide doctors with accurate information about where to drill and place an abutment screw in the jaw bone. The experiment was performed on computed tomography data of the jaw bones of two individuals, and the angle between the drill and the vertical axis of the Cartesian coordinate system was varied from 10° to 25° in 5° steps. The results showed that, regardless of the angle and the diameter of the drill, a cylinder in which drilling is feasible was found. There was also a positive correlation between the drill angle and time complexity, but a negative correlation between the drill diameter and time complexity.
In this study, the development of large-scale maps using mosaic methods and its results are presented. In the first part of the study, we compared the algorithms widely used for feature detection and extraction, an important part of mosaic methods. The experiment was performed on 150 field images from SenseFly. The test results show that when the SURF algorithm was used to detect features for the mosaic, the peak signal-to-noise ratio (PSNR) was 30.6381 with an operating time of 120.2 seconds, higher than the other algorithms. However, when the SURF algorithm is used for feature selection, its operating time and memory usage grow with the number of detected features. In the next part, to overcome this drawback, we divided the image into nine parts and used neighboring parts of the image to select detected features. The test results show a PSNR of 30.1347 and an operating time of 48 seconds, so the modified approach outperformed the others.
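The PSNR figure used for evaluation above is the standard measure; a minimal sketch, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference, result, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and a result."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(result, float)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)  # higher is better
```

Higher PSNR means the stitched mosaic deviates less from the reference, which is why 30.6381 versus 30.1347 describes a small quality trade-off against a large runtime gain.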