What is the Tan Delta Test?
A pure insulator, when connected between line and earth, behaves as a capacitor. In an ideal insulator, the insulating material, which also acts as the dielectric, is 100% pure, so the current passing through the insulator has only a capacitive component. There is no resistive component of current flowing from line to earth, because an ideal insulating material contains no impurities.
In a pure capacitor, the capacitive current leads the applied voltage by 90°.
In practice, an insulator cannot be made 100% pure. Moreover, as insulators age, impurities such as dirt and moisture enter them. These impurities provide a conductive path for current, so the leakage current flowing from line to earth through the insulator acquires a resistive component.
For a good insulator, this resistive component of the leakage current is quite small. In other words, the health of an electrical insulator can be judged by the ratio of the resistive component to the capacitive component of the leakage current. For a good insulator, this ratio is quite low. It is commonly known as tan δ, or tan delta.
Sometimes it is also referred to as the dissipation factor.
Thus, tan δ = I_R / I_C, where I_R and I_C are the resistive and capacitive components of the leakage current.
NB: This angle δ is known as the loss angle.
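As a worked illustration of this ratio, here is a minimal sketch that models the insulation as the usual parallel R-C equivalent circuit, for which tan δ = I_R / I_C = 1/(2πfCR); the component values and test frequency are illustrative assumptions, not values from the article.

```python
import math

def tan_delta(resistance_ohm: float, capacitance_f: float, freq_hz: float) -> float:
    """Dissipation factor of a parallel R-C insulation model.

    With a parallel R-C equivalent circuit, I_R = V/R and I_C = V*2*pi*f*C,
    so tan(delta) = I_R / I_C = 1 / (2*pi*f*C*R), independent of V.
    """
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f * resistance_ohm)

# Illustrative (assumed) values: a bushing modelled as 500 pF in parallel
# with a 100 Gohm leakage resistance, tested at 0.1 Hz.
td = tan_delta(resistance_ohm=100e9, capacitance_f=500e-12, freq_hz=0.1)
print(f"tan delta = {td:.4f}, loss angle = {math.degrees(math.atan(td)):.2f} deg")
```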
On which equipment can Tan Delta testing be done?
Tan δ testing can be performed on various types of power equipment used in substations, such as transformers, windings, current transformers, potential transformers, transformer bushings, cables, and generators. It assesses the quality of insulation and is usually performed in combination with other tests such as TTR (transformer turns ratio) and WRM (winding resistance measurement).
Why is Tan δ testing done?
The main purpose of the tan delta test is to ensure the secure and reliable operation of the transformer. Measuring the dissipation factor and capacitance reveals the insulation behavior of both the bushings and the windings.
A change in the capacitance value, for instance, indicates partial breakdown in bushings or mechanical displacement of the windings. Insulation degradation and equipment aging increase the losses, and this energy is dissipated as heat. The magnitude of these losses is quantified by the dissipation factor.
With the tan delta test, the dissipation factor and capacitance can be measured at the required frequencies, so aging effects can be identified early and corrective action taken.
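To make the link between losses and heat concrete, here is a minimal sketch of the dielectric loss formula P = V² · 2πf · C · tan δ; the voltage, capacitance, and tan δ values are assumed for illustration only.

```python
import math

def dielectric_loss_watts(v_rms: float, freq_hz: float,
                          capacitance_f: float, tan_d: float) -> float:
    """Dielectric loss dissipated as heat: P = V^2 * 2*pi*f * C * tan(delta)."""
    return v_rms ** 2 * 2 * math.pi * freq_hz * capacitance_f * tan_d

# Assumed values: a 500 pF bushing energised at 66 kV, 50 Hz.
for label, td in (("healthy", 0.005), ("aged", 0.05)):
    p = dielectric_loss_watts(66e3, 50, 500e-12, td)
    print(f"{label}: tan delta = {td}, loss = {p:.1f} W")
```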
Since transformers play a crucial role in power systems, we will first discuss tan delta testing of transformers.
Tan Delta Testing Process
The following steps explain the tan delta testing procedure:
The equipment to be tested, such as the cable, potential transformer, bushing, current transformer, or winding, must first be isolated from the system.
A low-frequency test voltage is applied across the equipment whose insulation is to be analyzed.
First, the rated voltage is applied. If the tan delta values at this level are as expected, the applied voltage is increased to twice the rated voltage.
The tan delta values are recorded by the tan delta controller.
A loss angle analyzer connected to the tan delta measuring unit compares the tan delta values at the higher and the normal voltage levels and delivers accurate results (see the sketch after these steps).
Note that the test must be carried out at a very low frequency.
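Here is a minimal sketch of the comparison made by the analyzer, assuming placeholder readings and a placeholder threshold; the rise in tan δ between the two voltage levels is often called the "tip-up".

```python
def assess_tip_up(td_rated: float, td_double: float,
                  max_tip_up: float = 0.001) -> str:
    """Compare tan delta at rated voltage and at twice rated voltage.

    Healthy insulation shows nearly the same tan delta at both levels;
    a marked rise ("tip-up") suggests moisture, voids, or ageing.
    The threshold here is an assumed placeholder, not a standard limit.
    """
    tip_up = td_double - td_rated
    if tip_up <= max_tip_up:
        return f"tip-up {tip_up:.4f}: insulation looks healthy"
    return f"tip-up {tip_up:.4f}: investigate, insulation may be degraded"

print(assess_tip_up(td_rated=0.0042, td_double=0.0044))  # small rise -> healthy
print(assess_tip_up(td_rated=0.0042, td_double=0.0110))  # large rise -> degraded
```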
Testing at a very low frequency is recommended because, as the frequency increases, the capacitive reactance of the insulation (X_C = 1/2πfC) becomes very small, so the capacitive component of the current becomes large. The resistive component, in contrast, is practically constant, since it depends only on the applied voltage and the conductivity of the insulator.
At higher frequencies, therefore, the capacitive current is large, and the magnitude of the vector sum of the capacitive and resistive current components becomes very high. The power the test source must supply would then become unacceptably large. Because of this power constraint, a very low frequency test voltage (VLF, commonly around 0.1 Hz) is used for dissipation factor measurement.
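To put numbers to this power argument, the sketch below compares the capacitive current and the apparent power the test source must deliver for the same specimen at 50 Hz and at 0.1 Hz; the specimen capacitance, test voltage, and frequencies are illustrative assumptions, not values from the article.

```python
import math

def capacitive_current(v_rms: float, freq_hz: float, capacitance_f: float) -> float:
    """Capacitive current drawn by the specimen: I_C = V * 2*pi*f * C."""
    return v_rms * 2 * math.pi * freq_hz * capacitance_f

v_rms, cap = 20e3, 0.5e-6  # assumed: a 0.5 uF cable section tested at 20 kV
for f in (50.0, 0.1):
    i_c = capacitive_current(v_rms, f, cap)
    s_va = v_rms * i_c  # apparent power the test source must deliver
    print(f"{f:>5.1f} Hz: I_C = {i_c * 1e3:8.2f} mA, source rating ~ {s_va / 1e3:7.2f} kVA")
```

At 50 Hz the source would need tens of kVA for this specimen, while at 0.1 Hz a fraction of a kVA suffices, which is why tan delta test sets use VLF voltages.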
What are the Different Modes of the Tan Delta Test?
When it comes to the tan delta test, there are essentially three modes of power factor testing:
GST Guard: This measures the leakage current flowing to ground, while current flowing through the red or blue leads is guarded out of the measurement. In UST mode, by contrast, the ground is treated as the guard, because the grounded terminals are not measured: current is measured only through the blue or red leads, and the current flowing through the ground lead is automatically bypassed back to the AC source and thus excluded from the measurement.
UST Mode: This is used to measure the insulation between the ungrounded terminals of the equipment. Here an individual section of insulation is isolated and analyzed with no other insulation connected to it.
GST Mode: In this final mode of operation, both leakage paths are measured by the test set. The current, capacitance, and watts-loss values measured in the UST and GST-Guard modes should together equal the corresponding GST values. This gives a complete picture of the test.
If the sum of the GST-Guard and UST values does not equal the GST values, it indicates a fault in the test set or incorrectly connected test terminals.
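This consistency check is easy to express in code; the sketch below compares the combined UST and GST-Guard readings against the GST reading, with an assumed 2% tolerance and assumed readings for illustration.

```python
def gst_consistency_ok(ust: float, gst_guard: float, gst: float,
                       rel_tol: float = 0.02) -> bool:
    """Check that the UST and GST-Guard readings add up to the GST reading.

    Works for current, capacitance, or watts-loss values alike.
    The 2% tolerance is an assumed figure for illustration.
    """
    return abs((ust + gst_guard) - gst) <= rel_tol * abs(gst)

# Assumed capacitance readings in pF:
print(gst_consistency_ok(ust=230.0, gst_guard=410.0, gst=641.0))  # True: consistent
print(gst_consistency_ok(ust=230.0, gst_guard=410.0, gst=700.0))  # False: check the setup
```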
This completes our detailed explanation of the tan delta test: what it is, its principle, its purpose, its modes, and its testing procedure.
To learn about the LV-to-earth, HV-to-earth, and LV-HV tan delta testing methodologies, keep reading the KPM Technologies TECH BLOG and subscribe for updates on technological developments in the power and testing sectors.