Search Results
- VGAN: generalizing MSE GAN and WGAN-GP for robot fault diagnosis
  Publication. Pu, Ziqiang; Cabrera, Diego; Li, Chuan; Valente de Oliveira, José
  Generative adversarial networks (GANs) have shown their potential for data generation. However, this type of generative model often suffers from oscillating training processes and mode collapse, among other issues. To mitigate these, this work proposes a generalization of both the mean squared error (MSE) GAN and the Wasserstein GAN (WGAN) with gradient penalty, referred to as VGAN. Within the framework of the conditional WGAN with gradient penalty, VGAN resorts to the Vapnik V-matrix-based criterion, which generalizes the MSE. In addition, a novel early-stopping-like strategy is proposed that keeps track of the most suitable model during training. A comprehensive set of experiments is presented on a fault-diagnosis task for an industrial robot, where the generative model is used as a data augmentation tool for dealing with imbalanced datasets. The statistical analysis of the results shows that the proposed model outperforms nine other models, including the vanilla GAN, the conditional WGAN with and without conventional regularization, and the synthetic minority oversampling technique (SMOTE), a classic data augmentation technique.
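  The VGAN abstract above builds on the conditional WGAN with gradient penalty. As illustrative context only, the sketch below shows the standard gradient-penalty term in PyTorch; the `critic(samples, labels)` signature, the tensor shapes, and the `lambda_gp` default are assumptions for illustration, and the sketch does not reproduce the paper's V-matrix-based criterion.

  ```python
  import torch

  def wgan_gradient_penalty(critic, real, fake, labels, lambda_gp=10.0):
      """Standard WGAN-GP term: push the critic's gradient norm toward 1
      on points interpolated between real and generated samples.
      NOTE: illustrative sketch only; the conditional critic signature
      critic(samples, labels) is an assumption, not the paper's code."""
      eps = torch.rand(real.size(0), 1, device=real.device)       # per-sample mixing weights
      interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
      scores = critic(interp, labels)                              # conditional critic scores
      grads = torch.autograd.grad(
          outputs=scores,
          inputs=interp,
          grad_outputs=torch.ones_like(scores),
          create_graph=True,
      )[0]
      # Penalize the squared deviation of the gradient norm from 1
      return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
  ```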
- Sliced Wasserstein cycle consistency generative adversarial networks for fault data augmentation of an industrial robot
  Publication. Pu, Ziqiang; Cabrera, Diego; Li, Chuan; Valente de Oliveira, José
  We investigate the role of the loss function in cycle consistency generative adversarial networks (CycleGANs). Namely, the sliced Wasserstein distance is proposed for this type of generative model. Both the unconditional and the conditional CycleGANs, with and without squeeze-and-excitation mechanisms, are considered. Two data sets are used to evaluate the models: the well-known MNIST and a real-world in-house data set acquired for industrial robot fault diagnosis. A comprehensive set of experiments shows that, for both the unconditional and the conditional cases, the sliced Wasserstein distance outperforms the classic Wasserstein distance in CycleGANs. For the robot fault data augmentation, model compatibilities of 99.73% (conditional case) and 99.21% (unconditional case) were observed. In some cases, the improvement in convergence efficiency exceeded two orders of magnitude.
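  For the sliced Wasserstein distance referenced in the abstract above, a minimal Monte Carlo sketch in PyTorch is given below: samples are projected onto random unit directions and the one-dimensional Wasserstein distances between the sorted projections are averaged. The shapes, the number of projections, and the use of the squared distance are assumptions for illustration, not the paper's implementation.

  ```python
  import torch

  def sliced_wasserstein_distance(x, y, num_projections=64):
      """Monte Carlo estimate of the (squared) sliced Wasserstein-2 distance
      between two equal-sized batches x, y of shape (batch, features).
      NOTE: illustrative sketch only; shapes and defaults are assumptions."""
      d = x.size(1)
      # Random projection directions on the unit sphere
      theta = torch.randn(num_projections, d, device=x.device)
      theta = theta / theta.norm(dim=1, keepdim=True)
      # Project both batches onto each direction -> (batch, num_projections)
      x_proj = x @ theta.t()
      y_proj = y @ theta.t()
      # 1D optimal transport reduces to matching sorted samples
      x_sorted, _ = torch.sort(x_proj, dim=0)
      y_sorted, _ = torch.sort(y_proj, dim=0)
      return (x_sorted - y_sorted).pow(2).mean()
  ```

  In a CycleGAN-style setup, a term of this kind could stand in for the classic Wasserstein objective when comparing batches of real and translated samples, which is the comparison the abstract reports; how the authors integrate it into their training loop is not specified here.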