Generative AI models : a comparative analysis
Publication details: Ghaziabad: MAT Journals, 2024
Edition: Vol. 10(1), Jan-Apr
Description: 32-38p
In: Journal of Computer Science Engineering and Software Testing
Summary: This paper presents a comprehensive comparative analysis of key Generative Artificial Intelligence (GAI) models: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers. The study examines their architectures, training methods, applications, strengths, and shortcomings. GANs are built on a generator-discriminator framework and trained adversarially, while VAEs pair a probabilistic encoder with a decoder. Transformers, in turn, handle long-range dependencies well; the paper explores their performance across domains such as image, text, music, and video generation, using both quantitative metrics and qualitative assessments. Despite their advances, each model has distinctive trade-offs: although GANs can produce high-quality images, they are prone to mode collapse during training. This comparative study offers a valuable reference for newcomers choosing the right Generative AI model for a particular problem, and its findings point the way forward for researchers in the field.

| Item type | Current library | Status | Barcode |
|---|---|---|---|
| Articles Abstract Database | School of Engineering & Technology Archival Section | Not for loan | 2025-0806 |
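The adversarial training the summary attributes to GANs can be illustrated with a minimal sketch. This is not code from the paper; it is a toy NumPy example with hypothetical one-parameter linear models (`g_w` for the generator, `d_w`/`d_b` for the discriminator) on 1-D data, showing only how the two opposing losses are formed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy models (hypothetical): generator scales noise; discriminator
# assigns a logistic "probability of being real" to each sample.
g_w = 0.1            # generator parameter
d_w, d_b = 0.5, 0.0  # discriminator parameters

real = rng.normal(loc=3.0, scale=1.0, size=64)  # stand-in "real" data
noise = rng.normal(size=64)
fake = g_w * noise                              # generator output

p_real = sigmoid(d_w * real + d_b)
p_fake = sigmoid(d_w * fake + d_b)

# Discriminator loss: binary cross-entropy pushing real -> 1, fake -> 0.
d_loss = -np.mean(np.log(p_real + 1e-8) + np.log(1.0 - p_fake + 1e-8))

# Generator loss: fool the discriminator, i.e. push fake -> 1.
g_loss = -np.mean(np.log(p_fake + 1e-8))
```

In full training, gradients of `d_loss` and `g_loss` would update the two models in alternation; the mode collapse mentioned above occurs when the generator settles on a narrow set of outputs that reliably fool the discriminator.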
