Counterfactual Autoencoder for Unsupervised Semantic Learning
Abstract
Deep neural networks (DNNs) are the state of the art in artificial intelligence (AI) applications such as natural language processing (NLP), speech processing, and computer vision. Despite these achievements, deep learning has yet to attain the semantic learning required to reason about data. This lack of reasoning is partly attributable to rote memorization of patterns from millions of training samples, which ignores spatiotemporal relationships. The proposed framework puts forward a novel approach based on variational autoencoders (VAEs): it applies the potential outcomes model to develop counterfactual autoencoders. The framework transforms multimedia input distributions of any kind into a meaningful latent space while offering greater control over how that space is constructed. This makes it possible to model data in a form better suited to answering inference-based queries, which is valuable in reasoning-based AI applications.
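The abstract's core idea, encoding data into a controllable latent space and querying it counterfactually, can be illustrated with a minimal sketch. The following is not the paper's architecture; it is a toy VAE-style encoder/decoder (random, untrained weights) showing the mechanics of a "do"-style intervention on one latent dimension: encode an input, override a latent factor, and compare the factual and counterfactual reconstructions. All names (`encode`, `decode`, `counterfactual`, the dimensions) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real model would learn W_enc / W_dec from data.
D_IN, D_LATENT = 8, 2
W_enc = rng.normal(scale=0.1, size=(D_IN, 2 * D_LATENT))  # outputs [mu, log_var]
W_dec = rng.normal(scale=0.1, size=(D_LATENT, D_IN))

def encode(x):
    # Linear "encoder" producing the parameters of q(z | x).
    h = x @ W_enc
    return h[:D_LATENT], h[D_LATENT:]          # mu, log_var

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    # Linear "decoder" mapping latent z back to input space.
    return z @ W_dec

def counterfactual(x, dim, value):
    """Return the factual reconstruction and the one under do(z[dim] = value)."""
    mu, log_var = encode(x)
    z = reparameterize(mu, log_var)
    z_cf = z.copy()
    z_cf[dim] = value                          # intervene on one latent factor
    return decode(z), decode(z_cf)

x = rng.normal(size=D_IN)
factual, cf = counterfactual(x, dim=0, value=3.0)
```

The two reconstructions differ only through the intervened latent coordinate; in a trained counterfactual autoencoder that coordinate would correspond to a semantically meaningful factor, which is what makes such latent spaces useful for inference-based queries.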