Tell, Don’t Show!: Language Guidance Eases Transfer Across Domains in Images and Videos

by Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker

First submitted to arXiv on: 8 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the authors introduce LaGTran, a novel framework that leverages text supervision to guide the robust transfer of discriminative knowledge from labeled source data to unlabeled target data across domain gaps. They argue that the semantically richer text modality transfers across domains more readily than pixel-space methods, and devise a transfer mechanism in which a text classifier trained on the source domain generates predictions on target text descriptions, which then serve as supervision for the corresponding images. LaGTran is evaluated on challenging datasets such as GeoNet and DomainNet, where it outperforms prior approaches.

Low Difficulty Summary (original content by GrooveSquid.com)
LaGTran is a new way to help machines learn from one kind of data (like pictures) even when it looks different from the data they were trained on. This is useful because the data we want to use sometimes has problems like being blurry or having different lighting. The idea behind LaGTran is that text can help bridge these gaps by providing extra information about what’s in each picture. In tests, LaGTran worked really well and beat other methods at this task.

Keywords

» Artificial intelligence