Feng Wang

I am a first-year PhD student at Johns Hopkins University, where I am fortunate to be advised by Bloomberg Distinguished Professor Alan L. Yuille.

Before that, I was an M.S. student at Tsinghua University, where I worked under the guidance of Prof. Hairong Lv. I also spent a wonderful time interning at Microsoft Research and UIUC.

My current research interests lie at the intersection of computer vision and natural language processing, in particular vision-language understanding and multimodal content generation.

wangf3014 [at] gmail [dot] com / Github / Google Scholar

News

[2023.03] I was admitted to JHU CS and will be working with Prof. Alan Yuille!

[2023.01] One paper accepted to ICLR 2023!

[2022.07] Started an internship with the MSRA NLC group!

[2022.07] One paper accepted to ECCV 2022!

...

Publications


CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation
Feng Wang, Huiyu Wang, Chen Wei, Alan Yuille, Wei Shen
ECCV, 2022 | arXiv / code
We propose a dense (pixel-wise) self-supervised contrastive learning method called CP2, which learns both image- and pixel-level representations. By finetuning CP2-pretrained models on PASCAL VOC, we obtain 78.6% mIoU with a ResNet-50 and 79.5% with a ViT-S.


Learning to Decompose Visual Features with Latent Textual Prompts
Feng Wang, Manling Li, Xudong Lin, Hairong Lv, Alexander G. Schwing, Heng Ji
ICLR, 2023 | arXiv
We propose a novel vision-language model called Decomposed Feature Prompting (DeFo for short), which decouples the language inputs from the classes to be inferred and learns to extract detailed visual features with textual prompts.


SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference
Feng Wang, Jieru Mei, Alan Yuille
Preprint, under review | arXiv | code
We present a zero-shot semantic segmentation model called SCLIP (Segmentation-adapted CLIP model), which leverages our newly proposed correlative self-attention mechanism and allows training-free adaptation to semantic segmentation tasks with CLIP.


Dual Prompt Tuning for Domain-Aware Federated Learning
Guoyizhe Wei, Feng Wang, Anshul Shah, Rama Chellappa
Preprint, under review | arXiv
We address the challenges of domain shift in vision-language inference by leveraging the technique of prompt learning for both the image and text encoders in CLIP, which facilitates domain adaptation over decentralized and non-iid data.


Boost Neural Networks by Checkpoints
Feng Wang, Guoyizhe Wei, Qiao Liu, Jinxiang Ou, Xian Wei, Hairong Lv
NeurIPS, 2021 | arXiv
We propose a novel checkpoint ensemble called Checkpoint Boosted Neural Networks (CBNN), where a boosting scheme is utilized to accelerate model convergence and maximize checkpoint diversity. The method's improved performance is supported by a theoretical analysis.


Gradient Boosting Forest: a Two-Stage Ensemble Method Enabling Federated Learning of GBDTs
Feng Wang, Jinxiang Ou, Hairong Lv
ICONIP, 2021 | paper
We propose a novel GBDT model which extends each decision tree of a GBDT to an ensemble of trees trained on different data splits. Our method allows decentralized training and achieves more robust performance.

Last update: Dec. 2023