
Undergraduate Research Assistant Opportunities at the MURGE Lab on Large Language Model Reasoning, Uncertainty, Multi-agent Interactions, and Multimodality

Post Date: 04/12/2024

Description

Advisor: Prof. Mohit Bansal (https://www.cs.unc.edu/~mbansal/)
Mentor: Elias Stengel-Eskin (https://esteng.github.io)
Group: MURGE Lab, UNC-CH (https://murgelab.cs.unc.edu/)
Duration (flexible): April 15 – September 15, 2024 (5 months); at least 20 hours per week commitment.
Role: Research Assistant (with an RA stipend)
Contact: Elias Stengel-Eskin (esteng@cs.unc.edu) with some basic information about yourself, your transcripts, your CV, and a description of any prior machine learning / programming experience you have.

Desired Qualifications (good to have):
– Undergraduate or Master's students with a Computer Science, Mathematics/Statistics, or Linguistics/Cognitive Science background.
– A strong foundation in machine learning and deep learning, including familiarity with model architectures such as transformers.
– Familiarity with deep learning frameworks such as PyTorch and the Hugging Face libraries.
– Preferred: experience with code generation (semantic parsing, text-to-code) or vision-language tasks.
– Strong analytical abilities: asking the right questions, forming a hypothesis, and designing experiments to test it.
– An interest in a future research career, graduate school (Master's/PhD), or machine learning jobs.

Project Description:

Large language models allow people to interact with digital systems using natural language. However, language is inherently ambiguous and underspecified, which can lead to high model uncertainty. This project will focus on enabling safe and robust interactions that allow people to make informed choices about when to trust model outputs. Core topics include: how to better model uncertainty, how to leverage multiple models to address ambiguity, how to intelligently obtain information to reduce uncertainty, and how to produce outputs that help users form their own uncertainty estimates. Topics may also include multimodality, multimodal generation, and vision-language tasks.
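
To give a concrete flavor of the calibration/confidence-estimation direction, here is a minimal sketch (illustrative only, not the lab's actual method; the model name, prompt, and answer are placeholder assumptions) of scoring a candidate answer under a Hugging Face causal language model and reading off a crude confidence value:

    # Minimal sketch (illustrative only, not the lab's method): score a
    # candidate answer under a causal LM and read off a crude confidence.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder model; any causal LM works here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    prompt = "Q: What is the capital of France?\nA:"  # hypothetical example
    answer = " Paris"

    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab_size)

    # Log-probability the model assigns to each token given its context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(len(targets)), targets]

    # Keep only the answer tokens and average; the exponentiated mean is a
    # length-normalized sequence probability, a common (if crude) score.
    answer_lp = token_lp[prompt_ids.shape[1] - 1:]
    confidence = answer_lp.mean().exp().item()
    print(f"Confidence for {answer!r}: {confidence:.3f}")

Length-normalized sequence probability is only one baseline; much of the research below asks how to produce scores that better track actual correctness.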

Research Areas:
– Multi-agent interactions and collaboration.
– Acquiring information for uncertainty reduction.
– Calibration and confidence estimation in LLM reasoning.
– Attributable and interpretable text generation.
– Multimodal reasoning and generation.

Some recent and representative papers in this direction:
1. Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models, ICLR 2024 (https://arxiv.org/abs/2310.05861)
2. ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs (https://arxiv.org/abs/2309.13007)
3. MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models (https://arxiv.org/abs/2402.01620)
4. Soft Self-Consistency Improves Language Model Agents (https://arxiv.org/abs/2402.13212)
5. Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training (https://arxiv.org/abs/2403.02325)
6. Did You Mean…? Confidence-based Trade-offs in Semantic Parsing, EMNLP 2023 (https://arxiv.org/abs/2303.16857)
7. Zero and Few-shot Semantic Parsing with Ambiguous Inputs, ICLR 2024 (https://arxiv.org/abs/2306.00824)
8. ReGAL: Refactoring Programs to Discover Generalizable Abstractions (https://arxiv.org/abs/2401.16467)
9. GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations (https://arxiv.org/abs/2402.12348)
10. VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (https://arxiv.org/abs/2309.15091)
11. Any-to-Any Generation via Composable Diffusion (https://arxiv.org/abs/2305.11846)

For inquiries or to express interest, please email esteng@cs.unc.edu.

Faculty Advisor: Mohit Bansal
Research Supervisor: Elias Stengel-Eskin
Application Deadline: 08/15/2024