Accepted Papers
Morning Session
- SQA3D: Situated Question Answering in 3D Scenes
Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, Siyuan Huang
- Retrieval-Augmented Multimodal Language Modeling
Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
- On the Aggregation of Rules for Knowledge Graph Completion
Patrick Betz, Stefan Lüdtke, Christian Meilicke, Heiner Stuckenschmidt
- Large Language Model Programs
Imanol Schlag, Sainbayar Sukhbaatar, Asli Celikyilmaz, Wen-tau Yih, Jason Weston, Jürgen Schmidhuber, Xian Li
- LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul R Chalamala, Shixing Yu, Saad Godil, Ryan Prenger, Animashree Anandkumar
- Semantically Adversarial Scene Generation with Explicit Knowledge Guidance for Autonomous Driving
Wenhao Ding, Haohong Lin, Bo Li, Ding Zhao
- Towards true discovery of the differential equations
Alexander Hvatov, Roman Titov
- Neural Priority Queues for GNNs
Rishabh Jain, Petar Veličković, Pietro Lió
- VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming
Eleonora Misino, Giuseppe Marra, Emanuele Sansone
- Explanatory Learning: Towards Artificial Scientific Discovery
Antonio Norelli, Giorgio Mariani, Luca Moschella, Andrea Santilli, Giambattista Parascandolo, Simone Melzi, Emanuele Rodola
- A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs
Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Louis-Pascal Xhonneux, Ming Zhang, Maxime Gazeau, Jian Tang
- DiversiGATE: A Comprehensive Framework for Reliable Large Language Models
Shima Imani, Ali Beyram, Harsh Shrivastava
- Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal
Emanuele Marconato, Gianpaolo Bontempo, Elisa Ficarra, Simone Calderara, Andrea Passerini, Stefano Teso
- OC-NMN: Object-centric Compositional Neural Module Network for Generative Visual Analogical Reasoning
Rim Assouel, Pau Rodriguez, Perouz Taslakian, David Vazquez, Yoshua Bengio
- Look, Remember and Reason: Visual Reasoning with Grounded Rationales
Apratim Bhattacharyya, Sunny P Panchal, Mingu Lee, Reza Pourreza, Pulkit Madan, Roland Memisevic
- Describe, Explain, Plan and Select: Interactive Planning with LLMs Enables Open-World Multi-Task Agents
Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, Yitao Liang
- Recursive Algorithmic Reasoning
Dulhan Jayalath, Jonas Jürß, Petar Veličković
- EXPLAIN, AGREE and LEARN: A Recipe for Scalable Neural-Symbolic Learning
Victor Verreet, Lennert De Smet, Emanuele Sansone
- Semantic Conditioning at Inference: Improving Neural-based Systems with Logical Background Knowledge
Arthur Ledaguenel, Céline Hudelot, Mostepha Khouadjia
- Continuous-Discrete Message Passing for Graph Logic Reasoning
Cristóbal Corvalán Morbiducci, Francesco Alesiani, Markus Zopf
- Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples
Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, He He
- Evidence of Meaning in Language Models Trained on Programs
Charles Jin, Martin Rinard
- Neurosymbolic AI for Reasoning on Biomedical Knowledge Graphs
Lauren Nicole DeLong, Ramon Fernández Mir, Zonglin Ji, Fiona Niamh Coulter Smith, Jacques D. Fleuriot
- Bayesian Neural Networks with Domain Knowledge
Dylan Sam, Rattana Pukdee, Daniel P Jeong, Yewon Byun, Zico Kolter
- Exposing Attention Glitches with Flip-Flop Language Modeling
Bingbin Liu, Jordan Ash, Surbhi Goel, Akshay Krishnamurthy, Cyril Zhang
- Does End-to-End Visual Pretraining Help Reasoning?
Chen Sun, Calvin Luo, Xingyi Zhou, Anurag Arnab, Cordelia Schmid
- On the Planning Abilities of Large Language Models - A Critical Investigation
Karthik Valmeekam, Matthew D Marquez, Sarath Sreedharan, Subbarao Kambhampati
- Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati
- On The Ability of Transformers To Learn Recursive Patterns
Dylan Zhang, Curt Tigges, Talia Ringer, Stella Biderman, Maxim Raginsky
- Reasoning Ability Emerges in Large Language Models as Aggregation of Reasoning Paths
Xinyi Wang, William Yang Wang
- Exploring the Impact of Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning
Roshanak Mirzaee, Parisa Kordjamshidi
- Plan, Eliminate, and Track: Language Models are Good Teachers for Embodied Agents
Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, Shrimai Prabhumoye
- SPRING: Studying Papers and Reasoning to play Games
Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li
Afternoon Session
- Parallel Algorithms Align with Neural Execution
Valerie Engelmayer, Dobrik G. Georgiev, Petar Veličković
- Learning and Leveraging Verifiers to Improve Planning Capabilities of Pre-trained Language Models
Daman Arora, Subbarao Kambhampati
- Latent Space Representations of Neural Algorithmic Reasoners
Viktor Mirjanic, Razvan Pascanu, Petar Veličković
- Towards More Likely Models for AI Planning
Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan
- A Pseudo-Semantic Loss for Deep Generative Models with Logical Constraints
Kareem Ahmed, Kai-Wei Chang, Guy Van den Broeck
- Asynchronous Algorithmic Alignment with Cocycles
Andrew J Dudzik, Tamara von Glehn, Razvan Pascanu, Petar Veličković
- Learning with Explanation Constraints
Rattana Pukdee, Dylan Sam, Maria-Florina Balcan, Pradeep Ravikumar
- BoardgameQA: Natural Language Reasoning with Contradictory Information
Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, Deepak Ramachandran
- Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting
Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo
- Interpretability of Transformers: a case study with Dyck grammars
Kaiyue Wen, Yuchen Li, Bingbin Liu, Andrej Risteski
- dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning
Renato L Geh, Jonas L Goncalves, Igor Silveira, Denis D Maua, Fabio Cozman
- Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models
Yunhao Ge, Jie Ren, Jiaping Zhao, Kaifeng Chen, Andrew Gallagher, Laurent Itti, Balaji Lakshminarayanan
- Disaster Occurrence Detection through GNN Models using Disaster Knowledge Graphs
Seonhyeong Kim, Irshad Khan, Young-Woo Kwon
- Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma
- Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting
Hejie Cui, Xinyu Fang, Zihan Zhang, Ran Xu, Xuan Kan, Xin Liu, Manling Li, Yangqiu Song, Carl Yang
- Towards A Unified Neural Architecture for Visual Recognition and Reasoning
Calvin Luo, Boqing Gong, Ting Chen, Chen Sun
- How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding
Yuchen Li, Yuanzhi Li, Andrej Risteski
- Large Language Models are Zero-Shot Multi-Tool Users
Luca Beurer-Kellner, Marc Fischer, Martin Vechev
- Training LLMs with Noisy Algorithmic Chain of Thought
Alexander Havrilla
- The Role of Semantic Parsing in Understanding Procedural Text
Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, James Allen
- Partial Label Learning meets Active Learning: Enhancing Annotation Efficiency through Binary Questioning
Shivangana Rawat, Chaitanya Devaguptapu, Vineeth Balasubramanian
- Learning to Initiate and Reason in Event-Driven Cascading Processes
Yuval Atzmon, Eli Meirom, Shie Mannor, Gal Chechik
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models
Long Lian, Boyi Li, Adam Yala, Trevor Darrell
- Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning
Xiaoming Shi, Siqiao Xue, Kangrui Wang, Fan Zhou, James Y Zhang, Jun Zhou, Chenhao Tan, Hongyuan Mei
- What’s left can’t be right: The remaining positional incompetence of contrastive vision-language models
Nils Höhing, Ellen Rushe, Anthony Ventresque
- Deep Neuro-Symbolic Weight Learning in Neural Probabilistic Soft Logic
Connor Pryor, Charles Dickens, Lise Getoor
- Equivariance Is Not All You Need: Characterizing the Utility of Equivariant Graph Neural Networks for Particle Physics Tasks
Savannah Thais, Daniel Murnane
- Revealing the Intrinsic Ability of Generative Language Models in Relation Prediction
Qi Li, Lyuwen Wu, Luoyi Fu, Xinbing Wang, Lei Zhou, Chenghu Zhou, Shiyu Liang
- Augmenting the Knowledge to Large Model from Federated Small Models
Miru Kim, Minhae Kwon
- Explicit Planning Helps Language Models in Logical Reasoning
Hongyu Zhao, Kangrui Wang, Mo Yu, Hongyuan Mei
- Evaluating the Causal Reasoning Abilities of Large Language Models
Isha Puri, Himabindu Lakkaraju