Oliver Club (joint with CS department)
Abstract: Since Euclid's Elements, the ideal of mathematics has been the combination of human ingenuity and uncompromising rigor. Only at the end of the twentieth century did it become feasible to verify non-trivial mathematical statements mechanically from a few foundational axioms. Even very simple statements can require hundreds of elementary "kernel" steps, but reducing everything to such steps allows formal proofs to be checked with extreme certainty. However, our best current proof automation lags behind human creativity in producing such proofs from scratch. Here I will argue that this is about to change fundamentally in the coming decade. Modern deep learning methods, tuned for specialized hardware capable of multiplying huge matrices at extreme speed, can both perform complex reasoning and formalize mathematical statements from natural language. In this talk, I will outline a realistic long-term path that can lead us to automating mathematical reasoning at a level rivaling human creativity, while providing the same rigor as existing formal proof checking methods. This approach is based on a feedback loop between two deep learning models, a formalizer and a prover, both implemented as large retrieval-augmented neural language models. I will present a short introduction to these methods alongside initial results demonstrating the feasibility of the most crucial parts of this plan.
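
A minimal sketch of the kind of formalizer-prover feedback loop described above, for readers who want a concrete picture. All names here (formalize, prove, kernel_check, RetrievalStore) are hypothetical placeholders standing in for large retrieval-augmented models and a trusted proof kernel; they are not components of any actual system presented in the talk.

```python
# Hypothetical sketch of a formalizer-prover feedback loop (illustrative only).
# A formalizer turns natural-language statements into formal ones, a prover
# searches for proofs, a trusted kernel checks them, and only kernel-verified
# (statement, proof) pairs flow back into the shared retrieval store that both
# models condition on -- so the loop can improve without compromising rigor.

from dataclasses import dataclass, field

@dataclass
class RetrievalStore:
    """Shared store of verified formal statements and proofs (assumed)."""
    examples: list = field(default_factory=list)

    def add(self, statement: str, proof: str) -> None:
        self.examples.append((statement, proof))

def formalize(nl_statement: str, store: RetrievalStore) -> str:
    """Placeholder for a retrieval-augmented formalizer model."""
    return f"theorem t : {nl_statement}"   # stand-in output

def prove(formal_statement: str, store: RetrievalStore) -> str:
    """Placeholder for a retrieval-augmented prover model."""
    return "sorry"                          # stand-in proof attempt

def kernel_check(formal_statement: str, proof: str) -> bool:
    """Placeholder for the trusted proof checker (kernel)."""
    return proof != "sorry"                 # stand-in verdict

def feedback_loop(nl_corpus: list[str], rounds: int = 3) -> RetrievalStore:
    store = RetrievalStore()
    for _ in range(rounds):
        for nl_statement in nl_corpus:
            formal = formalize(nl_statement, store)
            proof = prove(formal, store)
            if kernel_check(formal, proof):
                store.add(formal, proof)    # verified data only
    return store
```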