ACMES 2, Computationally Assisted Mathematical Discovery and Experimental Mathematics, was a four-day conference held on May 12 – 15, 2016 at Western University. Computational Discovery, also called Experimental Mathematics, is the use of symbolic and numerical computation to discover patterns, to identify particular numbers and sequences, and to gather evidence in support of specific mathematical assertions that may themselves arise by computational means. In recent decades, computer-assisted mathematical discovery has profoundly transformed the strategies used to expand mathematical knowledge. In addition to symbolic and numerical computation, a new trend that shows tremendous potential is the use of novel visualization techniques. The current situation was well summarized by a recent ICMI study: “The latest developments in computer and video technology have provided a multiplicity of computational and symbolic tools that have rejuvenated mathematics and mathematics education. Two important examples of this revitalization are experimental mathematics and visual theorems.” The program and further details on the conference can be accessed on the ACMES website.

The ACMES 2 video playlist is now complete and includes videos from each talk given at the conference. Titles and abstracts of videos just added (from day 3) are listed below.


Ernest Davis (Department of Computer Science, New York University):
Automating the foundations of physics, starting from the experiments

In mathematics, large parts of the project initiated by Whitehead and Russell have been accomplished. Using theorem-verification technology, online libraries of mathematical proofs have been assembled which go all the way from foundational axioms to very deep theorems. Suppose that you want to do the same thing for physics; and suppose that you want the foundational point to be, not the fundamental laws of physics as we have determined them, but the experiments and observations from which the laws are derived. What would such a derivation look like? I will discuss some aspects of how such a project could reasonably be formulated, and what would be involved in it.

James Hughes (Department of Computer Science, Western University):
Finding Nonlinear Relationships in fMRI Time Series

The brain is an intrinsically nonlinear system, yet the dominant methods used to generate network models of functional connectivity from fMRI data use linear methods. Although these approaches have been used successfully, they are limited in that they can find only linear relations within a system we know to be nonlinear. This study employs a highly specialized genetic programming system which incorporates multiple enhancements to perform symbolic regression, a sophisticated and computationally rigorous type of regression analysis that searches for declarative mathematical expressions to describe relationships in observed data with any provided set of basis functions. Publicly available fMRI data from the Human Connectome Project were segmented into meaningful regions of interest and highly nonlinear mathematical expressions describing functional connectivity were generated. These nonlinear expressions exceed the explanatory power of traditional linear models and allow for more accurate investigation of the underlying physiological connectivities.
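
The core loop of symbolic regression can be conveyed with a deliberately minimal sketch: generate candidate expression trees, score each against observed data, and keep the best. Everything below (the tiny operator set, the random-search strategy, the toy target $t = xy + x$) is illustrative only; the system described in the talk is a far more sophisticated genetic-programming package with many enhancements.

```python
import random

# Toy symbolic-regression sketch (illustrative only): random search over
# small expression trees built from {+, *}, two variables, and constants.
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def random_tree(depth):
    """Return a random expression tree: a leaf or (op, left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 'y', round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x, y):
    if tree == 'x':
        return x
    if tree == 'y':
        return y
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x, y), evaluate(right, x, y))

def mse(tree, data):
    """Mean squared error of a candidate expression on the data set."""
    return sum((evaluate(tree, x, y) - t) ** 2 for x, y, t in data) / len(data)

random.seed(0)
# Hypothetical target relationship: t = x*y + x (nonlinear in the inputs).
data = [(i / 3, j / 3, (i / 3) * (j / 3) + i / 3)
        for i in range(-5, 6) for j in range(-5, 6)]
best = min((random_tree(3) for _ in range(5000)), key=lambda t: mse(t, data))
```

A real genetic-programming system evolves a population with crossover and mutation rather than sampling trees independently, but the fitness-driven search through a space of symbolic expressions is the same basic idea.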

Bert Baumgartner (Department of Philosophy, University of Idaho):
Spatial Dynamics of Mathematical Models of Opinion Formation

The Voter Model and variations of it are used to study the mathematical principles of opinion dynamics. Common aims in this area include the calculation of the probability of reaching consensus, time to consensus, distribution of cluster sizes, etc. We introduce a stochastic spatial agent-based model of opinion dynamics that includes a spectrum of opinion strengths and various possible rules for how the opinion type and strength of one individual affect the influence that individual has on others. Through simulations of the model, we find that even a small amount of amplification of opinion strength through interaction with like-minded neighbors can tip the scales in favor of polarization and deadlock. Moreover, the spatial patterns that emerge and their dynamics resemble surface tension and motion by mean curvature. While these are also observed in the Threshold Voter Model, their emergence is the result of a very different mechanism. Finally, we compare the time dynamics of our spatial model with an ODE version.
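
For readers unfamiliar with the baseline dynamic, here is a minimal sketch of the classical voter model on a ring (pure Python, illustrative only; the model in the talk adds opinion strengths and amplification rules on top of this basic copying rule):

```python
import random

def voter_model(n=50, steps=20000, seed=1):
    """Classical voter model on a ring of n sites: at each step a random
    site copies the opinion of a random neighbour.  Returns the consensus
    opinion and the step at which consensus occurred, or (None, steps) if
    consensus was not reached within the step budget."""
    rng = random.Random(seed)
    state = [rng.choice([0, 1]) for _ in range(n)]
    for t in range(steps):
        i = rng.randrange(n)
        state[i] = state[(i + rng.choice([-1, 1])) % n]
        if len(set(state)) == 1:          # consensus reached
            return state[0], t
    return None, steps

opinion, t_consensus = voter_model()
```

Quantities like the consensus probability and the distribution of consensus times, mentioned in the abstract, are estimated by running many such simulations with different seeds.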

Max Alekseyev (Department of Mathematics, George Washington University):
Computational methods for solving exponential and exponential-polynomial Diophantine equations

We present computational methods for solving Diophantine equations of two forms:

$A\cdot P^m + B\cdot Q^n + C\cdot R^k = 0$ and $A\cdot m^2 + B\cdot m + C = D\cdot Q^n$,

where $A,B,C,D,P,Q,R$ are given integers, $P,Q,R>0$, and $m,n,k$ are unknowns. The methods are based on modular arithmetic and aimed at bounding the values of the variables, which then enables solving the equations by exhaustive search. We illustrate the workflow of the methods on Diophantine equations such as $4 + 3^n - 7^k = 0$ and $2m^2 + 1 = 3^n$ and show how one can compute all their solutions.
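
As a sketch of the "bound, then search exhaustively" strategy, the second example equation can be attacked directly once a bound on $n$ is in hand. The talk's methods prove such bounds via modular arithmetic; here the cutoff `n_max` is simply posited:

```python
from math import isqrt

def solve_2m2_plus_1_eq_3n(n_max=200):
    """Find all (m, n) with 2*m^2 + 1 = 3^n and 0 <= n <= n_max.
    Equivalent test: (3^n - 1)/2 must be a perfect square."""
    solutions = []
    p = 1                          # p tracks 3^n
    for n in range(n_max + 1):
        s = (p - 1) // 2           # candidate value of m^2
        m = isqrt(s)
        if m * m == s:
            solutions.append((m, n))
        p *= 3
    return solutions
```

Running this recovers the familiar small solutions such as $(m, n) = (1, 1)$, $(2, 2)$ and $(11, 5)$; the hard part, which the talk addresses, is proving that no solutions exist beyond the bound.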

Karl Dilcher (Dalhousie University):
Derivatives and fast evaluation of the Witten zeta function

We study analytic properties of the Witten zeta function ${\mathcal W}(r,s,t)$, which is also named after Mordell and Tornheim. In particular, we evaluate the function ${\mathcal W}(s,s,\tau s)$ ($\tau>0$) at $s=0$ and, as our main result, find the derivative of this function at $s=0$, which turns out to be surprisingly simple. These results were first conjectured using high-precision calculations based on an identity due to Crandall that involves a free parameter and provides an analytic continuation. This identity was also the main tool in the eventual proofs of our results. Finally, we derive special values of a permutation sum and study an alternating analogue of ${\mathcal W}(r,s,t)$. (Joint work with Jon Borwein).
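
For reference, the Witten zeta function discussed here (also called the Mordell–Tornheim zeta function) is the double sum

```latex
\mathcal{W}(r,s,t) \;=\; \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}
\frac{1}{m^{r}\, n^{s}\, (m+n)^{t}},
```

so that ${\mathcal W}(s,s,\tau s)$ restricts it to a ray through the origin of the parameter space, along which the values and derivative at $s=0$ are studied.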

Ann Johnson (Department of Science & Technology Studies, Cornell University):
Toward a Computational Culture of Prediction: The Co-Evolution of Computing Machines, Computational Methods, and the World around us

The ability to make reliable predictions based on robust and replicable methods is possibly the most distinctive and important advantage claimed for scientific knowledge over other types of knowledge. Because of the common use of computers and computational tools, prediction in science and engineering is playing a much more important and ever-present role in 21st-century knowledge production. Computers and simulation methods play prominent roles in a wide range of present-day scientific and engineering research. Without doubt, the computer, computational science, and scientific and engineering research have all mutually shaped one another—what is computationally possible informs the questions scientists and engineers ask, and those questions, in part, shape the development of new hardware and software. These mutual influences have been frequently examined, largely through the origins of scientific computing and the application of computers to a series of scientific disciplines and problems in the 1940s and 50s. However, I want to focus on an even more recent development, namely to argue that the wide and relatively cheap availability of computing power, particularly through mature networked desktop computing or the so-called “PC (personal computer) Revolution,” has triggered—but, to be clear, not made inevitable—a reorientation in the practices of scientists and engineers. My thesis is that substantial changes have occurred in research practices and culture over the past decades and that these developments have been made possible by everyday access to simulation methods for a wide variety of technical actors. Furthermore, these changes help to generate a highly exploratory mode of research, which, in turn, amplifies the character and role of prediction in science.
Such exploratory and iterative strategies introduce a design mode of knowledge production, where science becomes more engineering-like by focusing on 1) making things virtually, 2) testing their performance in computer simulations, and 3) refining the design. This is true of obvious engineering designs like planes, but also of molecules, economies, and seemingly autonomous traffic patterns. This new orientation towards design and prediction challenges some of the basic tenets of the philosophy of science, in which scientific theories and models are predominantly seen as explanatory rather than predictive. This talk will use examples from computational fluid dynamics, computational chemistry, and population biology to show what the new computational culture of prediction looks like and what it may mean.

Craig Larson (Department of Mathematics and Applied Mathematics, Virginia Commonwealth University):
Automated Conjecturing for Proof Discovery

CONJECTURING is an open-source Sage program which can be used to make invariant-relation or property-relation conjectures for any mathematical object-type. The user must provide at least a few object examples, together with functions defining invariants and properties for that object-type. These invariants and properties will then appear in the conjectures. Here we demonstrate how the CONJECTURING program can be used to produce proof sketches in graph theory. In particular, we are interested in graphs where the independence number of the graph equals its residue. Residue is a very good lower bound for the independence number – and the question of characterizing the class of graphs where these invariants are equal has been of continuing interest. The CONJECTURING program can be used to generate both necessary and sufficient condition conjectures for graphs where the independence number equals its residue, and proof sketches of these conjectures can also be generated. We will discuss the program and give examples. This is joint work with Nico Van Cleemput (Ghent University).
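
The residue invariant mentioned above is computable directly from a graph's degree sequence. The sketch below is not part of the CONJECTURING program; it is simply a plain implementation of the standard Havel–Hakimi elimination that defines residue (it assumes the input degree sequence is graphical):

```python
def residue(degrees):
    """Residue of a graphical degree sequence via the Havel-Hakimi process:
    repeatedly delete the largest degree d and decrement the next d entries;
    the residue is the number of zeros that remain at the end."""
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        for i in range(d):
            seq[i] -= 1
        seq.sort(reverse=True)
    return len(seq)
```

For the 5-cycle, `residue([2, 2, 2, 2, 2])` and the independence number both equal 2, illustrating the tightness of the bound that makes the equality case interesting to characterize.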

Greg Reid (Department of Applied Mathematics, Western University):
Numerical Differential-Geometric Algorithms in Computational Discovery

We consider classes of models specified by polynomial equations, or more generally polynomially nonlinear partial differential equations, with parametric coefficients. A basic task in computational discovery is to identify exceptional members of the class characterized by special properties, such as large solution spaces, symmetry groups or other properties. Symbolic algorithms such as Groebner bases and their differential generalizations can sometimes be applied to such problems. These can be effective for systems with exact (e.g. rational) coefficients, but even then can be prohibitively expensive, and they are unstable when applied to approximate systems. I will describe progress in the approximate case, in the new area of numerical algebraic geometry, together with fascinating recent progress in convex geometry and semidefinite programming, which extends such methods to the reals. This is joint work with Fei Wang, Henry Wolkowicz and Wenyuan Wu.
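
A toy instance of "finding exceptional members of a parametric family" is the discriminant of a quadratic, which vanishes exactly at the parameter values where the root structure degenerates. Groebner-basis and resultant machinery generalizes this one-line condition to full polynomial systems; everything below is an illustrative simplification, not the numerical algorithms of the talk:

```python
def discriminant_quadratic(a, b, c):
    """Discriminant of a*x^2 + b*x + c; it vanishes exactly on the
    exceptional parameter set where the two roots collide."""
    return b * b - 4 * a * c

# Scan a hypothetical one-parameter family p(x) = x^2 + t*x + 1 for the
# exceptional values of t where p acquires a repeated root:
exceptional = [t for t in range(-5, 6) if discriminant_quadratic(1, t, 1) == 0]
```

For approximate coefficients, the exact condition "discriminant equals zero" becomes ill-posed, which is precisely the difficulty that motivates the numerical and semidefinite approaches of the talk.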

Shaoshi Chen (Key Laboratory of Mathematics Mechanization, Chinese Academy of Sciences):
Proof of the Wilf-Zeilberger Conjecture on Mixed Hypergeometric Terms

In 1992, Wilf and Zeilberger conjectured that a hypergeometric term in several discrete and continuous variables is holonomic if and only if it is proper. Strictly speaking, the conjecture does not hold, but it is true when reformulated properly: Payne proved a piecewise interpretation in 1997, and independently, Abramov and Petkovsek in 2002 proved a conjugate interpretation. Both results address the pure discrete case of the conjecture. In this paper we extend their work to hypergeometric terms in several discrete and continuous variables and prove the conjugate interpretation of the Wilf-Zeilberger conjecture in this mixed setting. With the proof of this conjecture, one can now algorithmically detect the holonomicity of hypergeometric terms by checking properness with the algorithms in the work by Chen et al. This is important because it gives a simple test for the termination of Zeilberger’s algorithm. This is joint work with Christoph Koutschan (Austrian Academy of Sciences).
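
For orientation, in the pure discrete case a hypergeometric term $T(n,k)$ is commonly called proper when it can be written in the form

```latex
T(n,k) \;=\; P(n,k)\, x^{k}\,
\frac{\prod_{i=1}^{I}\,(a_i n + b_i k + c_i)!}{\prod_{j=1}^{J}\,(u_j n + v_j k + w_j)!},
```

where $P$ is a polynomial and the $a_i, b_i, u_j, v_j$ are integers. The mixed setting of the talk admits continuous variables as well, with Gamma functions and exponentials playing the role of the factorials.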

David Bailey (Lawrence Berkeley National Lab and University of California, Davis):
Computer discovery of large Poisson polynomials

In two earlier studies of lattice sums arising from the Poisson equation of mathematical physics, we established that the lattice sum $\phi_2(x,y) = \frac{1}{\pi}\log A$, where $A$ is an algebraic number. We were also able to compute the explicit minimal polynomials associated with $A$ for a few specific rational arguments $x$ and $y$. Based on these results, Jason Kimberley conjectured a number-theoretic formula for the degree of $A$ in the case $x = y = 1/s$ for some integer $s$. These earlier studies were hampered by the enormous cost and complexity of the requisite computations. In this study, we address the Poisson polynomial problem with significantly more capable computational tools: (a) a new thread-safe arbitrary precision package; (b) a new three-level multipair PSLQ integer relation scheme; and (c) a parallel implementation on a 16-core system. As a result of this improved capability, we have confirmed that Kimberley’s formula holds for all integers $s$ up to 50 (except for $s = 41, 43, 47, 49$, which are still too costly to test). As far as we are aware, these computations, which employed up to 51,000-digit precision, producing polynomials with degrees up to 324 and integer coefficients up to $10^{145}$, constitute the largest successful integer relation computations performed to date. The resulting polynomials have some interesting features, such as the fact that when $s$ is even, the polynomial is palindromic (i.e., coefficient $a_k = a_{d-k}$, where $d$ is the degree). Further, by examination of these polynomials, we have found connections to a sequence of polynomials defined in a 2010 paper by Savin and Quarfoot.
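
To give a flavor of integer relation detection in miniature: given a real vector, PSLQ searches for a small integer vector orthogonal to it. The brute-force stand-in below recovers the minimal polynomial of $\sqrt{2} + \sqrt{3}$ from double-precision values of its powers; it is purely illustrative, since actual PSLQ runs in polynomial time and, as described above, scales to tens of thousands of digits of precision.

```python
from itertools import product
from math import sqrt

# Naive stand-in for PSLQ: search small integer coefficients (c0, c1, c2, c3)
# making c0 + c1*a + c2*a^2 + c3*a^3 + a^4 as close to zero as possible,
# where a = sqrt(2) + sqrt(3).  The true relation is a^4 - 10*a^2 + 1 = 0.
a = sqrt(2) + sqrt(3)
powers = [a ** k for k in range(5)]    # 1, a, a^2, a^3, a^4

best = min(product(range(-10, 11), repeat=4),
           key=lambda c: abs(powers[4] + sum(ci * p for ci, p in zip(c, powers))))
print(list(best) + [1])                # minimal polynomial, coefficients low to high
```

The exhaustive search is exponential in the degree and coefficient size, which is exactly why large-scale work of the kind described in the abstract requires PSLQ together with very high working precision.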

Videos of all Rotman Institute of Philosophy events can be viewed on our YouTube channel. Subscribe to our channel to be notified whenever new videos are added.