ACMES 2, Computationally Assisted Mathematical Discovery and Experimental Mathematics, was a four-day conference held May 12–15, 2016, at Western University. Computational Discovery, also called Experimental Mathematics, is the use of symbolic and numerical computation to discover patterns, to identify particular numbers and sequences, and to gather evidence in support of specific mathematical assertions that may themselves arise by computational means. In recent decades, computer-assisted mathematical discovery has profoundly transformed the strategies used to expand mathematical knowledge. In addition to symbolic and numerical computation, a new trend that shows tremendous potential is the use of novel visualization techniques. The current situation was well summarized by a recent ICMI study: “The latest developments in computer and video technology have provided a multiplicity of computational and symbolic tools that have rejuvenated mathematics and mathematics education. Two important examples of this revitalization are experimental mathematics and visual theorems.” The program and further details on the conference can be accessed on the ACMES website.

The ACMES 2 video playlist now includes talks from the first two days of the conference. Remaining videos from the conference will be available soon.

[embedyt]https://youtu.be/9O05y4jGhJs?list=PLkMaaEPd7InL_hWKXk_uBWJ675V0t5Jno[/embedyt]

**Pierluigi Vellucci (Dipartimento di Scienze di Base e Applicate per l’Ingegneria, Sapienza Università di Roma):
Nested square roots of 2 and Gray code**

Long before current graphic, visualisation and geometric tools were available, John E. Littlewood (1885-1977) wrote in his delightful A mathematician’s miscellany (Methuen 1953):

“A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).”

Over the past five to ten years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities – and the growing ease of programming – of modern multi-core computing environments. But, at least as much, it has been driven by my group’s paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities. Part I. I shall first discuss briefly what is meant both by visual theorems and by experimental computation. I will then turn to dynamic geometry (iterative reflection methods) and matrix completion problems. Part II. I end with a description of recent work from my group in probability (behaviour of short random walks) and transcendental number theory (normality of real numbers).
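As a small illustration of the random-walk experiments mentioned above, the expected distance from the origin after n unit steps in uniformly random directions can be estimated by Monte Carlo. A minimal sketch (not the group's actual code; the function name and parameters are illustrative):

```python
import cmath
import random

def mean_distance(steps, trials, seed=0):
    """Monte Carlo estimate of E|W_n|: the expected distance from the
    origin after `steps` unit steps in uniformly random directions."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        # each step is a point on the unit circle at a uniform angle
        z = sum(cmath.exp(2j * cmath.pi * rng.random()) for _ in range(steps))
        total += abs(z)
    return total / trials

est = mean_distance(3, 200_000)
print(round(est, 3))  # the known analytic value is W_3 ≈ 1.5746
```

With a few hundred thousand trials the estimate settles near the analytic value; the closed forms for such moments are one of the topics of this line of research.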

**Murray Bremner (Mathematics & Statistics, University of Saskatchewan):
Binary-ternary Wedderburn-Etherington numbers**

The starting point was to determine the number of inequivalent placements of operation symbols in a monomial of degree n generated by completely symmetric binary and ternary operations [-,-] and (-,-,-). Another way to visualize this is to consider directed rooted non-planar trees in which every internal node has in-degree 2 or 3: the leaves represent the arguments and the internal nodes represent the operations. Maple calculations produced this initial sequence:

1, 1, 2, 4, 9, 23, 58, 156, 426, 1194, 3393, 9802, …,

which was not in the OEIS (it is now A268172). The next step was to determine the number of inequivalent multilinear monomials of degree n: this is the sum, over all placements of operation symbols, of n! divided by the order of the group of symmetries of the placement. Further Maple calculations produced this initial sequence: 1, 1, 4, 25, 220, 2485, 34300, 559405, 10525900, 224449225, …, which was not in the OEIS (it is now A268163). An internet search for the first few terms of this sequence revealed the remarkable fact that it also enumerates the Feynman diagrams involved in multi-gluon scattering in quantum chromodynamics. The underlying structure in both cases is the set of labeled binary-ternary directed rooted non-planar trees indexed by the number n of leaves. Further details on the connections with physics can be found in these surveys: B. Feng and M. Luo, An introduction to on-shell recursion relations, Frontiers of Physics, Volume 7, Issue 5 (October 2012), pp. 533–575; M. L. Mangano and S. J. Parke, Multi-parton amplitudes in gauge theories, Physics Reports, Volume 200, Issue 6 (February 1991), pp. 301–367. In this talk I will show how these sequences were discovered, sketch the relation with particle physics, and describe an infinite family of other sequences which are also related to Feynman diagrams.
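The first sequence can be reproduced by a short recursion over leaf counts: a tree with n > 1 leaves has a binary or ternary root, and its subtrees form an unordered pair or triple. A sketch in Python (the original computations were done in Maple):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def t(n):
    """Number of non-planar rooted trees with n leaves whose internal
    nodes have exactly 2 or 3 children (A268172)."""
    if n == 1:
        return 1
    total = 0
    # binary root: unordered pair of subtrees with a + b = n, a <= b;
    # when a == b, count multisets of size 2 from t(a) tree shapes
    for a in range(1, n // 2 + 1):
        b = n - a
        total += comb(t(a) + 1, 2) if a == b else t(a) * t(b)
    # ternary root: unordered triple a <= b <= c with a + b + c = n
    for a in range(1, n // 3 + 1):
        for b in range(a, (n - a) // 2 + 1):
            c = n - a - b
            if a == b == c:
                total += comb(t(a) + 2, 3)   # multiset of size 3
            elif a == b:
                total += comb(t(a) + 1, 2) * t(c)
            elif b == c:
                total += t(a) * comb(t(b) + 1, 2)
            else:
                total += t(a) * t(b) * t(c)
    return total

print([t(n) for n in range(1, 13)])
# [1, 1, 2, 4, 9, 23, 58, 156, 426, 1194, 3393, 9802]
```

The binomial coefficients count multisets of subtrees of equal size, which is exactly the quotient by the symmetry of the root operation.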

**Lila Kari (Department of Computer Science, University of Waterloo):
Additive Methods to Overcome the Limitations of Genomic Signatures**

Approaches to understanding and utilizing the mathematical properties of bioinformation take many forms, from using DNA to build lattices and polyhedra, to studies of algorithmic DNA self-assembly, and harnessing DNA strand displacement for biocomputations. This talk presents a view of the structural properties of DNA sequences from yet another angle, by proposing the use of the Chaos Game Representation (CGR) of DNA sequences to measure and visualize their interrelationships. The method we propose computes an “information distance” for each pair of Chaos Game Representations of DNA sequences, and then uses multidimensional scaling to visualize the resulting interrelationships in a three-dimensional space. The result of applying it to a collection of DNA sequences is an easily interpretable 3D Molecular Distance Map wherein sequences are represented by points in a common Euclidean space, and the spatial distance between any two points reflects the differences in their subsequence composition. This general-purpose method does not require DNA sequence alignment and can thus be used to compare similar or vastly different DNA sequences, genomic or computer-generated, of the same or different lengths. Our most recent results indicate that while mitochondrial DNA signatures can reliably be used for species classification, conventional CGR signatures of nuclear genomic sequences do not always succeed in separating sequences by their organism of origin, especially for closely related species. To address this problem, we propose the general concept of the additive DNA signature of a set of DNA sequences, which combines information from all the sequences in the set to produce its signature, and show that it successfully overcomes the limitations of conventional genomic signatures.
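For illustration, the core CGR construction is short: each nucleotide is assigned a corner of the unit square, and the current point moves halfway toward the corner of each successive base. A minimal sketch (one common corner convention; the information-distance and multidimensional-scaling steps of the full method are omitted):

```python
# Chaos Game Representation: map a DNA sequence to points in the unit
# square by repeatedly moving halfway toward the corner of each base.
# Corner assignments vary between papers; this is one common convention.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(sequence):
    x, y = 0.5, 0.5                      # start at the centre of the square
    points = []
    for base in sequence:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points

pts = cgr("ACGTGATTACA")
print(pts[-1])
```

The resulting scatter plot is the "genomic signature" whose pairwise distances feed the Molecular Distance Map described above.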

**Yuri V. Matiyasevich (St. Petersburg Department of the Steklov Institute for Mathematics):
Computer experiments for approximating Riemann’s zeta function by finite Dirichlet series**

In 2011, the speaker began to work with finite Dirichlet series of length N vanishing at the N − 1 initial non-trivial zeroes of Riemann’s zeta function. Intensive multiprecision calculations revealed several interesting phenomena. First, such series approximate with great accuracy the values of the product (1 − 2 · 2^(−s)) ζ(s) for a large range of s lying inside the critical strip and to the left of it (even better approximations can be obtained by dealing with ratios of certain finite Dirichlet series). In particular, the series also vanish very close to many other non-trivial zeroes of the zeta function (initial non-trivial zeroes “know about” subsequent non-trivial zeroes). Second, the coefficients of such series encode prime numbers in several ways. So far no theoretical explanation has been given for the observed phenomena. The ongoing research can be followed at http://logic.pdmi.ras.ru/~yumat/personaljournal/finitedirichlet
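The construction can be imitated at toy scale in double precision: normalise a_1 = 1 and solve a linear system so that the length-N series vanishes at the first N − 1 non-trivial zeros. A rough sketch, far from the multiprecision computations described in the talk; N = 8 is an arbitrary illustrative choice, and the zeros are hard-coded standard values of their imaginary parts:

```python
# Imaginary parts of the first 7 non-trivial zeros of zeta (standard values).
ZERO_IMAG = [14.134725142, 21.022039639, 25.010857580, 30.424876126,
             32.935061588, 37.586178159, 40.918719012]
N = 8  # length of the Dirichlet series; a_1 is normalised to 1

def solve(A, b):
    """Gaussian elimination with partial pivoting for complex systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0j] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

zeros = [complex(0.5, t) for t in ZERO_IMAG]
# The conditions sum_{k=1}^N a_k * k^(-rho_j) = 0 with a_1 = 1 give a
# linear system for the remaining coefficients a_2, ..., a_N.
A = [[k ** -rho for k in range(2, N + 1)] for rho in zeros]
b = [-1.0 + 0j for _ in zeros]
coeffs = [1 + 0j] + solve(A, b)

def D(s):
    """The finite Dirichlet series sum_{k=1}^N a_k * k^(-s)."""
    return sum(a * (k + 1) ** -s for k, a in enumerate(coeffs))

# By construction D nearly vanishes at the first N - 1 zeros; the talk's
# phenomena concern its behaviour away from these interpolation points.
print(max(abs(D(rho)) for rho in zeros))
```

Observing the reported phenomena (accurate approximation of (1 − 2 · 2^(−s)) ζ(s), near-vanishing at later zeros) requires much larger N and multiprecision arithmetic.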

**Curtis Bright (School of Computer Science, University of Waterloo):
MathCheck2: A SAT+CAS Verifier for Combinatorial Conjectures**

In this talk, we present MathCheck2, a combination of a SAT solver and a computer algebra system (CAS) aimed at finitely verifying or counterexampling mathematical conjectures. Using MathCheck2 we generated Hadamard matrices of all orders 4n with n < 35 and provided 160 matrices to the Magma Hadamard database that were not equivalent to any matrix previously in that database. Furthermore, we provide independent verification of the claim that Williamson matrices of order 35 do not exist, and demonstrate for the first time that 35 is the smallest number with this property. The crucial insight behind the MathCheck2 algorithm and tool is that a combination of an efficient search procedure (like those in SAT solvers) with a domain-specific knowledge base (à la CAS) can be a very effective way to verify, counterexample, and learn deeper properties of mathematical conjectures (especially in combinatorics) and the structures they refer to. MathCheck2 uses a divide-and-conquer approach to parallelize the search, and a CAS to prune away classes of structures that are guaranteed to not satisfy the conjecture-under-verification C. The SAT solver is used to verify whether any of the remaining structures satisfy C, and in addition learn UNSAT cores in a conflict-driven clause-learning style feedback loop to further prune away non-satisfying structures. (This is joint work by Curtis Bright, Vijay Ganesh, Albert Heinle, Ilias Kotsireas, Saeed Nejati, and Krzysztof Czarnecki)
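The certificate being checked here is simple to state: a matrix H of order n is Hadamard exactly when its entries are ±1 and H·Hᵀ = n·I. A minimal sketch of such a verifier together with the classical Sylvester doubling construction (the actual MathCheck2 pipeline, which searches for Williamson matrices with a SAT solver, is far more involved):

```python
def sylvester(k):
    """Sylvester construction: a Hadamard matrix of order 2^k,
    built by repeatedly doubling [[H, H], [H, -H]]."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def is_hadamard(H):
    """Check the defining certificate: +/-1 entries and H * H^T = n * I."""
    n = len(H)
    if any(len(row) != n or any(x not in (1, -1) for x in row) for row in H):
        return False
    for i in range(n):
        for j in range(n):
            dot = sum(H[i][k] * H[j][k] for k in range(n))
            if dot != (n if i == j else 0):
                return False
    return True

print(is_hadamard(sylvester(3)))  # order-8 Hadamard matrix -> True
```

Checks of this kind play the role of the CAS filter in the SAT+CAS loop: candidate structures proposed by the search are validated or rejected, and rejections are fed back as blocking constraints.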

**Laureano Gonzalez-Vega (Departamento de Matematicas, Estadistica y Computacion, Universidad de Cantabria):
Dealing with real algebraic curves symbolically and numerically for discovery: from experiments to theory**

Geometric entities such as the set of the real zeros of a bivariate equation f(x,y)=0 can be treated algorithmically in a very efficient way by using a mixture of symbolic and numerical techniques. This implies that it is possible to determine exactly the topology (connected components and their relative position, connectedness, singularities, etc.) of such a curve if the defining equation f(x,y) is known in an exact manner (whatever this means), even for high degrees. In this talk we describe three different “experiments” that highlight how new visualization tools in Computational Mathematics, mixing symbolic and numerical techniques, allow one to perform experiments leading either to mathematical discoveries or to new computational techniques. The first experiment is not yet successful: it deals with the first part of Hilbert’s sixteenth problem, asking for the relative positions of the closed ovals of a non-singular algebraic curve f(x,y)=0 (regarded in the real projective plane). The problem is solved for degrees less than or equal to 7 and, despite several attempts by H. Hong and F. Santos for degree 8, experimentation here has not yet produced any insight. The second experiment was successful: for a long period in Algebraic Geodesy, it was not known where Vermeille’s method failed when inverting the transformation from geodetic coordinates to Cartesian coordinates. In 2009, L. Gonzalez-Vega and I. Polo-Blanco were able to compute exactly the region where Vermeille’s method cannot be applied by carefully analysing the topology of the arrangement of several real algebraic curves defined implicitly (coming from the solution of a quantifier elimination problem, in the terminology of Real Algebraic Geometry). Finally, the third experiment has a different nature.
In Computer Aided Design, the computation of the medial axis (or skeleton) of a region in the plane whose boundary is the union of finitely many curves has a wide range of applications. If the curves defining the region are restricted to segments or conic arcs, then I. Adamou, M. Fioravanti and L. Gonzalez-Vega have been able to determine all the possible “topologies” of the curves appearing in the medial axis (information which is especially useful when computing it). Again, the key tool here was the computation of the topology of the bisector curves (which in general are not rational) appearing in the medial axis to be computed.
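The symbolic-numeric recipe behind such topology computations is a sweep: project the curve onto the x-axis, find the critical x-values where the number of real branches can change, and count branches at one sample point per interval. A toy sketch for the curve y^2 = x^3 - x, whose critical x-values are the roots of x^3 - x (this example and its sample points are illustrative, not from the talk):

```python
# Toy sweep for the curve y^2 = x^3 - x: between consecutive critical
# x-values (the roots of x^3 - x, namely -1, 0, 1) the number of real
# branches is constant, so sampling one point per open interval suffices.
critical = [-1.0, 0.0, 1.0]

def branches(x):
    """Number of real y with y^2 = x^3 - x at this x-value."""
    v = x ** 3 - x
    return 2 if v > 0 else (1 if v == 0 else 0)

samples = [-2.0, -0.5, 0.5, 2.0]   # one point inside each open interval
print([branches(x) for x in samples])  # [0, 2, 0, 2]
```

Gluing the branch counts across the critical values recovers the topology: an oval over (-1, 0) and an unbounded branch over (1, ∞), i.e. two connected components. Real implementations obtain the critical values symbolically (resultants, discriminants) and count branches with certified numerics.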

**Sergey Khashin (Department of Mathematics, Ivanovo State University):
Counterexamples for Frobenius primality test**

The most powerful elementary probabilistic method for primality testing is the Frobenius test. Frobenius pseudoprimes are the composite numbers for which this test fails. There are several slightly different definitions of Frobenius pseudoprimes (FPP), which are almost equivalent; we use the following one. **Definition.** A Frobenius pseudoprime (FPP) is an odd composite integer $n$, not a perfect square, such that $z^n \equiv \overline{z} \pmod{n}$, where $z = 1 + \sqrt{c}$ and $c$ is the smallest odd prime with Jacobi symbol $J(c/n) = -1$. **Theorem 1 (multiple factors).** Let $n = p^s q$ be an FPP, where $p$ is prime and $s > 1$. Then $z^p \equiv \overline{z} \pmod{p^2}$. **Experiment 1.** No such prime $p < 2^{32}$ exists. **Theorem 2.** Let $n = pq$ be an FPP with $J(c/p) = +1$. Then $z_1^q \equiv z_2$ and $z_2^q \equiv z_1 \pmod{p}$, where $z_{1,2} = 1 \pm d \pmod{p}$ and $d^2 \equiv c \pmod{p}$. **Experiment 2.** For each $c$ there exists only a short list (fewer than 20) of such primes $p$. **Experiment 3.** There exist no FPPs less than $2^{60}$.
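The definition translates directly into code: arithmetic in Z[√c] modulo n, fast exponentiation, and the Jacobi symbol. An illustrative sketch, not the author's search code; it scans odd values of c, which is equivalent to scanning odd primes, since by multiplicativity of the Jacobi symbol the smallest odd c with J(c/n) = −1 is automatically prime:

```python
from math import isqrt

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0 (standard binary algorithm)."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def frobenius_test(n):
    """True if the odd non-square n satisfies z^n == conj(z) (mod n),
    where z = 1 + sqrt(c); primes always pass, FPPs are the composites
    that pass."""
    if n < 3 or n % 2 == 0 or isqrt(n) ** 2 == n:
        return False
    c = 3
    while jacobi(c, n) != -1:  # smallest odd c with (c/n) = -1
        c += 2
    # represent u + v*sqrt(c) mod n as the pair (u, v); compute z^n
    a, b = 1, 0                # accumulator = 1
    x, y = 1, 1                # base = 1 + sqrt(c)
    e = n
    while e:
        if e & 1:
            a, b = (a * x + b * y * c) % n, (a * y + b * x) % n
        x, y = (x * x + y * y * c) % n, (2 * x * y) % n
        e >>= 1
    return a == 1 and b == n - 1   # i.e. z^n == 1 - sqrt(c) (mod n)

# Search for small Frobenius pseudoprimes: odd composites that pass.
print([n for n in range(3, 200, 2) if frobenius_test(n) and
       any(n % d == 0 for d in range(3, isqrt(n) + 1))])  # -> []
```

Consistent with Experiment 3, the search below $2^{60}$ finds no pseudoprimes; the theorems above are what make a search of that size feasible.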

**Austin Roche (Maplesoft, Waterloo):
Maple as a Computational Tool**

(abstract not available)

*Videos of all Rotman Institute of Philosophy events can be viewed on our YouTube channel. Subscribe to our channel to be notified whenever new videos are added.*