
Notes on the Oxford IUT workshop by Brian Conrad

December 15, 2015

Brian Conrad is a math professor at Stanford and was one of the participants at the Oxford workshop on Mochizuki’s work on the ABC Conjecture. He is an expert in arithmetic geometry, a subfield of number theory which provides geometric formulations of the ABC Conjecture (the viewpoint studied in Mochizuki’s work).

Since he was asked by a variety of people for his thoughts about the workshop, Brian wrote the following summary. He hopes that a non-specialist may also learn something from these notes concerning the present situation. Forthcoming articles in Nature and Quanta on the workshop are aimed at the general public. This writeup has the following structure:

  1. Background
  2. What has delayed wider understanding of the ideas?
  3. What is Inter-universal Teichmuller Theory (IUTT = IUT)?
  4. What happened at the conference?
  5. Audience frustration
  6. Concluding thoughts
  7. Technical appendix

1.  Background

The ABC Conjecture is one of the outstanding conjectures in number theory, even though it was formulated only approximately 30 years ago. It admits several equivalent formulations, some of which lead to striking finiteness theorems and other results in number theory and others of which provide a robust structural framework to try to prove it. The conjecture concerns a concrete inequality relating prime factors of a pair of positive whole numbers (A and B) and their sum (C) to the actual magnitudes of the two integers and their sum. It has a natural generalization to larger number systems (called “number fields”) that arise throughout number theory.
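For reference, the standard precise formulation (due to Masser and Oesterle, and consistent with the description above, though not spelled out in these notes) can be written as follows, where rad(n) denotes the product of the distinct primes dividing n:

```latex
% ABC Conjecture: for every \epsilon > 0 there is a constant
% K_\epsilon > 0 such that for all triples of coprime positive
% integers (A, B, C) with A + B = C,
C \le K_\epsilon \cdot \operatorname{rad}(ABC)^{1+\epsilon}.
```

The constant K_epsilon here is the auxiliary constant whose uniform control gives the conjecture its power; the conjecture makes no claim about its size, which is the source of the effectivity issues discussed later.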

The precise statement of the conjecture and discussion of some of its consequences are explained here in the setting of ordinary whole numbers, and some of the important applications are given there as well. The interaction of multiplicative and additive properties of whole numbers as in the statement of the ABC Conjecture is a rather delicate matter (e.g., if p is a prime one cannot say anything nontrivial about the prime factorization of p+12 in general). This conjectural inequality involves an auxiliary constant which provides a degree of uniform control that gives the conjecture its power to have striking consequences in many settings. Further consequences are listed here.
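As a concrete illustration (my own, not from the notes), one can compute the radical rad(n) — the product of the distinct primes dividing n — and the "quality" log C / log rad(ABC) of a coprime triple with A + B = C. The conjecture amounts to saying that only finitely many triples have quality above 1 + epsilon for any fixed epsilon > 0. The sketch below checks the triple 2 + 3^10 · 109 = 23^5, which has the highest quality currently known:

```python
from math import gcd, log

def radical(n):
    """Product of the distinct prime factors of n (simple trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:       # whatever remains is a prime factor
        r *= n
    return r

def quality(a, b):
    """Quality log(C)/log(rad(ABC)) of the coprime triple (a, b, a+b)."""
    assert gcd(a, b) == 1
    c = a + b
    return log(c) / log(radical(a * b * c))

# Reyssat's triple: 2 + 3^10 * 109 = 23^5, with rad(ABC) = 2*3*109*23 = 15042
print(quality(2, 3**10 * 109))  # ≈ 1.6299
```

High-quality triples like this one show why the epsilon in the conjecture cannot be dropped: with epsilon = 0 the inequality C ≤ rad(ABC) fails infinitely often.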

It is the wealth of consequences, many never expected when the conjecture was first formulated, that give the conjecture its significance. (For much work on this problem and its consequences, it is essential to work with the generalized version over number fields.)

It was known since around the time when the ABC Conjecture was first introduced in the mid-1980’s that it has deep links to — and even sometimes equivalences with — other outstanding problems such as an effective solution to the Mordell Conjecture (explicitly bounding the numerators and denominators of the coordinates of any possible rational point on a “higher-genus” algebraic curve, much harder than merely bounding the number of possible such points; the Mordell Conjecture asserting the mere finiteness of the set of such points was proved by Faltings in the early 1980’s and earned him a Fields Medal). To get explicit bounds so as to obtain an effective solution to the Mordell Conjecture, one would need explicit constants to emerge in the ABC inequality.

An alternative formulation of the conjecture involves “elliptic curves”, a class of curves defined by a degree-3 equation in 2 variables that arise in many problems in number theory (including most spectacularly Fermat’s Last Theorem). Lucien Szpiro formulated a conjectural inequality (called Szpiro’s Conjecture) relating some numerical invariants of elliptic curves, and sometime after the ABC Conjecture was introduced by David Masser and Joseph Oesterle it was realized that for the generalized formulations over arbitrary number fields, the ABC Conjecture is equivalent to Szpiro’s Conjecture (shown by using a special class of elliptic curves called “Frey curves” that also arise in establishing the link between Fermat’s Last Theorem and elliptic curves).
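Over the rational numbers, Szpiro's Conjecture has the following standard statement (not spelled out in these notes), relating the minimal discriminant and the conductor of an elliptic curve:

```latex
% Szpiro's Conjecture: for every \epsilon > 0 there is a constant
% C_\epsilon > 0 such that for every elliptic curve E over \mathbf{Q}
% with minimal discriminant \Delta_E and conductor N_E,
|\Delta_E| \le C_\epsilon \cdot N_E^{6+\epsilon}.
```

The generalized version over a number field, which is the one equivalent to the generalized ABC Conjecture, is formulated analogously in terms of the norms of the discriminant and conductor ideals.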

It had been known for many years that Shinichi Mochizuki, a brilliant mathematician working at the Research Institute for Mathematical Sciences (RIMS) in Kyoto since shortly after getting his PhD under Faltings at Princeton in 1992, had been quietly working on this problem by himself as a long-term goal, making gradual progress with a variety of deep techniques within his area of expertise (arithmetic geometry, and more specifically anabelian geometry).  Just as with the proof of Fermat’s Last Theorem, to make progress on the conjecture one doesn’t work directly with the initial conjectural relation among numbers. Instead, the problem is recast in terms of sophisticated constructions in arithmetic geometry so as to be able to access more powerful tools and operations that cannot be described in terms of the initial numerical data. Mochizuki’s aim was to settle Szpiro’s Conjecture.

In August 2012, Mochizuki announced a solution using what he called Inter-Universal Teichmuller Theory; he released several long preprints culminating in the proof of the conjecture. Mochizuki is a remarkable (and very careful) mathematician, and his approach to the ABC Conjecture (through Szpiro’s Conjecture for elliptic curves) is based on much of his own previous deep work that involves an elaborate tapestry of geometric and group-theoretic constructions from an area of mathematics called anabelian geometry. He deserves much respect for having devoted substantial effort over such an extended period of time towards developing and applying tools to attack this problem.

The method as currently formulated by Mochizuki does not yield explicit constants, so it cannot be used to establish an effective proof of the Mordell Conjecture. But if correct it would nonetheless be a tremendous achievement, settling many difficult open problems, and would yield a new proof of the Mordell Conjecture (as shown long ago by Noam Elkies).

Very quickly, the experts realized that the evaluation of the work was going to present exceptional difficulties. The manner in which the papers culminating in the main result have been written, including a tremendous amount of unfamiliar terminology and notation and rapid-fire definitions without supporting examples nearby in the text, has made it very hard for many with extensive background in arithmetic geometry to get a sense of progress when trying to work through the material. There are a large number of side remarks in the manuscripts, addressing analogies and motivation, but to most readers the significance of the remarks and the relevance of the analogies have been difficult to appreciate at first sight. As a consequence, many readers paradoxically wound up quickly feeling discouraged or confused despite the inclusion of much more discussion of “motivation” than in typical research papers. In addition to the difficulties with navigating the written work, the author preferred not to travel and give lectures on it, though he has been very receptive to questions sent to him via email and to speaking with visitors to RIMS.

To this day, many challenges remain concerning wider dissemination and evaluation of his ideas building on anabelian geometry to deduce the ABC Conjecture. With the passage of much time, the general sense of discouragement among many in the arithmetic geometry community was bringing matters to a standstill. The circumstances are described here.

Although three years have passed since the original announcement, it isn’t the case that many arithmetic geometers have been working on it throughout the whole time. Rather, many tried early on but got quickly discouraged (due in part to the density of very new notation, terminology, and concepts). There have been some surveys written, but essentially everyone I have spoken with has found those to be as difficult to parse for illumination as the original papers. (I have met some who found some surveys to be illuminating, but most have not.)

The Clay Mathematics Institute and the Mathematical Institute at Oxford took an important step by hosting a workshop last week at Oxford University on Mochizuki’s work, inviting experts from across many facets of arithmetic geometry relevant to the ABC Conjecture (but not themselves invested in Inter-universal Teichmuller Theory) to try to achieve, as a larger group, progress in recognizing parts of the big picture that individuals were unable or too discouraged to achieve on their own. The organizers put in a tremendous amount of effort, and (together with CMI staff) are to be commended for putting it all together. It was challenging to find enough speakers, since many senior people were reluctant to give talks and for most speakers many relevant topics in Mochizuki’s work (e.g., Frobenioids and the etale theta function) were entirely new territory. Many speakers understandably had little sense of how the pieces would finally fit into the overall story. One hope was that by combining many individual efforts a greater collective understanding could be achieved.

I attended the workshop, and among those attending were leading experts in arithmetic or anabelian geometry such as Alexander Beilinson, Gerd Faltings, Kiran Kedlaya, Minhyong Kim, Laurent Lafforgue, Florian Pop, Jakob Stix, Andrew Wiles, and Shou-Wu Zhang. The complete list of participants is given here.

It was not the purpose of the workshop to evaluate the correctness of the proof. The aim as I (and many other participants) understood it was to help participants from across many parts of arithmetic geometry to become more familiar with some key ideas involved in the overall work so as to (among other things) reduce the sense of discouragement many have experienced when trying to dig into the material.

The work of Mochizuki involves the use of deep ideas to construct and study novel tools fitting entirely within mainstream algebraic geometry that give a new angle to attacking the problem. But evaluating the correctness of a difficult proof in mathematics involves many layers of understanding, the first of which is a clear identification of some striking new ideas and a rough sense of how they could be up to the task of proving the asserted result. The lack of that identification in the present circumstances, at least to the satisfaction of many experts in arithmetic geometry, lies at the heart of the difficulties that still persist in the wider understanding of what is going on in the main papers. The workshop provided definite progress in that direction, and in that respect was a valuable activity.

The workshop did not provide the “aha!” moment that many were hoping would take place. I am glad that I attended the Oxford workshop, despite serious frustrations which arose towards the end. Many who attended now have a clearer sense of some ingredients and what some key issues are, but nobody acquired expertise in Inter-universal Teichmuller Theory as a consequence of attending (nor was it the purpose, in my opinion). In view of the rich interplay of ideas and intermediate results that were presented at the workshop, including how much of Mochizuki’s own past work enters into it in many aspects, as well as his own track record for being a careful and powerful mathematician, this work deserves to be taken very seriously.

References below to opinions and expectations of “the audience” are based on conversations with many participants who have expertise in arithmetic geometry (but generally not with Inter-universal Teichmuller Theory). As far as I know, we were all on the same wavelength for expectations and impressions about how things evolved during the week. Ultimately any inaccuracy in what is written below is entirely my responsibility. I welcome corrections or clarification, to be made through comments on this website for the sake of efficiency.

2. What has delayed wider understanding of the ideas?

One source of difficulties in wider dissemination of the main ideas appears to be the fact that prior work on which it depends was written over a period of many years, during much of which it was not known which parts would finally be needed just to understand the proof of the main result. There has not been a significant “clean-up” to give a more streamlined pathway into the work with streamlined terminology/notation. This needs to (eventually) happen.

Mochizuki aims to prove Szpiro’s conjecture for all elliptic curves over number fields, with a constant that depends only on the degree of the number field (and on the choice of epsilon in the statement). The deductions from that to more concrete consequences (such as the ABC Conjecture and hence many finiteness results such as: the Mordell Conjecture, Siegel’s theorem on integral points of affine curves, and the finiteness of the set of elliptic curves over a fixed number field with good reduction outside a fixed finite set of places) have been known for decades and do not play any direct role in his arguments. In particular, one cannot get any insight into Mochizuki’s methods by trying to “test them out” in the context of such concrete consequences, as his arguments are taking place entirely in the setting of Szpiro’s Conjecture (where those concrete consequences have no direct relevance).

Moreover, his methods require the elliptic curve in question to satisfy certain special global and local properties (such as having split 30-torsion and split multiplicative reduction at all bad places) which are generally not satisfied by Frey curves but are attained over an extension of the ground field of controlled degree. He has a separate short clever argument to deduce the general case from such special cases (over general number fields!) at the cost of ineffective constants. Thus, one cannot directly run his methods over a small ground field such as the rational numbers; the original case for ordinary integers is inferred from results over number fields of rather large (but controlled) degree.

Sometimes the extensive back-referencing to earlier papers also on generally unfamiliar topics (such as Frobenioids and anabelioids) has created a sense of infinite regress, due to the large number of totally novel concepts to be absorbed, and this has had a discouraging effect since the writing is often presented from the general to the specific (which may be fine for logic but not always for learning entirely new concepts). For example, if one tries to understand Mochizuki’s crucial notion of Frobenioid (the word is a hybrid of “Frobenius” and “monoid”), it turns out that much of the motivation comes from his earlier work in Hodge-Arakelov theory of elliptic curves, and that leads to two conundrums of psychological (rather than mathematical) nature:

  • Hodge-Arakelov theory is not used in the end (it was the basis for Mochizuki’s original aim to create an arithmetic version of Kodaira-Spencer theory, inspired by the function field case, but that approach did not work out). How much (if any) time should one invest to learn a non-trivial theory for motivational purposes when it will ultimately play no direct role in the final arguments?
  • Most of the general theory of Frobenioids (in two prior papers of Mochizuki) isn’t used in the end either (he only needs some special cases), but someone trying on their own to learn the material may not realize this and so may get very discouraged by the (mistaken) impression that they have to digest that general theory. There is a short note on Mochizuki’s webpage which points out how little of that theory is ultimately needed, but someone working on their own may not be aware of that note. Even if one does find that note and looks at just the specific parts of those earlier papers which discuss the limited context that is required, one sees in there ample use of notation, terminology, and results from earlier parts of the work. That may create a sense of dread (even if misplaced) that to understand enough about the special cases one has to dive back into the earlier heavier generalities after all, and that can feel discouraging.

An analogy that comes to mind is learning Grothendieck’s theory of etale cohomology. Nowadays there are several good books on the topic which develop it from scratch (more-or-less) in an efficient and direct manner, proving many of the key theorems. The original exposition by Grothendieck in his multi-volume SGA4 books involved first developing hundreds of pages of the very abstract general theory of topoi that was intended to be a foundation for all manner of future possible generalizations (as did occur later), but that heavy generality is entirely unnecessary if one has just the aim to learn etale cohomology (even for arbitrary schemes).

3. What is Inter-universal Teichmuller Theory (IUT)?

I will build up to my impression of an approximate definition of IUT in stages. As motivation, the method of Mochizuki to settle Szpiro’s Conjecture (and hence ABC) is to encode the key arithmetic invariants of elliptic curves in that conjecture in terms of “symmetry” alone, without direct reference to elliptic curves. One aims to do the encoding in terms of group-theoretic data given by (arithmetic) fundamental groups of specific associated geometric objects that were the focus of Grothendieck’s anabelian conjectures on which Mochizuki had proved remarkable results earlier (going far beyond anything Grothendieck had dared to conjecture). The encoding mechanism is addressed in the appendix; it involves a lot of serious arguments in algebraic and non-archimedean geometry of an entirely conventional nature (using p-adic theta functions, line bundles, Kummer maps, and a Heisenberg-type subquotient of a fundamental group).

Mochizuki’s strategy seems to be that by recasting the entire problem for Szpiro’s Conjecture in terms of purely group-theoretic and “discrete” notions (i.e., freeing oneself from the specific context of algebro-geometric objects, and passing to structures tied up with group theory and category theory), one acquires the ability to apply new operations with no direct geometric interpretation. This is meant to lead to conclusions that cannot be perceived in terms of the original geometric framework.

To give a loose analogy, in Wiles’ solution of Fermat’s Last Theorem one hardly ever works directly with the Fermat equation, or even with the elliptic curve in terms of which Frey encoded a hypothetical counterexample. Instead, Wiles recast the problem in terms of a broader framework with deformation theory of Galois representations, opening the door to applying techniques and operations (from commutative algebra and Galois cohomology) which cannot be expressed directly in terms of elliptic curves. An analogy of more relevance to Mochizuki’s work is the fact that (in contrast with number fields) absolute Galois groups of p-adic fields admit (topological) automorphisms that do not arise from field-theoretic automorphisms, so replacing a field with its absolute Galois group gives rise to a new phenomenon (“exotic” automorphisms) that has no simple description in the language of fields.

To be more specific, the key new framework introduced by Mochizuki, called the theory of Frobenioids, is a hybrid of group-theoretic and sheaf-theoretic data that achieves a limited notion of the old dream of a “Frobenius morphism” for algebro-geometric structures in characteristic 0. The inspiration for how this is done apparently comes from Mochizuki’s earlier work on p-adic Teichmuller theory (hence the “Teichmuller” in “IUT”). To various geometric objects Mochizuki associates a “Frobenioid”, and then after some time he sets aside the original geometric setting and does work entirely in the context of Frobenioids. Coming back to analogues in the proof of FLT, Wiles threw away an elliptic curve after extracting from it a Galois representation and then worked throughout with Galois representations via notions which have no meaning in terms of the original elliptic curve.

The presence of structure akin to Frobenius morphisms, and of other operations of “non-geometric origin” with Frobenioids, is somehow essential to getting non-trivial information from the encoding of arithmetic invariants of elliptic curves in terms of Frobenioids. I do not understand how, and the matter was not clearly addressed at the Oxford workshop, though it seems to have been buried somewhere in the lectures of the final 2 days; understanding this point seems to be an essential step in recognizing where there is some deep contact between geometry and number theory in the method.

So in summary, IUT is, at least to first approximation, the study of operations on and refined constructions with Frobenioids in a manner that goes beyond what we can obtain from geometry yet can yield interesting consequences when applied to Frobenioid-theoretic encodings of number-theoretic invariants of elliptic curves. The “IU” part of “IUT” is of a more technical nature that appears to be irrelevant for the proof of Szpiro’s conjecture.

The upshot is that, as happens so often in work on difficult mathematical problems, one broadens the scope of the problem in order to get structure that is not easily expressible in terms of the original concrete setting. This can seem like a gamble, as the generalized problem/context could possibly break down even when the thing one wants to prove is true; e.g., in Wiles’ proof of FLT this arose via his aim to prove an isomorphism between a deformation ring and a Hecke ring, which was much stronger than needed for the desired result yet was also an essential extra generality for the success of the technique originally employed. (Later improvements of Wiles’ method had to get around this issue, strengthening the technique to succeed without proving a result quite as strong as an isomorphism but still sufficient for the desired final number-theoretic conclusion.)

One difference in format between the nature of Mochizuki’s approach to the Szpiro Conjecture and Wiles’ proof of FLT is that the latter gave striking partial results even if one limited it to initial special cases – say elliptic curves of prime discriminant – whereas the IUT method does not appear to admit special cases with which one might get a weaker but still interesting inequality by using less sophisticated tools (and for very conventional reasons it seems to be impossible to “look under the hood” at IUT by thinking in terms of the concrete consequences of the ABC Conjecture). Mochizuki was asked about precisely this issue during the first Skype session at the Oxford meeting and he said he isn’t aware of any such possibility, adding that his approach seems to be “all or nothing”: it gives the right inequality in a natural way, and by weakening the method it doesn’t seem to simply yield a weaker interesting inequality but rather doesn’t give anything.

Let us next turn to the meaning of “inter-universal.” There has been some attention given to Mochizuki’s discussion of “universes” in his work on IUT, suggesting that his proof of ABC (if correct) may rely in an essential way on the subtle set-theoretic issues surrounding large cardinals and Grothendieck’s axiom of universes, or entail needing a new foundation for mathematics. I will now explain why I believe this is wrong (though Mochizuki’s considerations are nonetheless relevant for certain kinds of generality).

The reason that Mochizuki gets involved with universes appears to be that he is trying to formulate a completely general context for overcoming certain very thorny expository issues which underlie important parts of his work (roughly speaking: precisely what does one mean by a “reconstruction theorem” for geometric or field-theoretic data from a given profinite group, in a manner well-suited to making quantifiable estimates?). He has good reasons to want to do this, but a very general context seems not to be necessary if one takes the approach of understanding his proofs along the way (i.e., understanding all the steps!) and just aims to develop enough for the proof of Szpiro’s Conjecture.

Grothendieck introduced universes in order to set up a rigorous theory of general topoi as a prelude to the development of etale cohomology in SGA4. But anyone who understands the proofs of the central theorems in etale cohomology knows very well that for the purposes of developing that subject the concept of “universe” is irrelevant. This is not a matter of allowing unrigorous arguments, but rather of understanding the core ideas in the proofs. Though Grothendieck based his work on a general theory of topoi rather than give proofs only in the special case of etale topoi of schemes, it doesn’t follow that one must do things that way (as is readily seen by reading any other serious book on etale cohomology, where universes are irrelevant and proofs are completely rigorous).

In other words, what is needed to create a rigorous “theory of everything” need not have anything to do with what is needed for the more limited aim of developing a “theory of something”. Mochizuki does speak of “change of universe” in a serious way in his 4th and final IUT paper (this being a primary reason for the word “inter-universal” in “IUT”, I believe). But that consideration of universes is due to seeking a very general framework for certain tasks, and does not appear to be necessary if one aims for an approach that is sufficient just to prove Szpiro’s Conjecture. For the purposes of setting up a general framework for IUT strong enough to support all manner of possible future developments without “reinventing the wheel”, the “inter-universal” considerations may be necessary, and someone at the Oxford workshop suggested model theory could provide a well-developed framework for such matters, but for applications in number theory (and in particular the ABC Conjecture) it appears to be irrelevant.

4. What happened at the workshop?

The schedule of talks of the workshop aimed to give an overview of the entire theory. The aim of all participants with whom I spoke was to try to identify where substantial contact occurs between the theory of heights for elliptic curves (an essential feature of Szpiro’s Conjecture) and Mochizuki’s past work in anabelian geometry, especially how such contact could occur in a way which one could see did provide insight in the direction of a result such as Szpiro’s conjecture (rather than just yield non-trivial new results on heights disconnected from anything). So one could consider the workshop to be a success if it gave participants a clearer sense of:

  1. the “lay of the land” in terms of how some ingredients fit together,
  2. which parts of the prior work are truly relevant, and in what degree of generality, and
  3. how the new notions introduced allow one to do things that cannot be readily expressed in more concrete terms.

The workshop helped with (1) and (2), and to a partial extent with (3).

It was reasonable that participants with advanced expertise in arithmetic geometry should get something out of the meeting even without reading any IUT-related material in advance, as none of us were expecting to emerge as experts (just seeking basic enlightenment). Many speakers in the first 3 days, which focused on material that predates the IUT papers but feeds into them, were not IUT experts. Hence, they could not be expected to identify how their topic would precisely fit into IUT. It took a certain degree of courage to be a speaker in such a situation.

The workshop began with a lecture by Shou-Wu Zhang on a result of Bogomolov with a group-theoretic proof (by Zhang, if I understood correctly) that is not logically relevant (Mochizuki was not aware of the proof until sometime after he wrote his IUT papers) but provided insight into various issues that came up later on. Then there was a review of Mochizuki’s papers on refined Belyi maps and on elliptic curves in general position that reduced the task of proving ABC for all number fields and Vojta’s conjecture for all hyperbolic curves over number fields to the Szpiro inequality for all elliptic curves with controlled local properties (e.g., semistable reduction over a number field that contains √-1, j-invariant with controlled archimedean and 2-adic valuations, etc.). This includes a proof by contradiction that was identified as the source of non-effectivity in the constants to be produced by Mochizuki’s method (making his ABC result, if correct, well-suited to finiteness theorems but with no control on effectivity).

Next, there were lectures largely focused on anabelian geometry (for hyperbolic curves over p-adic fields and number fields, and various classes of “reconstruction” theorems of geometric objects and fields from arithmetic fundamental groups). Slides for many of the lectures are available at the webpage for the workshop.

The third day began with two lectures about Frobenioids. This concept was developed by Mochizuki around 2005 in a remarkable degree of generality, beyond anything eventually needed. A Frobenioid is a type of fibered category that (in a special case) retains information related to π_1 and line bundles on all finite etale covers of a reasonable scheme, but its definition involves much less information than that of the scheme. Frobenioids also include a feature that can be regarded as a substitute for missing Frobenius maps in characteristic 0. The Wednesday lectures on Frobenioids highlighted the special cases that are eventually needed, with some examples.

At the end of the third day and beginning of the fourth day were two crucial lectures by Kedlaya on Mochizuki’s paper about the “etale theta function” (so still in a pre-IUT setting). Something important emerged in Kedlaya’s talks: a certain cohomological construction with p-adic theta functions (see the appendix). By using Mochizuki’s deep anabelian theorems, Kedlaya explained in overview terms how the cohomological construction led to the highly non-obvious fact that “everything” relevant to Szpiro’s Conjecture could be entirely encoded in terms of a suitable Frobenioid. That shifted the remaining effort to the crucial task of doing something substantial with this Frobenioid-theoretic result.

After Kedlaya’s lectures, the remaining ones devoted to the IUT papers were impossible to follow without already knowing the material: there was a heavy amount of rapid-fire new notation, language, and terminology, and everyone not already somewhat experienced with IUT got totally lost. This outcome at the end is not relevant to the mathematical question of correctness of the IUT papers. However, it is a manifestation of the same expository issues that have discouraged so many from digging into the material. The slides available at the workshop webpage mentioned above will give many mathematicians a feeling for what it was like to be in the audience.

5. Audience frustration

There was substantial audience frustration in the final 2 days. Here is an example.

We kept being told many variations of “consider two objects that are isomorphic,” or even something as vacuous-sounding as “consider two copies of the category D, but label them differently.” Despite repeated requests with mounting degrees of exasperation, we were never told a compelling example of an interesting situation of such things with evident relevance to the goal.

We were often reminded that absolute Galois groups of p-adic fields admit automorphisms not arising from field theory, but we were never told in a clear manner why the existence of such exotic automorphisms is relevant to the task of proving Szpiro’s Conjecture; perhaps the reason is a simple one, but it was never clearly explained despite multiple requests. (Sometimes we were told it would become clearer later, but that never happened either.)

After a certain amount of this, we were told (much to general surprise) variations of “you have been given examples.” (Really?  Interesting ones?  Where?) It felt like taking a course in linear algebra in which one is repeatedly told “Consider a pair of isomorphic vector spaces” but is never given an interesting example (of which there are many) despite repeated requests and eventually one is told “you have been given examples.”

Persistent questions from the audience didn’t help to remove the cloud of fog that overcame many lectures in the final two days. The audience kept asking for examples (in some instructive sense, even if entirely about mathematical structures), but nothing satisfactory to much of the audience along such lines was provided.

For instance, we were shown (at high speed) the definition of a rather elaborate notion called a “Hodge theater,” but were never told in clear succinct terms why such an elaborate structure is needed in its entirety.  (Perhaps this was said at some point, but nobody I spoke with during the breaks caught it.) Much as it turns out that the very general theory of Frobenioids is ultimately unnecessary for the purpose of proving Szpiro’s Conjecture, it was natural to wonder if the same might be true of the huge amount of data involved in the general definition of Hodge theaters; being told in clearer terms what the point is and what goes wrong if one drops part of the structure would have clarified many matters immensely.

The many basic questions from the audience caused the lectures to fall behind schedule, which led some talks to go even faster to catch up, producing a feedback loop of still more audience confusion. But it was the initial “too much information” problem that caused the many basic questions to arise in the first place. Lectures should be aimed at the audience that is present.

6. Concluding thoughts

Despite the difficulties and general audience frustration that emerged towards the end of the week, overall the workshop was valuable for several reasons. It improved awareness of some of the key ingredients and notions. Moreover, in addition to providing an illuminating discussion of ideas around the vast pre-IUT background, it also gave a clearer sense of a more efficient route into IUT (i.e., how to navigate around a lot of unnecessary material in prior papers). The workshop also clarified the effectivity issues and highlighted a crucial cohomological construction and some relevant notions concerning Frobenioids.

Another memorable feature of the meeting was seeing the expertise of Y. Hoshi on full display. He could always immediately correct any errors by speakers and made sincere attempts to give answers to many audience questions (which were often passed over to him when a speaker did not know the answer or did not explain it to the satisfaction of the audience).

If the final two days had scaled back the aim of reaching the end of IUT and focused entirely on how the Frobenioid incarnation of the cohomological construction from Kedlaya’s lectures makes it possible (or at least plausible) to deduce something non-trivial in the direction of Szpiro’s Conjecture (not necessarily the entire conjecture), then the week would have been more instructive for the audience. Although “non-trivial” is admittedly a matter of taste, I do know from talking with most of the senior participants that most of us did not see where such a deduction took place; probably it was present somewhere in a later lecture, but we were so lost by everything else that had happened that we missed it.

I don’t understand what caused the communication barrier that made it so difficult to answer questions in the final two days in a more illuminating manner. Certainly many of us had not read much in the IUT papers before the meeting, but this does not explain the communication difficulties. Every time I would finally understand (as happened several times during the week) the intent of certain analogies or vague phrases that had previously mystified me (e.g., “dismantling scheme theory”), I still couldn’t see why those analogies and vague phrases were considered to be illuminating as written without being supplemented by more elaboration on the relevance to the context of the mathematical work.

At multiple times during the workshop we were shown lists of how many hours were invested by those who have already learned the theory and for how long person A has lectured on it to persons B and C. Such information shows admirable devotion and effort by those involved, but it is irrelevant to the evaluation and learning of mathematics. All of the arithmetic geometry experts in the audience have devoted countless hours to the study of difficult mathematical subjects, and I do not believe that any of us were ever guided or inspired by knowledge of hour-counts such as that. Nobody is convinced of the correctness of a proof by knowing how many hours have been devoted to explaining it to others; they are convinced by the force of ideas, not by the passage of time.

The primary burden now is on those who understand IUT to do a better job of explaining the main substantial points to the wider community of arithmetic geometers. Those who understand the work need to be more successful at communicating to arithmetic geometers what makes it tick and which of its crucial insights are visibly relevant to Szpiro’s Conjecture.

It is the efficient communication of great ideas in written and oral form that inspires people to invest the time to learn a difficult mathematical theory. To give a recent example, after running a year-long seminar on perfectoid spaces I came to appreciate that the complete details underlying the foundational work in that area are staggeringly large, yet the production of efficient survey articles and lectures getting right to the point in a moderate amount of space and time occurred very soon after that work was announced. Everything I understood during the week in Oxford supports the widespread belief that there is no reason the same cannot be done for IUT, exactly as for other prior great breakthroughs in mathematics. There have now been three workshops on this material. Three years have passed. Waiting another half-year for yet another workshop is not the answer to the current predicament.

For every subject I have ever understood in mathematics, there are instructive basic examples and concise arguments that illustrate the point to generally educated mathematicians. There is no reason that IUT should be any different, especially for the audience that was present at Oxford. Let me illustrate this with a short story. During one of the tea breaks I was chatting with a postdoc who works in analysis, and I mentioned sheaf theory as an example of a notion which may initially look like pointless abstract nonsense but actually allows for very efficient consideration of useful ideas which are rather cumbersome (or impossible) to contemplate in more concrete terms. Since that postdoc knew nothing about what can be done with sheaf theory, I told him about the use of sheaf cohomology to systematize and analyze the de Rham theorem and topological obstructions to construction problems in complex analysis; within 20 minutes he understood the point and wanted to learn more. Nobody expects to grasp the main points of IUT within 20 minutes, but if someone says they understand a theory and does not provide instructive visibly relevant examples and concise arguments that clearly illustrate the point, then they are not trying hard enough. Many are willing to work hard to understand what must be very deep and powerful ideas, but they need a clearer sense of the landscape before beginning their journey.
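For the curious non-expert, the sheaf-cohomology packaging of the de Rham theorem mentioned in that conversation is easy to state (a standard fact, included here only to flesh out the anecdote): on a smooth manifold X, the Poincaré lemma says the smooth de Rham complex is a resolution of the constant sheaf by fine (hence acyclic) sheaves, so

```latex
H^k_{\mathrm{dR}}(X) \;\cong\; H^k(X, \mathbf{R}_X),
```

identifying de Rham cohomology with sheaf cohomology of the constant sheaf, and thence with singular cohomology with real coefficients; the same formalism packages the topological obstructions behind construction problems (e.g., the Cousin problems) in complex analysis.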

7. Technical appendix

The following summary of some notions from Kedlaya’s lectures is included to convey to experts in arithmetic geometry that there are substantial and entirely conventional scheme-theoretic ideas underlying crucial constructions that provide the backbone for IUT. The Szpiro conjecture requires controlling height(Delta(E)) in a global setting. Let’s now focus on the local story, for an elliptic curve E with split multiplicative reduction over a p-adic field K.  We aim to encode ord(Delta(E)) – up to controlled error – in terms of cohomological constructions related to etale fundamental groups. The punctured curve E – {0} is hyperbolic, but to work with it analytically over K without losing contact with the algebraic side it is better (for GAGA purposes) to instead consider the complete curve E with a suitable log-structure supported at {0}, that being a “hyperbolic log curve” X.
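For orientation, recall the classical statement of Szpiro’s Conjecture over Q (stated here for the reader’s convenience; the work of course concerns its generalization over number fields): for every eps > 0 there is a constant C_eps > 0 such that

```latex
|\Delta_{\min}(E)| \;\le\; C_{\epsilon}\, N(E)^{6+\epsilon}
```

for all elliptic curves E over Q, where \Delta_{\min}(E) is the minimal discriminant and N(E) the conductor. The left side is a product of local contributions p^{ord_p(\Delta_{\min})}, which is why the local quantity ord(Delta(E)) at a place of multiplicative reduction is the natural thing to control.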

Inside the profinite pi_1(X) is a “tempered” pi_1 that is a topological subgroup somewhat akin to a local Weil group inside a profinite local Galois group. The sense in which it involves “\widehat{Z} replaced by Z” (related to ramification at {0} for connected finite etale covers of E – {0} branched over 0) is not by direct definition (think back to the usual cheap definition of the local Weil group, to be contrasted with the more conceptual definition of Weil groups in Artin-Tate, which maps to the profinite Galois group and in the local case is proved to be injective with “expected” image). Instead, the “tempered” pi_1 is defined by a procedure intrinsic to rigid-analytic geometry that classifies certain types of infinite-degree connected etale covers controlled by the geometry of a semistable formal model. The importance of having Z rather than \widehat{Z} related to ramification over 0 is that it will enable us to recover ord(Delta(E)) as an element of Z (amenable to archimedean considerations) rather than just in \widehat{Z} (where one cannot make contact with archimedean estimates).
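As a concrete and standard illustration of the Z-versus-\widehat{Z} distinction (not taken from the lectures), the Tate uniformization itself provides the prototypical infinite-degree cover seen by the tempered theory:

```latex
\mathbf{G}_m^{\mathrm{an}} \longrightarrow E^{\mathrm{an}} \simeq \mathbf{G}_m^{\mathrm{an}}/q^{\mathbf{Z}},
```

a connected cover with deck group Z (acting through multiplication by powers of q). Its finite intermediate layers G_m^an/q^{NZ} → G_m^an/q^Z have Galois group Z/NZ, and the profinite pi_1 records only their inverse limit \widehat{Z} = \varprojlim Z/NZ, whereas the tempered pi_1 classifies the infinite cover itself and so contains a genuine copy of Z.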

The way we’re going to rediscover ord(Delta(E)) in cohomology on a tempered pi_1 is through p-adic theta functions. Mochizuki is going to build certain cohomology classes for a non-abelian (Heisenberg-type) quotient of tempered pi_1 related to covers of X arising from pullback along multiplication-by-N on E (not a Galois cover over K when E[N] is not K-split) for varying integers N > 0. The construction is rather technical, involving arguments with various line bundles on the analytic etale cover Y of X given by G_m (via Tate uniformization) equipped with appropriate log structure supported at q^Z, as well as certain finite etale covers Y_N of Y related to pullback by [N] on E and formal-scheme models \mathfrak{Y}_N of such Y_N (and some games with replacing N by 2N to provide square roots for constructions of theta functions).

Such geometric machinery constructs a degree-1 cohomology class \eta_X in a “tempered” pi_1 with the crucial property that it nearly coincides with a Kummer-theoretic cohomology class of a down-to-earth non-archimedean theta-function (viewed as a meromorphic function on the analytic space Y). This comparison of cohomology classes constructed in seemingly completely different ways doesn’t quite work on the nose, but only up to translation by cohomology classes arising from controlled units (arising from the ring of integers of specific finite extensions of K); this “controlled error” is very important. (It has a precedent in Mochizuki’s work on reconstruction for rational functions in the context of the anabelian conjecture for hyperbolic curves, where there is an exact equality and no error term, as explained in Stix’s lecture.)
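To recall the standard construction behind the phrase “Kummer-theoretic cohomology class” (stated here in generic form, not as it appears in Mochizuki’s papers): for an invertible function f, a compatible choice of N-th roots yields the 1-cocycle

```latex
\sigma \;\longmapsto\; \frac{\sigma\!\left(f^{1/N}\right)}{f^{1/N}} \;\in\; \mu_N,
```

whose class in H^1(-, \mu_N) is independent of the chosen root, and passing to the inverse limit over N gives a class with \widehat{Z}(1)-coefficients. Applying this to a non-archimedean theta-function, viewed as an invertible function on suitable covers of Y, produces the “down-to-earth” class to which \eta_X is compared.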

Two key points are as follows:

(i) If we specialize the tempered-pi_1 cohomology class \eta_X at points of Y over specific torsion points of X = E then we recover Kummer classes of theta-values as cohomology classes for (specific) finite extensions of K, up to the same controlled error as mentioned above. Theta-functions (not their individual values!) have robust properties (rigidity in some sense) enabling one to prove nontrivial properties of their associated cohomology classes. But theta-values at suitable (torsion) points can encode numbers such as

(*) ord(q) = ord(Delta(E))

for the E with split multiplicative reduction with which we began. The elementary equation (*) is the mechanism by which ord(Delta(E)) makes an appearance amidst the surrounding heavy abstraction.
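The identity (*) is the classical q-expansion of the discriminant of the Tate curve:

```latex
\Delta(E) \;=\; q \prod_{n \ge 1} (1 - q^n)^{24}.
```

Since |q| < 1, each factor 1 – q^n is a unit in the ring of integers of K, so the infinite product converges to a unit and ord(Delta(E)) = ord(q) exactly.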

The upshot is that this encodes — up to controlled error and in a robust useful manner — the number ord(Delta(E)) in terms of cohomology classes on some tempered pi_1. These “controlled errors” are probably instances of what Mochizuki means when he speaks a lot about “mild indeterminacies”.

(ii) The actual construction of \eta_X goes via the route of formal schemes and does not directly mention theta-functions! (An analogy is surely the task of comparing Raynaud’s abstract conceptual construction of the Tate curve via formal schemes and GAGA-algebraization vs. the explicit equation that also computes the Tate curve.) By careful inspection of how \eta_X is actually built, one sees it is controlled by data expressed in terms of line bundles on finite etale covers of E – {0} (in the guise of the log-curve X given by E with its log structure at 0). The latter data constitute an important example of what Mochizuki calls a “tempered Frobenioid”.

If one encodes some information extracted from a scheme in terms of a Frobenioid arising from the scheme then one might informally say that one has “forgotten the scheme structure” and retained something much less. It’s loosely reminiscent of how working with the etale topology of a scheme “forgets nilpotent information.” Caveat: By SGA4, if we consider the etale topos of a scheme as a ringed topos by carrying along the structure sheaf as additional data then we actually recover the entire scheme, nilpotents and all, so the gulf between a scheme and its etale topology is in some sense quite mild. By contrast, passing from a nice variety over a general field to an associated Frobenioid encoding pi_1 and line bundle data generally entails a vast loss of information.

Mochizuki’s work on the Grothendieck conjecture for hyperbolic (log-)curves over sub-p-adic fields amounts to “reconstructing” such a curve, the p-adic field, and the structural morphism just from the arithmetic pi_1 as a profinite group on its own (with “reconstruction” meant in a precise sense that is highly non-trivial to express in a form well-suited for making quantifiable estimates without literally describing the entire process in excruciating detail; this precision issue underlies many of the expository challenges). The key consequence of this is that the tempered Frobenioid above retains much more information than may initially seem to be the case. Yet Frobenioids are very “categorical/discrete” objects and so appear to admit operations (such as exotic isomorphisms among them) which have no scheme-theoretic interpretation whatsoever. So if we can encode information we care about in terms of a Frobenioid then we might be able to subject the data we care about to operations that have no explanation in terms of operations on the schemes with which one began. (Frobenioids support operations that could be considered as substitutes for Frobenius endomorphisms in positive characteristic, for example.)

The upshot is that the task of approximately understanding ord(Delta(E)) in the local theory at a split multiplicative place can be recast in terms of cohomology extracted from a “tempered Frobenioid,” a type of mathematical structure that appears on the surface to be much less structure than that of a scheme which gave rise to it and yet is potentially amenable to operations with no direct description in terms of maps among schemes. (This may be part of what Mochizuki means when he refers to “dismantling scheme theory”.) Since Frobenioids by design include data intended to replicate in characteristic 0 a weak notion of Frobenius morphism, that part of the structure is an instance of the operations one can have with Frobenioids which are not expressible with schemes alone.

  1. Aaron Bertram
    December 15, 2015 at 10:28 am

    Fascinating. Great writing.


    • Bertie
      December 16, 2015 at 1:09 am

      I was thinking the whole way through ‘this is written so beautifully’. Surely something written as well as this must help in some way to break the current impasse, one would imagine.


  2. Olaf
    December 15, 2015 at 3:51 pm

    Thank you very much for this great writing!

    I bet 100 bucks that something is wrong with IUT and the proof of ABC.


    • Alexander
      December 16, 2015 at 3:33 pm

      If you mean something which cannot be repaired (quickly) I’ll bet against it.


  3. December 15, 2015 at 4:56 pm

    Reblogged this on the alien number and commented:
    I found this a fascinating post, and given the context of this blog I hope other readers may do too!


  4. December 16, 2015 at 9:39 am

    Reblogged this on The Conscious Mathematician.


  5. December 16, 2015 at 11:07 pm

    Brian, you spoke highly of Hoshi. Does he (or anyone else for that matter) claim to have checked all the details of the proof of the Szpiro conjecture and found it to be correct? Or is he still declining to “vouch” for the correctness of the proof?


  6. Brian Conrad
    December 17, 2015 at 4:57 pm

    Hi Tim. My recollection is that Hoshi said he has gone through the entire argument and believes it to be correct. What we really seek is a community understanding of the main ideas, which is unfortunately very difficult to achieve with explanations provided in their present format. Computations which require precise uniform control of constants under variation of input could have a mistake, for example. Making it more feasible for a wider audience to dig into the argument would make a tremendous difference (e.g., trying to compartmentalize it, laying out clearer lists of precise properties needed from one step as input into the next step, etc.).


    • Marshall Flax
      December 17, 2015 at 5:36 pm

      If I can reason by analogy: the first Harry Potter book was tight and well-edited, but by the time you get to the fifth book, the books read like they went straight from draft to printing press. It’s psychologically painful to throw perfectly good phrases on the floor, but a good editor will force the writer to do so for the sake of the reader — but by the fifth book JK Rowling didn’t have to subject herself to that unpleasantness anymore.

      The situation is different here, of course, we don’t really expect SM to throw away perfectly good math, but at the same time the core of the argument is obscured. Perhaps we should ask him to *color* (in the graph-theory sense) his argument, so that the critical steps are marked — but at the same time posterity doesn’t lose his full generality.


      • December 18, 2015 at 5:53 am

        Shin is great, but maybe even he doesn’t know which steps are critical, especially where full generality is needed or where (simpler) special cases would suffice. From the descriptions I’ve heard, it doesn’t sound like he is being intentionally obtuse or unhelpful.


  7. ed
    December 17, 2015 at 9:02 pm

    Mochizuki’s self-admitted adoption of unnecessary baggage in his proof sounds a lot like the original timepiece “H1” of John Harrison, the clockmaker who invented marine chronometers in the 1700’s. For 50 years no one believed his machine worked, as it was assumed the solution to the problem of precision timekeeping would involve astronomy. In 2008 when the huge contraption of wood and metal was finally being painfully restored, its design was found to contain several “false starts” left intact in the mechanism whose purpose had always puzzled experts.


  8. Edward R.
    December 20, 2015 at 5:27 am

    Go Yamashita has been writing, for many months, a “proof of abc conjecture after Mochizuki” as you can see on his website.

    Supposedly, from what I have heard, this paper would clean up Mochizuki’s overengineered preprints and provide a more comprehensible proof of abc.

    At the workshop, were there any comments about Yamashita’s work on this front? Is development going as expected?

    Lastly, Hoshi released an “Introduction to inter-universal Teichmüller theory” (84 pages) in November. But it’s written in Japanese. Was there any hints that this might be translated to English eventually?

    Best wishes.


    • Brian Conrad
      December 20, 2015 at 10:52 am

      Dear Edward R.: Yamashita has been working on a survey for at least a couple of years. The original target date was last March (the time of the first IUT workshop). At the Oxford meeting he said in his first IUT lecture that he is still working on it, projecting the number of pages to be “more than 200 but definitely less than 1000” (if I heard correctly). In the meantime we have the slides he prepared for the Oxford meeting.

      I asked Hoshi at the meeting if he knew of any work being done on preparing a translation of his survey, and he was not aware of any. My understanding from others who have more familiarity with Japanese culture than I do is that it is unlikely that someone senior to him would make a translation. So after that week I approached a very strong Japanese graduate student I know who has read a lot of the original IUT papers and had looked at Hoshi’s survey, and I asked him about prospects for a translation. He told me that making a translation seems to require good understanding of IUT, so he had no idea who could do it.

      That is all I know about the current circumstances related to surveys.


    • January 3, 2016 at 6:46 pm

      Dear Edward R.: As an addendum to my first reply to your comment, I have more information now. I was recently told by someone who communicated directly with Hoshi that he (Hoshi) is hoping to make an English translation of his notes by the time of next summer’s Kyoto workshop.

