**Professor A.W.F. Edwards: Venn diagrams**

As with most capitalized innovations, diagrams with overlapping areas did not originate with the person they’re named after; Venn was adapting diagrams made by Euler, who was likely adapting his from elsewhere. The BSHM bulletin even published an 11th-century overlapping music diagram attributed to John of Afflighem. Euler (and likely John) used the diagrams to depict the results of a logical structure already known. Venn, instead, used them to help discover what the structure is.

But the talk focused on Venn diagrams and how many overlapping areas they can have. The more sets you include, the more the shapes of the areas have to change: with four sets the areas become elliptical, and with more they tend to become sausage-shaped and curved.
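A quick region count (my own aside, not from the talk) shows why the shapes have to change. A full Venn diagram on n sets needs 2^n regions (counting the outside), but two circles can cross in at most two points, which caps n circles at n(n−1) + 2 regions of the plane. A short sketch makes the comparison:

```python
# Why circles fail beyond three sets: region counting.
# A Venn diagram on n sets needs 2**n regions (including the outside),
# but n circles can split the plane into at most n*(n-1) + 2 regions.

def regions_needed(n):
    """Regions a full Venn diagram on n sets must show."""
    return 2 ** n

def max_regions_with_circles(n):
    """Maximum plane regions achievable with n circles."""
    return n * (n - 1) + 2

for n in range(1, 6):
    ok = max_regions_with_circles(n) >= regions_needed(n)
    print(n, regions_needed(n), max_regions_with_circles(n),
          "circles suffice" if ok else "circles fail")
```

At n = 4 the diagram needs 16 regions but circles top out at 14, which is why four-set diagrams switch to ellipses (two ellipses can cross in four points).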

At that point, it seemed to me that the actual information was being lost in complexity. But Professor Edwards’ field is genetics, and he showed how mapping the genetic code for amino acids could lead to new discoveries. Even understanding nothing about Mendelian biology or genetics, I was able to see the usefulness of the creation of adjoining areas on the resulting Venn diagram, which suggested overlaps and duplications in the code. It was clear that new and important scientific discoveries could be made just by changing the visualization of data.

**Professor June Barrow-Green: Olaus Henrici**

Henrici was supposed to become an engineer, but went into maths instead. Ferdinand Redtenbacher, one of his professors in Mechanical Engineering at the Karlsruhe Polytechnic, had created over 100 models for teaching engineering (again, we have innovation as a result of teaching students).

Although Barrow-Green emphasized Henrici’s move to pure mathematics, he never went far from what I would call applied maths. She discovered a patent of his for an improvement in the construction of bridges, arches, and roofs, and he published a book called Skeleton Structures, which, although it made him no money, became foundational reading for the building of skyscrapers in the United States.

As with the story of H.G. Wells (sorry), Henrici’s problem was figuring out how to make a living from his varied talents. And, like H.G., he became a tutor to make ends meet. His opportunity came at Central Technical College (now part of Imperial College, so part of the South Kensington milieu in which H.G. trained). There he was able to create a Mechanics Laboratory in 1884 (I really have to wonder whether he and Wells ever ran into each other).

His central interest since working with Clebsch as a young man, however, had been pure maths, and in particular geometry (not surprising given his interest in construction). He began to develop his own system for teaching a “modern” geometry, finding the emphasis on Euclid to be underwhelming in preparing students. Teachers, apparently, knew this, and yet the university curriculum, even at Cambridge, was still teaching Euclid almost exclusively. Henrici created cardboard and stick models to teach his modern geometry, and wrote a textbook (as I’m starting to think all good teachers do).

Apparently Charles Dodgson (his pen name was never mentioned or implied) wrote a book, “Euclid and His Modern Rivals”, discussing the new methods. I need to explore which kind of geometry Wells was exposed to for his exams.

Although Henrici didn’t publish many research papers (teaching, remember?), he joined the Royal Society and became famous for the mathematical models, which influenced others.

**Professor Sarah Hart: Symmetry, Pattern and Groups**

Perhaps symmetry makes me sleepy, or maybe it was because the session was after lunch, but my notes say, “your major should be the topic wherein you wouldn’t fall asleep at a good lecture, even if you were tired”.

While starting with images, this talk spent much time on formulas for what I would call repetitive elements of design, and I’m afraid I never understood the difference between a reflection and the other ways that elements repeat. We began with Platonic solids and ended with Coxeter graphs, and the only part that made sense to me actually came from a question from another mathematician. Symmetry, among other things, makes it possible to reduce the number of instructions, because elements repeat. I see the connection here to understanding the natural world, which has many symmetries (a word which for me means many examples of symmetry, but for Professor Hart means something else entirely). I understand the desire to reduce the set of instructions so as to make more complex findings. This gelled with my understanding of the purpose of mathematics: to reduce the set of instructions to its simplest forms.
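Since I could not keep rotations and reflections straight, here is a sketch I worked out afterwards (my own illustration, not from the talk, with labels I chose myself): number the corners of a square 0–3, let a rotation shift them around and a reflection reverse them, and close those two moves under composition. Exactly 8 symmetries come out — 4 rotations and 4 reflections — which is the “reduced set of instructions” idea in miniature: two generators describe all eight.

```python
# The symmetries of a square, generated from one rotation and one reflection.
# A symmetry is recorded as a permutation of the corner labels (0, 1, 2, 3).

def compose(p, q):
    """Apply permutation q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

rotation = (1, 2, 3, 0)    # turn the square 90 degrees
reflection = (3, 2, 1, 0)  # flip the square over an axis

# Close the two generators under composition.
group = {(0, 1, 2, 3)}     # start from the identity (do nothing)
frontier = [rotation, reflection]
while frontier:
    g = frontier.pop()
    if g not in group:
        group.add(g)
        for h in (rotation, reflection):
            frontier.append(compose(h, g))

print(len(group))  # 8: the 4 rotations plus the 4 reflections
```

The design point is that you never list all eight symmetries by hand; two instructions and a composition rule generate the rest.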

**Kenneth Falconer: Fractals – Simple or Complex?**

Not even sure what a fractal is, I paid close attention. Apparently calculus in the 18th and 19th centuries was applicable only to smooth curves, which could have a tangent. But something called the von Koch curve is so irregular that you cannot draw a tangent anywhere. (This made me think of student complaints that professors’ lectures “go off on a tangent” – perhaps if our lecture is super-irregular there would be no tangents!)

However, the curve is somehow “self-similar”, which seems to mean repeating in some way, so I had trouble meshing this with my understanding of “irregular”, which to me means without regularities. Examples like the Sierpinski triangle looked regular to me (and symmetrical).
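One way to square “self-similar” with “irregular” (my own sketch, not from the talk): each construction step of the von Koch curve replaces every straight segment with four segments a third as long, so the total length is multiplied by 4/3 at every step. The rule is perfectly regular, but the limiting curve is infinitely long inside a bounded region — too crinkled at every scale to admit a tangent.

```python
# Length of the von Koch curve after n construction steps.
# Each step replaces every segment with 4 segments 1/3 as long,
# so the total length is multiplied by 4/3 each time.

def koch_length(n, initial_length=1.0):
    """Total length after n replacement steps, starting from one segment."""
    length = initial_length
    for _ in range(n):
        length *= 4 / 3
    return length

for n in (0, 1, 2, 10, 50):
    print(n, koch_length(n))
# The length grows without bound, even though the curve never
# leaves a bounded patch of the plane.
```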

Benoit Mandelbrot claimed that irregular shapes are the norm (examples today might be fluid dynamics, the branching of airways in the lungs, stock market patterns), which meant that fractals are the norm, which meant that mathematics to study them needed to be developed. Here we got into the story of a competition to do this after the First World War, and the all-but-inevitable win by Gaston Julia because he was a war hero, even though Pierre Fatou (the only other competitor) submitted work that was just as good.

The Mandelbrot set is built by starting at 0,0 and repeatedly applying the formula z → z² + c. For some values of c the iteration stays bounded, and those points form a cohesive shape (that looks like a strange, but symmetrical, sea creature); for other values of c the points become “dust-like” rather than connected. With the advent of computer modeling beginning in the 70s, it became possible to calculate these out to the point where further miniature Mandelbrot sets could be seen in the connected portions.
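The iteration itself is tiny; here is a minimal sketch (my own, with an arbitrary iteration cap standing in for “forever”) of the bounded-or-escapes test that the computer models run at every point c:

```python
# Membership test for the Mandelbrot set: start z at 0, repeatedly
# apply z -> z**2 + c, and check whether z escapes to infinity.
# Once |z| > 2 the orbit is guaranteed to escape.

def in_mandelbrot(c, max_iter=100):
    """True if the orbit of 0 under z -> z**2 + c stays bounded
    for max_iter steps (a practical stand-in for 'forever')."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True: the orbit stays at 0
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
```

Running this test over a grid of complex values of c, and colouring each point by the answer, is what produces the familiar sea-creature picture.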

I was unable to determine the significance of all this, and it still looked fairly regular to me, and seemed rather self-referential. But, as my son has told me, mathematics is circular anyway.

**Ian Stewart: Picturing Chaos**

The thesis here was that it was pictures that drove the research of chaos, rather than the mathematics leading to the pictures, and here we really did get into irregularities. I knew nothing about chaos theory going in (it always seemed to me like the word “anarchy” – something that has a specific definition to scholars, but that lay people interpret wrongly).

Newton’s physics could explain the interaction of two heavenly bodies, but once you added a third the mathematics fell apart. Henri Poincaré tried it with two bodies with much mass and one with little mass, and it created such complexity that he said he could not draw the result. He concluded that Newton’s method simply wouldn’t work to picture the impact of three bodies upon each other – the curves fold back on each other, and make what’s called a Homoclinic Tangle.

Enter Edward Lorenz, who in 1961 was using a room-sized computer to do calculations, and turned it off in the middle of a process because he didn’t want to leave it running while he was gone. When he returned, he reset the machine back a few calculations, to make sure he’d re-entered everything correctly, and he noted the next several calculations were correct and matched. So he let it run, but after a while he noted the results were not the same at all.

Technicians told him this was because the computer stored the numbers to many more decimal points than could be entered on the keys, but that it was a very tiny change. With ongoing calculations, however, the change got bigger and bigger. Although it may have been a computer bug (or feature!) to begin with, the implications were bigger. It meant that one tiny, imperceptible change could alter the entire outcome. More, the results will ultimately diverge until they are on entirely different, unrelated trajectories. He published his paper on this (*Deterministic Nonperiodic Flow*), but it was ignored by his intended audience of meteorologists and by mathematicians. When he later presented on it without having given the talk a title, the conference organizer called it “The Butterfly Effect”.
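Lorenz’s rounding problem can be seen in miniature (my own stand-in: I use the simple logistic map rather than Lorenz’s actual weather equations, and starting values I picked myself). Run the same calculation twice, once at “full precision” and once with the dropped decimals, and watch the two runs part company:

```python
# Sensitive dependence on initial conditions, in miniature.
# The logistic map x -> 4*x*(1 - x) is a standard chaotic stand-in
# for Lorenz's model: a tiny rounding difference in the starting
# value grows until the two runs are on unrelated trajectories.

def trajectory(x, steps):
    """Iterate the logistic map from x for the given number of steps."""
    out = []
    for _ in range(steps):
        x = 4 * x * (1 - x)
        out.append(x)
    return out

full = trajectory(0.506127, 60)   # the "full-precision" run
rounded = trajectory(0.506, 60)   # the same run with dropped decimals

for step in (0, 10, 25, 50):
    print(step, abs(full[step] - rounded[step]))
# The gap starts around a ten-thousandth and keeps growing until the
# two runs differ by amounts as large as the values themselves.
```

(The starting values 0.506127 versus 0.506 echo the famous rounding in Lorenz’s own printout.)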

The theory was proven in 2002 with computer assistance, and is now regularly used to predict weather. This was Lorenz’s original research, to determine how to better predict the weather. By showing the unexpected results of even a tiny change, he changed meteorology into something that requires far more predictive models, not to create certainty but to provide a range of possible scenarios which can then be combined into the most likely one. The implications, of course, are much larger – one question got into catastrophe theory, and how the issuing of multiple possible results could create better systems for monitoring bodily processes like organ function.