Max Little: 'science's branches mustn't forget they're part of a tree'

This article was taken from the January 2014 issue of Wired magazine.

Like many scientists, I'm concerned that science is becoming too fragmented. So many papers are published that it is impossible to keep track: one estimate puts the figure at 1.4 million articles a year. Since there are not enough scientists to read, let alone test, all these findings, most published research might well be false. Yet scientists are encouraged to "hyper-specialise" in narrow disciplines. Working in such isolation, could they simply be duplicating each other's mistakes?

As an applied mathematician, I see patterns everywhere. I collate models from across disciplines, test the simplest ones, and use artificial intelligence to throw out the inaccurate ones. Exploiting abstract patterns in this way has been surprisingly successful.

During my PhD I stumbled across the discipline of biomedical voice analysis. With some new mathematical methods, and combining these with AI, I could make accurate medical predictions about voice problems. This sparked off research in detecting Parkinson's from voice recordings - the basis of the Parkinson's Voice Initiative.

After my PhD, I brought together a hydrologist and an economist. We wanted to try rainfall forecasting using some fairly simple mathematics. Now, weather forecasting throws together $10 million supercomputers and ranks of atmospheric scientists, crunching the equations of the atmosphere to make predictions. So competing against this Goliath with only historical data and a laptop would seem foolhardy.

But I came up with simple mathematics that, when fed with rainfall data, could make predictions often as accurate as the supercomputers'. We even discovered that the historical average rainfall, used as a forecast, was sometimes more accurate than the supercomputer forecasts.
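To make that baseline concrete, here is a minimal sketch in Python of a "historical average" forecast: predict the next value as the mean of everything seen so far, then score it with a simple error measure. The rainfall numbers are hypothetical placeholders, not data from the study, and this is not the author's actual code.

```python
import numpy as np

def climatology_forecast(history):
    """Forecast the next value as the mean of all values seen so far."""
    return float(np.mean(history))

def mean_absolute_error(forecasts, observations):
    return float(np.mean(np.abs(np.asarray(forecasts) - np.asarray(observations))))

# Hypothetical monthly rainfall figures (mm); real work would use long historical records.
rainfall = np.array([80.0, 95.0, 60.0, 110.0, 70.0, 85.0, 90.0, 75.0])

forecasts, observed = [], []
for t in range(4, len(rainfall)):   # start forecasting once some history exists
    forecasts.append(climatology_forecast(rainfall[:t]))
    observed.append(rainfall[t])

print("Climatology baseline MAE:", mean_absolute_error(forecasts, observed))
```

Any other forecasting method can be scored the same way on the same series, which is what makes the comparison with supercomputer forecasts possible.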

Success like this raises suspicions. So, with collaborators, I tried to make this approach fail. We assembled 30,000 data sets from geophysics, finance, seismology, hydrology, astrophysics, space science, acoustics, biomedicine, molecular biology, meteorology and others. We wrote software for 9,000 mathematical models from a deep literature dive. We exhaustively applied each model to each data set. A big picture emerged: we found that problems across the sciences could be solved in this way. Often, the best models were not the ones suggested by disciplinary wisdom.
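The structure of such an exhaustive comparison can be sketched in a few lines: loop over every pair of model and data set, fit on an early portion of the series, score on the rest, and rank the results. The two toy model builders and synthetic series below are illustrative assumptions, not the 9,000 models or 30,000 data sets actually used.

```python
import numpy as np

def mean_forecaster(train):                 # "climatology": always predict the training mean
    mu = float(np.mean(train))
    return lambda n: np.full(n, mu)

def persistence_forecaster(train):          # always predict the last observed value
    last = float(train[-1])
    return lambda n: np.full(n, last)

MODELS = {"mean": mean_forecaster, "persistence": persistence_forecaster}

def evaluate(models, datasets, test_fraction=0.25):
    """Return {dataset_name: {model_name: mean absolute error on held-out data}}."""
    scores = {}
    for name, series in datasets.items():
        series = np.asarray(series, dtype=float)
        split = int(len(series) * (1 - test_fraction))
        train, test = series[:split], series[split:]
        scores[name] = {
            model_name: float(np.mean(np.abs(build(train)(len(test)) - test)))
            for model_name, build in models.items()
        }
    return scores

# Tiny synthetic stand-ins for the thousands of real data sets.
datasets = {
    "rainfall_like": np.random.default_rng(0).gamma(2.0, 40.0, 120),
    "random_walk": np.cumsum(np.random.default_rng(1).normal(size=120)),
}
for ds, result in evaluate(MODELS, datasets).items():
    best = min(result, key=result.get)
    print(f"{ds}: best model = {best}, errors = {result}")
```

The point of the design is that no model gets special treatment: every candidate faces every data set, and only out-of-sample error decides which one wins.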

Is this a new way of doing science? If we divide science into three branches - experiment, theory and computer simulation - then what I describe here doesn't quite fit. These are not just simulations - they need both data and mathematics. Models and data are mixed and matched across disciplines, using AI. I don't know what to call this approach, but its most enthusiastic proponents are statistical forecasters and the computer scientists who do this in video-analysis competitions or for drug-discovery prizes financed by big pharma.

What can we learn from this approach? First, open up and share the data, freeing it from politics; science is not reproducible otherwise. Second, don't prejudge: we need a commitment to radical impartiality. Too often, favoured theories and models persist, putting whole disciplines at risk of going astray. Third, collaborate more: it would greatly speed advances and encourage working on problems that span disciplinary boundaries.

Scientific theories that can withstand this "challenge" from other disciplines will pass a rigorous test. Not only will they be good explanatory theories, they'll have practical, predictive power. Without this mixing of disciplinary knowledge, we won't know if science is making progress, or merely rediscovering the same findings ad infinitum.

Max Little is a consultant mathematician, academic and TED Fellow.

This article was originally published by WIRED UK