This article was taken from the May 2014 issue of Wired magazine.

"Of the various things I've been trying to explain, this is one of the more difficult ones," says Stephen Wolfram. Given that Wolfram has a PhD in particle physics, is the creator of the computational search engine Wolfram|Alpha and the widely used Mathematica modelling software, is a recipient of a MacArthur Genius Grant, and is the author of A New Kind of Science, in which he proposes an algorithmic key to the entire universe, this could get heavy. Wolfram, British-born but Massachusetts-based, is talking about his newest project, Wolfram Language, which he describes as "knowledge-based programming": the way he tells it, it's an attempt to make everything in the world programmable. Wired finds out more.
Wired: Where did Wolfram Language come from?
Stephen Wolfram: We're interested in connecting computation to the everyday world. There are several issues in doing that. How do you describe the everyday world in computational terms? How do you interact with the devices that exist in the everyday world? How do you take the computation that we can abstractly do and connect it to things that humans can understand and control? So one of the things I'm excited about is that Wolfram Language allows one to really describe things in the real world.
How does that actually work?
The language is based on this idea of symbolic programming. It is really an old idea - it's derived from ideas that were developed in mathematics more than 100 years ago, and it's a way of formalising a representation of things.
In most programming languages, pieces of code, pieces of a program's user interface, and things like graphics tend to be different kinds of objects. In a symbolic language, the idea is that all of those different things are represented in the same way. Those objects can now be in the Cloud, running programs, or they can be devices -- and they can be represented in this symbolic way. The other big thing is that movies and so on can be well represented in this symbolic way too, as pieces of data -- just as numbers might be pieces of data, or elements of a program's user interface might be pieces of data. In this symbolic paradigm, all these things can be represented in a uniform way, and that's one of the partly conceptual, partly technological advances that makes possible a new level of use of computation.
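A minimal sketch of what that uniformity looks like in practice (the particular expressions here are our illustration, assuming an ordinary Wolfram Language session): a piece of algebra, a plot and a user-interface control are all just symbolic expressions, so the same structural operations apply to each of them.

    (* Everything is a symbolic expression with a head and arguments. *)
    expr = 2 + 3 x;                                (* a piece of algebra *)
    plot = Plot[Sin[t], {t, 0, 2 Pi}];             (* a piece of graphics *)
    button = Button["Click", Print["clicked"]];    (* a piece of user interface *)

    (* The same structural operations apply to all of them. *)
    Head /@ {expr, plot, button}
    (* -> {Plus, Graphics, Button} *)

    FullForm[expr]
    (* -> Plus[2, Times[3, x]] *)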
The other important element is that of natural language -- how you communicate with a computer. Computer language is formal, but when it comes to talking about things in the real world, it can be very inefficient. Take a question like, "Who was the director of the movie Gravity?" It's easy to represent that concept in natural language; you could also represent it in a precise symbolic computer language, but it feels inefficient for humans to do that.
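As a rough illustration of that difference (our sketch, not Wolfram's own example; the entity name and property below are indicative rather than exact), the same question can be asked in free-form natural language or spelled out in precise symbolic form:

    (* Free-form natural language, interpreted by Wolfram|Alpha: *)
    WolframAlpha["who directed the movie Gravity", "Result"]

    (* Roughly the equivalent precise symbolic form: *)
    Entity["Movie", "Gravity"]["Director"]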
It's like a combination of Mathematica and Wolfram|Alpha.
I was lucky that I built two very different approaches to communicating computations, one in Mathematica and one in Wolfram|Alpha. The Mathematica one is a precise kind of symbolic language for everything, and the Wolfram|Alpha one is completely messy: you communicate with natural language, say whatever comes into your mind, and it will try to decode what you're talking about. That second direction is primarily for what I might call "drive-by computation" -- you ask a question, you get an answer, and that's the end of the interaction; you take that answer and get on with whatever you were doing. That's as opposed to what happens with the language in Mathematica, where one is building up potentially thousands, tens of thousands, millions of lines of code to do something very complicated -- building a huge structure.
We built those two different branches quite separately, and what I realised is how powerful it is to bring them together.
Google's Knowledge Graph tries to understand real things, but this feels different.
There's a huge difference between what we're trying to do and the world of search engines. We're trying to compute answers to things.
There's a giant structure that is the systematic knowledge of our world, and it's our objective to take that structure and put it in precise computational terms. Part of it is actually like the web.
The web is ten billion pages of text, and there are questions you can ask of it whose answers are already written down somewhere in that corpus of human knowledge. But what we're trying to do is have a genuinely computational representation of things in the world, so that when people ask a question that has never been asked before, and was never written down by anybody on the web, we can still answer it with Wolfram|Alpha underneath.
How does all of this translate to a real-world use?
I think there are a bunch of different things, but one early objective is this: if you've got an algorithmic idea, you should be able to deploy it and turn it into a useful product in an extremely short amount of time.

At some level, it's just a quantitative difference: it's five lines of code in this language where it might be 3,000 lines of code in some other language. But in reality it's also a qualitative difference, because you're not going to write the 3,000 lines, and the expertise you need to do that is vastly different from the expertise you need to write five lines. I'm hoping for a Cambrian explosion of algorithmic startups, where people take the algorithmic ideas they have, make use of this huge stack of computational capabilities we've built to implement those ideas, and actually deploy them in the world.
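A hedged sketch of the kind of thing he means (the example and its function names are ours, and it assumes a Wolfram Cloud account): an "algorithmic idea" turned into a live, publicly callable web API in a handful of lines.

    (* An algorithmic idea: the average word length of a piece of text. *)
    score[text_String] := N[Mean[StringLength /@ StringSplit[text]]]

    (* Deploy it as a public web API with one more expression. *)
    CloudDeploy[
      APIFunction[{"text" -> "String"}, score[#text] &, "JSON"],
      Permissions -> "Public"]
    (* Returns a CloudObject whose URL can be called as ...?text=some+words *)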
Another thing is having more things in the world become computational. So, for example, the devices you're interacting with: are they doing something algorithmic, or are they just dumb devices? What we're realising is that it's going to be easier and easier to inject all this sophisticated computation into almost everything, and we've seen that in the last couple of months. We're now bundling the Wolfram Language with the Raspberry Pi, and we're looking at this Intel Edison computer that is absurdly small -- it's the size of an SD card, and the Wolfram Language runs on that device. What one starts to realise is that one can put the Wolfram Language into almost anything. You can put that little SD-card-sized computer, or whatever, into almost any kind of device, and then you start wondering: well, what does that mean?
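On a Raspberry Pi, that injection might look something like this sketch (the pin number and the decision rule are invented for illustration; it assumes the device framework available in the bundled Wolfram Language):

    (* Open the Pi's GPIO pins through the device framework. *)
    gpio = DeviceOpen["GPIO"];

    (* A knowledge-based decision driving a dumb output:
       switch pin 4 on only if the sun has already set at this location. *)
    If[Now > Sunset[Here], DeviceWrite[gpio, 4 -> 1]]

    DeviceClose[gpio]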
All these different devices that have, in the past, been controlled with a few buttons, that have fairly simple interfaces and behave in fairly straightforward ways -- we can inject this whole stack of computation into them. We could be talking to these devices -- our camera or whatever else -- in natural language, and have them understand that and actually do things that are sophisticated knowledge-based computations.
And to me, the thing that I actually don't fully understand is what the consequences of that will be. As we start to have all that sophisticated computation from the Wolfram Language in all these different devices -- so that we've got thousands, tens of thousands, millions of communicating devices, all able to do this kind of sophisticated computation -- what then becomes possible?
What's pretty interesting about this is that, because we have this kind of uniform symbolic representation of the world, these devices -- if they're sensors or something like that -- can be making measurements on the world, and the measurements they make can be shared with all the other devices and computers and so on. Because everything is represented in this same symbolic way, we get to share the information from all these different devices and compute with all of that information together. So I don't yet know what the killer apps of that situation will be -- but I'm pretty certain that there will be a bunch of them.
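A sketch of what that sharing could look like (the sensor name, cloud path and values are invented for illustration, and it assumes cloud connectivity): one device publishes its measurement as a symbolic quantity, and any other machine can fetch it and compute with it directly, units and all.

    (* On one device: take a reading and publish it as a symbolic quantity. *)
    reading = Quantity[21.4, "DegreesCelsius"];
    CloudPut[
      <|"sensor" -> "greenhouse-1", "time" -> Now, "value" -> reading|>,
      "readings/greenhouse-1"]

    (* On any other device or computer: fetch it and compute with it. *)
    data = CloudGet["readings/greenhouse-1"];
    UnitConvert[data["value"], "DegreesFahrenheit"]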
To find out more about Wolfram Language, visit wolfram.com/wolfram-language
This article was originally published by WIRED UK