The topics of automation, AI, and the impact these will have on society have been big on this blog. This is because they are things that I think about a fair bit. I'm not in AI; my interests are numerical work and programming languages, but I live in the world and I train the people who will be writing tomorrow's computing software, so these things interest me. I've been saying that things get interesting around 2020. I think the social changes become more visible around 2015 as automation begins to soak up more jobs, and by 2025 we are in a very odd place where a lot of people simply aren't employable unless we find ways to augment humans in dramatic ways.
Cray just announced a new supercomputer line that they say will scale to 50 petaflops. No one has bought one yet, so there isn't one in existence, but they will be selling them by the end of the year, and I'm guessing that by next year someone will build one that goes over 10 petaflops. That's on the high end of most estimates I've seen of the computing power of the human brain, so this is significant.
Thinking about the Cray announcement, it hit me that I can put my predicted dates to the test a bit, both to see how much I really believe them and as a way to help others decide whether they agree. We'll start with the following plot from top500.org, which shows the computing power of the top 500 computers in the world since 1993.
What we see is a really nice exponential trend that grows by an order of magnitude every 4 years. I couldn't find exact numbers for the Flops of the Watson computer, but what I found tells me it would probably come in between 100 and 800 TFlops, though that might be too high.
The thing is, the processing power of the top 500 machines in the world isn't really going to change the world. McDonald's isn't going to replace its human employees if it costs several million dollars to buy the machine that can do the AI. However, smaller machines are doing about the same thing as these big machines. Right now you can get a machine that does ~1 TFlops for about $1k, assuming you put in a good graphics card and utilize it through OpenCL or CUDA based programs. So workstation machines are less than two orders of magnitude behind the bottom of the Top500 list. That means in 8 years a workstation class machine should have roughly the power of today's low end supercomputer. To be specific, in 2021, for under $10,000, you will probably be able to buy a machine that can pull 100 TFlops. So you can have roughly a Watson for a fraction of a human's annual salary, especially if you include employer contributions to taxes and such. I'm guessing that running a McDonald's doesn't require a Watson's worth of computing power. So if the reliability is good, by 2021 fast food companies would be stupid to employ humans. The same will be true of lots of other businesses that currently don't pay well.
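For anyone who wants to play with these numbers, here is a minimal sketch of the back-of-envelope extrapolation I'm doing. It assumes a ~1 TFlops workstation in 2011 and the order-of-magnitude-every-4-years growth rate from the top500.org trend; both the starting point and the growth rate are rough assumptions for illustration, not measurements.

```python
# Rough extrapolation: start from ~1 TFlops in a 2011 workstation and assume
# peak performance grows by an order of magnitude every 4 years (the trend in
# the top500.org plot). Assumed figures, not benchmarks.

def projected_flops(year, base_flops=1e12, base_year=2011, years_per_10x=4.0):
    """Projected peak FLOPS in `year`, given 10x growth every `years_per_10x` years."""
    return base_flops * 10 ** ((year - base_year) / years_per_10x)

if __name__ == "__main__":
    for year in (2015, 2021, 2025):
        print(f"{year}: ~{projected_flops(year):.1e} FLOPS")
    # 2015: ~1.0e+13  (10 TFlops)
    # 2021: ~3.2e+14  (a few hundred TFlops)
    # 2025: ~3.2e+15  (petaflop class)
```

The rounded outputs line up with the dates I use below: roughly 10 TFlops by 2015, on the order of 100 TFlops or more by 2021, and petaflop class machines by 2025.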
Watson might not be the ideal comparison, though. What about the Google self-driving car or the Microsoft virtual receptionist? In the latter case, I know it was a 2P machine with 8 cores and something like 16GB of RAM. That machine probably didn't do more than 100 GFlops. Google wasn't as forthcoming about their car, but it wasn't a supercomputer, so I'm guessing it was probably a current workstation class machine.
What about the next step down in the processor/computer hierarchy? The newest tablets and cell phones run dual-core ARM processors that only do about 100 MFlops. That's the bottom of the chart, so they are 3.5-4 orders of magnitude down from the workstation class machines. Keep in mind, though, that given the exponential growth rate, the low power machines that you carry around will hit 1 TFlops in 16 years, by 2027. That means they can run their own virtual receptionist.
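The same assumption can be run in the other direction: how long does the 10x-every-4-years rate take to close a given gap? Here is a tiny sketch; the ~100 MFlops starting point for a 2011 phone is the rough figure from above, not a benchmark.

```python
import math

# How long until a ~100 MFlops mobile chip reaches 1 TFlops, assuming the same
# 10x-every-4-years growth rate? (Rough 2011 starting figure, for illustration.)

def years_to_reach(target_flops, base_flops=1e8, years_per_10x=4.0):
    """Years needed to close the gap from base_flops to target_flops."""
    return years_per_10x * math.log10(target_flops / base_flops)

print(years_to_reach(1e12))  # 16.0 -> roughly 2027, starting from 2011
```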
Networking and the cloud make this even more interesting, because a small device can simply collect data and send it to bigger computers that crunch the data and send back information on what to do. What is significant is that the chips required to do meaningful AI will be extremely cheap within 8-16 years. Cheap enough that, as long as the robotics side can make devices that are durable and dependable, it will be very inexpensive to have machines performing basic tasks all over the place.
So back to my timeline: a standard workstation type machine should be able to pull 10 TFlops by 2015, four years from today. I think things like the virtual receptionist and the Google cars demonstrate that this will be sufficient power to take over a lot of easy tasks, and as prices come down, the automation will move in. By 2020 the cost of machines to perform basic tasks will be negligible (though I can't be as certain about the robotics parts), and the machines you can put in a closet/office will be getting closer to 100 TFlops, enough to do Watson-like work, displacing quite a few jobs that require a fair knowledge base. By 2025 you are looking at petaflop desktops and virtual assistants that have processing power similar to your own brain.
So I think the timeline is sound from the processing side. I also have a feeling it will work on the software side. The robotics side is less clear to me, and it might depend on some developments in materials. However, graphene really appears to have some potential as a game changer, and if that pans out I don't see the material side being a problem at all.
Hey Mark,
Have to admit, I respectfully disagree. There's a marked difference between computing power and AI. While, yes, you could absolutely design and develop very specific repetitive tasks for a computer or robot to do, it is incredibly difficult to make the leap that raw processing power is going to have any bearing on abstract thought being replaced.
The Turing test isn't going to be passed; it's poorly formed. Watson was merely an inference engine utilizing a very specific task's parameters, which is exactly what AI seeks to do.
So yes, 100 TFlops could be in a sub-$1k machine, but until a programmer is paid to create the fry cook (not hard, admittedly), McDonald's wouldn't be smart to replace anyone just yet.
Also, I'd note that the processing power necessary for a self-driven car isn't that essential. I'd recommend reading up on the Stanford OpenCV and laser rangefinder driven car that won the XPrize from DARPA for unimproved terrain. Not much processing power was necessary in that project, as is sufficiently demonstrated by the ability of consumer smartphones to run an OpenCV instance with full Haar classification training/recognition in real time.
Steve,
I don't think you have to believe in strong AI to see automation taking over things. McDonald's will replace the fry cook because a system will be built and written for that purpose, and it will cost a lot less to run than employing the humans. The fry cook isn't using full human intelligence. Driving doesn't require it either. If anything, I think your reference to the DARPA XPrize for unimproved terrain supports my argument. Of course, DARPA has since moved to urban terrain. GM is planning a self-driving car by 2018, and I don't think they are alone.
The Watson computer was an 80 teraflop machine. That's about 1/100th of the estimates for a human brain. Watson, at that performance level, can put pretty much every call center employee out of a job. It will be able to make doctors more efficient too. Similar systems, called e-discovery systems, are already knocking out lawyers.
Build the machine and the software will follow. I don't think it will take that long either. The first 8-petaflop computer exists now. It won't pass any Turing tests, even if it has the power; the software isn't there, and what's more, that isn't what it was written for. When the 500th fastest computer in the world hits 8 petaflops, then you might see something.
There is another catch, though. A human baby might have 8 petaflops of processing power, but it takes years to acquire the information and learn to really use that power. I expect the computer AI systems that do wind up being the "smartest" will have to develop in a similar way. Watson already did that somewhat. It wasn't given a database of answers. It was given a bunch of books to "read" and learn from. Newer systems will do it faster and better.