Machine learning, often marketed under the broader label of artificial intelligence, has begun to demonstrate that it can do genuinely interesting things given a lot of computing power and a lot of examples. Lots of computing, needed in bursts, is something that "the cloud" can supposedly provide well. To my knowledge, no cloud computing vendor yet offers a machine learning interface. Ideally, of course, there would be multiple such vendors offering machine learning as a commodity service, and users could choose the one with the lowest price.
When does such machine learning need to be done in the cloud? When can it not be done, say, on a personal computer running overnight? The problem has to be very big and/or very urgent. It also has to be fairly rare, either across the population or over time for a single user; otherwise there simply will not be enough computing resources to meet everyone's needs, or meeting them will be prohibitively expensive.
The parallelization ratio for compressing 12 hours of computing ("overnight") into 1 minute ("while you wait") is 720. The ratio for compressing it into 1 second ("instantaneous") is 43,200, roughly 40,000. Is it really feasible to rent a computer 40,000 times more powerful than your personal computer for 1 second only?
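The arithmetic behind those ratios is simple enough to sketch in a few lines of Python (the time budgets are the ones named above; the labels are just for illustration):

```python
# Back-of-the-envelope parallelization ratios: how much more
# computing power you need to compress an overnight job into
# shorter wall-clock budgets.

overnight_seconds = 12 * 60 * 60  # 12 hours of local computing

budgets = [
    ("while you wait", 60),  # one minute
    ("instantaneous", 1),    # one second
]

for label, target_seconds in budgets:
    ratio = overnight_seconds // target_seconds
    print(f"{label}: need {ratio:,}x the computing power")
```

Running this prints a ratio of 720 for the one-minute case and 43,200 for the one-second case.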