In the last week we’ve seen some noteworthy developments in the cognitive computing field. Most significantly, Google, and then Microsoft, open sourced their machine learning systems in an effort to move the field ahead. But software is not the only component driving this advancement. There has also been significant progress in machine learning and other artificial intelligence tasks because we have more, and faster, processing power.
Some folks are beginning to look at what changes, if any, we need to make in our hardware setups to continue to make gains in cognitive computing. Forbes published this article in 2011 that gives a peek into how long IBM has been trying to reproduce the way the brain processes information. It’s an interesting look at how complex the problem is.
More recently, Wired followed up its article on the open sourcing of TensorFlow with this piece on how companies are using GPUs to both train and execute AI models – a shift away from CPUs. GPUs, or graphics processing units, are best known for rendering images, video and animations on a computer screen. Google appears to be repurposing these programmable units as massively parallel processors to speed up machine learning workloads.
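Why do GPUs help with training? The matrix multiplies at the heart of neural networks decompose into many independent multiply-accumulate operations, which is exactly the shape of work a GPU's thousands of cores can run at once. A minimal CPU-side sketch of that idea (using NumPy purely as an illustration, not as actual GPU code):

```python
import numpy as np

def matmul_loop(a, b):
    """Serial version: one multiply-accumulate at a time, CPU-style."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4))
b = rng.standard_normal((4, 8))

# Each out[i, j] depends on no other output cell, so all 64 cells could be
# computed simultaneously -- the data parallelism a GPU exploits.
assert np.allclose(matmul_loop(a, b), a @ b)
```

The key observation is in the final comment: because no output cell depends on another, the triple loop can be replaced by thousands of simultaneous threads, one per cell.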
Yesterday, IBM announced that it is collaborating with FPGA chip designer Xilinx to increase processing power through accelerated computing. FPGAs, or field-programmable gate arrays, are the stem cells of computer hardware – they can be configured to run a very specific set of arithmetic and logic, and that logic can change between every run. Because the logic is tailored to the task at hand, an FPGA can be much more efficient than a general-purpose CPU. The advantage for machine learning seems obvious: as the machine learns, it can change what it asks each FPGA to do based on its immediate needs, and those needs are likely to keep shifting as learning progresses.
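To make the "stem cell" analogy concrete: the basic building block of an FPGA is a lookup table (LUT) whose truth table is the configuration, so the same silicon can implement different logic for each workload. A simplified, hypothetical sketch of that idea in software:

```python
class LUT:
    """A 2-input lookup table: the truth table IS the configuration.

    Simplified model of an FPGA logic cell; real cells have more inputs
    and are wired together by a configurable routing fabric.
    """
    def __init__(self, truth_table):
        self.table = truth_table  # maps (a, b) -> output bit

    def __call__(self, a, b):
        return self.table[(a, b)]

# Configure the cell as an AND gate for one workload...
and_gate = LUT({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
# ...then reprogram the same cell as XOR for the next run,
# with no new hardware required.
xor_gate = LUT({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

assert and_gate(1, 1) == 1 and and_gate(1, 0) == 0
assert xor_gate(1, 0) == 1 and xor_gate(1, 1) == 0
```

Reprogramming between runs is exactly what distinguishes an FPGA from a fixed-function chip: the gate-level behavior is data, not silicon.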
We’re likely to see more of these collaborations and creative uses of existing hardware in the near future as we work toward accomplishing more machine learning tasks.