A day in the life of a data scientist is, at the very least, multi-threaded (in terms of task processing, that is). Not only must they work with several internal stakeholders to get their ideas through, they must also ensure their machine learning models are trained on data of adequate volume and dimensionality. Even more so, we may add, if deep learning or reinforcement learning is at play.
Computer storage has kept pace with Moore’s Law, sure, but the computing power that is a prerequisite for Artificial Intelligence model development has not. And at the core of this problem lies a technology relic from decades ago – the Central Processing Unit (CPU). This article aims to make the case for closely supplementing the CPU with Graphics Processing Units, or GPUs, to dramatically accelerate AI model training and deep learning – something the industry needs very badly today, indeed.