No matter where you look, it seems everyone is involved in artificial intelligence, machine learning or deep learning in some way. For this blog I’m going to assume you know the differences between the three, but if not, NVIDIA has written a great blog that’ll get you started.
There’s good reason for NVIDIA to be writing about these technologies: GPUs are fundamental to conducting machine learning and deep learning in any meaningful sense. The algorithms have existed for many years, but the sad fact is that until quite recently the compute power simply wasn’t available to process large datasets in any useful timeframe.
Now, however, we can build supercomputers on commodity hardware – affordable, easy to obtain, broadly compatible with a wide range of technologies, and able to function on a plug-and-play basis. Plus, it’s possible to have multiple GPUs in a clustered system, which can tackle huge amounts of data using massively complex algorithms, in timeframes that make deep learning and machine learning projects financially viable.
Most cutting-edge researchers are trying to understand how cognitive computing can be applied to their research to provide a competitive edge. And the world’s largest technology companies are paying huge sums to snap up AI talent too.
My background is in research computing and life sciences, and in my day-to-day work at OCF I often talk to customers about the benefits AI could bring to their university. I tell them that there really isn’t an area that can’t benefit from AI – anywhere there’s data, there’s the potential to benefit from AI.
If you already have datasets from which you produce insights or outcomes (which is pretty much everyone doing anything!), then you already have a training dataset that can teach your algorithm of choice how to assist you. There are many possible benefits of using AI in this way:
- Assist human decision-making – the algorithm makes a suggestion and provides reasoning, allowing a human to accept or reject the recommendation. This is relevant to areas such as medicine, where your doctor could offer an AI-assisted diagnosis.
- Automate processes – almost any task that a human can perform in under a second is a candidate for AI. Why should a person be stuck with a repetitive task when AI can learn to do it, freeing people up for more complex work?
- Increase efficiency without making compromises – the Square Kilometre Array (SKA) is a great example: humans aren’t able to look at and evaluate the huge amounts of data coming from the project, so a huge proportion is thrown away before it has been analysed. Whilst a lot of SKA data may just be ‘noise’ or temporary files, it’s still worth considering what might be in the data being discarded. If you apply AI ‘on the wire’ and analyse that data on the fly, you gain much greater clarity and certainty about what you’re keeping and what you’re discarding.
- Develop insights into consumer behaviour – BBC Research and Development has announced a five-year research partnership with eight UK universities to better understand its audiences. The project aims to explore what machine learning can teach the BBC about its programmes and services, to create a more personal BBC. You can read more about this particular project in my colleague Julian’s blog here.
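To make the idea of “your existing data is your training data” concrete, here’s a minimal sketch using only Python’s standard library. The dataset is entirely made up for illustration (hypothetical student exam scores and attendance, labelled with past outcomes); the classifier is a simple k-nearest-neighbours vote – one of the most basic machine learning techniques, chosen purely to show the principle, not what a production system would use:

```python
import math
from collections import Counter

# Hypothetical historical records you already hold: each entry is
# ((exam_score, attendance_fraction), outcome). Real projects would
# load these from an existing database or file.
training_data = [
    ((85.0, 0.95), "pass"),
    ((90.0, 0.88), "pass"),
    ((70.0, 0.80), "pass"),
    ((40.0, 0.60), "fail"),
    ((35.0, 0.55), "fail"),
    ((45.0, 0.50), "fail"),
]

def predict(sample, k=3):
    """Classify a new sample by majority vote among its k nearest neighbours."""
    # Distance from the new sample to every historical record.
    neighbours = sorted(
        (math.dist(sample, features), label) for features, label in training_data
    )
    # Majority label among the k closest records wins.
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

print(predict((80.0, 0.90)))  # close to the historical "pass" records
print(predict((38.0, 0.58)))  # close to the historical "fail" records
```

The point is not the algorithm but the workflow: data you already collect, labelled by outcomes you already know, is enough to start teaching a model to make suggestions.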
The real success of true AI – machines that share all of our senses and reasoning powers and can think independently – will lie in connecting disparate HPC systems and frameworks together, which is still a way off. But for the time being, researchers can embark upon the previously impossible, using machine learning and deep learning to derive previously unknown insights from their data.
Is your university investing in machine learning, AI or deep learning? Leave a comment below or get in touch via email.