The future of AI needs hardware accelerators based on analog memory devices

Imagine personalized Artificial Intelligence (AI), where your smartphone becomes more like an intelligent assistant – recognizing your voice even in a noisy room, understanding the context of different social situations, or presenting only the information that's truly relevant to you, plucked out of the flood of data that arrives every day. Such capabilities might soon be within our reach – but getting there will require fast, powerful, energy-efficient AI hardware accelerators.

AI could get 100 times more energy-efficient with IBM’s new artificial synapses

Copying the features of a neural network in silicon might make machine learning way more efficient.

Tue 12 Jun 18 from MIT Technology Review

IBM Aims To Reduce Power Needed For Neural Net Training By 100x

Custom silicon for speeding up AI inference is here, but IBM wants to go further, using a hybrid computing architecture and elements of resistive computing (see the sketch after these stories) to improve the efficiency ...

Wed 13 Jun 18 from Extremetech

Training a neural network in phase-change memory beats GPUs

Specialized hardware that trains in-memory is both fast and energy-efficient.

Thu 7 Jun 18 from Ars Technica

This New Chip Design Could Make Neural Nets More Efficient and a Lot Faster

Neural networks running on GPUs have achieved some amazing advances in artificial intelligence, but the two are accidental bedfellows. IBM researchers hope a new chip design tailored specifically ...

Mon 11 Jun 18 from SingularityHub
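
The idea running through all four stories is analog in-memory computing: a weight matrix is stored as the conductances of a crossbar of resistive or phase-change memory devices, so that applying input voltages to the rows yields output currents that are, by Ohm's law and Kirchhoff's current law, the matrix-vector product – computed in one physical step instead of many memory fetches. The sketch below is a minimal numpy illustration of that principle only, not IBM's design; the differential conductance encoding and the read-noise level are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def to_conductance_pair(W, g_max=1.0):
    """Map signed weights onto two non-negative conductance arrays.
    Crossbars typically encode a signed weight as the difference of a
    'positive' and a 'negative' device; the scaling here is made up."""
    scale = g_max / np.abs(W).max()
    G_pos = np.clip(W, 0, None) * scale
    G_neg = np.clip(-W, 0, None) * scale
    return G_pos, G_neg, scale

def crossbar_matvec(W, v, noise=0.02):
    """Analog matrix-vector product with additive read noise
    (the 2% noise level is an assumption for illustration)."""
    G_pos, G_neg, scale = to_conductance_pair(W)
    i_pos = G_pos.T @ v  # column currents through positive devices
    i_neg = G_neg.T @ v  # column currents through negative devices
    i = i_pos - i_neg
    i += noise * np.abs(i).max() * rng.standard_normal(i.shape)
    return i / scale     # convert currents back into weight units

W = rng.standard_normal((4, 3))  # 4 inputs, 3 outputs
v = rng.standard_normal(4)       # input voltages on the rows

print("digital:", W.T @ v)
print("analog :", crossbar_matvec(W, v))

Training adds the harder part – updating those conductances in place – which is where the device non-idealities discussed in the articles above come in.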

