Description
The exponentially increasing demand for computing power, together with physical and economic limitations, has driven a proliferation of distributed and parallel computer architectures. To make better use of current and future high-performance computing, and to fully benefit from massive amounts of data, we must discover, understand and exploit the parallelism available in machine learning. At the same time, we must model data adequately while keeping models as simple as possible, for example by using sparse representations of the data or sparse modelling of the underlying problem.
Visitor profile
Scientists, Researchers, PhD Students
Relevant Keywords
Machine Learning