The objective of this tutorial is to give a comprehensive overview of the theories, algorithms, and applications of sparse learning. The last decade has witnessed a growing interest in the search for sparse representations of data, as the underlying representations of many real-world processes are often sparse. For example, in disease diagnosis, even though humans have a huge number of genes, only a small number of them contribute to a given disease (Golub et al., 1999; Guyon et al., 2002). In neuroscience, the neural representation of sounds in the auditory cortex of unanesthetized animals is sparse, since the fraction of neurons that are active at a given instant is typically small (Hromadka et al., 2008). In signal processing, many natural signals are sparse in that they have concise representations when expressed in a proper basis (Candes & Wakin, 2008). Therefore, finding sparse representations is fundamentally important in many fields of science. This tutorial will focus on introducing the necessary background for sparse learning, presenting sparse learning techniques based on L1-norm regularization and its variants, demonstrating successful applications of these techniques in various domains, describing efficient algorithms for the underlying optimization problems, and discussing recent advances and future trends in the area.
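To make the idea of L1-norm regularization concrete, the sketch below solves a small Lasso problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1, with the iterative soft-thresholding algorithm (ISTA), one of the standard proximal-gradient methods covered in this tutorial. The problem sizes, regularization weight `lam`, and function names are illustrative choices, not prescriptions from the tutorial itself.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    # It sets small entries exactly to zero, which is what induces sparsity.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth (quadratic) term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a 3-sparse vector from 50 noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 27, 64]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)

x_hat = lasso_ista(A, b, lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of the large coefficients
```

Because the soft-thresholding step zeroes out small coordinates exactly, most entries of the recovered vector are identically zero, and the few large entries coincide with the support of the true sparse signal.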