# Learned Filters

These are examples of 16x16 sparsifying filters, as described in [1].
They were learned on a set of five images and are constrained to form a frame for the
space of N by N images. Although the filters resemble Gabor atoms or filters learned
by a convolutional neural network, they were trained to satisfy a significantly
different optimality condition.
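To make the transform-sparsity idea concrete, the sketch below applies a tiny undecimated filter bank to an image and counts the nonzero responses. These are not the learned filters from the paper; the finite-difference filters and the test image are a hypothetical stand-in, chosen only because they are a textbook sparsifying transform for piecewise-constant images.

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical stand-in for a learned filter bank: finite-difference
# filters, which sparsify piecewise-constant images.
filters = [
    np.array([[1.0, -1.0]]),    # horizontal difference
    np.array([[1.0], [-1.0]]),  # vertical difference
]

# Piecewise-constant test image: two flat regions.
img = np.zeros((16, 16))
img[:, 8:] = 1.0

def analyze(x, bank):
    # Undecimated (no downsampling) analysis: one full-size coefficient
    # map per channel, with circular boundary handling.
    return [convolve2d(x, h, mode="same", boundary="wrap") for h in bank]

coeffs = analyze(img, filters)
sparsity = sum(int((np.abs(c) > 1e-12).sum()) for c in coeffs)
total = sum(c.size for c in coeffs)
print(f"nonzero coefficients: {sparsity} / {total}")
```

Only the pixels on the two (circular) edges of the image produce nonzero coefficients, so the transformed representation is far sparser than the image itself; a learned filter bank aims for the same effect on natural images.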

**L. Pfister** and Y. Bresler, “Learning Filter Bank Sparsifying Transforms,” 2018.
- Abstract

  Data is said to follow the transform (or analysis) sparsity model if it becomes sparse when acted on by a linear operator called a sparsifying transform. Several algorithms have been designed to learn such a transform directly from data, and data-adaptive sparsifying transforms have demonstrated excellent performance in signal restoration tasks. Sparsifying transforms are typically learned using small sub-regions of data called patches, but these algorithms often ignore redundant information shared between neighboring patches. We show that many existing transform and analysis sparse representations can be viewed as filter banks, thus linking the local properties of a patch-based model to the global properties of a convolutional model. We propose a new transform learning framework in which the sparsifying transform is an undecimated perfect reconstruction filter bank. Unlike previous transform learning algorithms, the filter length can be chosen independently of the number of filter bank channels. Numerical results indicate that filter bank sparsifying transforms outperform existing patch-based transform learning for image denoising while benefiting from additional flexibility in the design process.

- arXiv
- BibTeX

  ```bibtex
  @article{Pfister2018a,
    title         = {Learning Filter Bank Sparsifying Transforms},
    author        = {Pfister, Luke and Bresler, Yoram},
    year          = {2018},
    archiveprefix = {arXiv},
    eprint        = {1803.01980},
    primaryclass  = {stat.ML},
    url           = {http://arxiv.org/abs/1803.01980v1}
  }
  ```