Nicholas Richardson
Doctor of Philosophy in Mathematics (PhD)
Research Topic
Signal decomposition and source separation
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
Compressed sensing (CS) is a paradigm in which a structured high-dimensional signal may be recovered from random, under-determined, corrupted linear measurements. LASSO programs are effective for solving CS problems due to their proven ability to leverage underlying signal structure. Three popular LASSO programs are equivalent in a sense and sometimes used interchangeably. Tuned by a governing parameter, each admits an optimal parameter choice yielding minimax order-optimal error. CS is well studied, though theory for LASSO programs typically concerns this optimally tuned setting. However, the optimal parameter value for a LASSO program depends on properties of the data and is typically unknown in practical settings. Performance in empirical problems thus hinges on a program's parameter sensitivity: it is desirable that small variation about the optimal parameter choice begets small variation about the optimal risk.

We examine the risk of three LASSO programs as a function of their governing parameters and demonstrate that their parameter sensitivity can differ for the same data, thereby informing the selection of a LASSO program in practice. We prove that a gauge-constrained program admits asymptotic cusp-like behaviour of its risk in the limiting low-noise regime, and that a residual-constrained program has asymptotically suboptimal risk for very sparse vectors (i.e., for any fixed parameter choice, the ratio of the risk to the optimally achievable risk grows unbounded). These results contrast with observations about an unconstrained program with sufficiently large parameter. Our theory is supported with extensive numerical simulations, demonstrating the parameter sensitivity phenomenon even for modest dimensional parameters.

We first analyze these risks for proximal denoising (PD), in which one directly observes signal plus noise (i.e., the measurement matrix is the identity). There, we further reveal a data regime in which the unconstrained PD risk can be asymptotically suboptimal. We also show how our theory extends to analyze generalized LASSO programs for generalized CS. Finally, we extend a keystone of our theoretical analysis, the projection lemma: we generalize the result to an arbitrary Hilbert space and extend it from scaled projections to proximal mappings of a dilated gauge. We discuss applications and possible directions for these results.
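For orientation, the three programs in question are commonly taken to be the gauge-constrained, residual-constrained, and unconstrained LASSO formulations. A minimal sketch using cvxpy (the data and the parameter values tau, sigma, lam are illustrative, not the optimally tuned choices analyzed in the thesis):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    m, N = 50, 200
    A = rng.standard_normal((m, N)) / np.sqrt(m)
    x0 = np.zeros(N); x0[:5] = 1.0                 # 5-sparse ground truth
    b = A @ x0 + 0.01 * rng.standard_normal(m)     # noisy linear measurements

    x = cp.Variable(N)
    tau, sigma, lam = 5.0, 0.1, 0.05               # governing parameters

    # Gauge-constrained: minimize the residual subject to an l1-ball constraint.
    p1 = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2)), [cp.norm(x, 1) <= tau])
    # Residual-constrained (basis pursuit denoising): minimize the l1 norm.
    p2 = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(A @ x - b, 2) <= sigma])
    # Unconstrained LASSO with regularization parameter lam.
    p3 = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm(x, 1)))

    for p in (p1, p2, p3):
        p.solve()
        print(np.linalg.norm(x.value - x0))        # recovery error as a risk proxy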
Signal reconstruction from linear measurements is a well-established problem in applied and computational mathematics, data science, and signal processing. In a nutshell, a signal lives in a vector space, and one aims to reconstruct the signal from its linear measurements. This thesis investigates the following real-world scenarios under this setup.

Scenario 1. Consider the over-sampled setting and assume that the signals to be recovered are known to be sparse. In this case, the randomized Kaczmarz and sparse randomized Kaczmarz algorithms are two well-known algorithms that can handle large data. This thesis investigates the reconstruction of sparse signals and the support detection for both algorithms and provides rigorous results for this setting.

Scenario 2. As in Scenario 1, suppose that the signals of interest are known to be sparse and belong to ℝ^N, where N is large, but this time we are in the under-sampled setting. The thesis proposes a novel algorithm called Iterative Reweighted Kaczmarz (IRWK) for the recovery of N-dimensional s-sparse signals from their linear measurements in the under-determined setting. The IRWK algorithm uses a single measurement per iteration and requires O(N) working memory.

Scenario 3. Suppose that the linear measurements of a signal are quantized using Memoryless Scalar Quantization (MSQ), which essentially rounds off each entry to the nearest point in δℤ. The measurements perturbed by MSQ lead to an inaccurate reconstruction of the signal even in the over-sampled setting (frame quantization). In practice, as the number of measurements m increases, the reconstruction error is observed to decay at a rate of O(m^{-1/2}); however, the earlier theory guarantees only O(1) error. This thesis bridges theory with the observed results by rigorously proving that the error is of the form c₁ + c₂ m^{-1/2}, where c₁ and c₂ are independent of m. We prove that c₁ is extremely small in the case of Gaussian measurements and provide sharp bounds on its value. We also mathematically justify why we expect c₁ to be small but non-zero for a broad class of random matrices, and we extend the result to the under-determined setting (compressed sensing quantization).
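For context, the randomized Kaczmarz iteration referenced in Scenario 1 projects the current iterate onto the hyperplane of one randomly chosen equation per step; a minimal numpy sketch with the standard row-sampling rule (probabilities proportional to squared row norms):

    import numpy as np

    def randomized_kaczmarz(A, b, iters=10_000, seed=0):
        # Solve A x = b by projecting onto one random row's hyperplane per step.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms_sq = np.sum(A**2, axis=1)
        probs = row_norms_sq / row_norms_sq.sum()
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.choice(m, p=probs)
            x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 50))             # over-sampled consistent system
    x0 = rng.standard_normal(50)
    print(np.linalg.norm(randomized_kaczmarz(A, A @ x0) - x0))

    # MSQ from Scenario 3, for reference: quantize each entry to the nearest
    # point of delta * Z, i.e. q = delta * np.round(y / delta).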
Compressed sensing (CS) is a signal acquisition paradigm to simultaneously acquire and reduce the dimension of signals that admit sparse representations. This is achieved by collecting linear, non-adaptive measurements of a signal, which can be formalized as multiplying the signal with a "measurement matrix". If the measurement matrix satisfies the so-called restricted isometry property (RIP), then it is appropriate for compressed sensing. While wide classes of random matrices provably satisfy the RIP with high probability, explicit and deterministic constructions have been shown (so far) to satisfy the RIP only in a significantly suboptimal regime.

In this thesis, our focus is on deterministic measurement matrices in compressed sensing. In a nutshell, we investigate quantization methods for a class of deterministic matrices (Chapter 2); introduce a novel deterministic construction of a family of binary, circulant measurement matrices using the Legendre symbol (Chapter 3); and propose two novel approaches for improving the RIP constant estimates based on the Gershgorin circle theorem, obtaining an improvement over the Gershgorin bounds by a multiplicative constant. One of our approaches here, together with a conjecture we make regarding the distribution of quadratic residues in a finite field, provides a potential path to break the so-called "square-root barrier": we give a proof based on the assumption that the conjecture holds.
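As a rough illustration of the flavour of the Chapter 3 construction (the details below, such as the handling of the zero entry and the row normalization, are illustrative assumptions rather than the thesis's exact recipe), one can build a ±1 circulant matrix from the Legendre sequence modulo a prime p and keep a random subset of its rows:

    import numpy as np

    def legendre_symbol(a, p):
        # Legendre symbol (a|p) for an odd prime p, via Euler's criterion.
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    def legendre_circulant(p, m, seed=0):
        row = np.array([legendre_symbol(a, p) for a in range(p)])
        row[0] = 1                                          # make all entries +-1
        C = np.stack([np.roll(row, k) for k in range(p)])   # p x p circulant
        rows = np.random.default_rng(seed).choice(p, m, replace=False)
        return C[rows] / np.sqrt(m)                         # m x p measurement matrix

    A = legendre_circulant(p=97, m=20)
    print(A.shape)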
Many empirical studies suggest that samples of continuous-time signals taken at locations randomly deviated from an equispaced grid can benefit signal acquisition (e.g., undersampling and anti-aliasing). However, rigorous statements of such advantages and the respective conditions are scarce in the literature. This thesis provides some theoretical insight on this topic when the deviations are known and generated i.i.d. from a variety of distributions. By assuming the signal of interest is s-compressible (i.e., can be expanded with s coefficients in some basis), we show that O(s polylog(N)) samples randomly deviated from an equispaced grid are sufficient to recover the N/2-bandlimited approximation of the signal. For sparse signals (i.e., s ≪ N), this sampling complexity is a great reduction compared to equispaced sampling, where O(N) measurements are needed for the same quality of reconstruction (Nyquist-Shannon sampling theorem). The methodology consists of incorporating an interpolation kernel into the basis pursuit problem. The main result shows that this technique is robust with respect to measurement noise and stable when the signal of interest is not strictly sparse. Analogous results are provided for signals that can be represented as an N × N matrix with rank r. In this context, we show that O(rN polylog(N)) random nonuniform samples provide robust recovery of the N/2-bandlimited approximation of the signal via the nuclear norm minimization problem. This result has novel implications for the noisy matrix completion problem, improving known error bounds and providing the first result that is stable when the data matrix is not strictly low-rank.
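A minimal sketch of the sampling model (the jitter distribution and test signal are illustrative assumptions): samples are taken at equispaced grid points perturbed by known i.i.d. deviations, and the resulting nonuniform samples are what the interpolation-kernel basis pursuit would consume.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 32                                          # number of samples
    grid = np.arange(n) / n                         # equispaced grid on [0, 1)
    jitter = rng.uniform(-0.5 / n, 0.5 / n, n)      # known i.i.d. deviations
    t = grid + jitter                               # nonuniform sample locations

    freqs = np.array([3, 17, 40])                   # sparse set of active modes
    signal = lambda t: np.cos(2 * np.pi * np.outer(t, freqs)).sum(axis=1)
    y = signal(t)                                   # measurements for basis pursuit
    print(t[:4], y[:4])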
Tensor completion is the problem of recovering a low-rank tensor from a partial subset of its entries. Assuming a rank-r, order-d tensor in ℝ^{N×N×⋯×N}, the best sampling complexity achieved is O(rN^{d/2}), which can be obtained by a tensor nuclear-norm minimization problem. This bound is significantly larger than O(rdN), the number of free variables in a rank-r tensor. In this thesis, we prove that when r = O(1), we can achieve optimal sample complexity by constraining either of two proxies for tensor rank: the convex M-norm or the non-convex max-qnorm. The max-qnorm is the (non-convex) generalization of the matrix max-norm to tensors. The M-norm, on the other hand, is a convex atomic norm whose atoms are rank-1 sign tensors. We prove that both the max-qnorm and the M-norm of a bounded rank-r tensor are bounded by quantities that are independent of N. We also prove that the unit balls of both these norms have small Rademacher complexity.

We analyze max-qnorm and M-norm constrained least squares minimization problems for tensor completion, proving that when r = O(1), m = O(Nd) measurements are sufficient for efficient estimation of the original tensor. We also use an information-theoretic technique to prove that the dependence on N is optimal. Moreover, we design an alternating method for approximating the solution of max-qnorm tensor completion and conduct a thorough investigation of its performance on synthetic and real-world data.

We also generalize the 1-bit matrix completion problem to higher-order tensors. We prove that when r = O(1), a bounded rank-r, order-d tensor T ∈ ℝ^{N×N×⋯×N} can be estimated efficiently from only m = O(Nd) binary measurements by regularizing either its max-qnorm or its M-norm. We prove that the sample complexity of recovering a low-rank tensor from 1-bit measurements of a subset of its entries is the same as recovering it from unquantized measurements. Moreover, we show the advantage of using 1-bit tensor completion over matricization both theoretically and numerically. Specifically, we show how the 1-bit measurement model can be used for context-aware recommender systems.
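A minimal sketch of the two measurement models above (the CP construction and the dithered 1-bit rule are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    N, r, d = 20, 2, 3
    # Rank-r CP tensor: sum of r outer products of the factor columns.
    U, V, W = (rng.standard_normal((N, r)) for _ in range(d))
    T = np.einsum('ia,ja,ka->ijk', U, V, W)

    m = 5 * N * d                                   # number of observed entries
    idx = tuple(rng.integers(0, N, size=(d, m)))    # entries sampled with replacement
    y = T[idx]                                      # unquantized measurements

    sigma = 0.5                                     # dither level for the 1-bit model
    y_1bit = np.sign(T[idx] + sigma * rng.standard_normal(m))
    print(y[:5], y_1bit[:5])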
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
In Bora et al. (2017), a mathematical framework was developed for compressed sensing guarantees in the setting where the measurement matrix is Gaussian and the signal structure is the range of a generative neural network (GNN). The problem of compressed sensing with GNNs has since been extensively analyzed when the measurement matrix and/or network weights follow a subgaussian distribution. In this thesis, we move beyond the subgaussian assumption to measurement matrices that are derived by sampling uniformly at random rows of a unitary matrix (including sub-sampled Fourier measurements as a special case). Specifically, we prove the first known restricted isometry guarantee for generative compressed sensing (GCS) with sub-sampled isometries, and provide recovery bounds with nearly order-optimal sample complexity, addressing an open problem of Scarlett et al. (2022, p. 10). Recovery efficacy is characterized by the coherence, a new parameter which measures the interplay between the range of the network and the measurement matrix. Our approach relies on subspace counting arguments and ideas central to high-dimensional probability. Furthermore, we propose a regularization strategy for training GNNs to have favourable coherence with the measurement operator. We provide compelling numerical simulations that support this regularized training strategy: our strategy yields low-coherence networks that require fewer measurements for signal recovery. This, together with our theoretical results, supports coherence as a natural quantity for characterizing GCS with sub-sampled isometries.
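A minimal sketch of the measurement model (the toy generator below is a stand-in for a trained GNN, purely for illustration): the measurement matrix consists of randomly selected rows of a unitary DFT matrix, applied to the generator's output.

    import numpy as np

    rng = np.random.default_rng(0)
    N, k, m = 256, 10, 40

    W = rng.standard_normal((N, k))                # placeholder "generator" weights
    G = lambda z: np.tanh(W @ z)                   # range of G is the signal structure

    F = np.fft.fft(np.eye(N), norm='ortho')        # unitary DFT matrix
    rows = rng.choice(N, size=m, replace=False)    # m rows sampled uniformly at random
    A = F[rows]                                    # sub-sampled isometry, m x N

    y = A @ G(rng.standard_normal(k))              # compressed measurements of G(z)
    print(y.shape)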
Distributed Compressive Sensing (DCS) studies the recovery of jointly sparse signals. Compared to separate recovery, the joint recovery algorithms in DCS are usually more effective as they make use of the joint sparsity. In this thesis, we study a weighted ℓ₁-minimization algorithm for the joint sparsity model JSM-1 proposed by Baron et al. Our analysis gives a sufficient null space property for joint sparse recovery; furthermore, this property can be extended to stable and robust settings. We also present some numerical experiments for this algorithm.
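A minimal sketch of weighted ℓ₁ minimization of this flavour, using cvxpy (the weights come from an oracle support estimate, purely for illustration; the JSM-1 common/innovation decomposition is omitted):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    m, N = 40, 120
    A = rng.standard_normal((m, N)) / np.sqrt(m)
    x0 = np.zeros(N)
    x0[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
    b = A @ x0

    w = np.ones(N)
    w[np.abs(x0) > 0] = 0.3                        # down-weight the estimated support

    x = cp.Variable(N)
    prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)), [A @ x == b])
    prob.solve()
    print(np.linalg.norm(x.value - x0))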
The synthesis model for signal recovery has been the model of choice for many years in compressive sensing. Various weighting schemes using prior support information to adjust the objective function associated with the synthesis model have been shown to improve the accuracy of signal recovery. Generally, even with no prior knowledge of the support, iterative methods can build support estimates and incorporate them into the recovery, which has also been shown to increase the speed and accuracy of the recovery. However, when the original signal is sparse with respect to a redundant dictionary (rather than an orthonormal basis), there is a counterpart model to synthesis, namely the analysis model, which has been less popular but has recently attracted more attention. The analysis model is much less understood, and thus fewer theorems are available in the context of both non-weighted and weighted signal recovery.

In this thesis, we investigate weighting in both the analysis and synthesis models for weighted ℓ₁ minimization. Theoretical guarantees on reconstruction and various weighting strategies for each model are discussed. We give conditions for weighted synthesis recovery with frames which do not require strict incoherency conditions; this is based on recent results on regular synthesis with frames using optimal dual ℓ₁ analysis. A novel weighting technique is introduced in the analysis case which outperforms its traditional counterparts in the case of seismic wavefield reconstruction. We also introduce a weighted split Bregman algorithm for analysis and optimal dual analysis. We then investigate these techniques on seismic data and synthetically created test data using a variety of frames.
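For orientation, the two models differ in where the dictionary enters the ℓ₁ term; a minimal cvxpy sketch (the random frame D and uniform weights are illustrative assumptions):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    m, N, d = 30, 64, 96                  # d > N: redundant dictionary (frame)
    A = rng.standard_normal((m, N)) / np.sqrt(m)
    D = rng.standard_normal((N, d))       # synthesis dictionary; D.T analyzes

    c0 = np.zeros(d)
    c0[rng.choice(d, 5, replace=False)] = 1.0
    x0 = D @ c0                           # signal sparse w.r.t. the dictionary
    b = A @ x0
    w = np.ones(d)                        # weights (support-based in the thesis)

    # Synthesis model: recover sparse coefficients, then synthesize the signal.
    c = cp.Variable(d)
    syn = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, c), 1)), [A @ (D @ c) == b])
    syn.solve()

    # Analysis model: recover the signal directly; sparsity measured on D.T @ x.
    x = cp.Variable(N)
    ana = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, D.T @ x), 1)), [A @ x == b])
    ana.solve()
    print(np.linalg.norm(D @ c.value - x0), np.linalg.norm(x.value - x0))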
Compressed sensing is a data acquisition technique that entails recovering estimates of sparse and compressible signals from n linear measurements, significantly fewer than the signal's ambient dimension N. In this thesis, we show how we can reduce the required number of measurements even further if we incorporate prior information about the signal into the reconstruction algorithm. Specifically, we study certain weighted non-convex ℓp minimization algorithms and a weighted approximate message passing (AMP) algorithm.

In Chapter 1, we describe compressed sensing as a practicable signal acquisition method in applications and introduce the generic sparse approximation problem. We then review some of the algorithms used in the compressed sensing literature and briefly introduce the method we use to incorporate prior support information into these problems.

In Chapter 2, we derive sufficient conditions for stable and robust recovery using weighted ℓp minimization and show that these conditions are better than those for recovery by regular ℓp and weighted ℓ₁. We present extensive numerical experiments, both on synthetic examples and on audio and seismic signals.

In Chapter 3, we derive a weighted AMP algorithm which iteratively solves the weighted ℓ₁ minimization problem. We also introduce a reweighting scheme for weighted AMP algorithms which enhances their recovery performance, and we apply these algorithms to synthetic experiments and real audio signals.
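For concreteness, a hedged numpy sketch of an AMP iteration with weighted soft thresholding (the threshold rule and the uniform weights are illustrative assumptions, not the derivation in the thesis):

    import numpy as np

    def soft(u, t):
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def weighted_amp(A, y, w, iters=30, alpha=1.5):
        n, N = A.shape
        x, z = np.zeros(N), y.copy()
        for _ in range(iters):
            tau = alpha * np.linalg.norm(z) / np.sqrt(n)   # crude noise estimate
            x_new = soft(x + A.T @ z, tau * w)             # weighted soft threshold
            onsager = z * np.count_nonzero(x_new) / n      # Onsager correction
            z = y - A @ x_new + onsager
            x = x_new
        return x

    rng = np.random.default_rng(0)
    n, N = 100, 300
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    x0 = np.zeros(N)
    x0[rng.choice(N, 10, replace=False)] = 1.0
    print(np.linalg.norm(weighted_amp(A, A @ x0, np.ones(N)) - x0))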