Constructs the falling factorial basis matrix of a given order, with respect to given design points.
h_mat(k, xd, di_weighting = FALSE, col_idx = NULL)
Order for the falling factorial basis matrix. Must be >= 0.
Design points. Must be sorted in increasing order, and have length at least k+1.
Should "discrete integration weighting" be used?
Multiplication by such a weighted H gives discrete integrals at the design
points; see details for more information. The default is FALSE.
Vector of indices, a subset of 1:n where n = length(xd), that indicates which columns of the constructed matrix should be returned. The default is NULL, which is taken to mean 1:n.
Sparse matrix of dimension length(xd) by length(col_idx).
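For example, a minimal sketch of a call (the order, design points, and column indices below are arbitrary choices for illustration):

xd <- 1:10 / 10                                   # 10 sorted design points
H <- h_mat(2, xd)                                 # order-2 falling factorial basis matrix
dim(H)                                            # 10 x 10 (sparse)
H_sub <- h_mat(2, xd, col_idx = c(1, 2, 3, 6, 9))
dim(H_sub)                                        # 10 x 5: only the requested columns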
The falling factorial basis matrix of order \(k\), with respect to design points \(x_1 < \ldots < x_n\), is denoted \(H^k_n\). It has dimension \(n \times n\), and its entries are defined as: $$ (H^k_n)_{ij} = h^k_j(x_i), $$ where \(h^k_1, \ldots, h^k_n\) are the falling factorial basis functions, defined as: $$ \begin{aligned} h^k_j(x) &= \frac{1}{(j-1)!} \prod_{\ell=1}^{j-1}(x-x_\ell), \quad j=1,\ldots,k+1, \\ h^k_j(x) &= \frac{1}{k!} \prod_{\ell=j-k}^{j-1} (x-x_\ell) \cdot 1\{x > x_{j-1}\}, \quad j=k+2,\ldots,n. \end{aligned} $$
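To make the definition concrete, here is an illustrative direct implementation of these entries; hkj() is a hypothetical helper written only to mirror the formulas above, and h_mat() computes the same matrix far more efficiently:

# h^k_j evaluated at a vector of points x, per the definition above
hkj <- function(x, j, k, xd) {
  if (j == 1) return(rep(1, length(x)))
  if (j <= k + 1) {
    return(sapply(x, function(xi) prod(xi - xd[1:(j-1)])) / factorial(j - 1))
  }
  sapply(x, function(xi) prod(xi - xd[(j-k):(j-1)])) / factorial(k) * (x > xd[j-1])
}
n <- 8; k <- 2
xd <- sort(runif(n))
H_direct <- sapply(1:n, function(j) hkj(xd, j, k, xd))  # entries h^k_j(x_i)
max(abs(H_direct - as.matrix(h_mat(k, xd))))            # should be ~0 (numerical error only)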
The matrix \(H^k_n\) can also be constructed recursively, as follows. We first define the \(n \times n\) lower triangular matrix of 1s: $$ L_n = \left[\begin{array}{rrrr} 1 & 0 & \ldots & 0 \\ 1 & 1 & \ldots & 0 \\ \vdots & & & \\ 1 & 1 & \ldots & 1 \end{array}\right], $$ and for \(k \geq 1\), define the \(n \times n\) extended diagonal weight matrix \(Z^k_n\) to have first \(k\) diagonal entries equal to 1 and last \(n-k\) diagonal entries equal to \((x_{i+k} - x_i) / k\), \(i = 1,\ldots,n-k\). The \(k\)th order falling factorial basis matrix is then given by the recursion: $$ \begin{aligned} H^0_n &= L_n, \\ H^k_n &= H^{k-1}_n Z^k_n \left[\begin{array}{cc} I_k & 0 \\ 0 & L_{n-k} \end{array}\right], \quad \text{for $k \geq 1$}, \end{aligned} $$ where \(I_k\) denotes the \(k \times k\) identity matrix, and \(L_{n-k}\) denotes the \((n-k) \times (n-k)\) lower triangular matrix of 1s. For further details about this recursive representation, see Sections 3.3 and 6.3 of Tibshirani (2020).
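The recursion can be sketched in a few lines of dense-matrix R code; the helpers lower_tri_ones(), Zk(), and h_mat_recursive() below are hypothetical, defined only for this sketch, while h_mat() implements an efficient sparse version:

lower_tri_ones <- function(m) {
  out <- matrix(0, m, m)
  out[lower.tri(out, diag = TRUE)] <- 1
  out
}
Zk <- function(k, xd) {                       # extended diagonal weight matrix Z^k_n
  n <- length(xd)
  diag(c(rep(1, k), (xd[(k+1):n] - xd[1:(n-k)]) / k))
}
h_mat_recursive <- function(k, xd) {
  n <- length(xd)
  H <- lower_tri_ones(n)                      # H^0_n = L_n
  if (k >= 1) for (j in 1:k) {
    E <- diag(n)
    E[(j+1):n, (j+1):n] <- lower_tri_ones(n - j)
    H <- H %*% Zk(j, xd) %*% E                # H^j_n = H^{j-1}_n Z^j_n [I_j, 0; 0, L_{n-j}]
  }
  H
}
xd <- sort(runif(8))
max(abs(h_mat_recursive(2, xd) - as.matrix(h_mat(2, xd))))  # should be ~0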
The option di_weighting = TRUE returns \(H^k_n Z^{k+1}_n\), where \(Z^{k+1}_n\) is the \(n \times n\) extended diagonal weight matrix defined above. This is connected to discrete integration, as explained in the help file for h_mat_mult(). See also Section 3.3 of Tibshirani (2020) for more details.
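A small sketch of this relationship, forming \(Z^{k+1}_n\) explicitly (the order and design points are arbitrary):

xd <- sort(runif(8)); k <- 2; n <- length(xd)
Z <- diag(c(rep(1, k + 1), (xd[(k+2):n] - xd[1:(n-k-1)]) / (k + 1)))   # Z^{k+1}_n
max(abs(as.matrix(h_mat(k, xd, di_weighting = TRUE)) -
        as.matrix(h_mat(k, xd)) %*% Z))                                # should be ~0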
Each basis function \(h^k_j\), for \(j \geq k+2\), has a single knot at
\(x_{j-1}\). The falling factorial basis thus spans \(k\)th degree
piecewise polynomials---discrete splines, in fact---with knots in
\(x_{(k+1):(n-1)}\). The dimension of this space is \(n-k-1\) (number
of knots) \(+\) \(k+1\) (polynomial dimension) \(=\) \(n\). Setting
the argument col_idx
appropriately allows one to form a basis matrix for a
discrete spline space corresponding to an arbitrary knot set \(T
\subseteq x_{(k+1):(n-1)}\). For more information, see Sections 4.1 and 8 of
Tibshirani (2020).
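As a sketch, with \(k = 2\), \(n = 8\), and an assumed knot set \(T = \{x_4, x_6\}\), the relevant columns are the \(k+1\) polynomial columns plus \(j = 5\) and \(j = 7\), since column \(j\) (for \(j \geq k+2\)) has its knot at \(x_{j-1}\):

xd <- sort(runif(8)); k <- 2
H_sub <- h_mat(k, xd, col_idx = c(1:(k+1), 5, 7))   # knots at x_4 and x_6
dim(H_sub)                                          # 8 x 5: |T| + k + 1 = 5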
Note 1: For computing the least squares projection onto a discrete spline
space defined by an arbitrary knot set \(T \subseteq x_{(k+1):(n-1)}\),
one should not use the falling factorial basis, but instead use the
discrete natural spline basis from n_mat()
, as the latter has much
better numerical properties in general. The help file for dspline_solve()
gives more information.
Note 2: For multiplication of a given vector by \(H^k_n\), one should
not form \(H^k_n\) with the current function and then carry out the
multiplication, but instead use h_mat_mult()
, as the latter will be much more efficient (linear-time, versus quadratic-time for
forming the matrix and multiplying).
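For instance, a sketch of the two routes; the argument order of h_mat_mult() (vector, then order, then design points) is assumed here and should be checked against its help file:

xd <- sort(runif(1000)); k <- 2; v <- rnorm(1000)
a <- h_mat(k, xd) %*% v                     # forms the matrix first: O(n^2) work
b <- h_mat_mult(v, k, xd)                   # matrix-free: O(n) work
max(abs(as.vector(a) - as.vector(b)))       # should be ~0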
Tibshirani (2020), "Divided differences, falling factorials, and discrete splines: Another look at trend filtering and related problems", Section 6.3.
h_mat_mult()
for multiplying by the falling factorial basis
matrix and h_eval()
for constructing evaluations of the falling factorial
basis at arbitrary query points.