\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{biblatex}
\addbibresource{library.bib}
\usepackage{listings}
\usepackage{amssymb}
\usepackage{comment}
\usepackage{graphicx,amsmath}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=cyan,
    pdftitle={Numerical Methods: Lecture 4. Conditioning. Floating point arithmetic and stability. Systems of linear equations},
    pdfpagemode=FullScreen,
}

\title{Numerical Methods: Lecture 4. Conditioning. Floating point arithmetic and stability. Systems of linear equations.}
\author{Konstantin Tikhonov}

\begin{document}

\maketitle

\section{Suggested Reading}

\begin{itemize}
\item Lectures 12--19, 20--23 of \cite{trefethen1997numerical}
\item Lectures 6--7 of \cite{tyrtyshnikov2012brief}
\end{itemize}

\section{Exercises}

Deadline: 18 Nov

\begin{enumerate}

\item (3) Propose a numerically stable way to compute the function $f(x,a)=\sqrt{x+a}-\sqrt{x}$ for positive $x,\;a$.

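The cancellation in the naive formula is easy to observe numerically. The following sketch (assuming \texttt{numpy} and \texttt{mpmath} are available; it is only a diagnostic, not the requested algorithm) compares the naive double-precision evaluation with a high-precision reference for increasing $x$:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np
from mpmath import mp, mpf

mp.dps = 50   # reference values carry 50 decimal digits

def f_naive(x, a):
    # direct evaluation in double precision
    return np.sqrt(x + a) - np.sqrt(x)

def f_reference(x, a):
    # quasi-exact value from extended precision
    return float(mp.sqrt(mpf(x) + mpf(a)) - mp.sqrt(mpf(x)))

for x in (1.0, 1e8, 1e16):
    naive, exact = f_naive(x, 1.0), f_reference(x, 1.0)
    print(x, naive, exact, abs(naive - exact)/abs(exact))
\end{lstlisting}
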
\item (2) Consider the numerical evaluation of $\mathcal{C}=\tan(10^{100})$ with the help of the arbitrary-precision arithmetic module \lstinline{mpmath}, which can be called as follows:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
from mpmath import *
mp.dps = 64      # precision (in decimal places)
mp.pretty = True
+pi
\end{lstlisting}
What is the relative condition number of evaluating $\mathcal{C}$ w.r.t.\ the input number $10^{100}$? How many digits do you need to keep at intermediate steps to evaluate $\mathcal{C}$ with 7-digit accuracy?

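If you wish to cross-check your analytic estimate numerically, one possible sketch (the variable names are arbitrary) repeats the evaluation at increasing working precision and compares it against a high-precision reference:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
from mpmath import mp, mpf

# high-precision reference; at 200 digits the integer 10**100
# is represented exactly
mp.dps = 200
reference = mp.tan(mpf(10)**100)

for digits in (16, 32, 64, 128):
    mp.dps = digits     # working precision for this evaluation
    approx = mp.tan(mpf(10)**100)
    mp.dps = 200        # compare at high precision again
    print(digits, mp.nstr(abs(approx - reference)/abs(reference), 5))
\end{lstlisting}
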
\begin{comment}
\item (3) Check that the following function
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import math

def round_to_n(x, n):
    if x == 0:
        return x
    else:
        return round(x, -int(math.floor(math.log10(abs(x)))) + (n - 1))
\end{lstlisting}
rounds $x$ to $n$ significant digits.

A sample program to compute $\sum_{k=1}^{3000}k^{-2}\approx 1.6446$ via successive summation, with rounding of intermediate results to 4 significant digits, looks as follows:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
res = 0
for k in range(1, 3001):
    res = round_to_n(res + 1/k**2, 4)
\end{lstlisting}
Despite the absence of subtractions (and the associated loss of precision), this code yields only two correct significant digits. Explain why this happens and propose a more accurate way to compute this sum (while maintaining the restriction of keeping only 4 significant digits in intermediate results).
\end{comment}

\item (4) Implement the function \lstinline{solve_quad(b, c)}, which receives the coefficients $b$ and $c$ of a quadratic polynomial $x^2 + b x + c$ and returns the pair of roots of the equation. Your function should always return two roots, even in a degenerate case (for example, the call \lstinline{solve_quad(-2, 1)} should return \lstinline{(1, 1)}). Additionally, your function is expected to return complex roots when they arise.

After checking that your algorithm works on simple inputs, try it on the following five tests and make sure that all of them pass:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
tests = [{'b': 4.0, 'c': 3.0},
         {'b': 2.0, 'c': 1.0},
         {'b': 0.5, 'c': 4.0},
         {'b': 1e10, 'c': 3.0},
         {'b': -1e10, 'c': 4.0}]
\end{lstlisting}

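One possible sanity check, sketched below, verifies Vieta's formulas $x_1+x_2=-b$ and $x_1 x_2=c$ for each test; it assumes \texttt{numpy}, the \lstinline{tests} list above and your own \lstinline{solve_quad}:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np

for test in tests:
    x1, x2 = solve_quad(**test)
    # Vieta's formulas; an unstable formula typically fails the
    # product check for b = 1e10, where the smaller root is lost
    assert np.allclose(x1 + x2, -test['b'])
    assert np.allclose(x1 * x2, test['c'])
\end{lstlisting}
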
\item (5) Consider the polynomial
$$
w(x)=\prod_{r=1}^{20}(x-r)=\sum_{i=0}^{20} a_i x^i
$$
and investigate the condition number of the roots of this polynomial w.r.t.\ the coefficients $a_i$. Perform the following experiment, using the \texttt{numpy} root-finding algorithm. Randomly perturb $w(x)$ by replacing the coefficients $a_i\to n_i a_i$, where $n_i$ is drawn from a normal distribution with mean $1$ and variance $\exp(-10)$. Show the results of $100$ such experiments in a single plot, along with the roots of the unperturbed polynomial $w(x)$. Using one of the experiments, estimate the relative and absolute condition numbers of the problem of finding the roots of $w(x)$ w.r.t.\ the polynomial coefficients. A possible starting point for the experiment is sketched below.

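The following is only a sketch of how the perturbation experiment might be set up (assuming \texttt{numpy} and \texttt{matplotlib}); the condition-number estimates are left to you:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np
import matplotlib.pyplot as plt

# coefficients of w(x) = (x-1)(x-2)...(x-20), highest degree
# first, as expected by np.roots
a = np.poly(np.arange(1, 21))
rng = np.random.default_rng()

for _ in range(100):
    # n_i ~ Normal(mean=1, variance=exp(-10)), i.e. std = exp(-5)
    n = rng.normal(1.0, np.exp(-5), size=a.shape)
    r = np.roots(n * a)
    plt.plot(r.real, r.imag, 'b.', markersize=2)

plt.plot(np.arange(1, 21), np.zeros(20), 'rx')  # unperturbed roots
plt.xlabel('Re'); plt.ylabel('Im')
plt.show()
\end{lstlisting}
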
\item (10)
Consider the least squares problem $Ax\approx b$ with
$$
A = \begin{bmatrix}
1 & 1\\
1 & 1.00001\\
1 & 1.00001
\end{bmatrix},\quad b = \begin{bmatrix}
2 \\
0.00001 \\
4.00001
\end{bmatrix}.
$$

\begin{itemize}
\item
Formally, the solution is given by
\begin{equation}
\label{ex}
x = ( A^T A )^{-1} A^T b.
\end{equation}
Using this equation, compute the solution analytically.
\item Implement Eq.~(\ref{ex}) in \lstinline{numpy} in single and double precision; compare the results to the analytical one. A minimal setup is sketched after this list.
\item Instead of Eq.~(\ref{ex}), implement an SVD-based solution to the least squares problem. Which approach is numerically more stable?
\item Use \lstinline{np.linalg.lstsq} to solve the same problem. Which method does this function use?
\item
What are the four condition numbers of this problem, mentioned in Theorem 18.1 of Ref.~\cite{trefethen1997numerical}? Give examples of perturbations $\delta b$ and $\delta A$ that approximately attain those condition numbers.
\end{itemize}

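A minimal setup for the precision comparison is sketched here (the names \lstinline{A32} and \lstinline{b32} are arbitrary; the single-precision run and the SVD variant are left to you):
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.00001],
              [1.0, 1.00001]])
b = np.array([2.0, 0.00001, 4.00001])
A32, b32 = A.astype(np.float32), b.astype(np.float32)

# normal equations in double precision; note that
# cond(A^T A) = cond(A)^2, which is what makes this route fragile
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x, np.linalg.cond(A), np.linalg.cond(A.T @ A))
\end{lstlisting}
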
\item (7)
Let
$$
A = \begin{bmatrix}
\epsilon & 1 & 0\\
1 & 1 & 1\\
0 & 1 & 1
\end{bmatrix}.
$$

\begin{itemize}
\item Find analytically the LU decomposition of the matrix $A$, with and without pivoting.
\item Explain why the LU decomposition can fail to approximate the factors $L$ and $U$ for $|\epsilon|\ll 1$ in finite-precision arithmetic. A numerical check is sketched after this list.
\end{itemize}

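For a quick numerical illustration, the following rough sketch performs Doolittle elimination without row exchanges; the helper \lstinline{lu_nopivot} is written ad hoc for this check and is not a library routine:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np

def lu_nopivot(A):
    # Doolittle LU factorization without row exchanges
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k+1:] -= L[i, k] * U[k, k+1:]
            U[i, k] = 0.0
    return L, U

eps = 1e-20
A = np.array([[eps, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
L, U = lu_nopivot(A)
# with eps far below machine precision, L @ U no longer
# reproduces A: the entry A[1, 1] = 1 is lost when forming U
print(L @ U - A)
\end{lstlisting}
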
\item (6) Consider computing the function $f(n, \alpha)$ defined by $f(0,\alpha)=\ln(1+1/\alpha)$ and the recurrence relation
\begin{equation}
f(n,\alpha)=\frac{1}{n}-\alpha f(n-1,\alpha).
\end{equation}
Compute $f(20, 0.1)$ and $f(20, 10)$ in standard (double) precision. Now do the same exercise in arbitrary-precision arithmetic:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
from mpmath import mp, mpf

mp.dps = 64         # precision (in decimal places)
n, alpha = 21, 10   # e.g. f(0), ..., f(20) for alpha = 10
f = mp.zeros(1, n)
f[0] = mp.log(1 + 1/mpf(alpha))
for i in range(1, n):
    f[i] = 1/mpf(i) - mpf(alpha)*f[i-1]
\end{lstlisting}
Plot the relative difference between the exact and approximate results, in units of the machine epsilon \texttt{np.finfo(float).eps}, for $\alpha=0.1$ and $\alpha=10$ as a function of $n$. How would you evaluate $f(30, 10)$ without relying on arbitrary-precision arithmetic? A possible plotting skeleton is sketched below.
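This skeleton (assuming \texttt{numpy}, \texttt{matplotlib} and \texttt{mpmath}) handles the $\alpha=10$ case; $\alpha=0.1$ is treated analogously:
\lstset{language=Python}
\lstset{frame=lines}
\lstset{basicstyle=\ttfamily}
\begin{lstlisting}
import numpy as np
import matplotlib.pyplot as plt
from mpmath import mp, mpf

n, alpha = 21, 10
eps = np.finfo(float).eps

# forward recursion in double precision
f_double = [np.log(1 + 1/alpha)]
for k in range(1, n):
    f_double.append(1/k - alpha*f_double[-1])

# reference values from a 64-digit mpmath run
mp.dps = 64
f_exact = [mp.log(1 + 1/mpf(alpha))]
for k in range(1, n):
    f_exact.append(1/mpf(k) - mpf(alpha)*f_exact[-1])

# relative difference in units of machine epsilon
rel = [float(abs((mpf(d) - e)/e))/eps for d, e in zip(f_double, f_exact)]
plt.semilogy(range(n), rel, 'o-')
plt.xlabel('n'); plt.ylabel('relative error / machine eps')
plt.show()
\end{lstlisting}
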
\end{enumerate}

\printbibliography
\end{document}