% Format LaTeX2e
\documentclass[12pt,notitlepage]{article}

\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage[dvips]{graphicx}

\setlength{\textheight}{21.5cm} 
\setlength{\textwidth}{16.0cm}    
\setlength{\arraycolsep}{1.5pt}
\setlength{\parindent}{0.0pt}
\setlength{\parskip}{3.0pt}

\oddsidemargin=6.5truemm
\evensidemargin=6.5truemm
\baselineskip=2pt 

\begin{document}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{definition}[theorem]{Definition}
\newenvironment{algorithm}[1]{\noindent\\\bf Algorithm #1\it}{\indent\\}

\newcounter{ex}
\newenvironment{example}%
{\begin{sloppypar}\noindent\stepcounter{ex}\sc Example \arabic{ex}%
\begin{quote}\small\rm}{\hspace*{\fill}$\bigtriangleup$\end{quote}%
\end{sloppypar}}

\newcommand{\B}{\mathbb}
\newcommand{\R}{{\B R}}
\newcommand{\Q}{{\B Q}}
\newcommand{\N}{{\B N}}
\newcommand{\Z}{{\B Z}}
\newcommand{\F}{{\B F}}
\newcommand{\D}{{\B D}}
\newcommand{\IR}{{\B {I\!R}}}
\newcommand{\IF}{{\B {I\!F}}}
\newcommand{\cD}{{\mathcal D}}
\newcommand{\eps}{{\varepsilon}}
\newcommand{\Ord}{{\mathcal O}}
\newcommand{\cc}{\bf\tt}
\newcommand{\sign}{\text{\rm sign}}
\newcommand{\lo}[1]{\underbar{\it #1}}
\newcommand{\hi}{\bar}
\newcommand{\rndd}[1]{\downarrow\! #1\!\downarrow}
\newcommand{\rndu}[1]{\uparrow\! #1\!\uparrow}

\title{
A robust algorithm for noisy triangulation
}

\author{
Warwick Tucker\\ 
%Cornell University\\
%Department of Mathematics\\
%Malott Hall\\
%Ithaca, NY, 14853-4201, USA \\
{\tt warwick}{\rm @}{\tt math.cornell.edu}
}

\date{\today}

\maketitle

\begin{abstract}
We present an algorithm developed to localize an acoustic beacon using several hydrophones. The problem can be generalized to any finite dimension $n$. The direct approach triangulates the position of the beacon by locating the intersection of several $n$-spheres. This can be interpreted as a root-finding problem, and can thus be solved by Newton's method. Our approach instead reduces the problem to a system of linear equations, which can be solved by standard linear algebra techniques. To account for noisy inputs from the hydrophones, we develop an algorithm that utilizes interval arithmetic to achieve a rigorous and tight enclosure of the beacon's exact location.
\end{abstract}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}

Consider a lake in which we have submerged $k$ hydrophones, located at the points $h_{1}, h_{2}, \dots, h_{k}$, expressed in some external three-dimensional coordinate system. Now suppose that there is a beacon hidden in the lake at the point $b$. The beacon occasionally emits an acoustic signal (a ping) which travels through the water and triggers the hydrophones at the times $t_{1}, t_{2}, \dots, t_{k}$ (relative to some external clock). Given the information $h_{i}, t_{i}$, $i = 1,\dots,k$, how do we determine the location $b$ of the beacon? Indeed, under which conditions does a solution to our problem even exist? 

We will prove that triangulating the location of a beacon in $\R^n$ is equivalent to solving an $(n+1)$-dimensional linear system of equations, whose elements are provided by data from $n+2$ hydrophones. In real-life applications, however, things are not so nice: the A/D converters in the hydrophones are not perfect, and thus introduce errors in the registered activation times. Also, the exact locations of the hydrophones may not be known. The temperature of the water is not uniform, and thus the signal travels with non-uniform speed. In this situation, we clearly cannot find the exact location of the beacon. Given the tolerances of the hydrophones' locations and the estimated maximal errors in the recorded times, what we can strive for is to find a {\it region} guaranteed to contain the beacon.

The traditional way to approach this new situation is to perform a classical error analysis, examining every step of the computations outlined in the following sections. In most cases, this is extremely time-consuming, and often prone to error (no pun intended). We propose a radically different approach: instead of carrying out the error analysis by hand, we perform all computations in such a way that all errors are automatically accounted for. Such computations can be realized by using interval arithmetic with directed rounding, as described in Section~\ref{Interval_Arithmetic}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{The two-dimensional case}

For simplicity, let us discuss the two-dimensional case. Here the lake is just a bounded region in the plane, which we will also assume is convex. Such a scenario is illustrated in Figure~\ref{setup}.

\begin{figure}[h]
\centerline{\includegraphics[width=0.40\textwidth]{pictures/setup.eps} }
\caption{\label{setup} A lake with four hydrophones and one hidden beacon.}
\end{figure}

Assuming that the surrounding water is uniform and that the hydrophones work perfectly (i.e., there are no delays in the hydrophones' sensors), we should be able to pinpoint the location of the beacon if we have at least three hydrophones at our disposal. There is one special case in which two hydrophones suffice, namely when the beacon is located exactly on the straight line spanned by the two hydrophones, but outside the segment joining them. In all other cases, we need a third hydrophone (which must not be located on the aforementioned line) to determine the location of the beacon.

Considering the problem from the listener's point of view, it is clear that the hydrophone associated with the first time of detection must also be the one located closest to the beacon. Let us call this pair $(h_{\hat\iota}, t_{\hat\iota})$. Since we are assuming that the speed of sound in water $v_w$ is constant, the time delay $\Delta t_{i, j} = t_{i} - t_{j}$ is directly proportional to the difference in distance from the two hydrophones $h_{i}$ and $h_{j}$ to the beacon, see Figure~\ref{circles}(a). Since the distance between the beacon and the closest hydrophone is given by $r_{\hat\iota} = \|b - h_{\hat\iota}\|$, we know that 
\begin{equation}\label{eqn1}
r_{i} = \|b - h_{i}\| = r_{\hat\iota} + \Delta t_{i, \hat\iota}v_w \qquad i = 1, \dots, k.
\end{equation}
Thus for $i = 1, \dots, k$, we can center a circle $\gamma_{i}$ with radius $r_{i}$ around the hydrophone $h_{i}$. The crucial observation is that, according to (\ref{eqn1}), these radii only depend on one (unknown) parameter, namely $r_{\hat\iota}$. As this parameter is varied, all other circles are varied too. It is now a matter of finding a value $r^*_{\hat\iota}$ such that all circles intersect at one single point. This point will be the location of the beacon, i.e., 
$$
b = \bigcap_{i = 1}^k \gamma_{i}(r^*_{\hat\iota}).
$$ 
The situation is illustrated in Figure~\ref{circles}(b).

\begin{figure}[h]
\centerline{
\includegraphics[width=0.35\textwidth]{pictures/view1.eps} 
\hspace{1cm}
\includegraphics[width=0.35\textwidth]{pictures/view2.eps} }
\caption{\label{circles} Seen from (a) the beacon; (b) the hydrophones.}
\end{figure}

How do we find the parameter $r^*_{\hat\iota}$ that gives a unique intersection? Finding the intersection of circles is equivalent to solving a non-linear system of equations, which in general must be solved iteratively, e.g., by Newton's method. Thus, in the $2$-dimensional case, we would need to solve a $3$-dimensional non-linear system of equations.
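To make this root-finding formulation concrete, here is a minimal numerical sketch (all data hypothetical): three hydrophones in the plane, synthetic time delays generated from a beacon at $(30, 40)$, and a plain Newton iteration with a finite-difference Jacobian applied to the system $\|b - h_i\| = r_{\hat\iota} + \Delta r_{i, \hat\iota}$, $i = 1, 2, 3$.

```python
import math

# Hypothetical data: three hydrophones and an assumed sound speed (m/s).
V_W = 1500.0
H = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]

def residual(x, dr):
    """F(b1, b2, r): one component ||b - h_i|| - (r + dr_i) per hydrophone."""
    b1, b2, r = x
    return [math.hypot(b1 - h[0], b2 - h[1]) - (r + d) for h, d in zip(H, dr)]

def solve3(A, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, rhs)]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, 3):
            f = M[i][c] / M[c][c]
            M[i] = [u - f * v for u, v in zip(M[i], M[c])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def newton(x, dr, steps=25, h=1e-7):
    """Newton iteration with a forward-difference Jacobian."""
    for _ in range(steps):
        f = residual(x, dr)
        J = [[(residual([x[j] + h if j == k else x[j] for j in range(3)], dr)[i]
               - f[i]) / h for k in range(3)] for i in range(3)]
        step = solve3(J, [-v for v in f])
        x = [xi + si for xi, si in zip(x, step)]
    return x

# Synthesize exact time delays from the (normally unknown) beacon position.
beacon = (30.0, 40.0)
t = [math.hypot(beacon[0] - h[0], beacon[1] - h[1]) / V_W for h in H]
dr = [(ti - min(t)) * V_W for ti in t]          # Delta r_{i, iota-hat}
b1, b2, r = newton([20.0, 30.0, 40.0], dr)      # approx. (30, 40, 50)
```

The iteration recovers the beacon position and the distance to the closest hydrophone; with noisy data, however, the residual has no exact root, which is one motivation for the interval treatment below.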

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Locating a beacon in $\R^n$}

Instead of continuing with our low-dimensional example, let us consider an arbitrary dimension, $n$. Now we are interested in finding the unique intersection of $n+1$ $n$-spheres, each centered at one of the hydrophones, and all of whose radii depend only on a scalar parameter. Thus, in the $n$-dimensional case, we need to solve an $(n+1)$-dimensional non-linear system of equations.

Rather than following this approach, we will reduce our computations to a linear system of equations. The price we have to pay for this reduction is the addition of one more hydrophone. As we shall see, the resulting linear system is still $(n+1)$-dimensional. The additional hydrophone is a price well worth paying, since solving a {\it linear} system is in general much easier than solving a {\it non-linear} one.

By (\ref{eqn1}), we have 
\begin{eqnarray}\label{eqn2}
\|b - h_{i}\| & = & r_{\hat\iota} + \Delta t_{i, \hat\iota}v_w = r_{\hat\iota} + \Delta r_{i, \hat\iota}, \\
\label{eqn3}
\|b - h_{j}\| & = & r_{\hat\iota} + \Delta t_{j, \hat\iota}v_w = r_{\hat\iota} + \Delta r_{j, \hat\iota},
\end{eqnarray}
where we use the short-hand $ \Delta r_{i, j} = \Delta t_{i, j}v_w$. Note that the square of (\ref{eqn2}) is
$$
\|b - h_{i}\|^2 = \|b\|^2 + \|h_{i}\|^2 - 2 \big\langle b, h_{i}\big\rangle = \big(r_{\hat\iota}\big)^2 + \big(\Delta r_{i, \hat\iota}\big)^2 + 2r_{\hat\iota}\Delta r_{i, \hat\iota}.
$$
Therefore, subtracting the squares of equations (\ref{eqn2}) and (\ref{eqn3}) gives
\begin{equation}\label{diff}
\|h_{i}\|^2 - \|h_{j}\|^2 - 2 \big\langle b, h_{i} - h_{j}\big\rangle = \big(\Delta r_{i, \hat\iota} + \Delta r_{j, \hat\iota}\big)\Delta r_{i, j} + 2r_{\hat\iota}\Delta r_{i, j},
\end{equation}
where we have used the relation $\Delta r_{i, \hat\iota} - \Delta r_{j, \hat\iota} = \Delta r_{i, j}$. Note that the only unknowns in (\ref{diff}) are $b$ (an $n$-dimensional vector) and $r_{\hat\iota}$ (a scalar). Assuming that we have exactly $n + 2$ hydrophones, we have enough data to reconstruct the location of the beacon. In order to avoid redundancies, we only want to consider certain index pairs $(i, j) = (\nu, \mu)$ in (\ref{diff}). The following choice is easy to remember:
$$
(\nu(k), \mu(k)) = (k, k + 1),\quad k = 1,\dots, n + 1.
$$
Next, for $i = 1,\dots, n + 1$ and $j = 1,\dots, n$, we define the following constants:
\begin{eqnarray}\label{eqn4}
a_{i, j} & = & - 2 \big((h_{\nu(i)})_j - (h_{\mu(i)})_j\big), \\ \label{eqn5}
c_{i}    & = & \big(\Delta r_{\nu(i), \hat\iota} + \Delta r_{\mu(i), \hat\iota}\big)\Delta r_{\nu(i), \mu(i)} - \big(\|h_{\nu(i)}\|^2 - \|h_{\mu(i)}\|^2\big), \\\label{eqn6}
d_{i}    & = & 2\Delta r_{\nu(i), \mu(i)}.
\end{eqnarray}
We can now rewrite (\ref{diff}) as the following system of equations:
\begin{equation}\label{eqn7}
\left( \begin{array}{ccc}
     a_{1, 1} & \cdots & a_{1, n}   \\
     a_{2, 1} & \cdots & a_{2, n}   \\
      \vdots  & \ddots & \vdots     \\
     a_{n+1, 1} & \cdots & a_{n+1, n}   \end{array} 
\right) 
\left( \begin{array}{c}
     b_{1}   \\
     b_{2}   \\
     \vdots  \\
     b_{n}   \end{array} 
\right)
=
\left( \begin{array}{c}
     c_{1}   \\
     c_{2}   \\
    \vdots   \\
     c_{n+1}   \end{array} 
\right) 
+ r_{\hat\iota}
\left( \begin{array}{c}
     d_{1}   \\
     d_{2}   \\
    \vdots   \\
     d_{n+1}   \end{array} 
\right).
\end{equation}
Finally, if we set $a_{i,n+1} = - d_i$ for $i = 1,\dots, n + 1$, and $b_{n+1} =  r_{\hat\iota}$, we get an $(n + 1)$-dimensional linear system:
\begin{equation}\label{eqn8}
\left( \begin{array}{ccc}
     a_{1, 1} & \cdots & a_{1, n+1}   \\
     a_{2, 1} & \cdots & a_{2, n+1}   \\
      \vdots  & \ddots & \vdots     \\
     a_{n+1, 1} & \cdots & a_{n+1, n+1}   \end{array} 
\right) 
\left( \begin{array}{c}
     b_{1}   \\
     b_{2}   \\
     \vdots  \\
     b_{n+1}   \end{array} 
\right)
=
\left( \begin{array}{c}
     c_{1}   \\
     c_{2}   \\
    \vdots   \\
     c_{n+1}   \end{array} 
\right),
\end{equation}
where the elements of the matrix $A$ and the vector $c$ are known. The location of the beacon $(b_1, \dots, b_n)$ and the distance $b_{n+1}$ to the closest hydrophone $h_{\hat\iota}$ can be computed by standard methods, provided $A$ is invertible. 
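The construction (\ref{eqn4})--(\ref{eqn8}) is easy to carry out in code. The following sketch (all positions, the beacon location, and the sound speed are hypothetical) builds the matrix $A$ and the vector $c$ for $n = 2$ from noise-free synthetic data, and solves the resulting $3$-dimensional system by Gaussian elimination.

```python
import math

# Hypothetical setup: n = 2, so n + 2 = 4 hydrophones; noise-free data.
V_W = 1500.0
H = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def sq(h):
    """Squared Euclidean norm ||h||^2 of a point in the plane."""
    return h[0] ** 2 + h[1] ** 2

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(M)
    A = [row[:] + [v] for row, v in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        for i in range(c + 1, n):
            f = A[i][c] / A[c][c]
            A[i] = [u - f * v for u, v in zip(A[i], A[c])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Synthesize exact arrival times for a beacon hidden at (30, 40).
beacon = (30.0, 40.0)
t = [math.hypot(beacon[0] - h[0], beacon[1] - h[1]) / V_W for h in H]

closest = min(range(len(H)), key=lambda i: t[i])   # index iota-hat
dr = [(ti - t[closest]) * V_W for ti in t]         # Delta r_{i, iota-hat}

n = 2
A, c = [], []
for k in range(n + 1):                  # index pairs (nu, mu) = (k, k + 1)
    nu, mu = k, k + 1
    drnm = dr[nu] - dr[mu]              # Delta r_{nu, mu}
    row = [-2.0 * (H[nu][j] - H[mu][j]) for j in range(n)]   # a_{i, 1..n}
    row.append(-2.0 * drnm)             # a_{i, n+1} = -d_i
    A.append(row)
    c.append((dr[nu] + dr[mu]) * drnm - (sq(H[nu]) - sq(H[mu])))

b1, b2, r = solve(A, c)   # beacon (b1, b2) and distance r to h_closest
```

With exact data the solution reproduces the beacon location and $r_{\hat\iota}$; with measured data, every floating point quantity above would be replaced by an interval, as discussed in the following sections.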

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Interval arithmetic}\label{Interval_Arithmetic}

In this section, we will briefly describe the fundamentals of interval arithmetic. For a concise reference on this topic, see \cite{FS97}, \cite{Mo66} or \cite{Mo79}. 

Let $\IR$ denote the set of all closed intervals of the real line. For any element $[a] \in \IR$, we adopt the notation $[a] = [\lo a, \hi a]$, and we allow for degenerate intervals $[a]$ with $\lo a = \hi a$. We define binary arithmetic operations on elements of $\IR$ in a set theoretic manner:
\begin{definition}\label{IAdef}
If $\odot$ is one of the operators $+, -, \times, \div$, we define arithmetic operations on elements of $\IR$ by
$$
[a] \odot [b] = \{a \odot b\colon a \in [a], b \in [b]\},
$$ 
except that $[a] \div [b]$ is undefined if $0 \in [b]$\footnote{We could allow for division by zero by admitting intervals of the form $[-\infty, x]$, $[x, +\infty]$, and $[-\infty, +\infty]$. With this {\it extended} interval arithmetic, we can compute e.g. $[1, 2] \div [0, 5] = [1/5, +\infty]$.}. 
\end{definition}
Working exclusively with closed intervals, we can describe the resulting interval in terms of the endpoints of the operands:
%% The braces prevent a leading [a] from being parsed as an optional argument of the preceding \\.
\begin{eqnarray*}
[a] + [b] & = & [\lo a + \lo b, \hi a + \hi b] \\
{[a]} - [b] & = & [\lo a - \hi b, \hi a - \lo b] \\
{[a]} \times [b] & = & [\min\{\lo a \lo b, \lo a \hi b, \hi a \lo b, \hi a \hi b\}, \max\{\lo a \lo b, \lo a \hi b, \hi a \lo b, \hi a \hi b\}] \\
{[a]} \div [b] & = & [a] \times [1/ \hi b, 1/ \lo b], \quad \text{if } 0 \notin [b].\end{eqnarray*}
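These endpoint formulas translate directly into code. Below is a minimal sketch representing an interval as a pair $(\lo a, \hi a)$ of floats; outward rounding is deliberately omitted here, so this models arithmetic with exact endpoints only.

```python
# Interval arithmetic on (lo, hi) pairs, following the endpoint formulas;
# no outward rounding, so results are exact only in exact arithmetic.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def idiv(a, b):
    if b[0] <= 0 <= b[1]:
        raise ZeroDivisionError("0 lies in the divisor interval")
    return imul(a, (1 / b[1], 1 / b[0]))
```

For instance, `imul((-1, 1), iadd((-1, 0), (3, 4)))` evaluates to `(-4, 4)`, while distributing first gives `(-5, 5)` -- the failure of distributivity discussed below.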

It follows from the definition that addition and multiplication are both associative and commutative. Also, it is clear that the elements $[0,0]$ and $[1,1]$ are the unique neutral elements with respect to addition and multiplication, respectively. 

Surprisingly, the distributive law does {\it not} always hold. As an example, we have 
$$
[-1, 1]([-1, 0] + [3, 4]) = [-1, 1][2, 4] = [-4, 4],
$$
whereas
$$
[-1, 1][-1, 0] + [-1, 1][3, 4] = [-1, 1] + [-4, 4] = [-5, 5].
$$
This unusual property is important to keep in mind when representing functions as part of a program. Interval arithmetic satisfies a weaker rule than the distributive law, which we shall refer to as {\it sub-distributivity}:
$$
[a]([b] + [c]) \subseteq [a][b] + [a][c].
$$
This is a set theoretical property that illustrates one of the fundamental differences between real and interval arithmetic.

Another key feature of interval arithmetic is that of {\it inclusion monotonicity}:
\begin{theorem}
If $[a] \subseteq [a']$, $[b] \subseteq [b']$, and $\odot \in \{+, -, \times, \div\}$, then
$$
[a] \odot [b] \subseteq [a'] \odot [b'],
$$
where we demand that $0 \notin [b']$ for division. 
\end{theorem}
This is the single most important property of interval arithmetic: it allows us to prove open conditions in a robust way.

We can turn $\IR$ into a metric space by equipping it with the Hausdorff distance:
$$
d([a], [b]) = \max\{|\lo a - \lo b|, |\hi a - \hi b|\}.
$$

For dealing with higher dimensional problems, we define the arithmetic operations to be carried out component-wise. We then talk about an {\it interval vector} or, more simply, a {\it box}. The metric on the space $\IR^n$ is defined by
$$
d([a], [b]) = \max_{1 \le i \le n}\{d([a_i], [b_i])\}.
$$

When implementing interval arithmetic on a computer, we no longer work over the space $\R$, but rather $\F$ -- the set of floating point numbers available on the computer. This is a finite set, and thus so is $\IF$ -- the set of all intervals with floating point endpoints:
$$
\IF = \{[\lo a, \hi a]\colon \lo a \le \hi a;\quad \lo a, \hi a \in \F\}.
$$
When performing arithmetic on intervals in $\IF$, we must round the resulting interval outwards to guarantee inclusion of the true result. This is because $\F$ is not arithmetically closed. For $[a], [b] \in \IF$, we define
\begin{eqnarray*}
[a] + [b] & = & [\rndd{\lo a + \lo b}, \rndu{\hi a + \hi b}] \\
{[a]} - [b] & = & [\rndd{\lo a - \hi b} , \rndu{\hi a - \lo b}] \\
{[a]} \times [b] & = & [\min\{\rndd{\lo a \lo b} , \rndd{\lo a \hi b} , \rndd{\hi a \lo b} , \rndd{\hi a \hi b} \}, \\
 & & \max\{\rndu{\lo a \lo b}, \rndu{\lo a \hi b}, \rndu{\hi a \lo b}, \rndu{\hi a \hi b}\}] \\
{[a]} \div [b] & = & [a] \times [\rndd{1/ \hi b} , \rndu{1/ \lo b}], \quad \text{if } 0 \notin [b].
\end{eqnarray*}
Recall that $\rndd x$ is the largest floating point in $\F$ that is less than or equal to $x$ (called $x$ rounded down), and $\rndu x$ is the smallest floating point in $\F$ that is greater than or equal to $x$ (called $x$ rounded up). This type of arithmetic is called interval arithmetic with {\it directed rounding}.
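When the floating point unit's rounding modes are not directly accessible, one common workaround is to nudge each computed endpoint one floating point step outward: with round-to-nearest, a computed endpoint is within half an ulp of the exact one, so one step outward yields a valid (slightly pessimistic) enclosure. A sketch using Python's {\tt math.nextafter} (available from Python 3.9):

```python
import math

# Emulated directed rounding: move each endpoint one floating-point step
# outward.  This is coarser than true directed rounding (it may widen by
# up to one ulp per endpoint), but it preserves the enclosure property.

def down(x):
    return math.nextafter(x, -math.inf)

def up(x):
    return math.nextafter(x, math.inf)

def iadd_rounded(a, b):
    return (down(a[0] + b[0]), up(a[1] + b[1]))

# [0.1, 0.1] + [0.2, 0.2] must enclose the exact sum of the two floating
# point endpoints, even though 0.1 + 0.2 is not representable exactly.
lo, hi = iadd_rounded((0.1, 0.1), (0.2, 0.2))
```

The same outward nudge applies to each endpoint candidate in the subtraction, multiplication, and division formulas above.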

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Inverting interval matrices}\label{invertingIntervalMatrices}

In this section, we will extend matrix operations to include interval-valued matrices and vectors. An interval-valued matrix is simply a matrix whose components are intervals. Given two interval matrices $[A]$ and $[B]$ of equal size, we say that $[A] \subseteq [B]$ iff $[A_{i,j}] \subseteq [B_{i,j}]$ for all components. With this notation, it makes sense to write $A \in [B]$ for a real-valued matrix $A$ with $a_{i,j} \in [B_{i,j}]$ for all components. 

Addition, subtraction, and multiplication by a scalar can be defined component-wise, and will produce the exact pointwise ranges:
\begin{eqnarray*}
[A] \pm [B] & = & \{A \pm B \colon A \in [A]; B \in [B]\}, \\
c[A]        & = & \{cA \colon A \in [A]\}.
\end{eqnarray*}
We define interval matrix multiplication analogously to the real-valued situation:
\begin{definition}\label{MatrixMultiplication}
Consider two interval matrices $[A]$ and $[B]$ having sizes $n\times m$ and $m\times k$, respectively. Their product $[C] = [A][B]$ is then defined by
$$
[c_{i,j}] = \sum_{l = 1}^m [a_{i,l}][b_{l,j}]\qquad (i = 1,\dots ,n; j = 1,\dots ,k).
$$
\end{definition}
In particular, for an interval vector $[x] \in \IR^m$, the image $[y] = [A][x]$ is defined by $[y_{i}] = \sum_{l = 1}^m [a_{i,l}][x_{l}]$, $i = 1,\dots ,n$.
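Definition~\ref{MatrixMultiplication} is straightforward to implement on top of scalar interval arithmetic. A sketch, again with intervals as $(\lo a, \hi a)$ pairs and without outward rounding; the matrix--vector product is the special case of a single-column $[B]$.

```python
# Interval matrix product [C] = [A][B], with intervals as (lo, hi) pairs.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def imatmul(A, B):
    n, m, k = len(A), len(B), len(B[0])
    C = [[None] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            s = (0.0, 0.0)
            for l in range(m):                  # c_{i,j} = sum_l a_{i,l} b_{l,j}
                s = iadd(s, imul(A[i][l], B[l][j]))
            C[i][j] = s
    return C

A = [[(1.0, 2.0), (3.0, 3.0)],
     [(4.0, 4.0), (5.0, 5.0)]]
x = [[(-1.0, 1.0)],
     [(1.0, 1.0)]]
y = imatmul(A, x)    # an enclosure of {Ax : A in [A], x in [x]}
```

As noted above, the result is in general only an enclosure of the pointwise range, not the range itself.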

Note that, as opposed to when computing with single intervals (see Definition~\ref{IAdef}), matrix multiplication will generally produce strict inclusions of the pointwise range:
\begin{eqnarray*}
[A][B] & \supseteq & \{AB \colon A \in [A]; B \in [B]\}, \\
{[A][x]} & \supseteq & \{Ax \colon A \in [A]; x \in [x]\}.
\end{eqnarray*}
Even multiplication by an interval scalar may produce an over-estimation:
$$
[c][A] \supseteq \{cA \colon A \in [A]; c \in [c]\},
$$ 
as the following example illustrates.
\begin{example}
Consider the interval matrix
$$
[A] = 
\left( \begin{array}{cc}
     [1, 2] & 3   \\
     4      & 5  \end{array} 
\right).
$$
If we take the interval scalar $[c] = [-1, 1]$, then we have
$$
[c][A] = 
\left( \begin{array}{cc}
     [-2, 2] & [-3, 3]   \\
     {[-4, 4]} & [-5, 5]  \end{array} 
\right)
\ni
\left( \begin{array}{cc}
     -2 & 3   \\
      4 & 5  \end{array} 
\right)
\notin 
\{cA \colon A \in [A]; c \in [c]\}.
$$
\end{example}
In matrix computations, the concept of division is treated through multiplication by matrix inverses, when these are well-defined. Of course, we are interested in generalizing the concept of inversion to interval matrices. Thus, given an interval matrix $[A]$, how do we compute $[A]^{-1}$? First, we must define what we {\it mean} by $[A]^{-1}$. In the real case, $A^{-1}$ is the matrix that satisfies $I = AA^{-1} = A^{-1}A$. As is well-known, the existence of $A^{-1}$ requires that $A$ is a square matrix with non-zero determinant. Motivated by this, we define its inverse to be the set of all inverses of real matrices $B$ in $[A]$:
\begin{definition}\label{MatrixInverse}
Given an interval matrix $[A]$, we define the interval matrix inverse
\begin{equation}
[A]^{-1} = \{B^{-1} \colon B \in [A]\},
\end{equation}
whenever this is well-defined.
\end{definition}
It follows that, for $[A]$ to be invertible, it must not contain any non-invertible real matrix. If this condition is satisfied, we say that $[A]$ is {\it non-singular} or simply {\it invertible}. Note that, in general, we have $[A][A]^{-1} \neq [A]^{-1}[A]$. All we know is that both interval matrices contain the identity matrix. 

Unfortunately, utilizing exact formulas for computing matrix inverses may result in large overestimations when the computations are performed in interval arithmetic.  
\begin{example}
Consider the interval matrix
$$
[A] = 
\left( \begin{array}{cc}
     1 & - 2   \\
     1 & [1, \tfrac{3}{2}] \end{array} 
\right),
$$
whose inverse we would like to compute. Using the formula for $2\times 2$ matrices:
$$
M = 
\left( \begin{array}{cc}
     a & b   \\
     c & d \end{array} 
\right) 
\quad \Rightarrow \quad
M^{-1} = \frac{1}{ad - bc}
\left( \begin{array}{cc}
       d & - b   \\
     - c &   a \end{array} 
\right),
$$
we arrive at
$$
[A]^{-1} = \frac{1}{[1, \tfrac{3}{2}] + 2}
\left( \begin{array}{cc}
     [1, \tfrac{3}{2}] & 2   \\
     -1       & 1 \end{array} 
\right).
$$
In particular, concentrating on the first component, we have 
$$
([A]^{-1})_{1,1} = \frac{[1, \tfrac{3}{2}]}{[1, \tfrac{3}{2}] + 2} = [1, \tfrac{3}{2}] \div [3, \tfrac{7}{2}] = [\tfrac{2}{7}, \tfrac{1}{2}],
$$
which overestimates the true range. Indeed, according to Definition~\ref{MatrixInverse}, the exact range is
$$
([A]^{-1})_{1,1} = \left\{\frac{x}{x + 2} \colon x \in [1, \tfrac{3}{2}]\right\} = [\tfrac{1}{3}, \tfrac{3}{7}],
$$
since $x \mapsto x/(x+2)$ is increasing.
\end{example}
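The example can be verified in exact rational arithmetic. The following sketch evaluates both the naive interval quotient and the endpoint values of the (increasing) function $x/(x+2)$, using Python's {\tt fractions} module to avoid any rounding issues.

```python
from fractions import Fraction as F

# Naive interval division versus the true range of x/(x+2) on [1, 3/2],
# computed exactly with rationals.

def idiv(a, b):
    """Interval quotient [a]/[b] for 0 not in [b] (endpoint formula)."""
    q = (a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1])
    return (min(q), max(q))

x = (F(1), F(3, 2))
den = (x[0] + 2, x[1] + 2)                       # [3, 7/2]
naive = idiv(x, den)                             # the overestimate [2/7, 1/2]

# x/(x+2) is increasing, so its true range comes from the endpoints.
true = (x[0] / (x[0] + 2), x[1] / (x[1] + 2))    # [1/3, 3/7]
```

The naive quotient strictly contains the true range at both endpoints, exhibiting the dependency problem.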
This defect should come as no surprise: such overestimation typically occurs when naively computing the range of an expression in which a variable occurs more than once. Thus what remains is to devise a method for {\it accurately} computing $[A]^{-1}$. Before describing an algorithm for inverting an interval matrix, we will define the {\it maximal row sum norm} of an $n\times n$ interval matrix:
$$
\big\|[A]\big\| = \max_{1 \le i \le n} \sum_{j = 1}^n |[a_{i,j}]|,
$$
where we use the notation $|[a_{i,j}]| = \max\{|\lo{a}_{i,j}|,|\hi{a}_{i,j}|\}$. Note that, for all $A \in [A]$, we have $\|A\| \le \|[A]\|$.

\begin{theorem}[Interval matrix inversion]\label{IntervalMatrixInversionTheorem}
Let $[A]$ be a non-singular $n\times n$ interval matrix, and set $[E] = I - [A]Y$, where $Y$ is close to the inverse of some real matrix $X \in [A]$. If $\|[E]\| \le r < 1$, then we can enclose the inverse interval matrix:
$$
[A]^{-1} \subseteq Y([S_{k}] + [P_{k}]).
$$
Here $[P_{k}]$ is the interval matrix with all entries equal to $[-r^{k + 1}, r^{k + 1}]/(1 - r)$, and 
$$
[S_{k}] = I + [E] + [E]^2 + \dots + [E]^k.
$$
\end{theorem}

The proof, which is due to Hansen (see \cite{Ha65}), is motivated by the fact that if we set $E = I - AY$, then
$$
Y(I - E)^{-1} = Y(I - (I - AY))^{-1} = Y(AY)^{-1} = YY^{-1}A^{-1} = A^{-1}. 
$$
This requires that both $A$ and $Y$ are non-singular; the former we assume, and the latter we attempt to ensure by taking $Y$ close to $A^{-1}$.
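The enclosure of Theorem~\ref{IntervalMatrixInversionTheorem} can be sketched as follows, applied to the $2\times 2$ example matrix above. Exact rational arithmetic stands in for floating point with directed rounding, and the truncation order $k = 10$ is an arbitrary choice.

```python
from fractions import Fraction as F

# Hansen-style enclosure of [A]^{-1}: intervals are (lo, hi) pairs of
# rationals, matrices are lists of rows.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def madd(A, B):
    return [[iadd(x, y) for x, y in zip(r, s)] for r, s in zip(A, B)]

def msub(A, B):  # componentwise [x] - [y] = [lo x - hi y, hi x - lo y]
    return [[(x[0] - y[1], x[1] - y[0]) for x, y in zip(r, s)] for r, s in zip(A, B)]

def mmul(A, B):
    C = []
    for i in range(len(A)):
        row = []
        for j in range(len(B[0])):
            s = (F(0), F(0))
            for l in range(len(B)):
                s = iadd(s, imul(A[i][l], B[l][j]))
            row.append(s)
        C.append(row)
    return C

def norm(A):  # maximal row sum norm, with |[a]| = max(|lo a|, |hi a|)
    return max(sum(max(abs(x[0]), abs(x[1])) for x in row) for row in A)

iv = lambda x: (F(x), F(x))  # degenerate (point) interval

A = [[iv(1), iv(-2)], [iv(1), (F(1), F(3, 2))]]    # example matrix [A]
I2 = [[iv(1), iv(0)], [iv(0), iv(1)]]

# Y: exact inverse of the midpoint matrix X = (1, -2; 1, 5/4), det X = 13/4.
d = F(13, 4)
Y = [[iv(F(5, 4) / d), iv(F(2) / d)], [iv(F(-1) / d), iv(F(1) / d)]]

E = msub(I2, mmul(A, Y))        # [E] = I - [A]Y
r = norm(E)                     # here r = 2/13 < 1
k = 10
S, T = I2, I2
for _ in range(k):              # S_k = I + [E] + ... + [E]^k
    T = mmul(T, E)
    S = madd(S, T)
rad = r ** (k + 1) / (1 - r)
P = [[(-rad, rad), (-rad, rad)] for _ in range(2)]
enclosure = mmul(Y, madd(S, P))  # contains [A]^{-1}
```

In a floating point implementation, every rational operation above would be replaced by an interval operation with outward rounding, and $Y$ would be a numerically computed approximate inverse of the midpoint matrix.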

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Applications}

Let us return to our original problem: the triangulation of an acoustic beacon using hydrophones. In the real-world case, instead of solving a perfect linear system of equations $Ab = c$, we have an interval system: we want to find the set of all vectors $b$ satisfying $Ab = c$ for some $A \in [A]$ and $c \in [c]$. By the definition of the inverse of an interval matrix, we have the inclusion
$$
\{b = A^{-1}c \colon A \in [A]; c \in [c]\} \subseteq [A]^{-1}[c].
$$ 
This means that given $[A]$ and $[c]$, the beacon is guaranteed to be contained in the box $[X][c]$, where $[X]$ is an enclosure of $[A]^{-1}$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{thebibliography}{MM00}

\bibitem[FS97]{FS97} de Figueiredo, L. H., Stolfi, J., {\it M\'etodos num\'ericos auto-validados e aplica\c c\~oes}, Braz. Math. Colloq. {\bf 21}, IMPA, Rio de Janeiro, 1997.

\bibitem[Ha65]{Ha65} Hansen, E., {\it Interval Arithmetic in Matrix Computations}, J. SIAM Numer. Anal., Ser. B {\bf 2:2} (1965), 308--320.

\bibitem[Mo65]{Mo65} Moore, R. E., {\it The Automatic Analysis and Control of Error in Digital Computations Based on the Use of Interval Numbers}, in Error in Digital Computing, Vol.~1, John Wiley \& Sons, New York, 1965.

\bibitem[Mo66]{Mo66} Moore, R. E., {\it Interval Analysis}, Prentice-Hall, Englewood Cliffs, New Jersey, 1966.

\bibitem[Mo79]{Mo79} Moore, R. E., {\it Methods and Applications of Interval Analysis}, SIAM Studies in Applied Mathematics, Philadelphia, 1979.
\end{thebibliography}

\end{document}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
