% sextractor.tex - Rev 9

\documentclass[11pt]{article}
% revised by GAM: 10 Oct 2011
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{amsfonts}   % Math extensions
\usepackage{color}  % temporary by GAM for comments
\def \rms {{\it rms}\ }
\def \etal {{et~al.\ }}
\def\vec#1{\mbox{\boldmath $\displaystyle#1$}}
\def\vecs#1{\mbox{\boldmath $\scriptstyle#1$}}
\def\oper#1{\mbox{\large\bf #1}}
\def\dvol#1#2{d\hspace{1pt}^{#1}\hspace{-2pt}#2}
\newlength{\jacobwidth}
\def\jacob#1#2#3{\settowidth{\jacobwidth}{$_{#3}$}
\left(\frac{d#1}{d#2}\right)_{_{\!\!#3}}\hspace{-\jacobwidth}}
\def\jacobs#1#2#3{\settowidth{\jacobwidth}{$_{#3}$}
\left(\frac{\partial#1}{\partial#2}\right)_{_{\!\!#3}}\hspace{-\jacobwidth}}
\def\parlist#1{\smallskip \hspace{10mm} \begin{minipage}{150mm}#1\end{minipage}\\}
\def\pararg#1#2{\>#1\>\begin{minipage}[t]{83mm}#2\end{minipage}\\}
\newenvironment{pitemize}{
\begin{itemize}
\setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
}{\end{itemize}}

\parskip=2mm
\parindent=0mm
\textheight=240mm
\textwidth=160mm
\oddsidemargin=0mm
\evensidemargin=8mm
\topmargin=-10mm
\itemsep=-1mm
\topsep=0mm
\definecolor{darkgreen}{rgb}{0.0,0.5,0.0}
\definecolor{grey}{rgb}{0.4,0.5,0.7}

\newcommand{\gam}[1]{{\bf\textcolor{blue}{Gary: #1}}}
\newcommand{\eb}[1]{{\bf\textcolor{red}{Emmanuel: #1}}}
\newcommand{\hide}[1]{\textcolor{grey}{#1}}

\begin{document}
\title{
{\Huge {\sc SExtractor}
\includegraphics[width=0.01cm,bb=100 550 100.2 550.2]{ps/draft.ps}
}\\
{\huge $v2.13$ \gam{update version number}}\\
{\LARGE User's manual}}
\date{\vspace{1cm}\fbox{
\includegraphics[width=15cm]{ps/sexintroc.ps}
}}
\author{\Large E. BERTIN\vspace{0.5cm}\\Institut d'Astrophysique\\\& Observatoire de Paris}
\maketitle
\newpage
{\ }
\newpage
\tableofcontents
\newpage
\section{What is {\sc SExtractor}?}
{\sc SExtractor} ({\em Source-Extractor}) is a program that builds a catalogue of objects from an
astronomical image. It is particularly oriented towards reduction of large scale galaxy-survey
data, but it also performs well on moderately crowded star fields.
Its main features are:
\begin{itemize}
\item Support for multi-extension FITS.
\item Speed: typically 1 Mpixel/s with a 2 GHz processor.
\item Ability to work with very large images (up to $65{\rm k}\times65{\rm k}$ pixels on 32 bit
machines, or $2{\rm G}\times2{\rm G}$ pixels on 64 bit machines), thanks to buffered image access.
\item Robust deblending of overlapping extended objects.
\item Real-time filtering of images to improve detectability.
\item Neural-Network-based star/galaxy classifier.
\item Fast bulge-disk decomposition.
\item Flexible catalogue output of desired parameters only.
\item Pixel-to-pixel photometry in dual-image mode.
\item Handling of weight-maps and flag-maps.
\item Optimum handling of images with variable S/N.
\item Special mode for photographic scans.
\item XML VOTable-compliant catalog output.
\end{itemize}

Back in the early nineties, the purpose of {\sc SExtractor} was to find a compromise
between refinement in both detection and measurements, and computational speed. By today's
standards, {\sc SExtractor} would be more accurately described as a ``quick-and-dirty'' tool.

\section{Skeptical Sam's questions}

\gam{Include such questions? Take from mail archive?}

{\sc SExtractor} is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version. {\sc SExtractor} is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details. You should have received a copy of the GNU General Public License along
with {\sc SExtractor}.  If not, see {\tt http://www.gnu.org/licenses/}.

\section{Installing the software}

\subsection{Software and hardware requirements}
\hide{Since the beginning in 1993, the development of {\sc SExtractor} was always made on Unix systems
(successively: SUN-OS, HP/UX, SUN-Solaris, Digital Unix and GNU/Linux). Successful ports by
external contributors have been reported on non-Unix OSes such as AMIGA-OS, DEC-VMS and even
MS-DOS Windows95\footnote{Binaries are available on the WWW, see e.g.
{\tt http://www.tass-survey.org/tass/software/software.html\#sextract}} and NT {\tt ; )}.
They are however not currently supported by the author, and Unix remains the recommended
system for running {\sc SExtractor}. The software is generally run in (ANSI) text-mode from a
shell. A window system is therefore unnecessary with present versions.}

On the hardware side, memory requirements obviously depend on the size of the images to be
processed. But to give an idea, a typical processing of $1024\times1024$ pixel images should
require no more than 8 MB of memory. For very large images ($32000\times32000$ pixels or
more), a minimum of 200 MB is recommended. Swap space can of course be used as well,
although a strong performance hit is to be expected.
\gam{Update values?}

{\sc SExtractor} has been developed on Unix machines (GNU/Linux), and should
compile on any POSIX-compliant system (this includes Mac OS X, but not Cygwin
under Windows), provided that the following
libraries/packages have been installed:
\begin{itemize}
\item {\sc ATLAS} V3.6 and above\footnote{Use the {\tt --with-atlas} and/or
{\tt --with-atlas-incdir} options to specify the ATLAS library and include paths
if the software is installed at unusual locations.}
(\href{http://math-atlas.sourceforge.net/}{\tt http://math-atlas.sourceforge.net/})
\item {\sc FFTw} V3.0 and above\footnote{Make sure that {\sc FFTW} has been
compiled with the
{\tt configure} options {\tt --enable-threads --enable-float}.}
(\href{http://www.fftw.org/}{\tt http://www.fftw.org/})
\item {\sc PLPlot} V5.9 and above (\href{http://www.plplot.org/}{\tt http://www.plplot.org/})
\end{itemize}

{\sc PLPlot} is only required for producing diagnostic plots. Note that
{\sc ATLAS} and {\sc FFTW} are not necessary for the binary versions of {\sc SExtractor}, which come with these libraries statically linked.

The software is run in (ANSI) text-mode from a shell. A
window system is necessary only when {\sc PLPlot} is used in interactive mode.

%% \subsection{Obtaining {\sc SExtractor}}
%% The easiest way to obtain {\sc SExtractor} is to download it from
%% {\tt http://terapix.iap.fr/soft/sextractor/}. The current
%% official anonymous FTP site is {\tt ftp://ftp.iap.fr/pub/from\_users/bertin/sextractor/}. There
%% can be found the latest versions of the program as standard {\tt .tar.gz} Unix archives, plus
%% some documentation.

\subsection{Obtaining {\sc SExtractor}}

The easiest way to obtain {\sc SExtractor} is to download it from the official
website\footnote{\href{http://astromatic.net/software/sextractor}{\tt
http://astromatic.net/software/sextractor}}. The package (sources,
configuration files, and documentation) is available as standard
{\tt .tar.gz} Unix source archives as well as RPM binary
packages for various architectures.\footnote{Mac OS X dmg files should be
available soon.}

\subsection{Installation from the source archive}

To install from the source archive, you must first uncompress and extract it:
% gzip -dc sextractor-x.y.tar.gz | tar xv
\begin{verbatim}
tar zxvf sextractor-<version>.tar.gz
\end{verbatim}
A new directory called {\tt sextractor-<version>} should now appear in the current directory.
You should then just enter the directory and follow the instructions in the file called
``{\tt INSTALL}''.
If you have root privileges, installation will generally consist of
\gam{add same beginning of sentence in PSFEx documentation?}

\% {\tt ./configure}

\% {\tt make}

\% {\tt make install}
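If {\sc ATLAS} or {\sc FFTW} reside in non-standard locations, the corresponding paths
may be passed to {\tt configure} through the options mentioned above; the directories
in this sketch are placeholders only:

\% {\tt ./configure --with-atlas=/opt/atlas/lib --with-atlas-incdir=/opt/atlas/include}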

\subsection{Installation from an RPM archive}

%% RPM binary archives are also provided for x86 architectures (e.g. Intel, AMD). In this case,
%% {\sc SExtractor} can be installed as root using

%% \% {\tt rpm -U sextractor-<version>.rpm}

%% \section{Using {\sc SExtractor}}
%% \label{chap:using}
%% \subsection{Syntax}
%% {\sc SExtractor} is run from the shell with the following syntax:

%% \% ~ {\tt sex} ~ {\em image} {\tt [-c} {\em configuration-file}{\tt ]} ~ {\tt [ -}{\em Parameter1} {\em Value1\ }{\tt ]}
%% ~ {\tt [ -}{\em Parameter2} {\em Value2\ }{\tt ]} ...

%% The part enclosed within brackets is optional. Any "{\tt -}{\em Parameter} {\em Value}"
%% statement in the command-line overrides the corresponding definition in the configuration-file
%% or any default value (see below). Actually, {\em two} image filenames can be provided, separated
%% by a comma:

%% \% ~ {\tt sex} ~ {\em image1},{\em image2}

%% This syntax makes {\sc SExtractor} run in the so-called double-image mode'': {\em image1} will
%% be used for detection of sources, and {\em image2} for measurements only. {\em image1} and
%% {\em image2} must have the same dimensions.
%% Changing {\em image2} for another image will not modify the number of detected sources, neither
%% affect their positional or basic shape parameters. But most photometric parameters, plus a few
%% others, will use {\em image2} pixel values, which allows one to easily measure pixel-to-pixel
%% colours.

{\sc SExtractor} is also available as a binary RPM package for both GNU/Linux
{\tt x86} (32-bit) and {\tt x86-64} (64-bit) architectures. To check which matches your system, use the shell command
\begin{verbatim}
uname -a
\end{verbatim}
To install, type as the root user the following command in your shell (preceded
by {\tt su} if you don't have root access but the system administrator trusts
you well enough to make you part of the {\tt wheel} group):
\begin{verbatim}
rpm -U sextractor-<version>-1.<arch>.rpm
\end{verbatim}
It is often necessary to force installation with
\begin{verbatim}
rpm -U --force --nodeps sextractor-<version>-1.<arch>.rpm
\end{verbatim}
You may now check that the software is properly installed by simply typing in your shell
\begin{verbatim}
sex
\end{verbatim}
(note that some shells require the {\tt rehash} command to be run before
making a freshly installed executable accessible in the execution path).

\section{Using {\sc SExtractor}}
\label{using}
{\sc SExtractor} is run from the shell with the following syntax:

\% ~ {\tt sex} ~ {\em image} {\tt [-c} {\em configuration-file}{\tt ]} ~ {\tt [ -}{\em Parameter1} {\em Value1\ }{\tt ]}
~ {\tt [ -}{\em Parameter2} {\em Value2\ }{\tt ]} ...

The parts enclosed within brackets are optional. The file names of input
images can be provided directly in the command line, or through lists:
ASCII files containing one image name per line, and whose own names are
preceded by ``@'' on the command line.
% Lists are ASCII files containing the input file names (one per line).
One should use lists instead of explicit image file names if the number of
input images is too large to be handled directly by the shell.
Any ``{\tt -}{\em Parameter} {\em Value}'' statement in the command-line overrides
the corresponding definition in the configuration-file or any default
value (see below).
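As a hypothetical illustration (the image and configuration file names below are
placeholders), the following command runs {\sc SExtractor} on {\tt image.fits} with
the configuration file {\tt myconf.sex}, while overriding the detection threshold
from the command line:
\begin{verbatim}
sex image.fits -c myconf.sex -DETECT_THRESH 2.0
\end{verbatim}
Here the {\tt -DETECT\_THRESH 2.0} statement takes precedence over whatever value
appears in {\tt myconf.sex}.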

\subsection{The configuration file}
\label{chap:configfile}
Each time it is run, {\sc SExtractor} looks for a configuration file. If no
configuration file is specified in the command-line, it is assumed to
be called ``{\tt default.sex}'' and to reside in the current
directory. If no configuration file is found, {\sc SExtractor} will use its own
internal default configuration.

\subsubsection{Creating a configuration file}
{\sc SExtractor} can generate an ASCII dump of its internal default
configuration, using the ``{\tt -d}'' option. By redirecting the standard
output of {\sc SExtractor} to a file, one creates a configuration file that
can easily be modified afterwards:

{\tt \% sex -d > default.sex}

A more extensive dump with less commonly used parameters can be generated by
using the ``{\tt -dd}'' option.

\subsubsection{Format of the configuration file}
The format is ASCII. There must be only one parameter set per line,
following the form:

~~~~ {\em Config-parameter ~~~~ Value(s)}

Extra spaces or linefeeds are ignored. Comments must begin with a ``\#''
and end with a linefeed. Values can be of different
types: strings (can be enclosed between double quotes), floats, integers, keywords or Boolean
({\tt Y/y} or {\tt N/n}). Some parameters accept zero or several values, which must then be
separated by commas.
Integers can be given as decimals, in octal form (preceded by the digit {\tt 0}), or in hexadecimal
(preceded by {\tt 0x}). The hexadecimal format is particularly convenient for writing multiplexed
bit values such as binary masks. Environment variables, written as {\tt \$HOME} or {\tt \$\{HOME\}},
are expanded, and not only for string parameters. Some parameters are assigned default values in
{\sc SExtractor} and can therefore be omitted from the configuration file; they are listed in
\S\ref{chap:config}.
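The excerpt below (with purely illustrative values) shows the main syntactic
features described above: comments, a Boolean, several comma-separated values, and
an environment variable:
\begin{verbatim}
# Example configuration excerpt (illustrative values)
CATALOG_NAME     test.cat                 # a string value
CLEAN            Y                        # a Boolean value
PHOT_APERTURES   5,10,20                  # several comma-separated floats
PARAMETERS_NAME  $HOME/sex/default.param  # environment variable, expanded
\end{verbatim}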

\subsubsection{Configuration parameter list}
\label{chap:config}
Here is a complete list of all the {\em configuration} parameters known to {\sc SExtractor}.
Many of them should be used with their default values. Please refer to the next sections for a
detailed description of their meaning.

\begin{tabular}{lllp{2.5in}}
Parameter              & default & type & Description\\
\hline
{\tt ANALYSIS\_THRESH}   & {\tt 1.5} & {\em floats}\ $(n \le 2)$ & Threshold (in surface brightness) at
which {\tt CLASS\_STAR} and {\tt FWHM\_} \gam{why underscore?} operate. 1
argument: relative to Background RMS (before filtering).
2 arguments: surface magnitude $\mu$ ($\rm mag\,arcsec^{-2}$), Zero-point (mag).\\
{\tt ASSOC\_DATA}      & {\tt 2,3,4} & {\em integers}\ $(n\le 32)$& \# of the columns in the
{\tt ASSOC} file that will be copied to the catalog output.\\
{\tt ASSOC\_NAME}      & {\tt sky.list} & {\em string}          & Name of the {\tt ASSOC} ASCII file.\\
{\tt ASSOC\_PARAMS}    & {\tt 2,3,4} & {\em integers}\ $(2 \le n \le 3)$& \# of the columns in the {\tt ASSOC} file that will be used as coordinates
and weight for cross-matching.\\
{\tt ASSOC\_RADIUS}    & {\tt 2.0} & {\em float}                & Search radius (in pixels) for {\tt ASSOC}.\\
{\tt ASSOC\_TYPE}      & {\tt MAG\_SUM} & {\em keyword}         & Method for cross-matching in {\tt ASSOC}:\\
&           & {\tt FIRST}          & -- keep values corresponding to the first match found,\\
&           & {\tt NEAREST}        & -- values corresponding to the nearest match found,\\
&           & {\tt MEAN}           & -- weighted-average values,\\
&           & {\tt MAG\_MEAN}      & -- exponentially weighted-average values,\\
&           & {\tt SUM}            & -- sum values,\\
&           & {\tt MAG\_SUM}       & -- exponentially summed values,\\
&           & {\tt MIN}            & -- keep values corresponding to the match with minimum weight,\\
\end{tabular}

\begin{tabular}{lllp{2.5in}}
&           & {\tt MAX}            & -- keep values corresponding to the match with maximum weight.\\
{\tt ASSOCSELEC\_TYPE} & {\tt MATCHED}   & {\em keyword}  & What sources are printed in the output catalog in case of {\tt ASSOC}:\\
&           & {\tt ALL}            & -- all detections,\\
&           & {\tt MATCHED}        & -- only matched detections,\\
&           & {\tt -MATCHED}       & -- only detections that were not matched.\\

{\tt BACK\_FILTERSIZE} & {\tt 3} & {\em integers}\ $(n \le 2)$ & Size, or Width,Height (in background meshes) of the background-filtering mask.\\
{\tt BACK\_SIZE}       & {\tt 64} & {\em integers}\ $(n \le 2)$ & Size, or Width,Height (in pixels) of a background mesh.\\
{\tt BACK\_TYPE}       & {\tt AUTO} & {\em keywords}\ $(n \le 2)$  & What background is subtracted from the images:\\
&     & {\tt AUTO}            & -- the internal, automatically interpolated background-map,\\
&     & {\tt MANUAL}             & -- a user-supplied constant value provided in {\tt BACK\_VALUE}.\\
{\tt BACK\_VALUE} & {\tt 0.0,0.0} & {\em floats}\ $(n \le 2)$ & in {\tt BACK\_TYPE MANUAL} mode, the constant value to be subtracted
from the images.\\
{\tt BACKPHOTO\_THICK} & {\tt 24}  & {\em integer}           & Thickness (in pixels) of the background {\tt LOCAL} annulus.\\
{\tt BACKPHOTO\_TYPE}  & {\tt GLOBAL} & {\em keyword}        & Background used to compute magnitudes:\\
&     & {\tt GLOBAL}            & -- taken directly from the background map,\\
&     & {\tt LOCAL}             & -- recomputed in a rectangular ``annulus'' around the object.\\
{\tt CATALOG\_NAME}    & {\tt test.cat} & {\em string}            & Name of the output catalogue. If the name ``{\tt STDOUT}'' is given and
{\tt CATALOG\_TYPE} is set to {\tt ASCII}, {\tt ASCII\_HEAD}, {\tt ASCII\_SKYCAT}, or {\tt ASCII\_VOTABLE}, the catalogue will be piped to the standard output ({\em stdout}).\\
{\tt CATALOG\_TYPE}    & --- & {\em keyword} & Format of output catalog (note
that ASCII\* is space and time consuming):\\
&     & {\tt ASCII}   & -- ASCII table,\\
&     & {\tt ASCII\_HEAD}   & -- as {\tt ASCII}, preceded by a commented header describing the columns,\\
&     & {\tt ASCII\_SKYCAT} & -- SkyCat ASCII format (WCS coordinates required),\\
&     & {\tt ASCII\_VOTABLE} & -- XML-VOTable format, together with meta-data,\\
&     & {\tt FITS\_1.0}     & -- FITS format as in {\sc SExtractor} 1,\\
&     & {\tt FITS\_LDAC}    & -- FITS ``LDAC'' format (the original image header is copied).\\
{\tt CHECKIMAGE\_NAME} & {\tt check.fits} & {\em strings} $(n \le 16)$  & File name for each ``check-image''.\\
\end{tabular}

\begin{tabular}{lllp{2.5in}}
{\tt CHECKIMAGE\_TYPE} & {\tt NONE} & {\em keywords} $(n \le 16)$ & Type of information in the ``check-images'':\\
&     & {\tt NONE}         & -- no check-image,\\
&     & {\tt IDENTICAL}    & -- identical to input image (useful for converting formats),\\
&     & {\tt BACKGROUND}   & -- full-resolution interpolated background map,\\
&     & {\tt BACKGROUND\_RMS} & -- full-resolution interpolated background noise map,\\
&     & {\tt MINIBACKGROUND} & -- low-resolution background map,\\
&     & {\tt MINIBACK\_RMS} & -- low-resolution background noise map,\\
&     & {\tt -BACKGROUND}  & -- background-subtracted image,\\
&     & {\tt FILTERED}     & -- background-subtracted filtered image (requires {\tt FILTER = Y}),\\
&     & {\tt OBJECTS}      & -- detected objects,\\
&     & {\tt -OBJECTS}     & -- background-subtracted image with detected objects blanked,\\
&     & {\tt APERTURES}    & -- {\tt MAG\_APER} and {\tt MAG\_AUTO} integration limits,\\
&     & {\tt SEGMENTATION} & -- display patches corresponding to pixels attributed to each object.\\
{\tt CLEAN}            & {\tt Y} & {\em boolean}      & If true, the
catalogue is ``cleaned'' before being written to disk.\\
{\tt CLEAN\_PARAM}     & {\tt 1.0} & {\em float}        & Efficiency of ``cleaning''.\\
{\tt DEBLEND\_MINCONT} & {\tt 0.005} & {\em float}        & Minimum contrast parameter for deblending.\\
{\tt DEBLEND\_NTHRESH} & {\tt 32} & {\em integer}      & Number of deblending sub-thresholds.\\
{\tt DETECT\_MINAREA}  & {\tt 5} & {\em integer}      & Minimum number of pixels above threshold triggering detection.\\
{\tt DETECT\_MAXAREA}  & --- & {\em integer}      & Maximum number of pixels
above threshold triggering detection \gam{What value for infinity?}.\\
{\tt DETECT\_THRESH}   & {\tt 1.5} & {\em floats}\ $(n \le 2)$ & Detection threshold. 1 argument: (ADUs or relative to Background RMS, see
{\tt THRESH\_TYPE}). 2
arguments: $\mu$ ($\rm mag\,arcsec^{-2}$), Zero-point (mag).\\
{\tt DETECT\_TYPE}     &  {\tt CCD} & {\em keyword} & Type of device that produced the image:\\
&     & {\tt CCD}          & -- linear detector like CCDs or NICMOS,\\
&     & {\tt PHOTO}        & -- photographic scan.\\
{\tt FILTER}           & --- & {\em boolean}      & If true, filtering is applied to the data before extraction.\\
{\tt FILTER\_NAME}     & --- & {\em string}       & Name of the file containing the filter definition.\\
{\tt FILTER\_THRESH}   &     & {\em floats}\ $(n \le 2)$ & Lower and higher thresholds (in background standard deviations) for a pixel
to be considered in filtering (used for retina-filtering only).\\
{\tt FITS\_UNSIGNED}   &  {\tt N}  & {\em boolean} & Force 16-bit FITS input data to be interpreted as unsigned integers.\\
{\tt FLAG\_IMAGE}      & {\tt flag.fits} & {\em strings}\ $(n \le 4)$ & File name(s) of the ``flag-image(s)''.\\
\end{tabular}

\begin{tabular}{lllp{2.5in}}
{\tt FLAG\_TYPE}       & {\tt OR} & {\em keyword} & Combination method for flags on the same object:\\
&          & {\tt OR}      & -- arithmetical OR,\\
&          & {\tt AND}     & -- arithmetical AND,\\
&          & {\tt MIN}     & -- minimum of all flag values,\\
&          & {\tt MAX}     & -- maximum of all flag values,\\
&          & {\tt MOST}    & -- most common flag value.\\
{\tt GAIN}             &     & {\em float}        & ``Gain'' (conversion factor in $e^-/\mbox{ADU}$) used for error estimates
of {\tt CCD} magnitudes.\\
{\tt INTERP\_MAXXLAG}  & {\tt 16} & {\em integers}\ $(n \le 2)$ & Maximum $x$ gap (in pixels) allowed in interpolating the input image(s).\\
{\tt INTERP\_MAXYLAG}  & {\tt 16} & {\em integers}\ $(n \le 2)$ & Maximum $y$ gap (in pixels) allowed in interpolating the input image(s).\\
{\tt INTERP\_TYPE}     & {\tt ALL} & {\em keywords}\ $(n \le 2)$ & Interpolation method from the variance-map(s) (or weight-map(s)):\\
&            & {\tt NONE}  & -- no interpolation,\\
&            & {\tt VAR\_ONLY} & -- interpolate only the variance-map (detection threshold),\\
&            & {\tt ALL} & -- interpolate both the variance-map and the image itself.\\
{\tt MAG\_GAMMA}       &     & {\em float} & $\gamma$ of the emulsion (takes effect in {\tt PHOTO} mode only).\\
{\tt MAG\_ZEROPOINT}   &     & {\em float} & Zero-point offset to be applied to magnitudes.\\
{\tt MASK\_TYPE}       & {\tt CORRECT} & {\em keyword} & Method of ``masking'' of neighbours for photometry:\\
&            & {\tt NONE}  & -- no masking,\\
&            & {\tt BLANK} & -- put detected pixels belonging to neighbours to zero,\\
&            & {\tt CORRECT} & -- replace by values of pixels symmetric with respect to the source center.\\
{\tt MEMORY\_BUFSIZE}  & {\tt 1024} & {\em integer} & Number of scan-lines in the image buffer. Multiply by 4 times the frame width to
get the equivalent memory space in bytes.\\
{\tt MEMORY\_OBJSTACK} & {\tt 3000} & {\em integer} & Maximum number of objects that the object-stack can contain. Multiply by 300 to get
equivalent memory space in bytes.\\
{\tt MEMORY\_PIXSTACK} & {\tt 300000} & {\em integer} & Maximum number of pixels that the pixel-stack can contain. Multiply by 16 to 32 to get
equivalent memory space in bytes.\\
{\tt PARAMETERS\_NAME} & {\tt default.param} & {\em string} & The name of the file containing the list of parameters that will be computed and put
in the catalogue for each object.\\
{\tt PHOT\_APERTURES} & {\tt 5} & {\em floats}\ $(n \le 32)$ & Aperture diameters in pixels (used by {\tt MAG\_APER}).\\
\end{tabular}

\begin{tabular}{lllp{2.5in}}
{\tt PHOT\_AUTOPARAMS} & {\tt 2.5,3.5} & {\em floats}\ $(n=2)$ & {\tt MAG\_AUTO} controls: scaling parameter $k$ of the 1st order moment, and minimum $R_{min}$
(in units of {\tt A} and {\tt B}).\\
{\tt PHOT\_AUTOAPERS}  & {\tt 0.0,0.0} & {\em floats}\ $(n=2)$ & {\tt MAG\_AUTO} minimum (circular) aperture diameters: estimation disk, and
measurement disk.\\
{\tt PHOT\_FLUXFRAC}  & {\tt 0.5} & {\em floats}\ $(n \le 32)$ & Fraction of {\tt FLUX\_AUTO} defining each element of the {\tt FLUX\_RADIUS} vector.\\
{\tt PIXEL\_SCALE}     & {\tt 1.0} & {\em float} & Pixel size in arcsec (for surface brightness parameters, FWHM and star/galaxy separation only).\\
{\tt SATUR\_LEVEL}     & {\tt 50000.0} & {\em float} & Pixel value above which pixels are considered saturated.\\
{\tt SEEING\_FWHM}     & {\tt 1.2} & {\em float} & FWHM of stellar images in arcsec (only for star/galaxy separation).\\
{\tt STARNNW\_NAME}    & {\tt default.nnw} & {\em string} & Name of the file containing the neural-network weights for star/galaxy separation.\\
{\tt THRESH\_TYPE}       & {\tt RELATIVE} & {\em keywords}\ $(n \le 2)$  & Meaning of the {\tt DETECT\_THRESH} and
{\tt ANALYSIS\_THRESH} parameters:\\
&     & {\tt RELATIVE}            & -- scaling factor to the background RMS,\\
&     & {\tt ABSOLUTE}             & -- absolute level (in ADUs or in surface brightness).\\
{\tt VERBOSE\_TYPE}    & {\tt NORMAL} & {\em keyword}      & How much {\sc SExtractor} comments its operations:\\
&              & {\tt QUIET}        & -- run silently,\\
&              & {\tt NORMAL}       & -- display warnings and limited info concerning the work in progress,\\
&              & {\tt EXTRA\_WARNINGS} & -- like {\tt NORMAL}, plus a few more warnings if necessary,\\
&              & {\tt FULL}       & -- display more complete information and the principal parameters of all the objects
extracted.\\
{\tt WEIGHT\_GAIN}   &  {\tt Y}  & {\em boolean} & If true, weight maps are considered as gain maps.\\
{\tt WEIGHT\_IMAGE}    & {\tt weight.fits} & {\em strings}\ $(n \le 2)$ & File names of the detection and measurement ``weight-images'', respectively.\\
{\tt WEIGHT\_TYPE} & {\tt NONE} & {\em keywords} $(n \le 2)$ & Weighting scheme (for single image, or detection and
measurement images):\\
&            & {\tt NONE}  & -- no weighting,\\
&            & {\tt BACKGROUND} & -- variance-map derived from the image itself,\\
&            & {\tt MAP\_RMS} & -- variance-map derived from an external RMS-map,\\
&            & {\tt MAP\_VAR} & -- external variance-map,\\
&            & {\tt MAP\_WEIGHT} & -- variance-map derived from an external weight-map,\\
\end{tabular}

\begin{tabular}{lllp{2.5in}}
{\tt WRITE\_XML}       &  {\tt N}  & {\em boolean} & If true, meta-data will be written in XML-VOTable format.\\
{\tt XML\_NAME}        & {\tt sex.xml} & {\em string} & File name for the XML output of {\sc SExtractor}.\\
\end{tabular}

\footnotetext[1]{Optional parameter}

\subsection{The catalog parameter file}
In addition to the configuration file detailed above, {\sc SExtractor} needs a file containing
the list of parameters that will be listed in the output catalog for every detection.
This allows the software to compute only catalog parameters that are needed. The name
of this catalog-parameter file is traditionally suffixed with {\tt .param}, and must be specified
using the {\tt PARAMETERS\_NAME} config parameter.

\subsubsection{Format}
The format of the catalog parameter list is ASCII, and there must be {\em only one keyword
per line}. Presently two kinds of keywords
are recognized by {\sc SExtractor}: scalars and vectors. Scalars, like {\tt X\_IMAGE}, yield
single numbers
in the output catalog. Vectors, like {\tt MAG\_APER(4)} or {\tt VIGNET(15,15)}, yield arrays
of numbers.
The order in which the parameters are listed in the catalogue is the
same as the order of the keywords in the parameter list. Comments are allowed; they
must begin with a ``\#''. Here is a descriptive list of available parameter keywords.
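As a minimal sketch, a catalog-parameter file combining scalar and vector keywords
with comments could look like this (the selection of parameters is of course
arbitrary):
\begin{verbatim}
# Minimal example of a catalog-parameter file
NUMBER           # running object number
X_IMAGE          # scalar: x position (pixels)
Y_IMAGE          # scalar: y position (pixels)
MAG_APER(4)      # vector: 4 aperture magnitudes
\end{verbatim}
The size of the {\tt MAG\_APER} vector must match the number of apertures given in
the {\tt PHOT\_APERTURES} configuration parameter.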

\subsection{Example of configuration}

\label{chap:example}

\section{Overview of the software}
\label{techover}
%---------------------------------- Fig. layout -------------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=15cm]{ps/sexlayout.ps}}
\caption{
Layout of the main {\sc SExtractor} procedures. Dashed arrows represent optional
inputs.
}
\label{fig:layout}
\end{figure}

The complete analysis of an image is done in two passes through the data. During the first pass, a
model of the sky background is built, and a couple of global statistics are estimated. During the
second pass, the image is background-subtracted, filtered and thresholded ``on-the-fly''.
Detections are then deblended, pruned (``CLEANed''), photometered, classified and finally written
to the output catalog. The following sections enter a little more into the details of each of these
operations\footnote{In the text, uppercase keywords in typewriter font refer to parameters from the
configuration file or from the parameter file.}.

\section{Handling of image data}
{\sc SExtractor} accepts images stored in FITS\footnote{\it Flexible Image Transport System} format.
Both ``Basic FITS'' (one single header and one single body) and ``Multi-Extension FITS'' (MEF)
images are recognized. Binary {\sc SExtractor} catalogs produced from MEF images are MEF files
themselves. If catalog output is in ASCII format, all catalogs from the individual extensions
are concatenated in one big file; the {\tt EXT\_NUMBER} catalog parameter must be used to tell
which extension the source belongs to.

For images with ${\tt NAXIS} > 2$, only the first data-plane is loaded.
If WCS\footnote{\it World Coordinate System} information (Greisen \& Calabretta 1995,
{\tt http://www.cv.nrao.edu/fits/documents/wcs/wcs.all.ps}) is available in the
header, it is automatically used by {\sc SExtractor} to compute astrometric parameters. Other
astrometric descriptions
like AST ({\it Starlink} format) or the solution coefficients of the DSS
\footnote{\it Digital Sky Survey} plates are not recognized by the software.

In {\sc SExtractor}, as in all similar programs, FITS ``axis 1'' is traditionally referred to as the
{\tt X} axis, and FITS ``axis 2'' as the {\tt Y} axis.

\section{Detection and segmentation}
In {\sc SExtractor}, the detection of sources is part of a process called {\em segmentation} in
the image-processing vocabulary.
Segmentation normally consists of identifying and separating image regions which have different
properties (brightness, colour, texture...) or are delineated by edges. In the astronomical
context, the segmentation process consists of separating objects from the sky background. This is
however a somewhat imprecise definition, as astronomical sources have, on the images --- and even
often physically ---, no clear boundaries, and may overlap. We shall therefore use the following
working definition of an object in {\sc SExtractor}: a group of pixels selected through some
detection process and for which the flux contribution of an astronomical source is believed to be
dominant over that of other objects. Note that this means that a simple $x,y$ position vector
alone cannot be handled by {\sc SExtractor} as a detection: most measurement routines require
some rough shape information about the objects.

Segmentation in {\sc SExtractor} is achieved through a very simple thresholding process: a group
of connected pixels that exceed some threshold above the background is identified as a detection.
But things are a little bit more complicated in practice. First, on most astronomical images, the
background is not constant over the frame, and its determination can be ambiguous in crowded
regions. Second, the software has to operate on noisy data, and some filtering adapted to the
characteristics of the image has to be applied prior to detection, to reduce the contamination by
noise peaks. Third, many sources that overlap on the image are unlikely to be detected separately
with a single detection threshold, and require a de-blending procedure, which is actually
multi-thresholding in {\sc SExtractor}. Each of these points will now be described in greater
detail below. It is worth mentioning here that these 3 difficulties could, to a large extent, be
bypassed using a wavelet decomposition (e.g. Bijaoui \etal 1998). Although such an algorithm might
be implemented in a future version of {\sc SExtractor}, current constraints in processing speed,
available memory (processing of gigantic images) often make the pedestrian approach'' still more
interesting in the case of large scale surveys.

\subsection{Background estimation}
\label{chap:backest}
The value measured at each pixel is a function of the sum of a ``background''
signal and light coming from the objects of interest. To be able to detect
the faintest of these objects and also to measure their fluxes accurately, one needs
an accurate estimate of the background level at any place in the image:
a ``background map''. Strictly speaking, there should be one background map per
object, that is, an estimate of what the image would look like if that object were absent.
But, at least for detection, we may start by assuming that most discrete sources do not
overlap too severely, which is generally the case for high galactic latitude fields.

To construct the background map, {\sc SExtractor} makes a first pass through the pixel
data, computing an estimator for the local background in each mesh of a grid
that covers the whole frame.
The background estimator is a combination of $\kappa.\sigma$ clipping and mode estimation,
similar to the one employed in Stetson's DAOPHOT program (see e.g. Da Costa 1992).
Briefly, the local background histogram is clipped iteratively until convergence at
$\pm 3\sigma$ around its median; if $\sigma$ is changed by less than 20\% during that process,
we consider that the field is not crowded and we simply take the mean of the clipped histogram
as a value for the background; otherwise we estimate the mode with:
$$\label{eq:mode} \mbox{Mode} = 2.5 \times \mbox{Median} - 1.5 \times \mbox{Mean}$$
This expression is different from the usual approximation
$$\mbox{Mode} = 3 \times \mbox{Median} - 2 \times \mbox{Mean}$$
(e.g. Kendall and Stuart 1977), but our simulations showed it to be more accurate for
clipped distributions. Fig. \ref{fig:modevsmean} shows that the expression of
the mode above is considerably less affected\footnote{Obviously in some very unfavorable cases
(like small meshes falling on bright stars), it leads to totally inaccurate results.} by crowding
than a simple clipped mean --- like the one used in FOCAS (Jarvis and Tyson 1981)
or by Infante (1987) --- but is $\approx 30\%$ noisier. For this reason we revert to the
mean in non-crowded fields.
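The clipping-and-mode logic described above can be sketched as follows (an illustrative Python sketch, not {\sc SExtractor}'s actual C implementation; the function name and iteration limit are ours):

```python
import numpy as np

def background_estimate(mesh, kappa=3.0, max_iter=20):
    """Local background in one mesh: iterative kappa-sigma clipping
    around the median, then a clipped mean (uncrowded field) or the
    mode estimate 2.5*median - 1.5*mean (crowded field)."""
    data = np.asarray(mesh, dtype=float).ravel()
    sigma0 = data.std()
    for _ in range(max_iter):
        med = np.median(data)
        sig = data.std()
        clipped = data[np.abs(data - med) <= kappa * sig]
        if clipped.size == data.size:   # converged: nothing more to clip
            break
        data = clipped
    # sigma changed by less than 20%: field deemed uncrowded, use the mean
    if sigma0 > 0 and abs(data.std() - sigma0) / sigma0 < 0.2:
        return data.mean()
    return 2.5 * np.median(data) - 1.5 * data.mean()
```

On a mesh containing pure Gaussian noise the estimator returns the clipped mean; the mode branch only engages when clipping removes enough contaminated pixels to change $\sigma$ appreciably.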

%---------------------------------- Fig.modevsmean  --------------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=12cm]{ps/modevsmean.ps}}
\caption{
Simulations of $32\times32$ pixels background meshes polluted by random Gaussian profiles.
The true background lies at 0 ADU. While being slightly noisier, the ``clipped Mode''
gives a more robust estimate than a clipped Mean in crowded regions.
}
\label{fig:modevsmean}
\end{figure}

Once the grid is set up, a median filter can be applied to suppress possible
local overestimations due to bright stars. The resulting background map is then simply a (natural)
bicubic-spline interpolation between the meshes of the grid.
In parallel with the making of the background map, an ``RMS-background-map'', that is, a map of the
background noise in the image, is produced. It will be used if the {\tt WEIGHT\_TYPE} parameter is
set to a value different from {\tt NONE} (see \S\ref{chap:weighttype}).

\subsubsection{Configuration parameters and tuning.}
The choice of the mesh size ({\tt BACK\_SIZE}) is very important.
If it is too small, the background estimation is affected by the presence of objects and
random noise. Most importantly, part of the flux of the most extended
objects can be absorbed in the background map. If the mesh size is too large,
it cannot reproduce the small scale variations of the background. Therefore
a good compromise has to be found by the user. Typically, for reasonably
sampled images, a width\footnote{{\sc SExtractor} offers the
possibility of rectangular background meshes; but it is advised to use
square ones, except in some very special cases (rapidly varying background
in one direction for example).} of 32 to 256 pixels works well. The user has
some control over the background map by specifying the size
of the median filter ({\tt BACK\_FILTERSIZE}). A width
and height of 1 means that no filtering will be
applied to the background grid. Usually a size of $3\times3$ is enough, but it
may be necessary to use larger dimensions, especially to compensate, in part, for
small background mesh sizes, or in the case of large artefacts in the images.
Median filtering also helps reduce possible ringing effects of the bicubic-spline
around bright features. In some specific cases it might be desirable to median-filter
only background meshes whose original values exceed some threshold above the filtered value.
This differential threshold is set by the {\tt BACK\_FILTERTHRESH} parameter, in ADUs.
It is important to note that all {\tt BACK\_} configuration parameters also affect the
background-RMS map.
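The interplay of {\tt BACK\_SIZE} and {\tt BACK\_FILTERSIZE} can be illustrated with the following Python sketch (a simplified stand-in for the actual implementation: the per-mesh estimator is reduced to a plain median, and SciPy's spline zoom replaces the natural bicubic-spline interpolation):

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def background_map(image, back_size=64, back_filtersize=3):
    """Estimate one background value per mesh, median-filter the
    low-resolution mesh grid, then interpolate back to full frame."""
    h, w = image.shape
    ny, nx = h // back_size, w // back_size
    grid = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            mesh = image[j*back_size:(j+1)*back_size,
                         i*back_size:(i+1)*back_size]
            grid[j, i] = np.median(mesh)   # stand-in for the clipped estimator
    if back_filtersize > 1:                # BACK_FILTERSIZE 1 = no filtering
        grid = median_filter(grid, size=back_filtersize)
    # spline interpolation back to full resolution (order 3, i.e. cubic);
    # nearest-edge extension limits spline edge effects
    return zoom(grid, (h / ny, w / nx), order=3, mode='nearest')
```

A too-small {\tt back\_size} would let object flux leak into {\tt grid}; a larger {\tt back\_filtersize} smooths out meshes polluted by bright stars, exactly as described above.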

By default the computed background-map is automatically subtracted
from the input image.  But there are some situations where it is more
appropriate to subtract a {\em constant} from the image (e.g., images
where the background noise distribution is strongly skewed).  The {\tt
BACK\_TYPE} configuration parameter (set by default to ``{\tt AUTO}'') can
be switched to {\tt MANUAL} to allow the value specified by the
{\tt BACK\_VALUE} parameter to be subtracted from the input
image. The default value is 0.

\subsubsection{CPU cost.}
The background estimation operation can take a considerable time on the largest images,
e.g. a few minutes for a $32000\times32000$ frame on a 2GHz processor.

\subsection{Filtering}
\subsubsection{Convolution}
Detectability is generally limited at the faintest flux levels by the background noise.
The power-spectrum of the noise and that of the superimposed signal can be significantly different.
Some gain in the ability to detect sources may therefore be obtained simply through
appropriate linear filtering of the data, prior to segmentation. In low density fields,
an optimal convolution kernel $h$ (``matched filter'') can be found that maximizes
detectability. An estimator of detectability is for instance the signal-to-noise ratio
at the source position $(x_0,y_0) \equiv (0,0)$:
$$\left( \frac{\rm S}{\rm N}\right)^2 \equiv \frac{\left( (s * h)(x_0,y_0) \right)^2} {\overline{(n * h)^2}}\,,$$
where $s$ is the signal to be detected, $n$ the noise, and ``$*$'' the convolution operator.
Moving to Fourier space, we get:
$$\left( \frac{\rm S}{\rm N}\right)^2 = \frac{\left(\int{{\cal S}{\cal H}\,d\omega}\right)^2} {\int{|{\cal N}|^2 |{\cal H}|^2\,d\omega}}\,,$$
where ${\cal S}$ and ${\cal H}$ are the Fourier-transforms of $s$ and $h$, respectively, and
$|{\cal N}|^2$ is the power-spectrum of the noise. Noting, from the Schwarz inequality, that
$$\label{eq:schwartz1} \left|\int{{\cal S}{\cal H}\, d\omega}\right|^2 \leq \int{\frac{|{\cal S}|^2}{|{\cal N}|^2} d\omega} \, \int{|{\cal N}|^2 |{\cal H}|^2 d\omega}\,,$$
we see that
$$\label{eq:schwartz2} \left( \frac{\rm S}{\rm N}\right)^2 \leq \int{\frac{|{\cal S}|^2}{|{\cal N}|^2} d\omega}\,.$$
Equality (maximum S/N) in (\ref{eq:schwartz1}) and (\ref{eq:schwartz2}) is achieved for
$$\frac{\cal S}{|{\cal N}|} \propto |{\cal N}| {\cal H}^*\,,\, {\rm that\, is}$$
$$\label{eq:conv} {\cal H} \propto \frac{{\cal S}^*}{|{\cal N}|^2}.$$
In the case of white noise (a valid approximation for many astronomical images, especially
CCD ones), $|{\cal N}|^2 = {\rm const}$; the optimal convolution kernel for detecting stars is
then the PSF flipped over the $x$ and $y$ directions. It may also be described as a
cross-correlation with the template of the sources to be detected (for more details see, e.g.
Bijaoui \& Dantel 1970, or Das 1991).
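The white-noise case can be checked numerically. The sketch below (ours, using NumPy/SciPy, not part of {\sc SExtractor}) injects a faint Gaussian point source into unit white noise and shows that convolving with the flipped PSF boosts the peak signal-to-noise ratio well above its unfiltered value:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

# Gaussian PSF template; for white noise the matched filter is the PSF
# flipped along x and y (equivalent to cross-correlating with the PSF)
x = np.arange(-8, 9)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
kernel = psf[::-1, ::-1]       # symmetric here; flip shown for generality

image = rng.normal(0.0, 1.0, (256, 256))   # unit white noise
amp = 3.0                                  # faint point source amplitude
image[120:137, 120:137] += amp * psf       # source peak lands at (128, 128)

filtered = fftconvolve(image, kernel, mode="same")
noise_rms = np.sqrt((kernel**2).sum())     # filtered rms of unit white noise

snr_raw = amp                              # per-pixel peak S/N, unfiltered
snr_filtered = filtered[128, 128] / noise_rms
```

For this PSF the gain is roughly a factor $\sqrt{\sum p^2} \approx 3.5$, illustrating why filtering prior to segmentation pays off at the faintest levels.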

There are of course a few problems with this method. First of all,
many sources of unquestionable interest, like galaxies, appear in a variety of shapes and scales
on astronomical images.
A perfectly optimized detection routine should ultimately apply all relevant
convolution kernels one after the other in order to make a complete catalog. Approximations
to this approach are the (isotropic) wavelet analysis mentioned earlier, or the more empirical
ImCat algorithm (Kaiser \etal 1995), for both of which sources to detect are assumed to be
reasonably round. The impact on memory usage and processing speed of such refinements is currently
judged too severe to be applied in {\sc SExtractor}. Simple filtering does a good job in general:
the topological constraints added by the segmentation process make the detection somewhat tolerant
towards larger objects. Extended, very Low-Surface-Brightness (LSB) features found in astronomical
images are often artifacts (flat-fielding errors, optical ``ghosts'' or halos). However, it is
true that some of them can be genuine objects, like LSB galaxies, or distant galaxy clusters
buried in the background noise. Detecting those with software like {\sc SExtractor} requires
specific processing (see for instance Dalcanton \etal 1997 and references therein). The
simplest way to achieve the detection of extended LSB objects in {\sc SExtractor} is to work
on {\tt MINIBACK} check-images (see \S\ref{chap:miniback}).

A second problem may occur because of overlaps with other objects. Convolving with a low-pass
filter (the PSF has no negative side-lobes) diminishes the contrast between objects, and makes
segmentation less effective in isolating individual sources. This can to some extent be recovered
by deblending (see \S\ref{chap:deblending}). In severely crowded fields however, confusion noise
becomes the limiting factor for detection, and it is then advisable not to filter at all, or to
use a bandpass-filter (compensated filter).

Finally, the PSF appears sometimes to be variable across the field. The convolution mask
should ideally follow these changes in order to allow for optimal detection everywhere in the
image. However, considering approximately-Gaussian PSF cores and convolution kernels,
detectability is a rather slow function of their FWHMs\footnote{Full-Width at Half-Maximum}: a
mismatch as large as 50\% between the kernel FWHM and that of the PSF will lead to no more than a
10\% loss in peak S/N (Irwin 1985). Considering that PSF variations are generally much smaller
than this, filtering in {\sc SExtractor} is limited to constant kernels.
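For Gaussian profiles this insensitivity is easy to quantify: in 2-D with white noise, the peak S/N relative to the matched case is $2\sigma_s\sigma_h/(\sigma_s^2+\sigma_h^2)$ for a source of width $\sigma_s$ and a kernel of width $\sigma_h$ (our derivation, consistent with the $\approx 10\%$ figure quoted above):

```python
def relative_peak_snr(sigma_src, sigma_ker):
    """Peak S/N of a Gaussian source filtered with a Gaussian kernel,
    relative to the matched (sigma_ker == sigma_src) case, for white
    noise in 2-D. Our closed-form derivation, not a quoted formula."""
    return 2 * sigma_src * sigma_ker / (sigma_src**2 + sigma_ker**2)

# a kernel 50% wider than the PSF loses under 10% in peak S/N
loss = 1 - relative_peak_snr(1.0, 1.5)
```

The computed loss is about 8\%, which is why a single constant kernel is an acceptable compromise even when the PSF varies moderately across the field.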

\subsubsection{Non-linear filtering}
There are many situations in which convolution is of little help:
filtering of (strongly) non-Gaussian noise, extraction of specific image patterns, etc.
In those cases, one would like to extend the concept of a convolution kernel to that of a more
general stationary filter, able for instance to mimic boolean-like operations on pixels. What
one would like is thus a mapping from ${\mathbf R}^n$ to ${\mathbf R}$ around each pixel. But the
more general the filter, the more difficult it is to design ``by hand'' for each case, specifying
how input pixel \#i should be taken into account with respect to input pixel \#j to form the
output, etc. The solution to this is machine learning. Given a training set containing input and
output pixels, a machine-learning program will adapt its internal parameters in order to minimize
a ``cost function'' (generally a $\chi^2$ error) and converge toward the desired mapping function.
These parameters can then for example be reloaded by a ``read-only'' routine to provide the
actual filtering.

{\sc SExtractor} implements this kind of ``read-only'' functionality in the form of the so-called
``retina filtering''. The {\sc EyE}\footnote{\em Enhance Your Extraction} software (Bertin
1997) performs neural-network learning on input and output images to produce ``retina files''.
These files contain weights that describe the behaviour of the neural network. The neural network
can thus be seen as an ``artificial retina'' that takes its stimuli from a small rectangular array
of pixels and produces a response according to prior learning (for more details, see the {\sc EyE}
documentation). Typical applications of the retina include the identification of glitches.

\subsubsection{What is filtered, and what isn't}
Although filtering is a benefit for detection, it distorts profiles
and correlates the noise; it is therefore detrimental to most measurement tasks. Because of this,
filtering is applied on the fly'' to the image, and {\em directly} affects only the detection
process and the isophotal parameters described in \S\ref{chap:isoparam}. Other catalog parameters
are indirectly affected --- through the exact position of the barycenter and typical object extent
---, but the effect is considerably less. Obviously, in double-image mode, filtering is only
applied to the {\em detection}\, image.

``Virtual'' pixels that lie outside image boundaries are arbitrarily set to zero. This makes sense
since filtering occurs on a background-subtracted image. When weighting is applied
(\S\ref{chap:weight}), bad pixels (pixels with weight $<$ {\tt WEIGHT\_THRESH}) are interpolated
by default (\S\ref{chap:interp}) and should therefore not cause much trouble. It is recommended
not to turn off interpolation of bad pixels when filtering is on.

\subsubsection{Configuration parameters.}
Filtering is triggered when the {\tt FILTER} keyword is set to {\tt Y}. If active, a file with the name
specified by {\tt FILTER\_NAME} is searched for and loaded. Filtering with large retinas can be
extremely time-consuming. In many cases, one is only interested in filtering pixels whose values
stand out from the background noise. The {\tt FILTER\_THRESH} keyword can be given to specify the
range of pixel values within which retina-filtering will be applied, in units of the background noise
standard deviation. If one value is given, it is interpreted as a lower threshold. For instance:
\begin{verbatim}
FILTER_THRESH	3.0
\end{verbatim}
will allow filtering for pixel values exceeding $+3\sigma$ above the local background, whereas
\begin{verbatim}
FILTER_THRESH	-10.0,3.0
\end{verbatim}
will only allow filtering for pixel values between $-10\sigma$ and $+3\sigma$.
{\tt FILTER\_THRESH} has no effect on convolution.

The result of the filtering process can be verified through a {\tt FILTERED} check-image: see
\S\ref{chap:check}.
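The thresholded behaviour of {\tt FILTER\_THRESH} can be paraphrased in a few lines of Python (our reading of the rules above, not actual {\sc SExtractor} code; the image is assumed to be background-subtracted):

```python
import numpy as np

def selective_filter(image, filtered, sigma_bkg, thresh=(3.0,)):
    """Keep the (expensive) filter output only where the pixel value,
    in units of background sigma, passes the FILTER_THRESH test; copy
    the original pixel elsewhere. Illustration only."""
    t = np.asarray(image) / sigma_bkg
    if len(thresh) == 1:               # single value: lower threshold
        mask = t >= thresh[0]
    else:                              # two values: filter inside the range
        mask = (t >= thresh[0]) & (t <= thresh[1])
    return np.where(mask, filtered, image)
```

With {\tt thresh=(3.0,)} only pixels above $+3\sigma$ receive the filtered value, matching the first {\tt FILTER\_THRESH} example above.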

\subsubsection{CPU cost.}
The {\sc SExtractor} filtering routine is particularly optimized for small kernels. It thus
provides a convenient way of filtering large image data. On a 2GHz machine, a convolution by a
$5\times5$ kernel will contribute less than 1 second to the processing time of a $2048\times4096$
image. The numbers for non-linear (retina) filtering depend on the complexity of the neural
network, but can be a hundred times larger.

\subsubsection{Filter file formats.}
As described above, two kinds of filter
files are recognized by {\sc SExtractor}: convolution files (traditionally suffixed with
``{\tt .conv}''), and ``retina'' files (``{\tt .ret}'' extensions\footnote{In {\sc SExtractor},
file name extensions are just conventions; they are not used by the software to distinguish
between different file formats.}).

Retina files are written exclusively by the {\sc EyE} software, as FITS binary-tables.

Convolution files are in ASCII format. The following example shows the content of the
{\tt gauss\_2.0\_5x5.conv} file which can be found in the {\tt config/} sub-directory of the
{\sc SExtractor} distribution:
\begin{verbatim}
CONV NORM
# 5x5 convolution mask of a gaussian PSF with FWHM = 2.0 pixels.
0.006319 0.040599 0.075183 0.040599 0.006319
0.040599 0.260856 0.483068 0.260856 0.040599
0.075183 0.483068 0.894573 0.483068 0.075183
0.040599 0.260856 0.483068 0.260856 0.040599
0.006319 0.040599 0.075183 0.040599 0.006319
\end{verbatim}
The {\tt CONV} keyword appearing at the beginning of the first line
tells {\sc SExtractor} that the file contains the description of a
convolution mask (kernel). It can be followed by {\tt NORM} if the
mask is to be normalized to 1 before being applied, or {\tt NONORM}
otherwise\footnote{If the sum of the kernel coefficients happens to be
exactly zero, the kernel is normalized to variance unity.}. The
following lines should contain an equal number of kernel coefficients,
separated by $<$space$>$ or $<$TAB$>$ characters. Coefficients in the
example above are read from left to right and top to bottom,
corresponding to increasing {\tt NAXIS1} ($x$) and {\tt NAXIS2} ($y$)
in the image. Formatting is free, and number representations like {\tt
-0.14}, {\tt -0.1400}, {\tt -1.4e-1} or {\tt -1.4E-01} are equivalent.
The width of the kernel is set by the number of values per line, and
its height is given by the number of lines. Lines beginning with
``{\tt \#}'' are treated as comments.

\subsection{Thresholding}
Thresholding is applied to the background-subtracted, filtered image
to isolate connected groups of pixels. Each group defines the
approximate position and shape of a basic {\sc SExtractor} detection
that will be processed further in the pipeline. Groups are made of
pixels whose values exceed the local threshold and which touch each
other at their sides or corners (``8-connectivity'').

\subsubsection{Configuration parameters.}
Thresholding is mostly controlled through the {\tt DETECT\_THRESH},
{\tt DETECT\_MINAREA} and {\tt DETECT\_MAXAREA} keywords.

{\tt DETECT\_THRESH} sets the threshold value. If a single value is
given, it is interpreted as a threshold in units of the background's
standard deviation. For example:
\begin{verbatim}
DETECT_THRESH 1.5
\end{verbatim}
will set the detection threshold at 1.5$\sigma$ above the local
background. It is important to note that {\em the standard deviation
quoted here is that of the un{\tt FILTER}ed image, at the pixel
scale}. Hence, on images with white Gaussian background noise for
instance, a {\tt DETECT\_THRESH} of $3.0$ will be close to optimum if
low-pass {\tt FILTER}ing is turned off, but sub-optimum (too high) if
it is on. Conversely, if the background noise of the image is
intrinsically correlated from pixel to pixel, a {\tt DETECT\_THRESH}
of $3.0$ (with no {\tt FILTER}ing) will be too low and will result in
poor reliability of the extracted catalog.

Two numbers can be given as arguments to {\tt DETECT\_THRESH}, in
which case the first one is interpreted as an absolute threshold in
units of ``magnitudes per square arcsecond'', and the second as a
zero-point in the same units.
\begin{verbatim}
DETECT_THRESH 27.2,30.0
\end{verbatim}
will for example set the threshold at $10^{-0.4\,(27.2-30)} = 13.18$.
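In code, the conversion used in this two-argument form is simply (hypothetical helper name):

```python
def surface_brightness_threshold(mu, zero_point):
    """Convert the two-argument DETECT_THRESH form (surface brightness
    in mag per square arcsecond plus a zero-point in the same units)
    into a linear threshold, as in the example above."""
    return 10 ** (-0.4 * (mu - zero_point))
```

For the example above, {\tt surface\_brightness\_threshold(27.2, 30.0)} evaluates to $\approx 13.18$.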

{\tt DETECT\_MINAREA} sets the minimum number of pixels a group should
have to trigger a detection. Obviously this parameter can be used just
like {\tt DETECT\_THRESH} to detect only ``bright and big'' sources,
or to increase detection reliability. It is however more tricky to
manipulate at low detection thresholds because of the complex
interplay of object topology, noise correlations (including those
induced by filtering), and sampling. In most cases it is therefore
recommended to keep {\tt DETECT\_MINAREA} at a small value, typically
1 to 5 pixels, and let {\tt DETECT\_THRESH} and the filter define {\sc
SExtractor}'s sensitivity.
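The combined effect of {\tt DETECT\_THRESH} and {\tt DETECT\_MINAREA} can be sketched with SciPy's connected-component labelling (illustration only; {\sc SExtractor} uses its own segmentation code, and the filtering step is skipped here):

```python
import numpy as np
from scipy.ndimage import label

def detect(image, background, sigma, detect_thresh=1.5, minarea=5):
    """Threshold the background-subtracted image, group pixels with
    8-connectivity, and drop groups smaller than DETECT_MINAREA."""
    mask = image - background > detect_thresh * sigma
    # a full 3x3 structuring element = sides and corners (8-connectivity)
    labels, n = label(mask, structure=np.ones((3, 3)))
    for i in range(1, n + 1):
        if (labels == i).sum() < minarea:
            labels[labels == i] = 0
    return labels
```

Adding an upper area cut to the loop would reproduce the behaviour of {\tt DETECT\_MAXAREA} described below.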

{\tt DETECT\_MAXAREA}, on the other hand, sets the maximum number of pixels
a group may contain and still trigger a detection. This parameter may thus be
used in conjunction with {\tt DETECT\_MINAREA} in order to detect only objects
whose size is within a certain range. Note that, although large objects may
be removed from the catalogue by filtering out those with {\tt ISOAREAF\_IMAGE}
larger than some threshold, those detections would still appear in the
check-image. If large objects must not be present in the check-image,
{\tt DETECT\_MAXAREA} should be used to effectively exclude them from it.
See fig. \ref{fig:detect_maxarea_example} for an example.

%---------------------------------- Fig. segmentmeth --------------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=14cm]{ps/detect_maxarea_example.ps}}
\caption{Example of how the {\tt DETECT\_MAXAREA} parameter can be used in order
not to detect objects larger than a determined number of pixels.
{\em Left}: close-up of the original image.
{\em Center}: {\tt OBJECTS} check-image generated without {\tt DETECT\_MAXAREA}.
{\em Right}: the same {\tt OBJECTS} check-image, when generated with {\tt DETECT\_MAXAREA} = 100.
}
\label{fig:detect_maxarea_example}
\end{figure}

\subsection{Deblending}
\label{chap:deblending}
Each time an object extraction is completed, the connected set of
pixels passes through a sort of filter that tries to split it into
possible overlapping components. This situation occurs more frequently
when the field is crowded or when the detection threshold is set very
low. The deblending method adopted in {\sc SExtractor} is based on
{\em multi-thresholding}, and works on any kind of object; but it is
unable to deblend components that are so close that no saddle point is
present in their profile. However, as no assumption has to be made on
the shape of the objects, it is perfectly suited for galaxies as well
as for high galactic latitude stellar fields.

Typical problematic cases for deblending include patchy, extended {\bf
Sc} galaxies (which have to be considered as single entities), and
close or interacting pairs of optically faint galaxies (which have to
be considered as separate objects). Basically, the multi-thresholding
algorithm employs a multiple isophotal analysis technique similar to
those in use at the APM and the COSMOS machines (Beard, McGillivray
and Thanish 1991); first, each extracted set of connected
pixels is re-thresholded at $N$ levels linearly or exponentially
spaced between its primary extraction threshold and its peak value.
This gives us a sort of 2-dimensional ``model'' of the light
distribution within the object(s), which is stored in the form of a
tree structure (fig. \ref{figsegmentmeth}). Then the algorithm goes
downwards, from the tips of branches to the trunk, and decides at each
junction whether it shall extract two (or more) objects or continue
its way down. To meet the conditions described earlier, the following
simple decision criteria are adopted: at any junction threshold
$t_{i}$, any branch will be considered as a separate component if
\begin{enumerate}
\item[(1)] the integrated pixel intensity (above $t_{i}$) of the
branch is greater than a certain fraction $\delta_{c}$ of the total
intensity of the composite object.
\item[(2)] condition (1) is satisfied for at least one more branch at the same level $i$.
\end{enumerate}
Note that ideally, condition (1) is both flux- and scale-invariant.
However for faint, poorly resolved objects, the efficiency of the
deblending is limited mostly by seeing and sampling. From the analysis
of both small and extended galaxy images, a compromise value for the
contrast parameter $\delta_{c}$ $\sim$ 0.005 proved to be optimum.
This should normally prevent the separation of objects with a difference in
magnitude greater than $\approx 6$.
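This limit follows directly from the contrast parameter (a back-of-the-envelope consistency check, ours):

```python
import math

# A branch carrying a fraction delta_c of the composite flux is about
# -2.5 * log10(delta_c) magnitudes fainter than the blend as a whole.
delta_c = 0.005
delta_mag = -2.5 * math.log10(delta_c)   # roughly 5.75 mag
```

So with $\delta_c \sim 0.005$, components fainter than their host by roughly 6 magnitudes fall below the contrast criterion and are never split off.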

%---------------------------------- Fig. segmentmeth --------------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=10cm]{ps/segment.ps}}
\caption{ A schematic diagram of the method used to deblend a
composite object. The area profile of the object (smooth curve) can be
described in a tree-structured way (thick lines). The decision whether
to regard a branch as a distinct object is made according to
its relative integrated intensity (tinted area). In the case above,
the original object is split into two components A and B. Remaining
pixels are assigned to their most credible ``progenitors'' afterwards.
}
\label{figsegmentmeth}
\end{figure}

The outlying pixels with flux lower than the separation thresholds
have to be reallocated to the proper components of the merger. To do
so, we have opted for a {\em statistical} approach: at each faint
pixel we compute the contribution which is expected from each
sub-object using a bivariate Gaussian fit to its profile, and turn it
into a probability for that pixel to belong to the sub-object. For
instance, a faint pixel lying halfway between two close bright stars
having the same magnitude will be appended to one of these with equal
probabilities. One big advantage of this technique is that the
morphology of any object is completely defined simply through its list
of pixels.
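The reallocation step can be sketched as follows (illustrative Python with a hypothetical interface: each sub-object is represented by a callable returning its fitted-model contribution at a pixel, standing in for the bivariate Gaussian fit described above):

```python
import numpy as np

def assign_faint_pixels(pixels, models, rng=None):
    """Append each faint pixel to one sub-object, drawn with probability
    proportional to the flux each sub-object's model predicts there."""
    if rng is None:
        rng = np.random.default_rng()
    owners = []
    for (x, y) in pixels:
        contrib = np.array([m(x, y) for m in models], dtype=float)
        p = contrib / contrib.sum()
        owners.append(rng.choice(len(models), p=p))
    return owners
```

A pixel lying halfway between two equal-brightness components gets probability $0.5$ for each, reproducing the behaviour described in the text.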

To test the effects of deblending on photometry and astrometry
measurements, we made several simulations of photographic images of
double stars with different separations and magnitudes under typical
observational conditions (fig. \ref{figsegmentsim}). It is obvious
that multiple isophotal techniques fail when there is no saddle point
present in the profiles (i.e. for distances between stars $< 2 \sigma$ in
the case of Gaussian images). We measured a magnitude error $\leq 0.2$ mag and a centroid shift $\leq 0.4$ pixels for the
fainter star in the very worst cases, but no other systematic effects
were noticeable.

%---------------------------------- Fig. segmentsim ---------------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=10cm]{ps/sepsim_pos.ps}}
\centerline{\includegraphics[width=10cm]{ps/sepsim_mag.ps}}
\caption{
Centroid and corrected isophotal magnitude errors for a
simulated $19^{\rm th}$ magnitude star blended with an $11$, $15$, $19$ or
$21^{\rm st}$ mag.\ companion as a function of distance (expressed in
pixels). Lines stop at the left when the objects are too close to be
deblended. The dashed vertical line is the theoretical limit for
unsaturated stars with equal magnitudes. In the centroid plot, the
arrow indicates the direction of the neighbour. The simulation assumes
a 1-hour exposure with the CERGA telescope on a IIIaJ plate and Moffat
profiles with a seeing FWHM of 3 pixels ($2''$). }
\label{figsegmentsim}
\end{figure}

The user can control the multi-thresholding operation through 3
parameters. The first one is the number of deblending thresholds ({\tt
DEBLEND\_NTHRESH}). A good value is 32. Higher values are generally
useless, except perhaps for images having an unusually high dynamic
range. In case of memory problems, decreasing the number of thresholds
to, say, 8 or even fewer may be a solution. But then of course some
degradation of the deblending performance may occur. The second
parameter is the contrast parameter ({\tt DEBLEND\_MINCONT}). As
described above, values from 0.001 to 0.01 give best results. Putting
{\tt DEBLEND\_MINCONT} to 0 means that even the faintest local peaks
in the profile will be considered as separate objects. Putting it to 1
means that no deblending will be authorized. The last parameter
concerns the kind of scale used for the thresholds. If the image comes
from photographic material, then a linear scale has to be used ({\tt
DETECTION\_TYPE  PHOTO}). Otherwise, for an image obtained with a
linear device like a CCD, an exponential scale is more appropriate
({\tt DETECTION\_TYPE  CCD}).

\section{Weighting}
\label{chap:weight}
The noise level in astronomical images is often fairly constant, that
is, constant values for the gain, the background noise and the
detection thresholds can be used over the whole frame. Unfortunately,
in some cases, like strongly vignetted or composited images, this
approximation is no longer good enough. This leads to the detection of
clusters of noise peaks in the noisiest parts of the image,
or to missing obvious objects in the most sensitive ones. {\sc
SExtractor} is able to handle images with variable noise. It does so
through {\em weight maps}, which are frames having the same size as
the images where objects are detected or measured, and which describe
the noise intensity at each pixel. These maps are internally stored in
units of {\em absolute variance} (in ADU$^2$). We employ the generic
term ``weight map'' because these maps can also be interpreted as
quality index maps: infinite variance ($\ge 10^{30}$ by definition in
{\sc SExtractor}) means that the related pixel in the science frame is
totally unreliable and should be ignored. The variance format was
adopted as it linearizes most of the operations done over weight maps
(see below).

This means that the noise covariances between pixels are ignored.
Although raw CCD images have essentially white noise, this is not the
case for warped images, for which resampling may induce a strong
correlation between neighbouring pixels. In theory, all non-zero
covariances within the geometrical limits of the analysed patterns
should be taken into account to derive thresholds or error estimates.
Fortunately, the correlation length of the noise is often smaller than
the patterns to be detected or measured, and constant over the image.
In that case one can apply a simple ``fudge factor'' to the estimated
variance to account for correlations on small scales. This proves to
be a good approximation in general, although it certainly leads to
underestimations for the smallest patterns.

\subsection{Weight-map formats}
\label{chap:weighttype}
{\sc SExtractor} accepts as input, and converts to its internal
variance format, several types of weight-maps. This is controlled
through the {\tt WEIGHT\_TYPE} configuration keyword. These
weight-maps can either be read from a FITS file, whose name is
specified by the {\tt WEIGHT\_IMAGE} keyword, or computed internally.
Valid {\tt WEIGHT\_TYPE}s are:
\begin{itemize}
\item{\tt NONE}: No weighting is applied. The related {\tt
WEIGHT\_IMAGE} and {\tt WEIGHT\_THRESH} (see below) parameters are
ignored.
\item {\tt BACKGROUND}: the science image itself is used to compute
internally a variance map (the related {\tt WEIGHT\_IMAGE} parameter
is ignored). Robust ($3\sigma$-clipped) variance estimates are first
computed within the same background meshes as those described in
\S\ref{chap:backest}\footnote{The mesh-filtering procedures act on the
variance map, too.}. The resulting low-resolution variance map is then
bicubic-spline-interpolated on the fly to produce the actual full-size
variance map. A check-image with {\tt CHECKIMAGE\_TYPE} {\tt
MINIBACK\_RMS} can be requested to examine the low-resolution variance
map.
\item {\tt MAP\_RMS}: the FITS image specified by the {\tt
WEIGHT\_IMAGE} file name must contain a weight-map in units of
absolute standard deviations (in ADUs per pixel).
\item {\tt MAP\_VAR}: the FITS image specified by the {\tt
WEIGHT\_IMAGE} file name must contain a weight-map in units of
relative variance. A robust scaling to the appropriate absolute level
is then performed by comparing this variance map to an internal,
low-resolution, absolute variance map built from the science image
itself.
\item {\tt MAP\_WEIGHT}: the FITS image specified by the {\tt
WEIGHT\_IMAGE} file name must contain a weight-map in units of
relative weights. The data are converted to variance units (by
definition ${\rm variance} \propto 1/{\rm weight}$), and scaled as for
{\tt MAP\_VAR}. {\tt MAP\_WEIGHT} is the most commonly used type of
weight-map: a flat-field, for example, is generally a good
approximation to a perfect weight-map.
\end{itemize}
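The conversions to the internal absolute-variance format can be summarized in Python (a sketch under our simplifying assumption that the robust scaling step reduces to a single scale factor; {\tt NONE} and {\tt BACKGROUND} involve no input map and are omitted):

```python
import numpy as np

BIG = 1e30   # "infinite" variance marking unusable pixels

def to_variance(weight_map, weight_type, scale=1.0):
    """Convert a weight map to absolute variance, per WEIGHT_TYPE."""
    w = np.asarray(weight_map, dtype=float)
    if weight_type == "MAP_RMS":        # absolute standard deviations
        return w ** 2
    if weight_type == "MAP_VAR":        # relative variances, rescaled
        return scale * w
    if weight_type == "MAP_WEIGHT":     # variance proportional to 1/weight
        var = np.full_like(w, BIG)
        good = w > 0
        var[good] = scale / w[good]
        return var
    raise ValueError(weight_type)
```

Note how a zero weight maps onto the $10^{30}$ ``infinite variance'' convention, so that flat-fielded bad pixels are automatically excluded.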

\subsection{Weight threshold}
It may happen that some weights are too low (or variances too high) to be of any interest: it is then more appropriate
to discard such pixels than to include them in unweighted measurements such as {\tt FLUX\_APER}. To allow discarding these
very bad pixels, a threshold can be set with the {\tt WEIGHT\_THRESH} parameter. The unit in which this threshold
should be expressed is that of the input data: ADUs for
{\tt BACKGROUND} and {\tt MAP\_RMS} maps, uncalibrated ADUs$^2$ for {\tt MAP\_VAR}, and uncalibrated weight values for {\tt MAP\_WEIGHT} maps.
Depending on the weight-map type, the threshold will set a lower or a higher limit for bad pixel'' values: higher for
weights, and lower for variances and standard deviations. The default value is 0 for weights, and $10^{30}$ for variance and
standard deviation maps.

\subsection{Effect of weighting}
Weight-maps modify the working of {\sc SExtractor} in the following respects:
\begin{enumerate}
\item Bad pixels are discarded from the background statistics. If more than 50\% of the pixels in a background mesh are bad, the local background
value and its standard deviation are replaced by interpolation of the
nearest valid meshes.
\item The detection threshold $t$ above the local sky background is
adjusted for each pixel $i$ with variance $\sigma^2_i$: $t_i = {\tt DETECT\_THRESH}\times\sqrt{\sigma^2_i}$, where {\tt DETECT\_THRESH} is
expressed in units of standard deviations of the background noise.
Pixels with variance above the threshold set with the {\tt
WEIGHT\_THRESH} parameter are therefore simply not detected. This may
result in splitting objects crossed by a group of bad pixels.
Interpolation (see \S\ref{chap:interp}) should be used to avoid this
problem. If convolution filtering is applied for detection, the
variance map is convolved too. This yields optimum scaling of the
detection threshold in the case where noise is uncorrelated from pixel
to pixel. Non-linear filtering operations (like those offered by
artificial retinae) are not affected.
\item The {\tt CLEAN}ing process (\S\ref{chap:clean}) takes into
account the exact individual thresholds assigned to each pixel for
deciding about the fate of faint detections.
\item Error estimates like {\tt FLUXERR\_ISO}, {\tt ERRA\_IMAGE}, ...
make use of individual variances too. Local background-noise standard
deviation is simply set to $\sqrt{\sigma^2_i}$. In addition, if the
{\tt WEIGHT\_GAIN} parameter is set to {\tt Y} --- which is the
default ---, it is assumed that the local pixel gain (i.e., the
conversion factor from photo-electrons to ADUs) is inversely
proportional to $\sigma^2_i$, its median value over the image being
set by the {\tt GAIN} configuration parameter. In other words, it is
then supposed that the changes in noise intensity seen across the
image are due to gain changes. This is the most common case, arising
for instance from vignetting corrections or from variations in
coverage depth. When this is not the case, for instance when noise
variations are dominated by changes in the read-out noise, {\tt
WEIGHT\_GAIN} should be set to {\tt N}.
\item Finally, pixels with weights beyond {\tt WEIGHT\_THRESH} are
treated as bad pixels, and are therefore discarded from the
measurements.
\end{enumerate}

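The per-pixel threshold scaling of item 2 above can be illustrated as follows (a minimal sketch; the function and variable names are ours):

```python
import math

# Illustrative sketch: per-pixel detection thresholds derived from a
# variance map, with DETECT_THRESH in units of background-noise
# standard deviations, i.e. t_i = DETECT_THRESH * sqrt(var_i).
def detection_thresholds(variance_map, detect_thresh):
    return [[detect_thresh * math.sqrt(var) for var in row]
            for row in variance_map]

var_map = [[1.0, 4.0], [0.25, 1.0]]
t = detection_thresholds(var_map, 1.5)
# Pixels in noisier regions get proportionally higher thresholds.
assert t[0][1] == 3.0 and t[1][0] == 0.75
```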
\subsection{Combining weight maps}
All the weighting options listed in \S\ref{chap:weighttype} can be
applied separately to detection and measurement images
(\S\ref{chap:using}), even though some combinations may not always
make sense. For instance, the following set of configuration lines:
\begin{verbatim}
WEIGHT_IMAGE	rms.fits,weight.fits
WEIGHT_TYPE	MAP_RMS,MAP_WEIGHT
\end{verbatim}
will load the FITS file {\tt rms.fits} and use it as an RMS map for
adjusting the detection threshold and CLEANing, while the {\tt
weight.fits} weight map will only be used for scaling the error
estimates on measurements. This can be done in single- as well as in
dual-image mode (\S\ref{chap:using}). {\tt WEIGHT\_IMAGE}s can be
ignored for {\tt BACKGROUND} {\tt WEIGHT\_TYPE}s. It is of course
possible to use weight-maps for detection or for measurement only. The
following configuration:
\begin{verbatim}
WEIGHT_IMAGE	weight.fits
WEIGHT_TYPE	NONE,MAP_WEIGHT
\end{verbatim}
will apply weighting only for measurements; detection and CLEANing
operations will remain unaffected.

\subsection{Interpolation}
\label{chap:interp}
{\bf TBW}

\section{Flags}
A set of both {\em internal} and {\em external} flags is accessible
for each object. Internal flags are produced by the various detection
and measurement processes within {\sc SExtractor}; they tell for
instance if an object is saturated or has been truncated at the edge
of the image. External flags come from ``flag-maps'': these are images
with the same size as the one where objects are detected, where
integer numbers can be used to flag some pixels (for instance, ``bad''
or noisy pixels). Different combinations of flags can be applied
within the isophotal area that defines each object, to produce a
unique value that will be written to the catalog.

\subsection{Internal flags}
The internal flags are {\em always} computed. They are accessible
through the {\tt FLAGS} catalog parameter, which is a short integer.
{\tt FLAGS} contains, coded in decimal, all the extraction flags as a
sum of powers of 2:

\begin{tabular}{ll}
{\tt   1} &     The object has neighbours, bright and close enough to significantly bias the
{\tt MAG\_AUTO}\\
&     photometry\footnotemark{}, or bad pixels (more than 10\% of the integrated area
affected),\\
{\tt   2} &     The object was originally blended with another one,\\
{\tt   4} &     At least one pixel of the object is saturated (or very close to),\\
{\tt   8} &     The object is truncated (too close to an image boundary),\\
{\tt  16} &     Object's aperture data are incomplete or corrupted,\\
{\tt  32} &     Object's isophotal data are incomplete or corrupted\footnotemark{},\\
{\tt  64} &     A memory overflow occurred during deblending,\\
{\tt 128} &     A memory overflow occurred during extraction.
\end{tabular}

only when {\tt MAG\_AUTO} magnitudes are requested.}
{\sc SExtractor} V1.0, and has been kept for compatibility reasons.
With {\sc SExtractor} V2.0+, having this flag activated doesn't have
any consequence for the extracted parameters.}

For example, an object close to an image border may have {\tt FLAGS} =
16, and perhaps {\tt FLAGS} = 8+16+32 = 56.
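Decoding {\tt FLAGS} back into individual extraction flags is a simple bitmask operation, as this hypothetical helper illustrates (the flag meanings are those of the table above):

```python
# Decoding the FLAGS bitmask (an illustrative sketch, not SExtractor code).
FLAG_MEANINGS = {
    1: "biased neighbours / bad pixels",
    2: "originally blended",
    4: "saturated",
    8: "truncated at image boundary",
    16: "incomplete aperture data",
    32: "incomplete isophotal data",
    64: "deblending memory overflow",
    128: "extraction memory overflow",
}

def decode_flags(flags):
    """Return the list of power-of-2 flags contained in FLAGS."""
    return [bit for bit in FLAG_MEANINGS if flags & bit]

# The example from the text: FLAGS = 8 + 16 + 32 = 56.
assert decode_flags(56) == [8, 16, 32]
```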

\subsection{External flags}
{\sc SExtractor} understands that it must process external flags when
{\tt IMAFLAGS\_ISO} or {\tt NIMAFLAGS\_ISO} are present in the catalog
parameter file. It then looks for a FITS image specified by the {\tt
FLAG\_IMAGE} keyword in the configuration file. The FITS image must
contain the flag-map, in the form of a 2-dimensional array of 8, 16 or
32 bits integers. It must have the same size as the image used for
detection. Such flag-maps can be created using for example the {\bf
WeightWatcher} software (Bertin 1997).

The flag-map values for pixels that coincide with the isophotal area
of a given detected object are then combined, and stored in the
catalog as the long integer {\tt IMAFLAGS\_ISO}. Five kinds of
combination can be selected using the
{\tt FLAG\_TYPE} configuration keyword:
\begin{itemize}
\item {\tt OR}: the result is an arithmetic (bit-to-bit) {\bf OR} of flag-map pixels.
\item {\tt AND}: the result is an arithmetic (bit-to-bit) {\bf AND} of non-zero flag-map pixels.
\item {\tt MIN}: the result is the minimum of the (signed) flag-map pixels.
\item {\tt MAX}: the result is the maximum of the (signed) flag-map pixels.
\item {\tt MOST}: the result is the most frequent non-zero flag-map pixel-value.
\end{itemize}

The {\tt NIMAFLAGS\_ISO} catalog parameter contains a number of
relevant flag-map pixels: the number of non-zero flag-map pixels in
the case of an {\tt OR} or {\tt AND} {\tt FLAG\_TYPE}, or the number
of pixels with value {\tt IMAFLAGS\_ISO} if the {\tt FLAG\_TYPE} is
{\tt MIN},{\tt MAX} or {\tt MOST}.
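The five combination modes can be mimicked as follows (an illustrative sketch operating on a plain list of flag-map values, not SExtractor's implementation):

```python
from collections import Counter
from functools import reduce

# Sketch of the five FLAG_TYPE combination modes applied to the
# flag-map values that fall within one object's isophotal area.
def combine_flags(values, flag_type):
    nonzero = [v for v in values if v != 0]
    if flag_type == "OR":                      # bitwise OR of pixels
        return reduce(lambda a, b: a | b, nonzero, 0)
    if flag_type == "AND":                     # bitwise AND of non-zero pixels
        return reduce(lambda a, b: a & b, nonzero) if nonzero else 0
    if flag_type == "MIN":                     # minimum of signed values
        return min(values)
    if flag_type == "MAX":                     # maximum of signed values
        return max(values)
    if flag_type == "MOST":                    # most frequent non-zero value
        return Counter(nonzero).most_common(1)[0][0] if nonzero else 0
    raise ValueError(flag_type)

pixels = [0, 1, 3, 3, 4]
assert combine_flags(pixels, "OR") == 7
assert combine_flags(pixels, "MOST") == 3
```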

\section{Measurements}
Once sources have been detected and deblended, they enter the measurement phase.
There are in {\sc SExtractor} two categories of measurements. Measurements from the first
category are made on the isophotal object profiles. Only pixels above
the detection threshold are considered. Many of these isophotal measurements
(like {\tt X\_IMAGE}, {\tt Y\_IMAGE}, etc.) are necessary for the internal
operations of {\sc SExtractor} and are therefore executed even if they are not
requested. Measurements from the second category have access to all pixels of the image.
These measurements are generally more sophisticated and are done at a later stage
of the processing (after {\tt CLEAN}ing and {\tt MASK}ing).

\subsection{Positional parameters derived from the isophotal profile}
\label{chap:isoparam}
The following parameters are derived from the spatial distribution $\cal S$ of pixels detected
above the extraction threshold. {\em The pixel values $I_i$ are taken from the (filtered)
detection image}.

{\bf Note that, unless otherwise noted, all parameter names given
below are only prefixes. They must be followed by "{\tt\_IMAGE}" if
the results shall be expressed in pixel units (see \S..), or
"{\tt\_WORLD}" for World Coordinate System (WCS) units (see
\S\ref{astrom})}. Example: {\tt THETA} $\rightarrow$ {\tt
THETA\_IMAGE}. In all cases parameters are first computed in the image
coordinate system, and then converted to WCS if requested.

\subsubsection{Limits: {\tt XMIN}, {\tt YMIN}, {\tt XMAX}, {\tt YMAX}}
These coordinates define two corners of a rectangle which encloses the detected object:
\begin{eqnarray}
{\tt XMIN} & = & \min_{i \in {\cal S}} x_i,\\
{\tt YMIN} & = & \min_{i \in {\cal S}} y_i,\\
{\tt XMAX} & = & \max_{i \in {\cal S}} x_i,\\
{\tt YMAX} & = & \max_{i \in {\cal S}} y_i,
\end{eqnarray}
where $x_i$ and $y_i$ are respectively the x-coordinate and y-coordinate of pixel $i$.

\subsubsection{Barycenter: {\tt X}, {\tt Y}}
Barycenter coordinates generally define the position of the ``center'' of a source,
although this definition can be inadequate or inaccurate if its spatial profile shows
a strong skewness or very large wings. {\tt X} and {\tt Y} are simply computed
as the first order moments of the profile:
\begin{eqnarray}
{\tt X} & = & \overline{x} = \frac{\displaystyle \sum_{i \in {\cal S}}
I_i x_i}{\displaystyle \sum_{i \in {\cal S}} I_i},\\ {\tt Y} & = &
\overline{y} = \frac{\displaystyle \sum_{i \in {\cal S}} I_i
y_i}{\displaystyle \sum_{i \in {\cal S}} I_i}.
\end{eqnarray}
Actually, $x_i$ and $y_i$ are summed relative to
{\tt XMIN} and {\tt YMIN} in order to reduce roundoff errors in the summations.
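A minimal sketch of this computation, including the {\tt XMIN}/{\tt YMIN} offset trick (pixel triples and function names are ours, for illustration only):

```python
# First-order moments over the set S of above-threshold pixels.
# Pixels are (x, y, intensity) triples; positions are accumulated
# relative to (xmin, ymin) to limit roundoff, as described in the text.
def barycenter(pixels):
    xmin = min(x for x, _, _ in pixels)
    ymin = min(y for _, y, _ in pixels)
    flux = sum(i for _, _, i in pixels)
    xbar = sum(i * (x - xmin) for x, _, i in pixels) / flux + xmin
    ybar = sum(i * (y - ymin) for _, y, i in pixels) / flux + ymin
    return xbar, ybar

pixels = [(10, 20, 1.0), (11, 20, 3.0)]
x, y = barycenter(pixels)
assert abs(x - 10.75) < 1e-12 and y == 20.0
```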

\subsubsection{Position of the peak: {\tt XPEAK}, {\tt YPEAK}}
It is sometimes useful to have the position {\tt XPEAK},{\tt YPEAK} of
the pixel with maximum intensity in a detected object, for instance
when working with likelihood maps, or when searching for artifacts.
For better robustness, {\tt PEAK} coordinates are computed on {\em
filtered} profiles if available. On symmetrical profiles, {\tt PEAK}
positions and barycenters coincide within a fraction of a pixel ({\tt
XPEAK} and {\tt YPEAK} coordinates are quantized in steps of 1 pixel,
thus {\tt XPEAK\_IMAGE} and {\tt YPEAK\_IMAGE} are integers). This is
no longer true for skewed profiles; therefore a simple comparison
between {\tt PEAK} and barycenter coordinates can be used to identify
asymmetrical objects on well-sampled images.

\subsubsection{2nd order moments: {\tt X2}, {\tt Y2}, {\tt XY}}
(Centered) second-order moments are convenient for measuring the spatial spread of a source
profile. In {\sc SExtractor} they are computed with:
\begin{eqnarray}
{\tt X2} & = \overline{x^2} = & \frac{\displaystyle \sum_{i \in {\cal
S}} I_i x_i^2}{\displaystyle \sum_{i \in {\cal S}} I_i} -
\overline{x}^2,\\ {\tt Y2} & = \overline{y^2} = & \frac{\displaystyle
\sum_{i \in {\cal S}} I_i y_i^2}{\displaystyle \sum_{i \in {\cal S}}
I_i} - \overline{y}^2,\\ {\tt XY} & = \overline{xy} = &
\frac{\displaystyle \sum_{i \in {\cal S}} I_i x_i y_i}{\displaystyle
\sum_{i \in {\cal S}} I_i} - \overline{x}\,\overline{y}.
\end{eqnarray}
These expressions are more subject to roundoff errors than if the 1st-order moments were
subtracted before summing, but they allow both 1st- and 2nd-order moments to be computed in
one pass. Roundoff errors are nevertheless kept to a negligible level by measuring, here
again, all positions relative to {\tt XMIN} and {\tt YMIN}.
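The one-pass accumulation described above may be sketched as follows (illustrative only; raw moments are accumulated first, then the means are subtracted):

```python
# One-pass centred second-order moments, as in the equations above.
# Pixels are (x, y, intensity) triples.
def second_moments(pixels):
    sw = sx = sy = sxx = syy = sxy = 0.0
    for x, y, i in pixels:
        sw += i
        sx += i * x
        sy += i * y
        sxx += i * x * x
        syy += i * y * y
        sxy += i * x * y
    xbar, ybar = sx / sw, sy / sw
    return (sxx / sw - xbar ** 2,      # X2
            syy / sw - ybar ** 2,      # Y2
            sxy / sw - xbar * ybar)    # XY

pixels = [(0, 0, 1.0), (2, 0, 1.0)]
x2, y2, xy = second_moments(pixels)
assert x2 == 1.0 and y2 == 0.0 and xy == 0.0
```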

\subsubsection{Basic shape parameters: {\tt A}, {\tt B}, {\tt THETA}}
\label{chap:abtheta}
These parameters are intended to describe the detected object as an elliptical
shape. {\tt A} and {\tt B} are its semi-major and semi-minor axis lengths,
respectively. More precisely, they represent the maximum and minimum spatial
\rms dispersion of the object profile along any direction. {\tt THETA} is the
position-angle between the {\tt A} axis and the {\tt NAXIS1} image axis. It is
counted counter-clockwise. Here is how they are computed:

2nd-order moments can easily be expressed in a referential rotated from the
$x,y$ image coordinate system
by an angle +$\theta$:
$$\label{eq:varproj} \begin{array}{lcrrr}
\overline{x_{\theta}^2} & = & \cos^2\theta\:\overline{x^2} & +\,\sin^2\theta\:\overline{y^2} & -\,2 \cos\theta \sin\theta\:\overline{xy},\\
\overline{y_{\theta}^2} & = & \sin^2\theta\:\overline{x^2} & +\,\cos^2\theta\:\overline{y^2} & +\,2 \cos\theta \sin\theta\:\overline{xy},\\
\overline{xy_{\theta}} & = & \cos\theta \sin\theta\:\overline{x^2} & -\,\cos\theta \sin\theta\:\overline{y^2} & +\,(\cos^2\theta - \sin^2\theta)\:\overline{xy}.
\end{array}$$
One can find interesting angles $\theta_0$ for which the variance is
minimized (or maximized) along $x_{\theta}$:
$${\left.\frac{\partial \overline{x_{\theta}^2}}{\partial \theta} \right|}_{\theta_0} = 0,$$
$$2 \cos\theta_0 \sin\theta_0\:(\overline{y^2} - \overline{x^2}) + 2 (\cos^2\theta_0 - \sin^2\theta_0)\:\overline{xy} = 0.$$
If $\overline{y^2} \neq \overline{x^2}$, this implies:
$$\label{eq:theta0} \tan 2\theta_0 = 2 \frac{\overline{xy}}{\overline{x^2} - \overline{y^2}},$$
a result which can also be obtained by requiring the covariance
$\overline{xy_{\theta_0}}$ to be null.
Over the domain $[-\pi/2, +\pi/2[$, two different angles --- with opposite signs --- satisfy
(\ref{eq:theta0}).
By definition, {\tt THETA} is the position angle for which
$\overline{x_{\theta}^2}$ is {\em max}\,imized.
{\tt THETA} is therefore the solution to (\ref{eq:theta0}) that has the same sign as
the covariance $\overline{xy}$.
{\tt A} and {\tt B} can now simply be expressed as:
\begin{eqnarray}
{\tt A}^2 & = & \overline{x^2}_{\tt THETA},\ \ \ {\rm and}\\
{\tt B}^2 & = & \overline{y^2}_{\tt THETA}.
\end{eqnarray}
{\tt A} and {\tt B} can be computed directly from the 2nd-order moments, using the following
equations derived from (\ref{eq:varproj}) after some tedious algebra:
\begin{eqnarray}
\label{eq:aimage}
{\tt A}^2 & = & \frac{\overline{x^2}+\overline{y^2}}{2}
+ \sqrt{\left(\frac{\overline{x^2}-\overline{y^2}}{2}\right)^2 + \overline{xy}^2},\\
{\tt B}^2 & = & \frac{\overline{x^2}+\overline{y^2}}{2}
- \sqrt{\left(\frac{\overline{x^2}-\overline{y^2}}{2}\right)^2 + \overline{xy}^2}.
\end{eqnarray}
Note that {\tt A} and {\tt B} are exactly half the $a$ and $b$
parameters computed by the COSMOS image analyser (Stobie 1980, 1986).
Actually, $a$ and $b$ are defined by Stobie as the semi-major and
semi-minor axes of an elliptical shape with constant surface
brightness, which would have the same 2nd-order moments as the
analysed object.
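Equations (\ref{eq:aimage}) and following can be turned into a short routine; the sketch below also picks {\tt THETA} with the sign of the covariance, as required above (an illustration under our naming conventions, not SExtractor's code):

```python
import math

# Shape parameters A, B, THETA from centred second-order moments.
# THETA = atan2(2*xy, x2 - y2) / 2 lies in (-pi/2, pi/2] and carries
# the sign of the covariance xy, matching the convention in the text.
def shape_params(x2, y2, xy):
    p = (x2 + y2) / 2.0
    q = math.hypot((x2 - y2) / 2.0, xy)
    a = math.sqrt(p + q)                 # semi-major rms extent
    b = math.sqrt(max(p - q, 0.0))       # semi-minor rms extent
    theta = 0.5 * math.atan2(2.0 * xy, x2 - y2)
    return a, b, theta

a, b, theta = shape_params(4.0, 1.0, 0.0)
assert a == 2.0 and b == 1.0 and theta == 0.0
```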

\subsubsection{Ellipse parameters: {\tt CXX}, {\tt CYY}, {\tt CXY}}
\label{chap:cxx}
{\tt A}, {\tt B} and {\tt THETA} are not very convenient to use when,
for instance, one wants to know if a particular {\sc SExtractor}
detection extends over some position. For this kind of application,
three other ellipse parameters are provided; {\tt CXX}, {\tt CYY} and
{\tt CXY}. They do nothing more than describing the same ellipse, but
in a different way: the elliptical shape associated to a detection is
now parameterized as
$${\tt CXX} (x-\overline{x})^2 + {\tt CYY} (y-\overline{y})^2 + {\tt CXY} (x-\overline{x})(y-\overline{y}) = R^2,$$
where $R$ is a parameter which scales the ellipse, in units of {\tt A}
(or {\tt B}). Generally, the isophotal limit of a detected object is
well represented by $R\approx 3$ (Fig. \ref{fig:ellipse}). Ellipse
parameters can be derived from the 2nd order moments:
\begin{eqnarray}
{\tt CXX} & = & \frac{\cos^2 {\tt THETA}}{{\tt A}^2} + \frac{\sin^2 {\tt THETA}}{{\tt B}^2} =
\frac{\overline{y^2}}{\overline{x^2}\,\overline{y^2} - \overline{xy}^2},\\
{\tt CYY} & = & \frac{\sin^2 {\tt THETA}}{{\tt A}^2} + \frac{\cos^2 {\tt THETA}}{{\tt B}^2} =
\frac{\overline{x^2}}{\overline{x^2}\,\overline{y^2} - \overline{xy}^2},\\
{\tt CXY} & = & 2 \cos {\tt THETA} \sin {\tt THETA} \left( \frac{1}{{\tt A}^2} - \frac{1}{{\tt B}^2}\right) =
-2\,\frac{\overline{xy}}{\overline{x^2}\,\overline{y^2} - \overline{xy}^2}.
\end{eqnarray}
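Given {\tt A}, {\tt B} and {\tt THETA}, the first equalities above give the ellipse parameters directly, and the quadratic form provides a point-inclusion test at a chosen $R$ (an illustrative sketch, not SExtractor's code):

```python
import math

# Ellipse parameters from A, B, THETA, plus a point-inclusion test at
# a given scale R (R ~ 3 roughly matches the isophotal limit).
def ellipse_params(a, b, theta):
    c, s = math.cos(theta), math.sin(theta)
    cxx = c * c / a ** 2 + s * s / b ** 2
    cyy = s * s / a ** 2 + c * c / b ** 2
    cxy = 2.0 * c * s * (1.0 / a ** 2 - 1.0 / b ** 2)
    return cxx, cyy, cxy

def inside(cxx, cyy, cxy, dx, dy, r=3.0):
    """True if the point offset (dx, dy) from the barycenter lies
    within the ellipse scaled by r."""
    return cxx * dx * dx + cyy * dy * dy + cxy * dx * dy <= r * r

cxx, cyy, cxy = ellipse_params(2.0, 1.0, 0.0)
assert inside(cxx, cyy, cxy, 5.9, 0.0)       # just inside the R=3 isophote
assert not inside(cxx, cyy, cxy, 6.1, 0.0)
```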

%------------------------------ Fig. phot -----------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=16cm]{ps/ellipse.ps}}
\caption{
The meaning of basic shape parameters.
}
\label{fig:ellipse}
\end{figure}

\subsubsection{By-products of shape parameters: {\tt ELONGATION},
{\tt ELLIPTICITY}}
These parameters are dimensionless, and therefore take neither the
{\tt \_IMAGE} nor the {\tt \_WORLD} suffix. They are directly derived
from {\tt A} and {\tt B}:
\begin{eqnarray}
{\tt ELONGATION} & = & \frac{\tt A}{\tt B}\ \ \ \ \ \mbox{and}\\
{\tt ELLIPTICITY} & = & 1 - \frac{\tt B}{\tt A}.
\end{eqnarray}

\subsubsection{Position errors: {\tt ERRX2}, {\tt ERRY2}, {\tt ERRXY},
{\tt ERRA}, {\tt ERRB}, {\tt ERRTHETA}, {\tt ERRCXX}, {\tt ERRCYY},
{\tt ERRCXY}}
\label{chap:poserr}
Uncertainties on the position of the barycenter can be estimated using
photon statistics. Of course, this kind of estimate must be
considered as a lower bound on the real error, since it does not
include, for instance, the contribution of detection biases or the
contamination by neighbours. As {\sc SExtractor} does not currently
take into account possible correlations between pixels, the variances
are simply:
\begin{eqnarray}
{\tt ERRX2} & = {\rm var}(\overline{x}) = & \frac{\displaystyle
\sum_{i \in {\cal S}} \sigma^2_i (x_i-\overline{x})^2} {\displaystyle
\left(\sum_{i \in {\cal S}} I_i\right)^2},\\ {\tt ERRY2} & = {\rm
var}(\overline{y}) = & \frac{\displaystyle \sum_{i \in {\cal S}}
\sigma^2_i (y_i-\overline{y})^2} {\displaystyle \left(\sum_{i \in
{\cal S}} I_i\right)^2},\\ {\tt ERRXY} & = {\rm
cov}(\overline{x},\overline{y}) = & \frac{\displaystyle \sum_{i \in
{\cal S}} \sigma^2_i (x_i-\overline{x})(y_i-\overline{y})}
{\displaystyle \left(\sum_{i \in {\cal S}} I_i\right)^2}.
\end{eqnarray}
$\sigma_i$ is the flux uncertainty estimated for pixel $i$:
$$\sigma^2_i = {\sigma_B}^2_i + \frac{I_i}{g_i},$$
where ${\sigma_B}_i$ is the local background noise and $g_i$ the local
gain --- conversion factor --- for pixel $i$ (see
\S\ref{chap:weight} for more details). Semi-major axis {\tt ERRA}, semi-minor
axis {\tt ERRB}, and position angle {\tt ERRTHETA} of the
$1\sigma$ position error ellipse are computed from the covariance
matrix exactly as in \S\ref{chap:abtheta} for shape parameters:
\begin{eqnarray}
\label{eq:erra}
{\tt ERRA}^2 & = & \frac{{\rm var}(\overline{x})+{\rm var}(\overline{y})}{2}
+ \sqrt{\left(\frac{{\rm var}(\overline{x})-{\rm var}(\overline{y})}{2}\right)^2
+ {\rm cov}^2(\overline{x},\overline{y})},\\
\label{eq:errb}
{\tt ERRB}^2 & = & \frac{{\rm var}(\overline{x})+{\rm var}(\overline{y})}{2}
- \sqrt{\left(\frac{{\rm var}(\overline{x})-{\rm var}(\overline{y})}{2}\right)^2
+ {\rm cov}^2(\overline{x},\overline{y})},\\
\label{eq:errtheta}
\tan (2\times{\tt ERRTHETA}) & = & 2 \frac{{\rm cov}(\overline{x},\overline{y})}
{{\rm var}(\overline{x}) - {\rm var}(\overline{y})}.
\end{eqnarray}
And the ellipse parameters are:
\begin{eqnarray}
\label{eq:errcxx}
{\tt ERRCXX} & = & \frac{\cos^2 {\tt ERRTHETA}}{{\tt ERRA}^2} +
\frac{\sin^2 {\tt ERRTHETA}}{{\tt ERRB}^2} = \frac{{\rm
var}(\overline{y})}{\sqrt{\left(\frac{{\rm var}(\overline{x}) -{\rm
var}(\overline{y})}{2}\right)^2 + {\rm
cov}^2(\overline{x},\overline{y})}},\\
\label{eq:errcyy}
{\tt ERRCYY} & = & \frac{\sin^2
{\tt ERRTHETA}}{{\tt ERRA}^2} + \frac{\cos^2 {\tt ERRTHETA}}{{\tt
ERRB}^2} = \frac{{\rm var}(\overline{x})}{\sqrt{\left(\frac{{\rm
var}(\overline{x}) -{\rm var}(\overline{y})}{2}\right)^2 + {\rm
cov}^2(\overline{x},\overline{y})}},\\
\label{eq:errcxy}
{\tt ERRCXY} & = & 2 \cos {\tt
ERRTHETA}\sin {\tt ERRTHETA} \left( \frac{1}{{\tt ERRA}^2} -
\frac{1}{{\tt ERRB}^2}\right)\\ & = & -2 \frac{{\rm
cov}(\overline{x},\overline{y})}{\sqrt{\left(\frac{{\rm
var}(\overline{x}) -{\rm var}(\overline{y})}{2}\right)^2 + {\rm
cov}^2(\overline{x},\overline{y})}}.
\end{eqnarray}

\subsubsection{Handling of ``infinitely thin'' detections}
Apart from the mathematical singularities that can be found in some of
the above equations describing shape parameters (and which {\sc
SExtractor} handles, of course), some detections with very specific
shapes may yield quite unphysical parameters, namely null values for
{\tt B}, {\tt ERRB}, or even {\tt A} and {\tt ERRA}. Such detections
include single-pixel objects and horizontal, vertical or diagonal
lines which are 1-pixel wide. They will generally originate from
glitches, but very undersampled and/or low-S/N genuine sources may
also produce such shapes. How should they be handled?

For basic shape parameters, the following convention was adopted: if
the light distribution of the object falls on one single pixel, or
lies on a sufficiently thin line of pixels, which we translate
mathematically by
$$\label{eq:singutest} \overline{x^2}\,\overline{y^2} - \overline{xy}^2 < \rho^2,$$
then $\overline{x^2}$ and $\overline{y^2}$ are incremented by $\rho$.
$\rho$ is arbitrarily set to $1/12$: this is the variance of a
1-dimensional top-hat distribution with unit width. Therefore
$1/\sqrt{12}$ represents the typical minor-axis values assigned (in
pixels units) to undersampled sources in {\sc SExtractor}.

Positional errors are more difficult to handle, as objects with very
high signal-to-noise can yield extremely small position uncertainties,
just like singular profiles do. Therefore {\sc SExtractor} first
checks that (\ref{eq:singutest}) is true. If this is the case, a new
test is conducted:
$$\label{eq:singutest2} {\rm var}(\overline{x})\,{\rm var}(\overline{y}) - {\rm cov}^2(\overline{x}, \overline{y}) < \rho^2_e,$$
where $\rho_e$ is arbitrarily set to $\left( \sum_{i \in {\cal S}} \sigma^2_i \right) / \left(\sum_{i \in {\cal S}} I_i\right)^2$. If
(\ref{eq:singutest2}) is true, then ${\rm var}(\overline{x})$ and
${\rm var}(\overline{y})$ are incremented by $\rho_e$.
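The regularisation described above for basic shape parameters can be sketched as follows (illustrative only; $\rho = 1/12$ as in the text):

```python
# Regularisation of "infinitely thin" detections: if the moment matrix
# is nearly singular, X2 and Y2 are incremented by rho = 1/12, the
# variance of a unit-width top-hat distribution.
RHO = 1.0 / 12.0

def regularize_moments(x2, y2, xy):
    if x2 * y2 - xy ** 2 < RHO ** 2:
        x2 += RHO
        y2 += RHO
    return x2, y2, xy

# A single-pixel object: all moments are zero before regularisation,
# so the minor axis becomes 1/sqrt(12) pixels, as quoted in the text.
x2, y2, xy = regularize_moments(0.0, 0.0, 0.0)
assert x2 == RHO and y2 == RHO
```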

\subsection{Windowed positional parameters}
\label{chap:winparam}
Parameters measured within an object's isophotal limit are sensitive to
two main factors: 1) changes in the detection threshold, which
create a variable bias, and 2) irregularities in the object's
isophotal boundaries, which act as additional ``noise'' in the measurements.

Measurements performed through a {\em window} function (an {\em envelope}) do
not have such drawbacks. {\sc SExtractor} versions 2.4 and above implement
``windowed'' versions of most of the measurements described in
\S\ref{chap:isoparam}:

{\small
\begin{tabular}{ll}
Isophotal parameters & Equivalent windowed parameters \\
\hline
{\tt X\_IMAGE}, {\tt Y\_IMAGE} &
{\tt XWIN\_IMAGE}, {\tt YWIN\_IMAGE}\\
{\tt ERRA\_IMAGE}, {\tt ERRB\_IMAGE}, {\tt ERRTHETA\_IMAGE} &
{\tt ERRAWIN\_IMAGE}, {\tt ERRBWIN\_IMAGE}, {\tt ERRTHETAWIN\_IMAGE}\\
{\tt A\_IMAGE}, {\tt B\_IMAGE}, {\tt THETA\_IMAGE} &
{\tt AWIN\_IMAGE}, {\tt BWIN\_IMAGE}, {\tt THETAWIN\_IMAGE}\\
{\tt X2\_IMAGE}, {\tt Y2\_IMAGE}, {\tt XY\_IMAGE} &
{\tt X2WIN\_IMAGE}, {\tt Y2WIN\_IMAGE}, {\tt XYWIN\_IMAGE}\\
{\tt CXX\_IMAGE}, {\tt CYY\_IMAGE}, {\tt CXY\_IMAGE} &
{\tt CXXWIN\_IMAGE}, {\tt CYYWIN\_IMAGE}, {\tt CXYWIN\_IMAGE}\\
\hline
\end{tabular}}

The computations involved are roughly the same, except that the pixel values are
integrated within a circular Gaussian window instead of the object's
isophotal footprint.
The Gaussian window is scaled to each object; its FWHM is the diameter of the
disk that contains half of the object flux ($d_{50}$). Note that in double-image
mode (\S\ref{chap:using}), the window is scaled based on the {\em measurement}
image.

\subsubsection{Windowed centroid: {\tt XWIN}, {\tt YWIN}}
\label{chap:wincent}
This is an iterative process. The computation starts by initializing the
windowed centroid coordinates $\overline{x_{\tt WIN}}^{(0)}$ and
$\overline{y_{\tt WIN}}^{(0)}$ to their basic $\overline{x}$ and $\overline{y}$
isophotal equivalents, respectively. Then at each iteration $t$,
$\overline{x_{\tt WIN}}$ and $\overline{y_{\tt WIN}}$ are refined using:
\begin{eqnarray}
\label{eq:xwin}
{\tt XWIN}^{(t+1)} & = & \overline{x_{\tt WIN}}^{(t+1)}
= \overline{x_{\tt WIN}}^{(t)} + 2\,\frac{\sum_{r_i^{(t)} < r_{\rm max}}
w_i^{(t)} I_i \ (x_i  - \overline{x_{\tt WIN}}^{(t)})}
{\sum_{r_i^{(t)} < r_{\rm max}} w_i^{(t)} I_i},\\
\label{eq:ywin}
{\tt YWIN}^{(t+1)} & = & \overline{y_{\tt WIN}}^{(t+1)}
= \overline{y_{\tt WIN}}^{(t)} + 2\,\frac{\sum_{r_i^{(t)} < r_{\rm max}}
w_i^{(t)} I_i\ (y_i - \overline{y_{\tt WIN}}^{(t)})}
{\sum_{r_i^{(t)} < r_{\rm max}} w_i^{(t)} I_i},
\end{eqnarray}
where
$$w_i^{(t)} = \exp\left(-\frac{{r_i^{(t)}}^2}{2s_{\tt WIN}^2}\right),$$
with
$$r_i^{(t)} = \sqrt{\left(x_i - \overline{x_{\tt WIN}}^{(t)}\right)^2 + \left(y_i - \overline{y_{\tt WIN}}^{(t)}\right)^2}$$
and
$s_{\tt WIN} = d_{50} / \sqrt{8 \ln 2}$.
The process stops when the change in position between two iterations is less
than $2\times10^{-4}$ pixel, a condition which is generally achieved in about
3 to 5 iterations.
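The iteration defined by eqs. (\ref{eq:xwin}) and (\ref{eq:ywin}) can be sketched as follows (a simplified illustration: pixels are {\tt (x, y, flux)} triples, the $r_{\rm max}$ handling is crude, and all names are ours; this is not SExtractor's implementation):

```python
import math

# Iterative windowed centroid: Gaussian weights are recentred at each
# step until the positional shift drops below the tolerance.
def windowed_centroid(pixels, x0, y0, d50, r_max=1e9, tol=2e-4):
    s_win = d50 / math.sqrt(8.0 * math.log(2.0))   # FWHM -> Gaussian sigma
    for _ in range(100):
        num_x = num_y = den = 0.0
        for x, y, flux in pixels:
            r2 = (x - x0) ** 2 + (y - y0) ** 2
            if r2 >= r_max ** 2:
                continue
            w = math.exp(-r2 / (2.0 * s_win ** 2)) * flux
            num_x += w * (x - x0)
            num_y += w * (y - y0)
            den += w
        dx, dy = 2.0 * num_x / den, 2.0 * num_y / den
        x0, y0 = x0 + dx, y0 + dy
        if math.hypot(dx, dy) < tol:
            break
    return x0, y0

# A symmetric 3-pixel source: the centroid converges to the middle pixel.
pixels = [(9, 5, 1.0), (10, 5, 4.0), (11, 5, 1.0)]
x, y = windowed_centroid(pixels, 9.5, 5.0, d50=2.0)
assert abs(x - 10.0) < 1e-3 and y == 5.0
```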

Although their iterative nature slows down the processing a bit, windowed
parameters should be used instead of their isophotal equivalents whenever
possible, since the measurements they provide are much more precise
(Fig. \ref{fig:winpres}). The centroiding precision offered by
{\tt XWIN\_IMAGE} and {\tt YWIN\_IMAGE} is actually very close to that of
PSF-fitting on focused and properly sampled star images, and can also be
applied to galaxies. It has been verified that for isolated, Gaussian-like
PSFs, its accuracy is close to the theoretical limit set by image noise.

%------------------------------ Fig. winpres -----------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=8cm]{ps/sex_xpres.ps}
\includegraphics[width=8cm]{ps/sex_xw2pres.ps}}
\caption{Comparison between isophotal and windowed centroid measurement
accuracies on simulated, background noise-limited images.{\em Left}: histogram
of the difference between {\tt X\_IMAGE} and the simulation centroid in x.
{\em Right}: histogram of the difference between {\tt XWIN\_IMAGE} and the
simulation centroid in x.}
\label{fig:winpres}
\end{figure}

\subsubsection{Windowed 2nd order moments: {\tt X2WIN}, {\tt Y2WIN}, {\tt XYWIN}}
Windowed second-order moments are computed on the image data once the centering
process from \S{\ref{chap:wincent}} has converged:
\begin{eqnarray}
{\tt X2WIN} & = \overline{x_{\tt WIN}^2}
= & \frac{\sum_{r_i < r_{\rm max}} w_i I_i (x_i - \overline{x_{\tt WIN}})^2}
{\sum_{r_i < r_{\rm max}} w_i I_i},\\
{\tt Y2WIN} & = \overline{y_{\tt WIN}^2}
= & \frac{\sum_{r_i < r_{\rm max}} w_i I_i (y_i - \overline{y_{\tt WIN}})^2}
{\sum_{r_i < r_{\rm max}} w_i I_i},\\
{\tt XYWIN} & = \overline{xy_{\tt WIN}}
= & \frac{\sum_{r_i < r_{\rm max}} w_i I_i (x_i - \overline{x_{\tt WIN}})
(y_i - \overline{y_{\tt WIN}})}
{\sum_{r_i < r_{\rm max}} w_i I_i}.
\end{eqnarray}
Windowed second-order moments are typically about half as large as their
isophotal equivalents.

\subsubsection{Windowed ellipse parameters:
{\tt CXXWIN}, {\tt CYYWIN}, {\tt CXYWIN}}
They are computed from the windowed 2nd order moments exactly the same way as
in \S\ref{chap:cxx}.

\subsubsection{Windowed position errors: {\tt ERRX2WIN}, {\tt ERRY2WIN},
{\tt ERRXYWIN}, {\tt ERRAWIN}, {\tt ERRBWIN}, {\tt ERRTHETAWIN},
{\tt ERRCXXWIN}, {\tt ERRCYYWIN}, {\tt ERRCXYWIN}}
Windowed position errors are computed on the image data once the centering
process from \S{\ref{chap:wincent}} has converged. Assuming that noise is
uncorrelated among pixels, standard error propagation applied to
(\ref{eq:xwin}) and (\ref{eq:ywin}) gives us:
\begin{eqnarray}
{\tt ERRX2WIN} & = {\rm var}(\overline{x_{\tt WIN}})
= & 4\,\frac{\sum_{r_i < r_{\rm max}} w_i^2 \sigma^2_i (x_i-\overline{x_{\tt WIN}})^2}
{\left(\sum_{r_i < r_{\rm max}} w_i I_i\right)^2},\\
{\tt ERRY2WIN} & = {\rm var}(\overline{y_{\tt WIN}})
= & 4\,\frac{\sum_{r_i < r_{\rm max}} w_i^2 \sigma^2_i (y_i-\overline{y_{\tt WIN}})^2}
{\left(\sum_{r_i < r_{\rm max}} w_i I_i\right)^2},\\
{\tt ERRXYWIN} & = {\rm cov}(\overline{x_{\tt WIN}},\overline{y_{\tt WIN}})
= & 4\,\frac{\sum_{r_i < r_{\rm max}}
w_i^2 \sigma^2_i (x_i-\overline{x_{\tt WIN}})(y_i-\overline{y_{\tt WIN}})}
{\left(\sum_{r_i < r_{\rm max}} w_i I_i\right)^2}.
\end{eqnarray}

Semi-major axis {\tt ERRAWIN}, semi-minor
axis {\tt ERRBWIN}, and position angle {\tt ERRTHETAWIN} of the
$1\sigma$ position error ellipse are computed from the covariance
matrix elements ${\rm var}(\overline{x_{\tt WIN}})$,
${\rm var}(\overline{y_{\tt WIN}})$,
${\rm cov}(\overline{x_{\tt WIN}},\overline{y_{\tt WIN}})$,
exactly as in \S\ref{chap:poserr}: see eqs. (\ref{eq:erra}),
(\ref{eq:errb}), (\ref{eq:errtheta}), (\ref{eq:errcxx}), (\ref{eq:errcyy})
and (\ref{eq:errcxy}).

%\subsection{2D-model fitting}
%\subsubsection{Star/galaxy separation}
%With the local PSF and a noise model in hand, one can easily derive an optimum
%star/galaxy classifier. The problem was first addressed by Sebok (\cite{sebok})
%and Valdes (\cite{valdes}). If detections can be classified as either a star
%(s) or a galaxy (g), then the {\em a posteriori} probability for having a
%star, given the observed vector of pixel values $\vec{I}$ is given by the Bayes
%theorem:
%$$%P(s|\vec{I}) = \frac{P(\vec{I}|s)P(s)}{P(\vec{I}|s)P(s)+p(\vec{I}|g)P(g)}, %$$
%that is,
%$$%P(s|\vec{I}) = \frac{1}{1+\frac{P(\vecs{I}|g)}{P(\vecs{I}|s)}\frac{P(g)}{P(s)}}. %$$
%The probability for the detected object to be a star $p(s|\vec{I})$ depends
%on both the likelihood ratio $P(\vec{I}|g)/P(\vec{I}|s)$, and the ratio of
%{\em a priori} $P(g)/P(s)$. If we make the assumption that the measurement
%noise at pixel $i$ is additive, Gaussian with mean 0 and standard deviation
%$\sigma_i$, and statistically independent from its neighbours, then we have
%$$%P(\vec{I}|s) = \prod_i \frac{1}{\sqrt{2\pi}\sigma_i} % \exp -\frac{(I_i - S_i)^2}{2\sigma^2_i} %$$
%and
%$$%P(\vec{I}|g) = \prod_i \frac{1}{\sqrt{2\pi}\sigma_i} % \exp -\frac{(I_i - G_i)^2}{2\sigma^2_i} %$$
%where the $S_i$ and $G_i$ are the pixel values for the best-fitting galaxy and
%star models, respectively.

\subsection{Astrometry and {\tt WORLD} coordinates}
\label{astrom}
All {\sc SExtractor} measurements related to positions, distances and
areas in the image, like those described above, can also be expressed
in {\tt WORLD} coordinates in the output catalog. These parameters
simply have the {\tt \_WORLD} suffix appended to them instead of
{\tt \_IMAGE}. The conversion from {\tt IMAGE} to {\tt WORLD}
coordinates is presently performed by using information found in the
FITS header of the {\em measurement} image, even if the parameter is
originally computed from the {\em detection} image (like the basic
shape parameters for instance).

To understand how this is done in practice, let's have a general look
at the way the mapping from {\tt IMAGE} to {\tt WORLD} coordinates is
currently described in a FITS image header. First, a linear
transformation (most of the time involving only scaling, possibly
rotation, and more rarely shear) allows one to convert integer pixel
positions (1,2,...) for each axis to some ``projected'' coordinate
system. This is where you might want to stop if your {\tt WORLD}
system is just some kind of simple focal-plane coordinate-system (in
meters for instance), or a calibrated wavelength axis (spectrum).
Now, the FITS WCS (World Coordinate System) convention allows you to
apply to these ``projected coordinates'' a non-linear transformation,
which is in fact a de-projection back to ``local'' spherical
(celestial) coordinates. Many types of projections are allowed by the
WCS convention, but the traditional tangential (gnomonic) projection
is the most commonly used. The last step of the transformation is to
convert these local coordinates, still relative to a projection
reference point, to an absolute position in celestial longitude and
latitude, for instance right-ascension and declination. For this one
needs to know the reference frame of the coordinate system, which
often requires some information about the equinox or the observation
date. At this level, all transformations are matters of spherical
trigonometry.
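As an illustration of the two-step chain described above (linear step, then de-projection), here is a minimal sketch for the common tangential (TAN) case; the keyword names follow the FITS WCS convention, but the routine itself is ours, not SExtractor's (which relies on WCSlib):

```python
import math

# Minimal IMAGE -> WORLD sketch for a tangential (TAN) projection:
# a linear CD-matrix step to "projected" intermediate coordinates,
# then a gnomonic de-projection about the reference point CRVAL.
def tan_pix2world(x, y, crpix, crval, cd):
    # Linear step: pixel offsets to intermediate coordinates (degrees).
    dx, dy = x - crpix[0], y - crpix[1]
    xi = math.radians(cd[0][0] * dx + cd[0][1] * dy)
    eta = math.radians(cd[1][0] * dx + cd[1][1] * dy)
    # Gnomonic de-projection to celestial coordinates.
    ra0, dec0 = math.radians(crval[0]), math.radians(crval[1])
    den = math.cos(dec0) - eta * math.sin(dec0)
    ra = ra0 + math.atan2(xi, den)
    dec = math.atan2(math.sin(dec0) + eta * math.cos(dec0),
                     math.hypot(xi, den))
    return math.degrees(ra), math.degrees(dec)

# The reference pixel maps back to the reference coordinates.
ra, dec = tan_pix2world(100.0, 100.0, (100.0, 100.0), (150.0, -30.0),
                        [[-1e-4, 0.0], [0.0, 1e-4]])
assert abs(ra - 150.0) < 1e-9 and abs(dec + 30.0) < 1e-9
```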

\subsubsection{Celestial coordinates}
We will not describe here the transformations $(\alpha,\delta) = f(x,y)$ themselves. {\sc SExtractor} de-projections rely on the WCSlib
2.4 written by Mark Calabretta, and all the details concerning those
can be found in Greisen \& Calabretta (1995). In addition to the {\tt
\_WORLD} parameters, 3 purely angular world'' coordinates are
available in {\sc SExtractor}, expressed in decimal degrees:
\begin{enumerate}
\item{}{\tt \_SKY} coordinates: strictly identical to {\tt \_WORLD} coordinates, except that
the units are explicitly degrees. They correspond to sky coordinates in the
``native'' system without any precession correction, conversion, etc.
\item{}{\tt \_J2000} coordinates: precession corrections are applied in the FK5 system to
convert to J2000 coordinates if necessary.
\item{}{\tt \_B1950} coordinates: precession corrections are computed in the FK5 system and
transformation to B1950 is applied.
\end{enumerate}

Transformation to J2000 or B1950 is done without taking into account
proper motions, which are obviously unknown for the detected objects.
In both cases, epoch 2000.0 is assumed.

Here is a list of catalog parameters currently supporting angular coordinates:

{
%\tiny
\scriptsize
\tabcolsep 3pt
\begin{tabular}{lll}
Image  parameters & World parameters & Angular parameters \\
\hline
{\tt X\_IMAGE}, {\tt Y\_IMAGE} & {\tt X\_WORLD}, {\tt Y\_WORLD} & {\tt
ALPHA\_SKY}, {\tt DELTA\_SKY} \\
& & {\tt ALPHA\_J2000}, {\tt DELTA\_J2000} \\
& & {\tt ALPHA\_B1950}, {\tt DELTA\_B1950} \\
{\tt XWIN\_IMAGE}, {\tt YWIN\_IMAGE} &
{\tt XWIN\_WORLD}, {\tt YWIN\_WORLD} &
{\tt ALPHAWIN\_SKY}, {\tt DELTAWIN\_SKY} \\
& & {\tt ALPHAWIN\_J2000}, {\tt DELTAWIN\_J2000} \\
& & {\tt ALPHAWIN\_B1950}, {\tt DELTAWIN\_B1950} \\
{\tt XPEAK\_IMAGE}, {\tt YPEAK\_IMAGE} & {\tt XPEAK\_WORLD}, {\tt
YPEAK\_WORLD} & {\tt ALPHAPEAK\_SKY}, {\tt DELTAPEAK\_SKY} \\
& & {\tt ALPHAPEAK\_J2000}, {\tt DELTAPEAK\_J2000} \\
& & {\tt ALPHAPEAK\_B1950}, {\tt DELTAPEAK\_B1950} \\
{\tt X2\_IMAGE}, {\tt Y2\_IMAGE}, {\tt XY\_IMAGE} &
{\tt X2\_WORLD}, {\tt Y2\_WORLD}, {\tt XY\_WORLD} &\\
{\tt X2WIN\_IMAGE}, {\tt Y2WIN\_IMAGE}, {\tt XYWIN\_IMAGE} &
{\tt X2WIN\_WORLD}, {\tt Y2WIN\_WORLD}, {\tt XYWIN\_WORLD} &\\
{\tt CXX\_IMAGE}, {\tt CYY\_IMAGE}, {\tt CXY\_IMAGE} &
{\tt CXX\_WORLD}, {\tt CYY\_WORLD}, {\tt CXY\_WORLD} &\\
{\tt CXXWIN\_IMAGE},{\tt CYYWIN\_IMAGE},{\tt CXYWIN\_IMAGE} &
\multicolumn{2}{l}{{\tt CXXWIN\_WORLD}, {\tt CYYWIN\_WORLD}, {\tt CXYWIN\_WORLD}} \\
\hline
\end{tabular}
}

{\bf TBW}

\subsubsection{Use of the FITS keywords for astrometry}
{\bf TBW}

\subsection{Photometry}
\label{photometry}
{\sc SExtractor} currently offers four types of
magnitude: isophotal, {\em corrected-isophotal}, fixed-aperture and
{\em adaptive-aperture}. For all magnitudes, an additive ``zero-point''
correction can be applied with the {\tt MAG\_ZEROPOINT} keyword.
Note that for each {\tt MAG\_XXXX}, a magnitude error estimate {\tt MAGERR\_XXXX},
a linear {\tt FLUX\_XXXX} measurement and its error estimate {\tt FLUXERR\_XXXX}
are also available.

\paragraph{Isophotal magnitudes} ({\tt MAG\_ISO}) are computed simply, using the
detection threshold as the lowest isophote.

\paragraph{Corrected isophotal magnitudes} ({\tt MAG\_ISOCOR}) can be considered
as a quick-and-dirty way of retrieving the fraction of flux lost by isophotal magnitudes.
Although their use is now deprecated, they have been kept in {\sc SExtractor} 2.x and
above for compatibility with {\sc SExtractor} 1.
If we make the assumption that the intensity profiles of
the faint objects recorded on the plate are roughly Gaussian because
of atmospheric blurring, then the fraction $\eta = \frac{I_{\rm iso}}{I_{\rm tot}}$ of the total flux enclosed within a
particular isophote satisfies
$$\left(1-\frac{1}{\eta}\right ) \ln (1-\eta) = \frac{A\,t}{I_{\rm iso}} \label{eqisocor}$$
where $A$ is the area and $t$ the threshold related to
this isophote. Eq. (\ref{eqisocor}) is not analytically invertible,
but a good approximation to $\eta$ (error $< 10^{-2}$ for $\eta > 0.4$) can be obtained with the second-order polynomial fit:
$$\eta \approx 1 - 0.1961 \frac{A\,t}{I_{\rm iso}} - 0.7512 \left( \frac{A\,t}{I_{\rm iso}}\right)^2 \label{eq:isocor}$$
A ``total'' magnitude $m_{\rm tot}$ estimate is then
$$m_{\rm tot} = m_{\rm iso} + 2.5 \log \eta$$
Clearly this cheap correction works best with stars; and although it
has been shown to give tolerably accurate results with most disk galaxies,
it fails with ellipticals because of the broader wings of their
profiles.
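The correction above, i.e.\ the polynomial approximation to $\eta$ followed by the magnitude offset, can be sketched in a few lines. This is an illustrative re-implementation (the function name is ours), not {\sc SExtractor}'s actual code:

```python
import math

def isocor_magnitude(m_iso, flux_iso, area, threshold):
    """Corrected-isophotal "total" magnitude estimate (illustrative)."""
    # x = A*t / I_iso, the quantity appearing in the polynomial fit
    x = area * threshold / flux_iso
    # second-order approximation to eta (accurate for eta > 0.4)
    eta = 1.0 - 0.1961 * x - 0.7512 * x * x
    # m_tot = m_iso + 2.5 log10(eta); eta < 1, so the object gets brighter
    return m_iso + 2.5 * math.log10(eta)
```

Since $\eta < 1$, the correction always brightens the isophotal magnitude, as expected for flux lost below the threshold.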

\paragraph{Fixed-aperture magnitudes} ({\tt MAG\_APER}) estimate the
flux above the background within a circular aperture. The
diameter of the aperture in pixels ({\tt PHOTOM\_APERTURES}) is supplied
by the user (in fact it does not need to be an integer since each
``normal'' pixel is subdivided in $5\times 5$ sub-pixels before measuring the flux
within the aperture). If {\tt MAG\_APER} is provided as a vector {\tt MAG\_APER[}$n${\tt ]},
at least $n$ apertures must be specified with {\tt PHOTOM\_APERTURES}.
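The sub-pixel scheme can be illustrated with a simple sketch. This is a plain re-implementation of the idea, not {\sc SExtractor}'s code; note that it uses 0-indexed arrays, whereas FITS pixel coordinates start at 1:

```python
def aperture_flux(image, xc, yc, diameter, nsub=5):
    """Sum background-subtracted pixel values inside a circular aperture,
    splitting each pixel into nsub x nsub sub-pixels (illustrative)."""
    r2 = (diameter / 2.0) ** 2
    step = 1.0 / nsub
    flux = 0.0
    for iy, row in enumerate(image):
        for ix, value in enumerate(row):
            hit = 0
            for sy in range(nsub):
                for sx in range(nsub):
                    # center of the sub-pixel, in pixel coordinates
                    px = ix + (sx + 0.5) * step
                    py = iy + (sy + 0.5) * step
                    if (px - xc) ** 2 + (py - yc) ** 2 < r2:
                        hit += 1
            # each pixel contributes the fraction of sub-pixels inside
            flux += value * hit / (nsub * nsub)
    return flux
```

For a uniform unit image, the returned flux approaches the aperture area $\pi (d/2)^2$, which is why a non-integer diameter poses no problem.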

\paragraph{Automatic aperture magnitudes} ({\tt MAG\_AUTO}) are intended
to give the most precise estimate of ``total magnitudes'',
at least for galaxies.
{\sc SExtractor}'s automatic aperture photometry routine is inspired by Kron's
``first moment'' algorithm (1980). (1) We define an elliptical
aperture whose elongation $\epsilon$ and position angle $\theta$ are
defined by second order moments of the object's light distribution.
The ellipse is scaled to $R_{\rm max}\,\sigma_{\rm iso}$ ($6 \sigma_{\rm iso}$,
which corresponds roughly to 2 ``isophotal radii'').
(2) Within this aperture we compute the ``first moment'':
$$r_1 = \frac{\sum r\,I(r)}{\sum I(r)}$$
Kron (1980) and Infante (1987) have shown that for stars and galaxy
profiles convolved with Gaussian seeing, $\ge 90\%$ of the flux is
expected to lie within a circular aperture of radius $k r_1$ if $k = 2$, almost independently of their magnitude. This picture remains
unchanged if we consider an ellipse with $\epsilon k r_1$ and $k r_1 / \epsilon$ as principal axes. $k = 2$ defines a sort of balance between
systematic and random errors. By choosing a larger $k = 2.5$, the mean
fraction of flux lost drops from about 10\% to 6\%. When the
signal-to-noise ratio is low, the algorithm may pick an
erroneously small aperture. That is why the smallest
accessible aperture is bounded below by $R_{\rm min}$ (typically $R_{\rm min} = 3 - 4 \sigma_{\rm iso}$). The user has full control over the parameters $k$
and $R_{\rm min}$ through the configuration parameter {\tt PHOT\_AUTOPARAMS};
by default, {\tt PHOT\_AUTOPARAMS} is set to {\tt 2.5,3.5}.
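The steps above can be sketched as follows. For simplicity this toy version computes the first-moment radius in a circular rather than elliptical aperture, and the function names are ours, not {\sc SExtractor}'s:

```python
import math

def kron_radius_circular(image, xc, yc, r_max):
    """First-moment radius r1 = sum(r * I) / sum(I), over pixels within
    r_max of the centroid (circular simplification of Kron 1980)."""
    num = den = 0.0
    for iy, row in enumerate(image):
        for ix, value in enumerate(row):
            # distance of the pixel center from the object centroid
            r = math.hypot(ix + 0.5 - xc, iy + 0.5 - yc)
            if r <= r_max:
                num += r * value
                den += value
    return num / den

def auto_aperture(r1, k=2.5, r_min=3.5):
    """PHOT_AUTOPARAMS logic: scale r1 by k, bounded below by r_min."""
    return max(k * r1, r_min)
```

A point-like source gives $r_1 \approx 0$, so the lower bound $R_{\rm min}$ is what keeps the aperture sensible at low signal-to-noise.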

%------------------------------ Fig. phot -----------------------------
\begin{figure}[htbp]
\centerline{\includegraphics[width=12cm]{ps/simlostflux.ps}}
\caption{
Flux lost (expressed as a mean magnitude difference) with different
faint-object photometry techniques as a function of total magnitude (see text).
Only isolated galaxies (no blends) of the simulations have been considered.
}
\label{figphot}
\end{figure}

Aperture magnitudes are sensitive to crowding. In {\sc SExtractor}~1, {\tt MAG\_AUTO}
measurements were not very robust in that respect. It was therefore suggested to replace the
aperture magnitude by the corrected-isophotal one when an object is too close to its
neighbours (2 isophotal radii for instance).
This was done automatically when using the {\tt MAG\_BEST} magnitude:
${\tt MAG\_BEST} = {\tt MAG\_AUTO}$ when it is
sure that no neighbour can bias {\tt MAG\_AUTO} by more than 10\%,
or ${\tt MAG\_BEST} = {\tt MAG\_ISOCOR}$ otherwise.
Experience showed that the {\tt MAG\_ISOCOR} and {\tt MAG\_AUTO} magnitudes would lose about
the same fraction of flux on stars or compact galaxy profiles: around 0.06 mag for default
extraction parameters. The use of {\tt MAG\_BEST} is now deprecated as {\tt MAG\_AUTO}
measurements are much more robust in versions 2.x of {\sc SExtractor}. The first improvement
is a crude subtraction of all the neighbours which have been detected around the measured source.
The second improvement is an automatic correction of parts of the aperture that are suspected
of contamination by a neighbour, by mirroring the opposite, cleaner side of the measurement
ellipse if available (the {\tt MASK\_TYPE CORRECT} option, which is also the default).
Figure \ref{figphot} shows the mean loss of flux measured with
isophotal (threshold $= 24.4\ \mbox{mag.arcsec}^{-2}$), corrected
isophotal and automatic aperture photometry for simulated galaxies
on a typical Schmidt-survey $B_J$ plate image.

\paragraph{Photographic photometry}
In {\tt DETECT\_TYPE PHOTO} mode, {\sc SExtractor}
assumes that the response of the detector, over the dynamic range of
the image, is logarithmic. This is generally a good approximation for
photographic density on deep exposures. Photometric procedures
described above remain unchanged, except that for each pixel we apply
first the transformation
$$I = I_0\,10^{D/\gamma} \label{eq:dtoi}$$
where $\gamma$ ($=$~{\tt MAG\_GAMMA}) is the contrast index of the
emulsion, $D$ the original pixel value from the background-subtracted
image, and $I_0$ is computed from the magnitude zero-point $m_0$:
$$I_0 = \frac{\gamma}{\ln 10} \,10^{-0.4\, m_0}$$
One advantage of using a density-to-intensity transformation relative
to the local sky background is that it corrects (to some extent)
large-scale inhomogeneities in sensitivity (see Bertin 1996 for
details).
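The density-to-intensity conversion can be sketched as follows (illustrative only; the function and argument names are ours):

```python
import math

def density_to_intensity(d, gamma, m0):
    """Convert a background-subtracted photographic density D to an
    intensity I, assuming a logarithmic detector response."""
    # zero-point scaling: I0 = (gamma / ln 10) * 10**(-0.4 * m0)
    i0 = gamma / math.log(10.0) * 10.0 ** (-0.4 * m0)
    # logarithmic response: I = I0 * 10**(D / gamma)
    return i0 * 10.0 ** (d / gamma)
```

A density increase of one contrast index $\gamma$ thus corresponds to a factor 10 in intensity, as expected for a logarithmic response.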

\paragraph{Errors on magnitude}
An estimate of the error\footnote{Important: this error should be
considered only as a lower limit, since it does not take into account
the (complex) uncertainty on the local background estimate.} is
available for each type of magnitude. It is computed through
$$\Delta m = 1.0857\, \frac{\sqrt{A\sigma^2 + F/g}}{F}$$
where $A$ is the area (in pixels) over which the total flux $F$ (in
ADU) is summed, $\sigma$ the standard deviation of noise (in ADU)
estimated from the background, and $g$ the detector gain ({\tt GAIN}
parameter\footnote{Setting {\tt GAIN} to 0 in the configuration file
is equivalent to $g = +\infty$}, in $e^- / \mbox{ADU}$). For
corrected-isophotal magnitudes, a term derived from Eq. (\ref{eqisocor})
is quadratically added to account for the uncertainty
on the correction itself.
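A minimal sketch of this error estimate, including the {\tt GAIN}~$=0 \Rightarrow g = +\infty$ convention (the function name is ours):

```python
import math

def magnitude_error(area, sigma, flux, gain):
    """dm = 1.0857 * sqrt(A * sigma**2 + F / g) / F  (CCD case)."""
    # background-noise term: A pixels, each with variance sigma**2
    noise2 = area * sigma * sigma
    # photon-noise term; GAIN = 0 means g = +infinity (term vanishes)
    if gain > 0.0:
        noise2 += flux / gain
    return 1.0857 * math.sqrt(noise2) / flux
```

The factor 1.0857 is $2.5/\ln 10$, converting a relative flux error into magnitudes.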

In {\tt DETECT\_TYPE PHOTO} mode, things are slightly more complex. Making the
assumption that plate-noise is the major contributor to photometric
errors, and that it is roughly constant in density, we can write:
$$\Delta m = 1.0857 \,\ln 10\, {\sigma\over \gamma}\, \frac{\sqrt{\sum_{x,y}{I^2(x,y)}}}{\sum_{x,y}I(x,y)}$$
where $I(x,y)$ is the contribution of pixel $(x,y)$ to the total flux
(Eq. \ref{eq:dtoi}). The {\tt GAIN} is ignored in {\tt PHOTO} mode.

\paragraph{Background} is the last point relative to photometry.
The assumption made in \S \ref{chap:backest} --- that the
``local'' background associated with an object can be interpolated
from the global background map --- is no longer valid
in crowded regions. An example is a globular cluster superimposed
on the bulge of a galaxy. {\sc SExtractor} offers the possibility of
estimating locally the background used to compute magnitudes.
When this option is switched on ({\tt BACKPHOTO\_TYPE LOCAL} instead of
{\tt GLOBAL}), the ``photometric'' background is
estimated within a rectangular ``annulus'' around the isophotal
limits of the object. The thickness of the annulus (in pixels)
can be specified by the user with {\tt BACKPHOTO\_SIZE}. A typical value is
{\tt BACKPHOTO\_SIZE}=24.
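For example, switching to local background estimation amounts to two lines in the configuration file:

```
BACKPHOTO_TYPE   LOCAL    # estimate the photometric background in an annulus
BACKPHOTO_SIZE   24       # thickness of the annulus, in pixels
```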

\subsection{Cross-identification within {\sc SExtractor}}
{\sc SExtractor} allows one to perform on-line cross-identification of
each detection with an {\tt ASCII} list. Although the
cross-identification algorithm is not very sophisticated --- it works
in pixel-coordinates only ---, it is particularly convenient for
assessing {\sc SExtractor} performance, on image simulations for
instance. Configuration parameters related to cross-identification are
prefixed with {\tt ASSOC}.

\subsubsection{The {\tt ASSOC} list}
The {\tt ASSOC} process is initiated by requesting in the parameter
file at least one of the {\tt ASSOC} catalog parameters: {\tt
VECTOR\_ASSOC} and {\tt NUMBER\_ASSOC}. Then {\sc SExtractor} looks
for an {\tt ASCII} file (let's call it the {\tt ASSOC} list) whose
file name has to be specified by the {\tt ASSOC\_NAME} configuration
keyword. The {\tt ASSOC} list must contain columns of numbers
separated by spaces or tabs. Each line describes a source that will
enter the cross-identification process. Empty lines and lines
beginning with ``{\tt \#}'' (for comments) are ignored. This means you
may use any {\tt ASCII} catalog generated by a previous {\sc
SExtractor} run as an {\tt ASSOC} list.

To perform the cross-identification, {\sc SExtractor} needs to know
which columns contain the $x$ and $y$
coordinates\footnote{The $x$ and $y$ coordinates must comply with the
FITS (and {\sc SExtractor}) convention: by definition, the center of
the first pixel in the image array has pixel-coordinates (1.0,1.0).}
in the {\tt ASSOC} list. These shall be specified using the {\tt
ASSOC\_PARAMS} configuration parameter. The syntax is: ``{\tt
ASSOC\_PARAMS}~$c_x$,$c_y[$,$c_Z]$'', where $c_x$ and $c_y$ are the
positions of the columns containing the $x$ and $y$ coordinates (the
first column has position 1). $c_Z$ (optional) specifies an extra
column containing some ``$Z$'' parameter that may be used for
controlling or weighting the {\tt ASSOC} process. $Z$ will typically
be a flux estimate. $c_Z$ is required if {\tt ASSOC\_TYPE} is {\tt
MIN}, {\tt MAX}, {\tt MEAN} or {\tt MAG\_MEAN} (see below).
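For instance, for a hypothetical list {\tt reference.list} with $x$ and $y$ in columns 1 and 2 and a flux estimate in column 3, the configuration would contain:

```
ASSOC_NAME    reference.list   # the ASSOC list (hypothetical file name)
ASSOC_PARAMS  1,2,3            # columns holding x, y and the optional Z
```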

\subsubsection{Controlling the {\tt ASSOC} process}
Two configuration parameters control the {\tt ASSOC} process. The
first one, {\tt ASSOC\_RADIUS}, accepts a decimal number which
represents the maximum distance (in pixels) one should have between
the barycenter of the current {\sc SExtractor} detection and an {\tt
ASSOC}-list member to consider a match. This number must of course
account for positional uncertainties in both catalogs. In most cases,
a value of a few pixels will do just fine. The second configuration
parameter, {\tt ASSOC\_TYPE}, accepts a keyword as argument and
selects the kind of identification procedure one wants to operate:
\begin{itemize}
\item {\tt FIRST}: this is the simplest way of performing a
cross-identification. It does not require the $c_Z$ column in {\tt
ASSOC\_PARAMS}. The first geometrical match encountered while scanning
the {\tt ASSOC} list is retained as the actual match. This can be used
for catalogs with a low spatial density.
\item {\tt NEAREST}: this option does not require the $c_Z$ column in
{\tt ASSOC\_PARAMS}. The match is performed with the {\tt ASSOC}-list
member closest (in position) to the current detection, provided
that it lies within the {\tt ASSOC\_RADIUS}.
\item {\tt SUM}: all parameters issued from {\tt ASSOC}-list members
which geometrically match the current detection are summed. $c_Z$ is
not required.
\item {\tt MAG\_SUM}: all parameters $c_i$ issued from {\tt
ASSOC}-list members which geometrically match the current detection
are combined using the following law: $-2.5\log(\sum_i 10^{-0.4 c_i})$.
This option allows one to sum flux contributions from magnitude data.
$c_Z$ is not required.
\item {\tt MIN}: among all geometrical matches, retains the {\tt
ASSOC}-list member which has the smallest $Z$ parameter.
\item {\tt MAX}: among all geometrical matches, retains the {\tt
ASSOC}-list member which has the largest $Z$ parameter.
\item {\tt MEAN}: all parameters issued from {\tt ASSOC}-list members
which geometrically match the current detection are weighted-averaged,
using the $Z$ parameter as the weight.
\item {\tt MAG\_MEAN}: all parameters issued from {\tt ASSOC}-list
members which geometrically match the current detection are
weighted-averaged, using $10^{-0.4Z}$ as the weight. This option is
useful for weighting catalog sources with magnitudes.
\end{itemize}
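As an illustration, the {\tt MAG\_SUM} combination law can be written as the following sketch (ours, not {\sc SExtractor}'s code):

```python
import math

def mag_sum(mags):
    """Combine magnitudes of all matches by summing their fluxes:
    -2.5 * log10( sum_i 10**(-0.4 * c_i) )."""
    return -2.5 * math.log10(sum(10.0 ** (-0.4 * m) for m in mags))
```

Two equal magnitudes combine to a value $2.5\log 2 \approx 0.75$~mag brighter, i.e.\ twice the flux.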

\subsubsection{Output from {\tt ASSOC}}
Now that we have described the cross-identification process, let's see
how information coming from the matching with the {\tt ASSOC} list
are propagated to the output {\sc SExtractor} catalog.

The output of {\tt ASSOC} data in the {\sc SExtractor} catalog is done
through the {\tt VECTOR\_ASSOC()} catalog parameter. {\tt
VECTOR\_ASSOC()} is a vector, each element of which refers to a column
from the input {\tt ASSOC} list. {\tt VECTOR\_ASSOC()} contains either
{\tt ASSOC}-list member data from the best match (if {\tt ASSOC\_TYPE}
is {\tt FIRST}, {\tt NEAREST}, {\tt MIN} or {\tt MAX}), or a
combination of {\tt ASSOC}-list member data (if {\tt ASSOC\_TYPE} is
{\tt MEAN}, {\tt MAG\_MEAN}, {\tt SUM} or {\tt MAG\_SUM}). If no match
has been found, it just contains zeros. The {\tt NUMBER\_ASSOC}
parameter contains the number of {\tt ASSOC}-list members that geometrically
match the current {\sc SExtractor} detection, and obviously, if
different from zero, indicates that {\tt VECTOR\_ASSOC()} has a
meaningful content.

The {\tt ASSOC\_DATA} configuration parameter tells {\sc
SExtractor} which {\tt ASSOC}-list column each element of {\tt
VECTOR\_ASSOC()} refers to. The syntax of {\tt ASSOC\_DATA} is similar to that
of {\tt ASSOC\_PARAMS}: ``{\tt ASSOC\_DATA} $c_1$,$c_2$,$c_3$,...''
where the $c_i$ are the column positions in the {\tt ASSOC} list. The
special case ``{\tt ASSOC\_DATA} 0'' tells {\sc SExtractor} to
propagate all columns from the {\tt ASSOC} file to the output catalog.

There are situations where it might be desirable to keep in the output
{\sc SExtractor} catalog only those detections that were matched with
some {\tt ASSOC}-list member. Such a feature is controlled by the {\tt
ASSOCSELEC\_TYPE} configuration parameter, which accepts one of the
three following keywords:
\begin{itemize}
\item {\tt ALL}: keep all {\sc SExtractor} detections, regardless of
matching. This is the default.
\item {\tt MATCHED}: keep only {\sc SExtractor} detections that were
matched with at least one {\tt ASSOC}-list member.
\item {\tt -MATCHED}: keep only {\sc SExtractor} detections that were
not matched with any {\tt ASSOC}-list member.
\end{itemize}
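Putting the pieces together, a complete {\tt ASSOC} setup might look like this illustrative configuration excerpt; the list file name and column numbers are hypothetical:

```
ASSOC_NAME       reference.list  # ASCII list used for cross-identification
ASSOC_PARAMS     1,2,3           # columns for x, y and the Z parameter
ASSOC_RADIUS     2.0             # matching radius, in pixels
ASSOC_TYPE       NEAREST         # keep the closest geometrical match
ASSOC_DATA       1,2,3           # columns copied into VECTOR_ASSOC()
ASSOCSELEC_TYPE  MATCHED         # keep only matched detections
```

Remember that the parameter file must also request {\tt VECTOR\_ASSOC()} and/or {\tt NUMBER\_ASSOC}, otherwise the {\tt ASSOC} process is not initiated.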

\paragraph{Acknowledgements}

\begin{thebibliography}{99}
\bibitem{beard:al} Beard~S.M., McGillivray~H.T., Thanisch~P.F., 1990,
{\it MNRAS} {\bf 247}, 311
\bibitem{bertineye} Bertin~E., E.y.E~1.1, User's manual, 1997, Leiden
\bibitem{bertinww} Bertin~E., WeightWatcher~1.2, User's manual, 1997, ESO
\bibitem{bijaoui:dantel} Bijaoui~A., Dantel~M., 1991,
{\it A\&A} {\bf 6}, 51
\bibitem{bijaoui:al} Bijaoui~A., Slezak~E., Vandame~B., 1998, in
{\it Astrophysics and Algorithms: a DIMACS Workshop on Massive Astronomical Data Sets}
\bibitem{dalcanton:al} Dalcanton~J.J., Spergel~D.N., Gunn~J.E., Schmidt~M., Schneider~D.P., 1997,
{\it AJ}, {\bf 114}, 635
\bibitem{das} Das~P.K., 1991,
{\it Optical Signal Processing},
(Springer-Verlag)
\bibitem{greisen:calabretta} Greisen~E.W., Calabretta~M., 1995, ADASS 4, 233
\bibitem{infante} Infante~L., 1987,
{\it A\&A} {\bf 183}, 177
\bibitem{irwin} Irwin~M.J., 1985,
{\it MNRAS} {\bf 214}, 575
\bibitem{jarvis:tyson} Jarvis~J.J., Tyson~J.A., 1981,
{\it AJ}, {\bf 86}, 476
\bibitem{kaiser:al} Kaiser~N., Squires~G., Broadhurst~T., 1995,
{\it ApJ}, {\bf 449}, 460
\bibitem{kendall:stuart} Kendall~M., Stuart~A., 1977,
{\it The Advanced Theory of Statistics}, {\bf Vol. 1},
(Charles Griffin \& Co., London)
\bibitem{kron} Kron~R.G., 1980,
{\it ApJS} {\bf 43}, 305
\bibitem{lutz} Lutz~R.K., 1979,
{\it The Computer Journal} {\bf 23}, 262
\bibitem{moffat} Moffat~A.F.J., 1969,
{\it A\&A} {\bf 3}, 455
\bibitem{wells:al} Wells~D.C., Greisen~E.W., Harten~R.H., 1981,
{\it A\&AS} {\bf 44}, 363
\end{thebibliography}

\appendix
\section{Appendices}
Fairly often, I am asked by users about the reason for some limitations or
choices in the way things are done in {\sc SExtractor}.  In this section, I try to justify them.

Q: {\bf {\sc SExtractor} supports WCS. So why isn't it possible to have the {\tt ASSOC} cross-identification
working in $\alpha,\delta$ (or any other world-coordinates)?}

A: The {\tt ASSOC} list which is used for cross-identification can be very long (100,000 objects or more).
Performing an exhaustive cross-id in real-time can therefore be extremely slow, unless the {\tt ASSOC}
coordinates are sorted in some way beforehand. In pixel coordinates, such a sorting is simple and very
efficient, as {\sc SExtractor} works line-by-line; but it would be much more difficult in the general WCS context.
This is why this hasn't been implemented, considering it as beyond the scope of {\sc SExtractor}.

Q: {\bf Why isn't the detection threshold expressed in units of the background noise standard deviation
in the {\tt FILTER}{\em ed} image ?}

A: There are two reasons for this. First, it makes the threshold independent of the choice of a {\tt FILTER},
which is a good thing. Second, having $\sigma$ measured on the {\tt FILTER}ed image might give
uninformed users the wrong impression that increasing the filtering systematically improves the
detectability of any source, whereas detectability actually depends on the scale of the source.

\end{document}