\documentclass[letterpaper,11pt]{article} %sig-alternate}
\usepackage{bbm}
\usepackage{subfigure}
\usepackage{amssymb}
\usepackage{epsfig}
\usepackage{algorithm,algorithmic}
%\usepackage{mathptmx}
\usepackage{times}
\usepackage{courier}
\usepackage{helvet}
\usepackage{url}
%%%%%%%%%%%%% macros and definitions.
\usepackage{fullpage}
\newtheorem{conjecture}{Conjecture}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{observation}{Observation}
\newtheorem{fact}{Fact}
\def\A{{\cal A}}
\def\B{{\cal B}}
\def\D{{\cal D}}
\def\E{{\cal E}}
\def\L{{\cal L}}
\def\M{{\cal M}}
\def\S{{\cal S}}
\def\Q{{\cal Q}}
\def\W{{\cal W}}
\def\SR{{\mathbb{R}}}
\def\MT{{\mathfrak{T}}}
\def\ME{{\mathfrak{E}}}
\def\MP{{\mathfrak{P}}}
\def\MM{{\mathfrak{M}}}
\def\cdim{{\rm cdim}}
\def\a{\alpha}
%\setlength{\textwidth}{6.75in} %{6.5in}
%\setlength{\topmargin}{-0.5in} %{0.5in}
%\setlength{\oddsidemargin}{-0.1250in}
%\setlength{\textheight}{9.0in} %{8.5in}
\begin{document}
\title{Local Feature Extraction for Partial Matching}
\newfont{\smallemail}{phvr at 11pt}
\author{
\begin{tabular}{ccc}
{{Dmitriy Bespalov}$^\dag$} & {{Ali Shokoufandeh}$^\dag$} & {{William C. Regli}$^{\dag\ddag}$} \\
\end{tabular}\\ \\
{Department of Computer Science$^\dag$}\\
{Department of Mechanical Engineering \& Mechanics$^\ddag$}\\
{College of Engineering}\\
{Drexel University}\\
{3141 Chestnut Street}\\
{Philadelphia, PA 19104}
}
\date{\today}
\maketitle
\begin{abstract}
A primary shortcoming of existing techniques for 3D model matching
is their reliance on global information about a model's structure. Models are
matched in their entirety, depending on overall topology and geometry
information. A current open challenge is how
to perform partial matching. Partial matching is important for finding
similarities across part models with different global shape
properties and for segmentation and matching of data acquired from
3D scanners.
This paper presents a Scale-Space feature extraction technique
based on recursive decomposition of polyhedral surfaces into surface patches.
The experimental results presented in this paper suggest that this technique
can potentially be used to perform matching based on local model structure.
Scale-Space decomposition has been successfully used to extract features from
mechanical artifacts. Scale-Space techniques can be parameterized
to generate decompositions that correspond to manufacturing,
assembly or surface features relevant to mechanical design. One
application of these techniques is to support matching and
content-based retrieval of solid models.
This paper shows how a Scale-Space technique can extract features
that are invariant with respect to the global structure of the model as well as
noise. In order to accomplish this, we introduce a new distance
function defined on triangles instead of points.
We believe this technique offers a new way to control the feature
decomposition process, resulting in the extraction of features that
are more meaningful from an engineering viewpoint.
The new technique is computationally practical for use in indexing
large models. Examples are provided that demonstrate effective feature
extraction on 3D laser scanned models. In addition, a simple sub-graph
isomorphism algorithm was used to show that the feature adjacency graphs
obtained through feature extraction are meaningful descriptors of 3D CAD objects.
\end{abstract}
\DeclareGraphicsRule{*}{pdf}{*}{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Introduction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\label{sec:introduction}
In order to perform content-based indexing and retrieval of 3D
objects, each model must be converted into some collection of
features. Previous research on model matching and retrieval has drawn
on feature definitions from mechanical design, computer graphics and
computer vision literature. Many of these feature-based techniques
ultimately use vertex-labeled graphs, whose nodes represent 3D
features (or their abstractions) and whose edges represent spatial
relations or constraints between the features. Retrieval and
matching is done using some variation of graph matching to assign a
numerical value describing the distance between two models.
It is common in engineering communities for the term {\em feature} to be
used to refer to machining features (i.e.,\ holes, pockets, slots) or
other local geometric or topological characteristics of interest,
depending on the domain (i.e.,\ assembly surfaces, molding lines,
etc). In the context of this work, {\em feature} refers to an
intrinsic property of the 3D shape that may encompass local
geometry and topology. Depending on the choice of function to parameterize the
Scale-Space decomposition, these local features could correspond to
design or manufacturing operations, machining features or assembly
surfaces, etc. The notion of {\em features} in this paper draws from
the computer vision literature~\cite{LINDEBERG:SCALE-SPACE}; hence, the
features are designed for object classification.
There are numerous surveys of feature recognition techniques for
CAD~\cite{HAN-REGLI-PRATT:FEATURES-R-AND-A-2000,SHAH-FEATURE-DISCOURSE:ASME-JCISE-2001};
and similarity assessment of 3D models using feature extraction has
been addressed by several
efforts~\cite{CICIRELLO-REGLI:SMI-2001,CICIRELLO-REGLI:AI-EDAM-2002}.
These techniques assume an exact representation, as obtained
from a CAD system (i.e.,\ a 3D, watertight boundary representation).
However, these representations are proprietary, and their internals
vary from system to system. Feature-based descriptions of models also vary by system.
Hence, CAD search tools that can perform semantically
effective searches using ``the lowest common denominator'' (e.g.,\ shape)
representation are widely applicable.
Matching 3D shape representations has been widely studied in
graphics~\cite{IYER:SHAPE-SEARCH-SURVEY}, computer vision~\cite{SMI04-veltkamp-survey} and engineering~\cite{GUPTA:SHAPE-MATCH-SURVEY}.
When shape representations are used for CAD data, there
are two major shortcomings with existing work. First, the current
generation of matching techniques has difficulty handling the
approximate representations (i.e.,\ polyhedral mesh, point cloud, etc)
that are needed to find sub-patterns in objects or handle data created
by 3D scanners. With a few notable exceptions, most researchers
assume watertight VRML or shape models. Second, and more
importantly, the current generation of search techniques almost
exclusively focus on gross or overall shape. In the context of CAD,
local features and feature patterns contribute considerably to
manufacturing cost, selection of manufacturing processes,
producibility and functional parameters of 3D objects. Many objects
with similar gross shape can have vastly different functions,
costs or manufacturing process specifications.
This paper builds on the work in~\cite{BESPALOV:SCALE-SPACE}, where
it was shown how Scale-Space decompositions could be used to extract
features from 3D models in a polyhedral representation. It develops a
parameterizable feature decomposition method that can be tuned to
extract local feature configurations of engineering significance.
The approach put forth in this paper is motivated by several open
problems in 3D shape matching and indexing of CAD models for useful
engineering purposes:
\begin{enumerate}
\item[] {\bf Parameterizable Decompositions:} Scale-Space decomposition
promises to decompose a 3D model into structurally relevant
components {\em automatically}. These decompositions can be
parameterized with a measure function, resulting in different
components and features. This allows us to perform efficient
comparisons (using well studied tree-matching algorithms) of 3D
models in terms of the similarity of underlying
features~\cite{BESPALOV:SCALE-SPACE}. Different measure functions
can be created to tailor decompositions toward feature sets
tuned to answer specific questions (i.e.,\ cost, manufacturing
process, shape, etc).
\item[] {\bf Robustness in Presence of Noise:} Most existing work on
shape and solid matching assumes an ideal world with watertight
models and no noise. The reality is that objects may not be
perfect; this is especially true when trying to examine 3D models
acquired by laser scanning or other means (i.e.,\ image, inspection
probes, etc). In this context, one needs to be able to compare the
noisy acquired data to other noisy data or to the exact geometry
data that exists in a database of CAD models. The Scale-Space
technique has shown consistency in its performance under noise, both
with synthetic noise as well as with respect to the actual noise in
data from laser scans. Hence, models can be effectively matched
across representations.
\item[] {\bf Partial Matching:} Partial matching is a major open
problem in 3D indexing and matching. It manifests itself in several
ways. Most evident is that acquired data is rarely complete.
For example, occlusion prevents scanners from getting interior
points of holes and other features. In addition, obtaining a
``complete'' scan is time consuming, requiring manual re-positioning
of artifacts on the scanning apparatus and (quite often) manual
registration of the point set data acquired by these scans. In the
most basic case, the scanned data may consist of only one ``view''
of the model---resulting in a set of points on the surfaces of the
object and not a 3D shape.
\item[] {\bf From Scanned Point Cloud to Database Query:} The ability to
handle noise and partial data is essential in a system that can
go from scanned data input directly to a database query with minimal
human intervention. With even the best of present technology, it is
difficult to get complete watertight solids from scanned data that
precisely match the scanned artifact. In the case of CAD objects,
most of which have high genus and many occluded surfaces, obtaining
a complete scan that evenly samples points over the surfaces is
simply impossible. Hence, matching and query mechanisms must be
able to operate from limited information, i.e.,\ point data or
portions of surfaces.
\item[] {\bf Basis for Solution by Many-to-Many Matching:} Many-to-Many
matching aligns the corresponding decomposition from one medium
(e.g.,\ the native CAD object) with that of other media (e.g.,\
scanned data) and similar, but slightly different, CAD objects. The
decomposition process presented in this paper is consistent across
these different media types; however, the exact boundaries of the
segmentations may vary depending on the quality of the data, noise
or differences in the underlying geometric representation. This
creates a many-to-many matching problem in which subsets of
segments from one object must be paired to subsets of the segments
resulting from the decomposition of the other object.
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Related Work
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}
\label{sec:related}
The work in this paper draws on concepts from several areas of
computer science and engineering. We review some of this background
below.
\subsection{Object Recognition and Matching}
This paper uses the term {\em feature} to refer to an intrinsic
property of the 3D shape which may encompass local geometry and
topology related to design or manufacturing operations, but may not
have direct correspondence to any explicit manufacturing features. In
this sense the notion of a {\em feature} draws from the computer
vision literature~\cite{LINDEBERG:SCALE-SPACE}.
The computer vision research community has typically viewed shape
matching as a problem in
2D~\cite{SHAPIRO:MATCHING-IEEE-PAMI,LAMDAN:OB-REC-IEEE-PAMI,WOLFSON:CURV-MATCH-IEEE-PAMI,LAMDAN:GEOM-HASHING-CV}.
These efforts address a different aspect of the general
geometric/solid model matching problem---one in which the main
technical challenge is the construction of the models to be matched
from image data obtained by cameras.
This has changed in the past several years with the ready availability
of 3D models (usually meshes or point clouds) generated from range and
sensor data. While a complete survey of this area is beyond the scope
of this paper, we review several notable efforts. Thompson et
al.~\cite{THOMPSON-RIESENFELD:DARPA96,UTAH-THOMPSON:IEEE-R-A-REV-ENG-1999}
reverse engineered designs by generating surfaces and machining feature
information from range data collected from machined parts. Jain et
al.~\cite{VIRAGE:CACM97} indexed CAD data based
on the creation of ``feature vectors'' from 2D images. Sipe, Casasent
and Talukder~\cite{TALUKDER-CASASENT:SPIE-1998,SIPE:NN-1999} used
acquired 2D image data to correlate real machined parts with CAD models
and performed classification and pose estimation. Scale-Space
decomposition is very popular in Computer Vision for extracting
spatially coherent features. Most of the work in this community has
focused on the Scale-Space features of 2D images using wavelets or
Gaussian
filters~\cite{LINDEBERG:SCALE-SPACE,SHOKOUF:OBJECT-RECOGNITION}.
Once objects are recognized, they can be segmented, decomposed and
matched. Matching is frequently accomplished by encoding objects and
their decompositions as a graph and doing analysis across different
graph structures to identify similarity. Graphs and their
generalizations are among the most common and best studied
combinatorial structures in computer science, due in large part to the
number of areas of research in which they are applicable. Due to
space constraints, we cite only a few examples of how they are being
applied to 3D object recognition and matching. Nayar and Murase
applied appearance-based eigenspace recognition to general 3D objects, acquiring a dense set of
views for each object \cite{MUR95}. Eigen-based
representations have been widely used in many areas for information
retrieval and matching as they offer greater potential for generic shape
description and matching. In an attempt to index into a database of
graphs, Sossa and Horaud use a small subset of the coefficients of the
$d_2$-polynomial corresponding to the Laplacian matrix associated with
a graph \cite{SOS92}, while a spectral graph decomposition was
reported by Sengupta and Boyer for the partitioning of a database of
3D models, in which nodes in a graph represent 3D surface
patches \cite{SEN96}.
Graph matching has a long history in pattern recognition. Shapiro and
Haralick's use of weighted graphs for the structural description of
objects was among the first in the vision community~\cite{shh81}. Eshera
and Fu~\cite{Fu:agr} used attributed relation graphs to
describe parametric information as the basis of a general image
understanding system to find inexact matches. Recently, Pelillo et
al.~\cite{Pelillo:etal:ECCV} introduced a matching algorithm which
extends the detection of maximum cliques in association graphs to
hierarchically organized tree structures. Tirthapura et al. present
an alternative use of shock graphs for shape matching~\cite{tskk98}.
The edit distance approach for matching rooted trees has
been studied by Zhang, Wang, and Shasha~\cite{zhang95editing}.
Their dynamic programming approach for degree-2 distance, when applied
to unordered trees, is a restricted form of the constrained distance
previously reported in~\cite{zhang94approximate}.
\subsection{Matching 3D Objects}
With the ready availability of 3D models from graphics programs and
CAD systems, there has been a substantial amount of activity on 3D
object recognition and matching in the past 20 years. This body of
relevant work is too large to survey in detail in this paper.
Interested readers are referred to several recent survey
papers~\cite{SMI04-veltkamp-survey, GUPTA:SHAPE-MATCH-SURVEY, IYER:SHAPE-SEARCH-SURVEY}.
\subsubsection{Comparing Shape Models.}
Shape-based approaches usually work on a low-level point cloud, mesh or
polyhedral model data, such as that produced by digital animation tools or
acquired by 3D range scanners. Approaches based on faceted
representations include that of Osada et
al.~\cite{TOG02-osada-shape}, which creates an abstraction of the 3D
model as a probability distribution of samples from a shape function
acting on the model. Hilaga et al.~\cite{HILAGA-ET-AL:SIGGRAPH-2001}
present a method for matching 3D topological models using
multi-resolution Reeb graphs. A variant on this is proposed
in~\cite{SMI04-tung-augment-reeb}. A current trend, being pursued by
several groups, is the use of different types of shape descriptors
(harmonics, Zernike, etc.) to capture shape
invariants~\cite{SGP03-kazhdan-spherical,CAD04-novotni-zernike,Algorithmica04-kazhdan-reflective,TOG04-kazhdan-anisotropy}.
The Princeton 3D shape database~\cite{SMI04-shilane-shape-benchmark}
contains mainly models from 3D graphics and rendering; none of these models
are specifically engineering, solid modeling or mechanical
CAD oriented.
In general, however, shape matching-based approaches {\em only}
operate on the gross-shape of a single part and do not operate
directly on solid models or consider semantically meaningful
engineering information (i.e.,\ manufacturing or design features,
tolerances). Retrieval strategies are usually based on a
query-by-example or query-by-sketch paradigm.
\subsubsection{Comparing Solid Models.}
Unlike shape models, for which only approximate geometry and topology
is available, solid models produced by CAD systems are represented by
precise boundary representations. When comparing solid models of 3D
CAD data, there are two basic types of approaches for content-based
matching and retrieval: (1) {\em feature-based} techniques and (2)
{\em shape-based} techniques. The feature-based
techniques~\cite{SHAH-MANTYLA:FEATURES-BOOK96,HAN-REGLI-PRATT:FEATURES-R-AND-A-2000,JI-MAREFAT:ACM-COMP-SURV-97,SHAH-FEATURE-DISCOURSE:ASME-JCISE-2001,ALLADA-ANAND:IJCIM95},
going back at least as far as 1980~\cite{KYPRIANOU:80}, extract
engineering features (machining features, form features, etc.) from a
solid model of a mechanical part for use in database storage,
automated group technology (GT) part coding, etc. The shape-based
techniques are more recent, owing to research contributions from
computational geometry, vision and computer graphics. These
techniques leverage the ready availability of 3D models on the
Internet.
\paragraph*{Feature-Based Approaches.}
Historically, Group Technology (GT) coding was the standard way to index
parts and part families~\cite{SNEAD:89}. This facilitated process
planning and cell-based manufacturing by imposing a classification
scheme (a human-assigned alphanumeric string) to individual machined
parts. While there have been a number of attempts to automate the
generation of GT codes~\cite{AMES:SM91,SHAH89,HAM96}, transition to
commercial practice has been limited.
The idea of similarity assessment of 3D models using feature
extraction techniques has been discussed
in~\cite{HAN-REGLI-PRATT:FEATURES-R-AND-A-2000,SHAH-FEATURE-DISCOURSE:ASME-JCISE-2001}.
These techniques assume an exact representation (i.e.,\ a B-rep) for the
input models and therefore cannot be used if only an approximate
representation (i.e.,\ a polyhedral mesh) is available. This is a major
shortcoming, especially in designing an archival system, where one may
require partial and inexact matching.
There has been recent work on partial matching in the context of 3D data.
For instance, Funkhouser et al.~\cite{Funkhouser:MODEL-EXAMPLE} successfully employed shape-based search
to retrieve 3D models whose parts match a query. In addition, Cornea et al.
used a many-to-many matching approach on skeletons of 3D objects in~\cite{Demirci:MANY-TO-MANY} to perform
retrieval on those objects.
Elinson et al.~\cite{ELINSON-NAU-REGLI:SM97} used feature-based
reasoning for retrieval of solid models for use in variant process
planning. Cicirello and
Regli~\cite{REGLI-CICIRELLO:CAD00,CICIRELLO-REGLI:AI-EDAM-2002,CICIRELLO-REGLI:SMI-2001}
examined how to develop graph-based data structures and create
heuristic similarity measures among artifacts based on manufacturing
features. McWherter et al.~\cite{MCWHERTER-PEABODY:ASME-JCISE-2001}
integrated these ideas with database techniques to enable
indexing and clustering of CAD models based on shape and engineering
properties. Other work from the engineering community includes
techniques for automatic detection of part
families~\cite{DUTTA:ASME-CIE-CONFERENCE-2000} and topological
similarity assessment of polyhedral models~\cite{WYSK:DTC95}.
\paragraph*{Shape-Based Approaches.}
Comparing CAD models based on their boundary representations can be
difficult due to variability in the underlying feature-based
representations. Additional complications are created by
differences among the boundary representations used by systems (i.e.,\
some may use all NURBS, some may use a mix of surface types, etc).
Using a shape-based approach on voxels, meshes or polyhedral models
generated from native CAD representations is one way of reducing
these problems.
The 3D-Base Project~\cite{COHEN:3DBASE,cybenko:3dcad} used CAD
models in a voxel representation, which were then used to perform
comparisons using geometric moments and other features. The recent
work by the authors covers several areas including shape
classification, Scale-Space decomposition and classification
learning~\cite{JCISE03-bespalov-scalespace,ASME04-bespalov-local,SM03-bespalov-scalespace,SM03-ip-learn}.
Work out of
Purdue~\cite{ICDE04-lou-3dengsearch,ASME03-iyer-shapesearch2,ASME03-iyer-shapesearch1}
has improved on the voxel methods
of~\cite{COHEN:3DBASE,cybenko:3dcad}, augmenting them with skeletal
structures akin to medial axes or shock graphs. The main
accomplishment of the Purdue group is integrating these shape-only
techniques into a system for query by example.
%%% Feature Decomposition
\section{Sub-Space Clustering and Scale-Space Decomposition}
\label{sec:segmentation}
During the last decade, hierarchical segmentation has become
recognized as a powerful tool for designing efficient algorithms. The
most common form of such hierarchical segmentations is the Scale-Space
decomposition in computer vision. Intuitively, an inherent property
of real-world objects is that they only exist as meaningful entities
over certain ranges of scale. The fact that objects in the world
appear in different ways depending on the scale of observation has
important implications if one aims at describing them. Specifically,
the need for multi-scale representation arises when designing methods
for automatically analyzing and deriving information from real-world
measurements.
In the context of solid models, the notion of scale can be cast in
terms of the granularity of 3D features rather than the feature
taxonomies of the CAD literature. Namely, given an object $\M$ with
faces $t_1,...,t_n$, we are interested in partitioning $\M$ into $k$
features $\M_1,...,\M_k$ with $\M_i\cap \M_j=\emptyset$, for $1\le
i<j\le k$. Let $\D\in \SR^{n\times n}$ denote the pairwise distance
matrix of the faces of $\M$; the $k$-dimensional subspace clustering
($k$-DSC) problem asks for the $k$-dimensional subspace $\S$ of
$\SR^n$ that minimizes $\sum_{i=1}^n d(t_i,\S)^2$, where each face
$t_i$ is identified with its column of $\D$. Let $\D=U\Sigma V^T$ be
the singular value decomposition of $\D$, with $\Sigma={\rm
Diag}(\sigma_1,...,\sigma_n)$, $\sigma_1\ge ...\ge \sigma_{n'}> 0$ and
$\sigma_{n'+1}=...=\sigma_n=0$, for $n'\le n$. Let us define the order $k$
compression matrix $\D^{(k)}$ of $\D$, for $k\le n'$, as:
%\vspace{-0.2in}
\begin{eqnarray}
\label{eq_dec:4}
\D^{(k)}=U{\rm Diag}(\sigma_1,...,\sigma_k,0,...,0)V^T.
\end{eqnarray}
%\vspace{-0.1in}
\noindent Then,
\begin{theorem}{\rm [follows from the Eckart-Young Theorem~\cite{ECKART:PRINCIPLE-AXIS}]}
%\vspace{-0.2in}
\begin{eqnarray}
\label{eq_dec:5}
||\D-\D^{(k)}||_2=\min_{rank(H)=k}||\D-H||_2.
\end{eqnarray}
%\vspace{-0.15in}
\end{theorem}
This states that the matrix $\D^{(k)}$ is the best approximation to $\D$
among all matrices of rank $k$. In fact, this result
generalizes to many other norms, including the {\em Frobenius} norm.
It was shown in~\cite{BESPALOV:SCALE-SPACE} that the set $\S=range(\D^{(k)})$
($\S$ is the range of matrix $\D^{(k)}$, i.e.,\ the subspace spanned by the
columns of $\D^{(k)}$) is the optimal solution to the $k$-DSC
problem.
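To make the construction concrete, the following is a small numerical sketch
(our own Python/NumPy illustration, not the authors' code): a symmetric matrix
stands in for the distance matrix $\D$, the order-$k$ compression matrix is
formed from the SVD, and the Eckart-Young property is checked, namely that the
spectral-norm error $||\D-\D^{(k)}||_2$ equals $\sigma_{k+1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
# Symmetric stand-in for the pairwise distance matrix D of the n faces.
D = rng.random((n, n))
D = (D + D.T) / 2.0

# SVD and the order-k compression matrix D^(k) = U diag(s_1..s_k, 0..0) V^T.
U, s, Vt = np.linalg.svd(D)
Dk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: D^(k) is the best rank-k approximation in the 2-norm, and
# the approximation error ||D - D^(k)||_2 equals the (k+1)-th singular value.
assert np.isclose(np.linalg.norm(D - Dk, ord=2), s[k])
assert np.linalg.matrix_rank(Dk) == k
```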
%% \begin{corollary}
%% \label{corl-1}
%% For $A\in \SR^{n\times n}$, let
%% \vspace{-0.2in}
%% \begin{eqnarray}
%% \label{eq_dec:6}
%% ||A||_F=\left( \sum_{i,j} A_{i,j}^2\right)^{1/2},
%% \end{eqnarray}
%% \vspace{-0.15in}
%% then,
%% \vspace{-0.2in}
%% \begin{eqnarray}
%% \label{eq_dec:7}
%% ||\D-\D^{(k)}||_F= \min_{rank(H)=k}||\D-H||_F.
%% \end{eqnarray}
%% \vspace{-0.15in}
%% \end{corollary}
%% Next, assume $\S$ is the range of matrix $\D^{(k)}$ (the subspace
%% spanned by the columns of matrix $\D^{(k)}$), and let $c_j$, for $1\le
%% j\le k$, denote the $j^{\rm th}$ column of $\D^{(k)}$. Let
%% $\S'\neq\S$ be any $k$-dimensional subspace of $\SR^n$. For every
%% $t_i\in \M$ let $q_i\in \S'$ be the closest face in $\S'$ to $t_i$.
%% Define $\Q\in \SR^{n\times n}$ with the $i^{\rm th}$ column equal to
%% $q_i$. Clearly, $rank{\Q}\le k$. Using Corollary~\ref{corl-1} we
%% have:
%% \vspace{-0.2in}
%% \begin{eqnarray}
%% \sum_{i=1}^n d(t_i,\S')^2 &=& \sum_{i=1}^n d(t_i,q_i)^2\\
%% &=& ||\D-\Q||_F^2\\
%% &\ge& ||\D-\D^{(k)}||_F^2\\
%% &=& \sum_{i=1}^n d(t_i,c_i)^2\\
%% &\ge& \sum_{i=1}^n d(t_i,\S)^2.
%% \end{eqnarray}
%% \vspace{-0.15in}
%% %\newpage
%% Consequently;
%% %\vspace {-0.1in}
%% \begin{proposition}.
%% The set $\S=range(\D^{(k)})$ is the optimal solution to $k$-DSC
%% problem.
%% \end{proposition}
\begin{algorithm}
\begin{algorithmic}[1]
\STATE Construct the distance matrix $\D\in \SR^{n\times n}$.
\STATE Compute the SVD decomposition $\D=U\Sigma V^T$, with
$\Sigma={\rm Diag}(\sigma_1,\sigma_2,...,\sigma_n)$.
\STATE Compute the order $k$ compression matrix $\D^{(k)}=U{\rm
Diag}(\sigma_1,...,\sigma_k,0,...,0)V^T.$
\STATE Let $c_j$ denote the $j^{\rm th}$ column of $\D^{(k)}$, for
$j=1,...,k$, and form sub-feature $\M_j$ as the union of faces
$t_i\in \M$ with $d(t_i,\S)=d(t_i,c_j)$.
\STATE Return the set $\{\M_1,...,\M_k\}$.
\end{algorithmic}
\caption{\sc Feature-Decomposition($\M,k$)}
\label{alg:dec-algo}
\end{algorithm}
Algorithm~\ref{alg:dec-algo} summarizes one phase of the Scale-Space
decomposition of $\M$ into its $k$ most significant features,
$\M_1,...,\M_k$. Algorithm~\ref{alg:dec-algo} returns the partitioning
of $\M$ by placing each face $t_i$ in $\M$ into one of the partitions $\M_j$,
such that the angle between vector $t_i$ and basis vector $c_j$ corresponding
to the partition $\M_j$ is minimized. Figure~\ref{fig:decomp} shows
three decomposition trees of the model -- using geodesic distance, angular shortest path and
maximum angle on angular shortest path measures. Note that the presented decomposition trees
are not full and the leaf nodes of the trees may not correspond to the actual final features
extracted from this model.
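The assignment step of the algorithm can be sketched as follows (a minimal
Python/NumPy illustration under our own conventions, not the paper's
implementation: each face $t_i$ is identified with the $i$-th column of $\D$,
the first $k$ columns of $\D^{(k)}$ serve as the basis vectors $c_j$, and the
minimum-angle criterion becomes an argmax over cosine similarities):

```python
import numpy as np

def feature_decomposition(D, k):
    """Sketch of one phase of Feature-Decomposition(M, k): partition the
    n faces into k sub-features, placing face i with the basis column c_j
    that minimizes the angle to column i of D."""
    n = D.shape[0]
    U, s, Vt = np.linalg.svd(D)
    Dk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # order-k compression matrix
    C = Dk[:, :k]                                # candidate basis columns c_1..c_k
    parts = [[] for _ in range(k)]
    for i in range(n):
        t = D[:, i]
        # Cosine of the angle between t_i and each c_j; the largest cosine
        # corresponds to the smallest angle.
        cos = (C.T @ t) / (np.linalg.norm(C, axis=0) * np.linalg.norm(t) + 1e-12)
        parts[int(np.argmax(cos))].append(i)
    return parts

rng = np.random.default_rng(0)
D = rng.random((10, 10))
D = (D + D.T) / 2.0
parts = feature_decomposition(D, 2)
# The result is a partition: every face lands in exactly one sub-feature.
assert sorted(i for p in parts for i in p) == list(range(10))
```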
The bottleneck of Algorithm~\ref{alg:dec-algo} is the $O(n^3)$ SVD
computation for an $n\times n$ matrix. The polyhedral representation of
a model provides a planar graph of a 2D manifold. If
only neighboring vertices are considered in the construction of the distance
matrix $\D$, the number of non-zero entries in $\D$ is at most
$3n$ (due to the planarity of the graph). Computing the SVD of a
sparse matrix is much faster, taking
$O(mn)+O(m\mbox{M}(n))$~\cite{GOLUB-MATRIX-COMPUTATIONS}, where $m$
is the maximum number of matrix-vector computations required and
$\mbox{M}(n)$ is the cost of a matrix-vector product of the form
$\D x$. Since $\M$ is a planar map and $\D$ is a sparse matrix,
$\mbox{M}(n)=O(n)$ and $m=O(n)$.
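In practice, the sparse structure means a truncated SVD can replace the dense
$O(n^3)$ computation. A sketch using SciPy's sparse solver (our own
illustration, not the paper's implementation; \texttt{svds} computes only the
$k$ largest singular triplets):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import svds

n, k = 200, 5
# Sparse symmetric stand-in for the distance matrix of a planar mesh graph:
# roughly 3 non-zero entries per row, i.e., O(n) non-zeros in total.
A = sp.random(n, n, density=3.0 / n, random_state=1, format="csr")
D = (A + A.T) / 2.0

# Truncated SVD: only the k largest singular triplets are computed, which is
# far cheaper than a dense SVD when D is sparse.
U, s, Vt = svds(D, k=k)
assert U.shape == (n, k) and s.shape == (k,) and Vt.shape == (k, n)
```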
\begin{figure}[!t]
\centering
\subfigure[Decomposition tree is obtained using {\sc Feature-Decomposition($\M,k$)} algorithm.]{\label{fig:feat_extr-tree}\includegraphics[width=3.0in]{figs/fig3.pdf}}
\quad
\subfigure[Leaf nodes of the tree correspond to the features.]{\label{fig:feat_extr-feat}\includegraphics[width=3.0in]{figs/fig3_2.pdf}}
\caption{Feature extraction process.}
\label{fig:feat_extr}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% Controlling Decomposition Process %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\vspace*{-0.1in}
\subsection{Controlling the Decomposition Process}
The decomposition process as presented in Section~\ref{sec:dec-alg}
does not provide an explicit mechanism to stop the indefinite
subdivision of a feature. Clearly, one could use a prescribed value
to control the decomposition depth of the feature trees, i.e.,\
stop the decomposition once a root branch in the feature
decomposition tree reaches a given depth. This section provides an
overview of a mechanism that controls the feature decomposition.
Intuitively, the use of this control mechanism terminates the
decomposition process only when all coherent features have been extracted.
Figure~\ref{fig:cad_data} shows results of the decomposition process on
selected CAD models, including partial ones.
Let $\M$ be the original model's face set. Assume in the decomposition process a
feature $\M_1$ in $\M$ can be decomposed into sub-features $\M_2$ and
$\M_3$ (e.g., without loss of generality assume that feature $\M_1$
is being bisected). The decomposition of the feature $\M_1$
into sub-features $\M_2$ and $\M_3$ is said to be significant if the angular distance
between components of $\M_2$ and $\M_3$ is large. Formally, this condition could be expressed as follows:
%\vspace*{-0.2in}
\[ \forall t_i \in \M_2,\ t_j \in \M_3 \ \exists t_m \rightarrow t_l \in t_i \leadsto t_j \ s.t. \]%\vspace{-0.5in}
\[ t_m \in \M_2 \ \wedge \ t_l \in \M_3 \ \wedge \ \angle(t_m, t_l)=\D(t_i,t_j), \]
%\vspace{-0.2in}
\noindent i.e., if the angular shortest path between $t_i \in \M_2$ and $t_j \in \M_3$
contains two adjacent faces $t_m$ and $t_l$ (from $\M_2$ and $\M_3$, respectively)
whose angular distance realizes $\D(t_i,t_j)$ and is large, then $\M_1$ should be decomposed into $\M_2$
and $\M_3$. Intuitively, if $\M_1$ is smooth it should not be bisected
any further; on the other hand, if the discrepancy between neighboring
triangles in $\M_1$ is significant, $\M_1$ should be bisected.
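The stopping condition above can be prototyped concretely. The following is a hedged Python sketch (our own data layout and naming, not the authors' code): the mesh is a dict mapping each face to its neighbors with angular weights, the angular shortest path is found with Dijkstra's algorithm, and a bisection counts as significant only when, for cross-cut face pairs, the largest-angle edge on the path crosses the cut and exceeds a threshold.

```python
import heapq

def angular_shortest_path(graph, src, dst):
    """Dijkstra over the face adjacency graph; edge weight = angular
    distance between adjacent faces (degrees)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def is_significant(graph, M2, M3, threshold):
    """True if, for every pair straddling the cut, the max-angle edge
    on the angular shortest path crosses the cut and is large."""
    for ti in M2:
        for tj in M3:
            path = angular_shortest_path(graph, ti, tj)
            edges = list(zip(path, path[1:]))
            tm, tl = max(edges, key=lambda e: graph[e[0]][e[1]])
            if graph[tm][tl] < threshold:
                return False  # M1 is smooth: do not bisect
            if (tm in M2) == (tl in M2):
                return False  # the sharp edge is not on the cut
    return True
```

The threshold plays the role of the informal notion of a ``large'' angular distance; choosing it is a modeling decision, not something prescribed by the formal condition.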
%\vspace{-0.15in}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Experiments
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Empirical Results}
\label{sec:exp}
The feature extraction process was performed on a number of CAD models in polyhedral
representation. These models were converted from the ACIS format, which is
an exact representation format. As a result, all of the models are well formed
(i.e., no missing faces).
In these experiments we examine the quality of the
features extracted using the {\sc Feature-Decomposition($\M,k$)} algorithm.
To this end, {\sc Feature-Decomposition($\M,k$)} is recursively applied to
each model with $k=2$. Once a decomposition tree is obtained,
the last layer of the decomposition tree (its leaf nodes) is taken to be the set of extracted features.
Note that the union of the features (leaf nodes) covers the surface of the entire model.
Refer to Figure~\ref{fig:feat_extr} for an illustration of the feature extraction process.
For illustrative purposes, only a subset of the extracted features is shown in Figure~\ref{fig:feat_extr-feat};
the features shown in Figure~\ref{fig:feat_extr-tree} do not correspond to the leaf nodes in Figure~\ref{fig:feat_extr-feat},
as the actual decomposition tree is quite large for this model.
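The recursive application described above can be sketched as a short driver (an illustrative Python sketch with our own structure; `bisect` stands in for {\sc Feature-Decomposition} with $k=2$, and `significant` for the stopping test of the previous section):

```python
# Minimal sketch, assuming bisect(faces) -> (left, right) splits a
# face list in two, and significant(faces, left, right) decides
# whether the split should be kept. Leaves of the returned tree are
# the extracted features; their union covers the whole face set.

def decomposition_tree(faces, bisect, significant):
    if len(faces) <= 1:
        return {"faces": faces, "children": []}
    left, right = bisect(faces)
    if not left or not right or not significant(faces, left, right):
        return {"faces": faces, "children": []}  # coherent feature: stop
    return {"faces": faces,
            "children": [decomposition_tree(left, bisect, significant),
                         decomposition_tree(right, bisect, significant)]}

def leaves(tree):
    """Collect the leaf face sets, i.e. the extracted features."""
    if not tree["children"]:
        return [tree["faces"]]
    return [f for c in tree["children"] for f in leaves(c)]
```

The key invariant, matching the text, is that the leaves partition the model's face set, so the union of the extracted features is the entire surface.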
\subsection{Feature Decomposition on CAD Data}
\label{sec:cad-data}
Figure~\ref{fig:cad_data} shows extracted features for several models. These
images illustrate the type of features the technique can extract.
Observe that each feature corresponds to a relatively smooth surface on the model.
Wherever there is a significant angular difference on the surface, the surface
gets decomposed into separate features. Any closed smooth surface (e.g., a hole)
is decomposed into two or more features (more when the surface is concave).
We plan to address the problem of how to use the decomposition trees for matching
in future work.
In addition, partial data was created from these models. Each model was intersected with
several planes, and only the part of the model on one side of each plane was saved.
As a result, a number of partial objects were obtained, which enabled us to see
how the {\sc Feature-Decomposition($\M,k$)} algorithm performs on models for which
only partial data is available. Illustrations of the extracted features can be found
in Figure~\ref{fig:cad_data}.
\subsection{Feature Decomposition on Noisy Data}
In order to simulate the noise produced when capturing an object with a 3D laser scanner,
Gaussian noise was applied to each point of the models presented in Section~\ref{sec:cad-data}. Gaussian noise with a standard deviation of 1\% and 2\% of the standard deviation of all points in the model
was used. The features were then extracted using the {\sc Feature-Decomposition($\M,k$)} algorithm. Illustrations of the extracted features can be found
in Figures~\ref{fig:cad_data-gauss1} and~\ref{fig:cad_data-gauss2}. As with the CAD models presented above,
partial models for this dataset were generated. Note that it is possible for separate features to be assigned visually similar colors,
making them appear to be the same feature. The names of the CAD models were assigned by the organizations that provided the files to us; we use these names
to reference the models within this paper.
\subsection{Feature Decomposition on Acquired Models}
We have established that the feature extraction procedure obtains
relevant subsets of a model that reflect the complexity of its 3D structure. The next experiment
assessed whether the technique is capable of handling models that
were obtained using a 3D digitizer -- full 3D views (Figure~\ref{fig:scan-full}) and partial 3D views (Figure~\ref{fig:scan-single})
of 3D objects. Such data is known to be very noisy, often with broken connectivity
and missing faces. Ideally, one would like to take a single scan of a 3D CAD model,
decompose it into features, and select models from the database that contain the same feature arrangements.
Three CAD parts were used to create six 3D models -- a full and a partial (single-scan) model for each CAD part.
Once the point clouds were obtained, they were faceted, and features were extracted using the {\sc Feature-Decomposition($\M,k$)}
algorithm.
Figure~\ref{fig:scan_data} shows the correspondence
between extracted features for the fully and partially scanned models as well as for the models obtained from the exact representation.
Note that in some cases one feature from one model (e.g., a full scan) corresponds to multiple features from another
model (e.g., a single scan).
The performance of the technique on noisy data is certainly not as strong as on the CAD dataset (Section~\ref{sec:cad-data}).
Nevertheless, we believe that in most cases the extracted features are meaningful and reflect the structure
of the models. In addition, there are clear similarities between the feature decompositions
of fully and partially scanned models and those of the 3D CAD models in our database. The scanned models used for this experiment are
freely available at {\tt http://edge.cs.drexel.edu/Dmitriy/Scanned.tar.bz}.
\begin{figure}[!t]
\vspace{-0.3in}
\centering
\begin{tabular}[h]{|c|c|c|c|c|}
\hline
& Full Model & \multicolumn{3}{c}{Partial Models} \vline \\
\hline
& & & & \\
\raisebox{0.75in}[0pt]{\sc Cimplex} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_cimplex_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_cimplex_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_cimplex_1_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_cimplex_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Simple Boeing} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_simple_boeing_part_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_simple_boeing_part_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_simple_boeing_part_1_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_simple_boeing_part_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Part 9} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part09.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part09_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part09_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part09_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.5in}[0pt]{\sc Part 10} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part10.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part10_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part10_xz2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_part10_yz2.png}\\
& & & & \\ \hline
\end{tabular}
\caption{Extracted features from CAD models. Each extracted feature is assigned a separate color.
Full models as well as partial models are presented in this figure.}
\label{fig:cad_data}
\end{figure}
%\clearpage
\begin{figure}[!t]
\vspace{-0.3in}
\centering
\begin{tabular}[h]{|c|c|c|c|c|}
\hline
& Full Model & \multicolumn{3}{c}{Partial Models} \vline \\
\hline
& & & & \\
\raisebox{0.75in}[0pt]{\sc Cimplex} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_cimplex_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_cimplex_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_cimplex_1_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_cimplex_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Simple Boeing} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_simple_boeing_part_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_simple_boeing_part_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_simple_boeing_part_1_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_simple_boeing_part_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Part 9} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part09.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part09_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part09_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part09_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.5in}[0pt]{\sc Part 10} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part10.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part10_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part10_xz2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_01_part10_yz2.png}\\
& & & & \\ \hline
\end{tabular}
\caption{1\% Gaussian Noise. Extracted features from CAD models with Gaussian noise applied to each point of the model.
Each extracted feature is assigned a separate color. Full models as well as partial models are presented in this figure.}
\label{fig:cad_data-gauss1}
\end{figure}
\begin{figure}[!t]
\vspace{-0.3in}
\centering
\begin{tabular}[h]{|c|c|c|c|c|}
\hline
& Full Model & \multicolumn{3}{c}{Partial Models} \vline \\
\hline
& & & & \\
\raisebox{0.75in}[0pt]{\sc Cimplex} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_cimplex_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_cimplex_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_cimplex_1_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_cimplex_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Simple Boeing} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_simple_boeing_part_1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_simple_boeing_part_1_xy1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_simple_boeing_part_1_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_simple_boeing_part_1_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.75in}[0pt]{\sc Part 9} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part09.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part09_xy2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part09_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part09_yz1.png}\\
& & & & \\ \hline & & & & \\
\raisebox{0.5in}[0pt]{\sc Part 10} &
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part10.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part10_xz1.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part10_xz2.png}&
\includegraphics[width=1.00in,keepaspectratio]{figs/CAD/new_combined_noise_0_02_part10_yz2.png}\\
& & & & \\ \hline
\end{tabular}
\caption{2\% Gaussian Noise. Extracted features from CAD models with Gaussian noise applied to each point of the model.
Each extracted feature is assigned a separate color. Full models as well as partial models are presented in this figure.}
\label{fig:cad_data-gauss2}
\end{figure}
\clearpage
\begin{figure}[!t]
\vspace{-0.5in}
\centering
\subfigure[Single Scan -- take one scan from a single view]{\label{fig:scan-single}\includegraphics[width=4.7in]{figs/WorkSpace/single_scan.pdf}}
\quad
\subfigure[Full Scan -- take multiple scans from multiple views and register these scans together]{\label{fig:scan-full}\includegraphics[width=4.7in]{figs/WorkSpace/full_scan.pdf}}
\caption{Illustration of acquisition process.}
\label{fig:scan}
\end{figure}
\begin{figure}[!t]
\centering
\begin{tabular}[h]{|ccc||ccc|}
\hline
Full Scan & Single Scan & ACIS Model & Full Scan & Single Scan & ACIS Model \\
\hline & & & & & \\
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket1_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket1_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket1_3.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket2_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket2_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/socket2_3.pdf}\\
& & & & & \\ \hline & & & & & \\
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket1_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket1_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket1_3.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket2_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket2_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/bracket2_3.pdf}\\
& & & & & \\ \hline & & & & & \\
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex1_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex1_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex1_3.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex2_1.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex2_2.pdf}&
\includegraphics[width=0.75in]{figs/WorkSpace/Scanned/ex2_3.pdf}\\
& & & & & \\
\hline
\end{tabular}
\caption{Correspondence of selected extracted features for the full scan models, single scan models and models obtained from exact representation (ACIS model).}
\label{fig:scan_data}
\end{figure}
\clearpage
\subsection{Matching Experiment}
\label{sec:local-match}
In order to test whether features extracted using the Scale-Space technique with the
max-angle distance measure can be used in the retrieval of solid models, the following
matching experiment was conducted. Three retrieval techniques were evaluated:
Reeb Graph, original Scale-Space, and max-angle Scale-Space (described in this paper).
The Reeb Graph based technique was introduced by Hilaga et al.~\cite{HILAGA-ET-AL:SIGGRAPH-2001}.
This technique was designed for shape models and is based on identifying certain regions of a
model (i.e., features) and combining them into a hierarchical graph structure.
A graph matching technique is then used to obtain similarity values for
the corresponding models. This approach performs very well when the overall gross
shapes of the models are similar.
The original Scale-Space technique was introduced in~\cite{BESPALOV:SCALE-SPACE};
it uses geodesic distance as its distance function. By ``max-angle Scale-Space''
we refer to the feature extraction approach described in this paper. Although this approach does not
have a matching technique specifically designed for it, a simple sub-graph isomorphism
approach was employed to assess the similarity of the constructed feature graphs (see Section~\ref{sec:matching}
for more information).
All three techniques were evaluated on one dataset of solid models, described in Section~\ref{sec:func-dataset}.
To illustrate the results of the experiment, precision-recall plots were constructed; refer to Section~\ref{sec:prec-recall}
for more information on precision-recall measures.
\subsubsection{Matching Approach}
\label{sec:matching}
For the purposes of this work, a simple sub-graph isomorphism is used to assess the similarity
of the feature adjacency graphs: leaf nodes (features) in the decomposition tree become nodes in the graph,
and edges indicate adjacency of the features on the surface of the model. The sub-graph isomorphism
technique was implemented using the Largest Common Subgraph algorithm described in~\cite{CICIRELLO-REGLI:SMI-2001},
employing a hill-climbing search with random restarts. This well-known approach to graph matching was used simply to show that the feature graphs
constructed using the max-angle distance measure carry relevant information about the structure of the
models and can be used to assess similarities between 3D CAD models. In practice, a more sophisticated graph matching
algorithm should be used to yield even higher matching accuracy. As the experimental results
suggest, such a graph matching algorithm should allow many-to-many matching of the nodes
within the feature graphs.
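The two ingredients above -- building a feature adjacency graph from the leaf features, and scoring graphs by a hill-climbing search with random restarts -- can be sketched as follows. This is an illustrative Python sketch with our own naming, not the largest-common-subgraph algorithm of \cite{CICIRELLO-REGLI:SMI-2001}: it hill-climbs over injective node mappings and scores a mapping by the number of adjacencies it preserves.

```python
import random

def adjacency_graph(features, mesh_edges):
    """features: list of face sets (leaf nodes of the decomposition
    tree); mesh_edges: pairs of adjacent faces on the surface.
    Returns (node count, set of feature-adjacency edges)."""
    node_of = {f: i for i, feat in enumerate(features) for f in feat}
    edges = set()
    for a, b in mesh_edges:
        u, v = node_of[a], node_of[b]
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return len(features), edges

def preserved(mapping, edges1, edges2):
    """Number of graph-1 edges whose images are edges of graph 2."""
    return sum((min(mapping[u], mapping[v]), max(mapping[u], mapping[v]))
               in edges2 for u, v in edges1)

def match_score(edges1, n1, edges2, n2, restarts=20, seed=0):
    """Hill-climb over injective mappings of graph 1 into graph 2
    (requires n1 <= n2), with random restarts."""
    rng = random.Random(seed)
    best = 0
    for _ in range(restarts):
        perm = list(range(n2))
        rng.shuffle(perm)
        score = preserved(perm[:n1], edges1, edges2)
        improved = True
        while improved:
            improved = False
            for i in range(n1):
                for j in range(i + 1, n2):
                    perm[i], perm[j] = perm[j], perm[i]  # try a swap
                    s = preserved(perm[:n1], edges1, edges2)
                    if s > score:
                        score, improved = s, True
                    else:
                        perm[i], perm[j] = perm[j], perm[i]  # revert
        best = max(best, score)
    return best
```

A one-to-one mapping like this cannot express the many-to-many correspondences observed in Figure~\ref{fig:scan_data}, which is precisely the limitation noted above.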
\subsubsection{Dataset}
\label{sec:func-dataset}
The dataset used in this experiment consists of seven groups of models.
Seventy (70) models are hand classified by their role in mechanical systems.
For instance, brackets are overhanging members that project from a structure and are
usually designed to support a vertical load or to strengthen an angle.
Linkage arms are motion-transferring components from the spectrometer
assembly. Nuts, screws, and bolts are commonly used fasteners.
Figure~\ref{fig:func-models} shows a sample of this dataset, and
Table~\ref{tab:functional-stat} gives a brief summary. The dataset
is available at:\\
{\scriptsize\url{http://www.designrepository.org/datasets/functional.tar.bz2}}.
\begin{table}[!ht]
\caption{Statistics of Functional Dataset}
\label{tab:functional-stat}
\centering
{\small
\begin{tabular}{|c|c|c|c|}
\hline
& \#Models & Avg. \#Faces& Avg. \#Polygons \\
\hline
Brackets& 9 &45& 911\\
Gears& 12& 169& 4045\\
Housings & 6 &218&5141\\
Linkage Arms &13& 30&1282\\
Nuts &7& 8& 518\\
Screws and Bolts &18&15&431\\
Springs &5& 161& 7933\\
\hline
Total &70&&\\
\hline
\multicolumn{4}{c}{\ } \\
\hline
&Avg. SAT size & Avg. STEP size& Avg. VRML size \\
\hline
Brackets &56KB &100KB & 41KB \\
Gears &458KB& 525KB& 191KB \\
Housings &300KB& 450KB& 250KB\\
Linkage Arms &62KB&100KB&57KB\\
Nuts &13KB & 19KB & 31KB\\
Screws and Bolts &18KB & 30KB & 21KB\\
Springs &620KB & 960KB & 440KB\\
\hline
\end{tabular}
}
\end{table}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}[h]{|c|c|c|c|}
\hline
& \ & \ & \ \\
\parbox{0.8in}{ \centering
{\sc Functional Classification}}
&
\includegraphics[width=0.5in]{figs/func/arm42.pdf}
\includegraphics[width=0.5in]{figs/func/arm43.pdf}&
\includegraphics[width=0.5in]{figs/func/simple_boeing.pdf}
\includegraphics[width=0.5in]{figs/func/boeing.pdf}&
\includegraphics[width=0.5in]{figs/func/bracket1.pdf}
\includegraphics[width=0.5in]{figs/func/bracket2.pdf}\\
& Linkage Arms & Housings & Brackets \\
& \ & \ & \ \\
\hline
& \ & \ & \ \\
\includegraphics[width=0.5in]{figs/func/CompletePart_40.sat.pdf}
\includegraphics[width=0.5in]{figs/func/CompletePart_59.sat.pdf}&
\includegraphics[width=0.5in]{figs/func/gear1.sat.pdf}
\includegraphics[width=0.5in]{figs/func/gear2.sat.pdf}&
\includegraphics[width=0.5in]{figs/func/screw_frame_to_base.sat.pdf}
\includegraphics[width=0.5in]{figs/func/bolt_hex_1.sat.pdf}&
\includegraphics[width=0.5in]{figs/func/spring2.pdf}
\includegraphics[width=0.5in]{figs/func/spring.pdf}\\
Nuts & Gears & Screws & Springs \\
& \ & \ & \ \\
\hline
\end{tabular}
\end{center}
\caption{Examples of the models from the functional classification dataset.}
\label{fig:func-models}
\end{figure}
\subsubsection{Precision-Recall Measure}
\label{sec:prec-recall}
The performance of the various retrieval techniques can be evaluated using $k$-nearest
neighbor ($k$NN) classification and the conventional recall and precision
measures for evaluating information retrieval systems. The recall and
precision values are computed at different threshold values of the parameter $k$
using the following formulas:
\[\mbox{\emph{recall}} = \frac{\mbox{\emph{Retrieved and Relevant
models}}}{\mbox{\emph{Relevant models}}} \]
\[\mbox{\emph{precision}} = \frac{\mbox{\emph{Retrieved and Relevant
models}}}{\mbox{\emph{Retrieved models}}} \]
The $k$NN classification labels a query model with the categories of
its $k$ closest neighbors, where $k$ is the threshold for
classification. The number of labeled categories may increase
or decrease with $k$.
Under this experimental setting, the factors of the recall and precision
computation become:
\begin{itemize}
\item \emph{Relevant models}: The number of models that fall in to
same category as the query model.
\item \emph{Retrieved models}: The number of models returned by a
query.
\item \emph{Retrieved and Relevant models}: The number of models
returned and that fell into the same category as the query model.
\end{itemize}
\emph{Recall} and \emph{precision} values were first computed per
model at different values of $k$. For each $k$, the arithmetic mean of
the \emph{recall} and \emph{precision} across all models in a dataset
was used as the representative value. To illustrate the results,
\emph{precision} is plotted against \emph{recall} for the different
datasets and comparison techniques.
Ideally, a retrieval system should retrieve as many relevant models as
possible; both high precision and high recall are desirable. A
precision-recall graph plots \emph{precision} against \emph{recall}
and shows the trade-off between the two. Trying to
increase recall typically introduces more irrelevant models into the
retrieved set, thereby reducing precision. Precision-recall curves that
lie further to the right and higher up indicate better performance.
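The evaluation procedure above can be made concrete with a short Python sketch (our own code, written from the definitions given here): for each query model, retrieve its $k$ nearest neighbors under a given distance matrix, compute per-query precision and recall against the query's category, and average over all models.

```python
# Minimal sketch, assuming dist is a symmetric n x n distance matrix
# (list of lists) and labels assigns a category to each model.

def precision_recall_at_k(dist, labels, k):
    """Mean precision and recall over all queries at threshold k.
    The query itself is excluded from its retrieved set."""
    n = len(labels)
    precs, recs = [], []
    for q in range(n):
        order = sorted((i for i in range(n) if i != q),
                       key=lambda i: dist[q][i])
        retrieved = order[:k]
        relevant = sum(1 for i in range(n)
                       if i != q and labels[i] == labels[q])
        hits = sum(1 for i in retrieved if labels[i] == labels[q])
        precs.append(hits / k)
        recs.append(hits / relevant if relevant else 0.0)
    return sum(precs) / n, sum(recs) / n
```

Sweeping $k$ from 1 to $n-1$ and plotting the resulting (recall, precision) pairs yields a precision-recall curve of the kind shown in Figure~\ref{fig:local-match}.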
\subsubsection{Matching Results}
\begin{figure}[!t]
\centering
\includegraphics[width=4.0in]{figs/func_local.pdf}
\caption{Precision-Recall graph for retrieval experiment using: 1. Reeb Graph technique;
2. Scale-Space technique with max-angle distance function (sub-graph isomorphism algorithm used for matching);
3. Original Scale-Space technique with geodesic distance function;
4. Random retrieval technique.}
\label{fig:local-match}
\end{figure}
The precision-recall graphs for the dataset can be found in Figure~\ref{fig:local-match}. The random retrieval
technique was simulated by choosing the retrieved models uniformly at random. The Reeb Graph technique performs
better than both Scale-Space approaches, while the Scale-Space technique with the max-angle distance
function outperforms the original Scale-Space on this dataset. The results of this experiment show that
the Scale-Space feature extraction technique with the max-angle distance function yields
meaningful decompositions that could potentially be used for matching 3D CAD data.
\subsection{Fidelity Experiments}
Due to the approximate nature of polyhedral shape representations, the fidelity of
shape models depends on the granularity of the faceting process. In order to measure the effects of fidelity variations
on the feature extraction technique, the following experiments were performed.
\subsubsection{Variable Fidelity Dataset}
A subset of models from the Functional Classification Dataset (Figure~\ref{fig:func-models})
was chosen for the fidelity experiments. A total of 40 CAD models classified by part families were
used. Each was faceted by ACIS three times with
different normal tolerances (50, 15, 5), resulting in 120 models.
Figure~\ref{fig:CAD-refine} shows the mesh of an example model under
the different fidelity settings. Lowering the normal tolerance causes
the faceting component to approximate a parametric surface with more
polygons, hence increasing the fidelity of the resulting shape model.
Ideally, a robust retrieval system should be indifferent to fidelity
variations of meshes. Table~\ref{tab:refine-stat} gives a brief
summary of this dataset.
\begin{figure}[!ht]
\centering \subfigure[Low Fidelity, Normal Tolerance =
50]{\includegraphics[width=2.7in]{figs/refine_fig1.pdf}}
\subfigure[Normal Fidelity, Normal Tolerance =
15]{\includegraphics[width=2.7in]{figs/refine_fig2.pdf}}
\subfigure[High Fidelity, Normal Tolerance =
5]{\includegraphics[width=2.7in]{figs/refine_fig3.pdf}}
\caption{Variable Fidelity Dataset: three copies of the same model
under different fidelity settings.}
\label{fig:CAD-refine}
\end{figure}
\begin{table}[!ht]
\caption{Statistics of Variable Fidelity Dataset.}
\label{tab:refine-stat}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& \# Models & Avg. \# Polygons & Avg. VRML size\\
\hline
High & 40 & 18416 & 850KB\\
Normal & 40 & 5908 & 275KB\\
Low & 40 & 2699 & 117KB\\
\hline
Total & 120 & &\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[!ht]
\centering
\subfigure[Original Scale-Space technique with geodesic distance function]{\label{fig:global-fidelity}\includegraphics[width=3.3in]{figs/refinement/global_scale.pdf}}
\subfigure[Scale-Space technique with max-angle distance function (sub-graph isomorphism algorithm used for matching)]{\label{fig:local-fidelity}\includegraphics[width=3.3in]{figs/refinement/local_scale.pdf}}
\caption{Precision-Recall graphs on Variable Fidelity Datasets. }
\label{fig:eval-refinement}
\end{figure*}
\subsubsection{Fidelity Experiment Using Original Scale-Space (geodesic distance function) Technique}
The precision-recall performance of the original Scale-Space technique
was measured on the Variable Fidelity Dataset. A precision-recall
plot was constructed for each fidelity setting (Low, Normal, High).
Figure~\ref{fig:global-fidelity} presents the precision-recall plots
for the various fidelity settings using the original Scale-Space technique.
The experimental results show that the retrieval performance of the
original Scale-Space technique improves as the mesh fidelity increases.
This behavior is largely due to the fact that the geodesic distance
measure is affected by mesh fidelity: as the fidelity increases,
the distance measure is computed more precisely, which improves
the ``quality'' of the extracted features and results in more accurate
retrieval of CAD data.
\subsubsection{Fidelity Experiment Using Scale-Space Technique with Max-Angle Distance Function (sub-graph isomorphism algorithm used for matching)}
As in the experiment for the original Scale-Space technique, precision-recall plots
for each fidelity setting were constructed using the Scale-Space retrieval
technique with the max-angle distance function.
Figure~\ref{fig:local-fidelity} presents the precision-recall plots
for the various fidelity settings.
The results suggest that the Scale-Space technique with the max-angle distance function is relatively invariant
to mesh fidelity. This is due to the nature of the distance measure
used in feature extraction: since the distance measure is angle based,
the extracted features are preserved across the various fidelity settings.
Indeed, increasing the mesh fidelity normally affects smooth or flat surfaces,
which are already segmented as separate features. Furthermore,
varying the fidelity settings does not affect right or rather large angles between
surfaces of the mesh models, which also preserves the features extracted using the Scale-Space technique with the max-angle distance function.
\subsection{Partial Matching}
\label{sec:part-match}
\begin{figure}[!t]
\centering
\begin{tabular}[h]{|c|ccccc|}
\hline
Query & \multicolumn{5}{c|}{Returned Models}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Full Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_full_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Single Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Partial CAD \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/bracket_part_cad/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Full Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_full_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Single Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Partial CAD \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/ex_part_cad/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Full Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_full_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Single Scan \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_scan/5.pdf}\\
\hline
\quad & \quad & \quad & \quad & \quad & \quad\\
Partial CAD \includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad.png} &
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad/1.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad/2.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad/3.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad/4.pdf}&
\includegraphics[width=0.5in,height=0.5in]{figs/local/knn_phls/socket_part_cad/5.pdf}\\
\hline
\end{tabular}
\caption{Retrieval experiment on partial and scanned data. For each query, the five closest models were retrieved from the database.}
\label{fig:part-match}
\end{figure}
The last experiment was designed to test whether simple sub-graph isomorphism can
yield satisfactory retrieval results on scanned and partial data. A total of nine models
were used as queries for this experiment, corresponding to three actual physical parts (displayed in
Figure~\ref{fig:part-match}). For each physical part, three different 3D models were obtained:
a full scan model, a single scan model, and a partial model in ACIS format (an exact representation).
Each partial model was created from the exact representation by removing features from
the full model, such that it resembles a single scan of the corresponding physical
part.
The database contained the models from the dataset used in the matching experiment of Section~\ref{sec:func-dataset},
together with the full CAD models corresponding to the query models.
All of the objects in the database were converted from the ACIS format into a polyhedral representation.
For each query model, the $k$ closest neighbors in the database were retrieved. The experimental results for $k=5$ are presented
in Figure~\ref{fig:part-match}.
Figure~\ref{fig:part-match} shows that only for one physical part (the one in the middle) was the desired (correct) model among the five returned models.
For that part, however, the correct model was returned for every variation of the query.
Furthermore, with $k$ set to 10, the correct model was returned for 5 (out of 9) queries. Increasing $k$ to 15 yields
7 (out of 9) correct queries, while at $k=20$ the correct model appears among the returned models for all 9 (out of 9) queries.
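The retrieval step itself is a straightforward $k$-nearest-neighbor query. The sketch below assumes only that some pairwise distance function between feature graphs is available; all names, and the toy scalar ``models'' in the usage example, are illustrative rather than taken from our implementation:

```python
# Illustrative k-NN retrieval: given any pairwise distance between
# feature graphs, return the k database models closest to the query.
import heapq

def knn_retrieve(query, database, distance, k=5):
    """Return names of the k database entries nearest to the query."""
    scored = [(distance(query, model), name)
              for name, model in database.items()]
    # nsmallest sorts by the first tuple element, i.e. the distance
    return [name for _, name in heapq.nsmallest(k, scored)]

# Toy usage: "models" are scalars, distance is absolute difference.
db = {"a": 1.0, "b": 4.0, "c": 2.5, "d": 9.0}
print(knn_retrieve(3.0, db, lambda q, m: abs(q - m), k=2))  # → ['c', 'b']
```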
The unsatisfactory performance of the max-angle Scale-Space decomposition technique, combined with matching via sub-graph isomorphism,
can be explained as follows. The model that produced correct queries at $k=5$ (the correct model was among those returned)
has a topologically distinctive feature graph; as a result, the sub-graph isomorphism algorithm was able to single out the correct model among
all of the models in the database. In contrast, partial data yields partial feature
graphs, and consequently the distance between a partial model and the corresponding full model becomes large enough to
diminish the retrieval capabilities the technique exhibited in Section~\ref{sec:local-match}.
Lastly, when the Scale-Space decomposition is applied to scanned data, noise may contribute to the
extracted features (i.e., noise itself becomes an extracted feature). This clearly affects the performance
of sub-graph isomorphism on the feature graphs.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Conclusions & Future Work
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Discussion}
The next objective for this work is to introduce an efficient matching
algorithm for partial similarity measures. From the above experiments,
we conclude that in order to perform successful matching, the
technique must have the following properties: (1) tolerance for the noise
that scanned data introduce; (2) the ability to perform many-to-many
matching, since a single feature may be divided into
several features (Figure~\ref{fig:many_to_many} gives an instance of such a situation);
and (3) efficiency, so that it can be used in the
National Design Repository database\footnote{{\tt http://www.designrepository.org}}.
For instance, a matching technique with these properties could
potentially be derived from the many-to-many matching algorithm presented in~\cite{Demirci:MANY-TO-MANY}.
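The need for many-to-many matching, property (2) above, can be shown with a toy numeric example; the areas and tolerance below are hypothetical, not measurements from our data. Two fragments of a split feature each fail a one-to-one comparison against the intact feature, but their union succeeds:

```python
# Toy illustration of why many-to-many matching matters: fragments of a
# split feature fail individually, but their union passes. The relative
# area comparison and the 10% tolerance are hypothetical choices.
def area_match(a, b, tol=0.1):
    """Accept two areas as matching if their relative difference <= tol."""
    return abs(a - b) / max(a, b) <= tol

full_feature = 10.0      # area of the feature in the full scan
fragments = [4.9, 5.0]   # the same feature split across a single scan

one_to_one = [area_match(f, full_feature) for f in fragments]
many_to_one = area_match(sum(fragments), full_feature)
print(one_to_one, many_to_one)  # → [False, False] True
```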
One of the main components of such a matching technique is the distance function that assigns a
numerical value to a pair of features. Previous
work~\cite{BESPALOV:SCALE-SPACE} successfully used such a function,
based on area and Euclidean distance measurements within
features. See Figure~\ref{fig:match} for a sample view of two
models with matched regions.
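As a rough sketch of what an area- and Euclidean-distance-based feature distance might look like (this is not the exact function of~\cite{BESPALOV:SCALE-SPACE}; the descriptor and the way its two terms are combined are illustrative assumptions), one can compare two triangulated features by total surface area and by the average vertex distance from the centroid:

```python
# Sketch of a feature distance built from area and Euclidean measurements.
# A feature is (vertices, triangles): a list of 3D points and a list of
# index triples. Descriptor and combination rule are illustrative only.
import math

def triangle_area(a, b, c):
    # Area via the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def feature_descriptor(vertices, triangles):
    area = sum(triangle_area(*(vertices[i] for i in t)) for t in triangles)
    n = len(vertices)
    centroid = [sum(v[i] for v in vertices) / n for i in range(3)]
    spread = sum(math.dist(v, centroid) for v in vertices) / n
    return area, spread

def feature_distance(f1, f2):
    a1, s1 = feature_descriptor(*f1)
    a2, s2 = feature_descriptor(*f2)
    # Relative differences, so the measure is comparable across scales.
    return abs(a1 - a2) / max(a1, a2) + abs(s1 - s2) / max(s1, s2)
```

For example, a unit square (two triangles) has distance zero to itself, while a scaled copy yields a positive distance.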
Other possible directions for Scale-Space work are to (1) explore
techniques to extract features that resemble traditional CAD features;
and (2) exploit the possibility of using Scale-Space features as
signatures for indexing purposes.
\section{Conclusions}
\label{sec:conc}
This paper introduces a computationally practical approach, based on
Scale-Space decomposition, to automatically segment 3D models in
polyhedral representation into features that can be used for
indexing, classification, and matching. The decomposition is based on
the local surface structure of a model; as a result, similar features can
be extracted in the presence of partial model information and noisy
data. In this way, the technique has been shown to consistently
segment partial 3D views, noisy geometry, and data (both
partial and noisy) acquired by 3D laser range scanners.
One of the significant contributions of this work is to unite the
notion of ``feature'' from the computer vision and graphics literature
with the ``features'' of CAD/CAM. The specific measurement function
behind the concept of features in this paper is highly tuned to the
efficient identification of shape and topological categories. In one
application, features obtained using our approach could be different
from traditional CAD features and used to establish partial
similarities between CAD models in polyhedral representation. We
argue that the Scale-Space technique can be parameterized with
different measurement functions, enabling it to generate a variety of
useful segmentations, including segmentations that have semantic relevance to
engineering and manufacturing properties. Through experiments, the
locality-based feature representation was shown to have promising
capabilities that, with further research, could be employed for
3D matching, including partial matching. Furthermore, the Scale-Space
decomposition technique is robust with respect to noise; therefore, it
can potentially be used on 3D models generated by 3D data acquisition devices,
such as laser range scanners.
The Scale-Space approach developed and advanced in this paper creates
a foundation for new approaches to existing problems in
feature-based manufacturing. Foremost, one can argue that Scale-Space
techniques can subsume existing approaches to feature
identification by parameterizing the decomposition of a model's surface
with a distance measure function. The concept of the measure
function is highly generalizable: one need only
identify the measure function intrinsic to the class of features of
interest and provide it as a parameter to the Scale-Space algorithm.
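The parameterization idea can be sketched in a few lines: the same decomposition routine accepts any pairwise measure function, so swapping the measure changes which features emerge. Both the greedy single-linkage grouping and the Euclidean measure below are toy stand-ins for the actual Scale-Space algorithm:

```python
# Toy decomposition parameterized by a pluggable measure function:
# greedily merge groups whose closest members fall under a threshold.
def decompose(points, measure, threshold):
    """Single-linkage grouping of points under the given measure."""
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(measure(a, b) < threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Swapping `measure` (e.g. Euclidean vs. an angle-based measure)
# changes the resulting segmentation; here, a Euclidean example.
euclidean = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
pts = [(0, 0), (0.1, 0), (5, 5), (5.1, 5)]
print(len(decompose(pts, euclidean, threshold=1.0)))  # → 2
```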
Extending and enhancing the Scale-Space technique creates several
research challenges. Because they focus on local information,
Scale-Space techniques have to be extended to capture features that
have been subdivided through interactions. In addition, considerable
work needs to be done to develop measure functions that map to
well-established engineering feature sets. An alternative approach would be
to propose mappings (e.g., a B-rep-like schema) between engineering features and
the surfaces extracted by the Scale-Space decomposition technique with the max-angle
distance measure.
Ultimately, we believe that Scale-Space techniques provide important
new capabilities that complement existing approaches to feature
identification and shape matching. With additional research, the
Scale-Space technique can become part of the solution to a number of
important engineering problems.
\begin{figure}[!t]
\centering
\includegraphics[width=6.0in]{figs/WorkSpace/many_to_many.pdf}
\caption{Illustration of many-to-many matching. Features from the single scan model (left, shown in red) cannot individually match features from the full scan model (right, shown in red),
while the unions of the features (shown in green) do match.}
\label{fig:many_to_many}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.2in]{figs/WorkSpace/goodpart_compare.pdf}
\quad
\includegraphics[width=2.2in]{figs/WorkSpace/badpart_compare.pdf}
\caption{Results of matching between {\sc Goodpart} and {\sc Badpart}, with matched
features (leaves of the decomposition trees) shown in similar colors. Feature trees are
obtained for each model using {\sc Feature-Decomposition($\M,k$)} with the
geodesic distance function. The technique used to match these objects
(introduced in~\cite{BESPALOV:SCALE-SPACE}) relies on global model structure.}
\label{fig:match}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Acknowledgments
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Acknowledgments}
\label{sec:ack}
This work was supported in part by the National Science Foundation (NSF)
CAREER Award CISE/IIS-9733545, NSF Information Technology Research
Award ITR-0219176, and Office of Naval Research Grant
N00014-01-1-0618. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the author(s)
and do not necessarily reflect the views of the National Science
Foundation or the other supporting government and corporate
organizations.
\bibliographystyle{asmems4}
\bibliography{ali,ref,wcrefs,dmitriy,dmitriy2,ucip,shapesearch,giclpub}
%\newpage
\end{document}