Technical Reports
TP-2010-01: 3D Facial Landmark Detection & Face Registration (PDF)
Resources
Facial Landmark Annotation Files (zip)
MRI Breast Data Analysis Software (BreDAn) and corresponding README file (rar) (README.txt)
Selected Paper Abstracts
Chaikalis D., G. Passalis, N. Sgouros, D. Maroulis, and T. Theoharis, ‘Near Real-Time 3D Reconstruction from InIm Video Stream’, in Image Analysis and Recognition, 5th International Conference, ICIAR 2008, Póvoa de Varzim, Portugal, June 25-27, 2008, Proceedings. Also LNCS 5112. (PDF)
An efficient hardware architecture for the acceleration of an integrated 3D reconstruction method is presented, targeting demanding dynamic Integral Imaging applications. It exploits parallel processing and features minimized memory operations by implementing an extended-access memory scheme. Its reduced data throughput, thanks to optimized data utilization, makes it suitable for single FPGA device implementation. Results reveal that the hardware system outperforms the software method by an order of magnitude. Moreover, its processing rate surpasses the typical rate of a dynamic Integral Imaging acquisition system, thus making a significant step towards real-time 3D video reconstruction.
A novel method for the reconstruction of 3D shape and texture from Integral Photography (IP) images is presented. Sharing the same principles with stereoscopic-based object reconstruction, it offers increased robustness to noise and occlusions due to the unique characteristics of IP images. A coarse-to-fine approach is used, employing a novel grid refinement step in order to increase the quality of the reconstructed objects. The proposed method’s unique properties include configurable depth accuracy and direct and seamless triangulation. We evaluate our method using synthetic data from a computer-simulated IP setup as well as real data from a simple yet effective digital IP setup. Experiments show reconstructed objects of high quality, indicating that IP can be a competitive modality for 3D object reconstruction.
P. Stavrou, P. Mavridis, G. Papaioannou, G. Passalis, T. Theoharis, ‘3D Object Repair Using 2D Algorithms’, Proc. International Conference on Computational Science 2006 (ICCS 2006), LNCS, Springer.
A number of three-dimensional algorithms have been proposed to solve the problem of patching surfaces to rectify and extrapolate missing information due to model problems or bad geometry visibility during data capture. On the other hand, a number of similar yet simpler and more robust techniques apply to 2D image data and are used for texture restoration. In this paper we make an attempt to bring these two-dimensional techniques to the 3D domain due to their obvious advantage of simplicity and controllability. Creating a depth image with the help of a voxelisation algorithm allows us to apply a variety of image repair algorithms in order to mend a 3D object. The use of three variations of the texture synthesis algorithm is investigated. Constrained texture synthesis and its variations using the Haar wavelet and image decomposition methods are also proposed in order to preserve patterns appearing on the object while keeping its geometry intact.
Passalis G., I. Kakadiaris, T. Theoharis, ‘Efficient Hardware Voxelization’, Computer Graphics International 2004, Crete, June 2004. (PDF)
A novel algorithm for the voxelization of surface models of
arbitrary topology is presented. The algorithm uses the depth and
stencil buffers, available in most commercial graphics hardware, to
achieve high performance. It is suitable for polygonal meshes and
parametric surfaces. Experiments highlight the advantages and
limitations of our approach.
Platis N., T. Theoharis, ‘Simplification of Vector Fields over Tetrahedral Meshes’, Computer Graphics International 2004, Crete, June 2004, pp. 174-181. (PDF)
Vector fields produced by experiments or simulations are usually extremely dense, which makes their manipulation and visualization cumbersome. Often, such fields can be simplified without much loss of information. A simplification method for 3D vector fields defined over tetrahedral meshes is presented. The underlying tetrahedral mesh is progressively simplified by successive half-edge collapses. The order of collapses is determined by a compound metric which takes into account the field and domain error incurred as well as the quality of the resulting mesh. Special attention is given to the preservation of the mesh boundary and of critical points of the vector field. A tool has been developed for the measurement of the difference between two vector fields over tetrahedral meshes, and it is used to quantify the simplification error.
Platis N. & T. Theoharis, “Fast Ray-Tetrahedron Intersection Using Pluecker Coordinates”, ACM Journal of Graphics Tools, 8(4), 2004, pp. 37-48. (PDF)
We present an algorithm for ray-tetrahedron intersection. The algorithm uses Plücker coordinates to represent the ray and the edges of the tetrahedron and employs a robust and efficient test to determine the intersection. The algorithm is highly optimized and provides a significant performance increase over related algorithms.
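The Plücker-coordinate sign test at the heart of such an algorithm can be sketched in a few lines. The following is a minimal Python illustration, not the paper's optimized implementation: function names are mine, the test treats the ray as an infinite line, and the paper additionally computes the intersection parameter and reuses shared edge products across the four faces.

```python
import numpy as np

def plucker(p, q):
    """Pluecker coordinates (direction, moment) of the line through points p and q."""
    return q - p, np.cross(p, q)

def side(l1, l2):
    """Permuted inner product of two Pluecker lines; its sign is their relative orientation."""
    d1, m1 = l1
    d2, m2 = l2
    return np.dot(d1, m2) + np.dot(d2, m1)

def line_hits_triangle(origin, direction, a, b, c, eps=1e-12):
    """The line crosses triangle abc iff it passes all three edges with the same orientation."""
    ray = (direction, np.cross(origin, direction))
    s = (side(ray, plucker(a, b)),
         side(ray, plucker(b, c)),
         side(ray, plucker(c, a)))
    return all(v > eps for v in s) or all(v < -eps for v in s)

def line_hits_tetrahedron(origin, direction, verts):
    """Naive version: test each face separately. The published algorithm reuses the
    six shared edge products across faces, which is where its speedup comes from."""
    t = [np.asarray(v, dtype=float) for v in verts]
    o, d = np.asarray(origin, dtype=float), np.asarray(direction, dtype=float)
    faces = ((0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2))
    return any(line_hits_triangle(o, d, t[i], t[j], t[k]) for i, j, k in faces)
```

With consistently oriented faces, the sign of the side products also distinguishes the entry face from the exit face; recovering the intersection parameter along the ray takes one extra plane evaluation per hit face.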
Drakopoulos V., N. Mimikou & T. Theoharis, “An Overview of Parallel Visualization Methods for Mandelbrot and Julia Sets”, Computers & Graphics, 27, 2003, pp. 635-646.
Platis N. & T. Theoharis, “Progressive Hulls for Intersection Applications”, Computer Graphics Forum, 22(2), 2003, pp. 107-116. (PDF)
Progressive meshes are an established tool for triangle mesh simplification. By suitably adapting the simplification process, progressive hulls can be generated which enclose the original mesh in gradually simpler, nested meshes. We couple progressive hulls with a selective refinement framework and use them in applications involving intersection queries on the mesh. We demonstrate that selectively refinable progressive hulls considerably speed up intersection queries by efficiently locating intersection points on the mesh. Concerning the progressive hull construction, we propose a new formula for assigning edge collapse priorities that significantly accelerates the simplification process, and enhance the existing algorithm with several conditions aimed at producing higher quality hulls. Using progressive hulls has the added advantage that they can be used instead of the enclosed object when a lower resolution of display can be tolerated, thus speeding up the rendering process.
Karabassi E.A., G. Papaioannou, C. Fretzagias & T. Theoharis, ‘Exploiting Multiresolution Models to Accelerate Ray Tracing’, Computers & Graphics, 27, 2003, pp. 91-98. (PDF)
Papaioannou G. & T. Theoharis, "Fast Fragment Assemblage Using Boundary Line and Surface Matching", in Applications of Computer Vision in Archaeology, within IEEE Computer Vision and Pattern Recognition 2003, Madison, WI, June 2003. (PDF)
In the recent past, fragment matching has been treated in two different approaches, one using curve matching methods and one that compares whole surfaces or volumes, depending on the nature of the broken artefacts. Presented here is a fast, unified method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Combining both aspects of fragment matching essentially eliminates most of the ambiguities present in each one of the matching problem categories and helps provide more accurate results with low computational cost.
Papaioannou G., E.A. Karabassi & T. Theoharis, “Reconstruction of Three-Dimensional Objects through Matching of their Parts”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 24(1), January 2002, pp. 114-124.
The problem of reassembling an object from its parts or fragments has never been addressed with a unified computational approach, which depends on the pure geometric form of the parts and not application-specific features. We propose a method for the automatic reconstruction of a model based on the geometry of its parts, which may be computer-generated models or range-scanned models. The matching process can benefit from any other external constraint imposed by the specific application.
Papaioannou G., E.A. Karabassi & T. Theoharis, "Virtual Archaeologist: Assembling the Past", IEEE Computer Graphics and Applications, 21(2), 2001, pp. 53-59. (PDF)
A semi-automatic system for the reconstruction of archaeological finds from their fragments is described. Virtual Archaeologist uses computer graphics to calculate a measure of complementary matching between scanned data and employs optimization algorithms in order to estimate the correct relative pose between fragments and cluster those fragments that belong to the same entity.
Theoharis T., G. Papaioannou & E.A. Karabassi, "The Magic of the Z-Buffer: A Survey", WSCG 2001, Pilsen, CZ, February 2001, pp. 379-386. (PDF)
The wide availability of hardwired Z-buffers has sparked an explosion in the number of applications of the algorithm, whose origins lie in hidden surface elimination. This paper presents a much needed survey of key applications of the Z-buffer from the fields of rendering, modelling and vision, in a common notation, in order to help users make better use of this resource.
Simiakakis G., T. Theoharis & A.M. Day, "Parallel Adaptive Ray Tracing with 5D Adaptive Subdivision", short paper in WSCG 2001 - Int. Conf. on Graphics, Visualization & Computer Vision, 2001 (PDF)
We present strategies for parallelising ray tracing based on 5D adaptive subdivision. Our goals are to obtain good speed-up and to efficiently balance the load between the processors while minimising the required memory per processor.
Polygonal models are the most common representation of structured 3D data in computer graphics, pattern recognition and machine vision. The method presented here automatically identifies and labels all compact surface regions of a polygonal mesh, visible or not, and extracts valuable invariant features regarding their geometric attributes. A method that is independent of the mesh topology is also presented for the surface bumpiness estimation and the identification of coarse surface regions.
Papaioannou G., E.A. Karabassi & T. Theoharis, "Automatic Reconstruction of Archaeological Finds - A Graphics Approach", Proc. 4th Intern. Conf. in Computer Graphics and Artificial Intelligence,
Reconstruction of archaeological finds from fragments is a tedious task requiring many hours of work by the archaeologists and reconstruction personnel. Up to now, computers have significantly simplified this work by providing tools for data encoding, storage, classification and, in some cases, visualization. In this paper we go one step further by presenting a semi-automatic procedure for the full reconstruction of the original objects using computer graphics and artificial intelligence algorithms on geometrical information.
Karabassi E.A., Papaioannou G. & T. Theoharis, "A Fast Depth-Buffer-Based Voxelization Algorithm", ACM Journal of Graphics Tools, 4(4), 1999, pp. 5-10. (PDF)
This paper presents a fast and easy to implement voxelization algorithm, which is based on the z-buffer. Unlike most existing methods, our approach is suitable both for polygonal and analytical objects. The efficiency of the method is independent of the object complexity and can be accelerated by taking advantage of widely available, low-cost hardware.
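The core idea behind depth-buffer voxelization can be illustrated without graphics hardware: for each pixel of a view, record the nearest and farthest surface depths, then mark every voxel whose depth lies between the two. A hypothetical NumPy sketch for an analytic sphere follows (the paper works with the actual hardware z-buffer and arbitrary objects; function name and grid are mine):

```python
import numpy as np

def voxelize_sphere(n, radius=0.8):
    """Mark every voxel whose depth lies between the nearest and farthest surface
    depth seen at its pixel: the near/far depth-buffer idea on an analytic sphere."""
    coords = np.linspace(-1.0, 1.0, n)            # voxel grid spans [-1, 1]^3
    X, Y = np.meshgrid(coords, coords, indexing="ij")
    rho2 = X ** 2 + Y ** 2                        # squared distance to the view axis
    inside = rho2 <= radius ** 2                  # pixels covered by the sphere
    h = np.sqrt(np.maximum(radius ** 2 - rho2, 0.0))
    z_front = np.where(inside, -h, np.inf)        # "near" depth buffer
    z_back = np.where(inside, h, -np.inf)         # "far" depth buffer
    vol = np.zeros((n, n, n), dtype=bool)
    for k, z in enumerate(coords):                # sweep the depth slices
        vol[:, :, k] = (z >= z_front) & (z <= z_back)
    return vol
```

A single view direction only handles objects without depth concavities along that axis; combining buffers from several axes, as the paper's class of methods does, removes most of that limitation.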
Karabassi E.A., Papaioannou G. & T. Theoharis, "Intersection Test for Collision Detection in Particle Systems", ACM Journal of Graphics Tools, 4(1), 1999, pp. 25-37. (PDF)
We present a method for detecting collisions between a system of spherical particles and an environment composed of triangles. The proposed algorithm takes into account each particle's volume and is based on an intersection test between a triangle and a cylinder with spherical caps (the trajectory of a particle). The algorithm also efficiently calculates a close estimate of the point of intersection, which is necessary for collision detection.
Papaioannou G., T. Theoharis & A. Boehm, "A Texture Controller", The Visual Computer, 14(10), 1998, pp. 488-496.
An efficient method for creating accurate and controllable textures is proposed. In the past, procedural textures have been used to cover every point of an object uniformly, but their behaviour could not be controlled locally, as is frequently needed. In order to provide local control, texture-attribute control points are inserted in the model and the behaviour of the texture at every point is defined through interpolation of the control-point attributes. The proposed texturing algorithm behaves as a texture controller and can be applied to any kind of procedural texture.
Agathos A., T. Theoharis & A. Boehm, "Efficient Integer Algorithms for the Generation of Conic Sections", Computers & Graphics, 22(5), 1998, pp. 621-628. (PDF)
Efficient integer 8-connected algorithms for the fast generation of Conic Sections whose axes are aligned to the coordinate axes are described, based on a Bresenham-like methodology. Performance results show that in the case of the ellipse, the algorithm is at least as fast as other known integer algorithms but requires lower integer range and always performs correct region transitions. Antialiasing is easily incorporated.
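For the circle, the simplest of the axis-aligned conics, the Bresenham-like integer methodology reduces to the well-known midpoint algorithm: an integer decision variable chooses the next 8-connected pixel, and symmetry yields the remaining octants. A short Python sketch for illustration only (the paper covers general axis-aligned conics and their region transitions, which this does not attempt):

```python
def circle_points(r):
    """Integer midpoint circle: 8-connected pixels of a radius-r circle at the origin."""
    pts = set()
    x, y = r, 0
    d = 1 - r                      # integer decision variable
    while x >= y:
        # mirror the computed octant point into all eight octants
        for px, py in ((x, y), (y, x)):
            pts |= {(px, py), (-px, py), (px, -py), (-px, -py)}
        y += 1
        if d < 0:
            d += 2 * y + 1         # midpoint inside: keep the same x
        else:
            x -= 1
            d += 2 * (y - x) + 1   # midpoint outside: step diagonally inward
    return pts
```

Only additions and shifts appear in the loop, which is the property that makes this family of algorithms attractive for raster hardware.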
Theoharis T., A.R.L. Travis & N.E. Wiseman, "3D Display: Synthetic Image Generation and Visual Effect Simulation", Computer Graphics Forum, 9, 1990, pp. 337-348. (PDF)
Two viewing models are presented for a promising type of three dimensional display; they are based on parallel oblique and perspective oblique projections respectively. A detailed simulation compares the quality of the images that will be produced by each projection type and points out potential problems. The time necessary to alter the image of the three dimensional display will be comparable to that of conventional displays by the use of a simple parallel processing scheme.
Theoharis T., "Algorithms for Parallel Polygon Rendering", Lecture Notes in Computer Science (Springer-Verlag), #373, 1989.
An in-depth analysis of the use of general purpose parallel processors for the speedup of basic graphics algorithms. Account is taken of both SIMD and MIMD architectures, as well as their combination. Algorithms considered include polygon rendering and clipping.
Theoharis
T. &
It is shown how the polygon rendering operations of filling and hidden surface elimination can be efficiently distributed among the SIMD and MIMD parts of the Disputer, a parallel machine developed at Oxford University Programming Research Group that incorporates both of these modes of parallelism. Apart from being a useful graphics application, this work illustrates some of the issues involved in programming the Disputer efficiently.
Theoharis T. & I. Page, "Two Parallel Methods for Polygon Clipping", Computer Graphics Forum, 8, 1989, pp. 107-114. (PDF)
Two parallel algorithms for the implementation of the well known Sutherland-Hodgman polygon clipping algorithm are given, for the SIMD DAP and the MIMD Transputer respectively. The two implementations are compared, a comparison that is interesting as the two architectures are typical examples of their respective architectural classes.
Theoharis
T. &
A data parallel algorithm for the evaluation of a linear polynomial function using the method of differences is given. An implementation on a general purpose SIMD array is presented and the result is exploited for the implementation of certain important graphics operations. The required accuracy and the achieved speedup and efficiency are discussed.
Theoharis T., "Exploiting Parallelism in the Graphics Pipeline", Technical Monograph PRG-54,
Discusses the construction of a parallel processing pipeline for the rapid processing of polygonal models in computer graphics. It includes an analysis of the cost of each stage, which results in a proposal for the amount of internal parallelism that is required for each pipeline stage. Formal specifications in Z and CSP are given, as well as an Occam implementation.
We present a novel 3D shape descriptor that uses a set of panoramic views of a 3D object which describe the position and orientation of the object’s surface in 3D space. We obtain a panoramic view of a 3D object by projecting it to the lateral surface of a cylinder parallel to one of its three principal axes and centered at the centroid of the object. The object is projected to three perpendicular cylinders, each one aligned with one of its principal axes in order to capture the global shape of the object. For each projection we compute the corresponding 2D Discrete Fourier Transform as well as 2D Discrete Wavelet Transform. We further increase the retrieval performance by employing a local (unsupervised) relevance feedback technique that shifts the descriptor of an object closer to its cluster centroid in feature space. The effectiveness of the proposed 3D object retrieval methodology is demonstrated via an extensive consistent evaluation in standard benchmarks that clearly shows better performance against state-of-the-art 3D object retrieval methods.
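The cylindrical projection step described above can be sketched as follows: each surface point is binned by its angle around a principal axis and its height along it, keeping the largest distance from the axis per cell, and the magnitude of the 2D DFT of that image serves as a rotation-robust feature. A hypothetical NumPy sketch (function name, bin counts and the point-cloud input are my simplifications, not the paper's mesh-based projection):

```python
import numpy as np

def panoramic_view(points, axis=2, bins=(64, 32)):
    """Distance-from-axis map of a point cloud on a cylinder around one principal axis."""
    pts = points - points.mean(axis=0)          # centre at the centroid
    a = pts[:, axis]                            # height along the cylinder axis
    rest = np.delete(pts, axis, axis=1)
    phi = np.arctan2(rest[:, 1], rest[:, 0])    # angle around the axis
    r = np.hypot(rest[:, 0], rest[:, 1])        # distance from the axis
    nb_phi, nb_h = bins
    i = ((phi + np.pi) / (2 * np.pi) * nb_phi).astype(int) % nb_phi
    j = np.clip(((a - a.min()) / (np.ptp(a) + 1e-12) * nb_h).astype(int), 0, nb_h - 1)
    img = np.zeros(bins)
    np.maximum.at(img, (i, j), r)               # keep the outermost surface point per cell
    return img

def dft_feature(img):
    """Magnitude of the 2D DFT of a panoramic image, as a simple feature map."""
    return np.abs(np.fft.fft2(img))
```

Repeating this for cylinders aligned with all three principal axes, as in the abstract, gives the full set of panoramic views.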
Papadakis P., I. Pratikakis, T. Theoharis, G. Passalis and S. Perantonis, ‘3D Object Retrieval using an Efficient and Compact Hybrid Shape Descriptor’, Eurographics 2008 Workshop on 3D Object Retrieval, April 2008, pp. 9-16. (PDF)
We present a novel 3D object retrieval method that relies upon a hybrid descriptor which is composed of 2D features based on depth buffers and 3D features based on spherical harmonics. To compensate for rotation, two alignment methods, namely CPCA and NPCA, are used while compactness is supported via scalar feature quantization to a set of values that is further compressed using Huffman coding. The superior performance of the proposed retrieval methodology is demonstrated through an extensive comparison against state-of-the-art methods on standard datasets.
Papadakis P., I. Pratikakis, T. Trafalis, T. Theoharis and S. Perantonis, "Relevance feedback in content-based 3D Object Retrieval: A comparative study", Computer-Aided Design and Applications Journal, 5(5), 2008, pp. 753-763 (PDF)
Papadakis P., I. Pratikakis,
We present a 3D shape retrieval methodology based on the theory of
spherical harmonics. Using properties of spherical harmonics, scaling
and axial flipping invariance is achieved. Rotation normalization is
performed by employing the continuous principal component analysis
along with a novel approach which applies PCA on the face normals of
the model. The 3D model is decomposed into a set of spherical functions
which represents not only the intersections of the corresponding
surface with rays emanating from the origin but also points in the
direction of each ray which are closer to the origin than the furthest
intersection point. The superior performance of the proposed
methodology is demonstrated through a comparison against
state-of-the-art approaches on standard databases.
G. Passalis, T. Theoharis, I.A. Kakadiaris, PTK: A Novel Depth Buffer-Based Shape Descriptor for Three-Dimensional Object Retrieval, The Visual Computer, January 2007, pp. 5-14.
Vajramushti N., I. A. Kakadiaris, T. Theoharis, G. Papaioannou, ‘Efficient 3D Object Retrieval Using Depth Images’, Proceedings of the 6th ACM SIGMM international Workshop on Multimedia information retrieval 2004, New York, NY, USA October 15 - 16, 2004, pp. 189-196.
In this paper, we present a new three-dimensional object retrieval method. This method employs depth buffers for representing and comparing the objects. Specifically, multiple depth buffers per object (computed from different points of view) are compared for surface and volume similarity. Our method is easily extensible for hierarchical comparisons at multiple resolutions and is highly parallelizable. We have employed this method for both inter-class and intra-class retrieval tasks on a gallery of over 3,000 three-dimensional objects of vehicles with very encouraging results. The accuracy of the method depends on the number of depth buffers and the depth buffer resolution.
Toderici G., S.M. O'Malley, J. Passalis, T. Theoharis, I.A. Kakadiaris, "Ethnicity- and Gender-based Subject Retrieval Using 3D Face-Recognition Techniques", International Journal of Computer Vision (IJCV), 89(2/3), September 2010, pp. 382-391. (PDF)
While the retrieval of datasets from human subjects based on demographic characteristics such as gender or race is an ability with wide-ranging application, it remains poorly studied. In contrast, a large body of work exists in the field of biometrics which has a different goal: the recognition of human subjects. Due to this disparity of interest, existing methods for retrieval based on demographic attributes tend to lag behind the more well-studied algorithms designed purely for face matching. The question this raises is whether a face recognition system could be leveraged to solve these other problems and, if so, how effective it could be. In the current work, we explore the limits of such a system for gender and ethnicity identification given (1) a ground truth of demographically-labeled, textureless 3-D models of human faces and (2) a state-of-the-art face-recognition algorithm. Once trained, our system is capable of classifying the gender and ethnicity of any such model of interest. Experiments are conducted on 4007 facial meshes from the benchmark Face Recognition Grand Challenge v2 dataset.
Perakis P., G. Passalis, T. Theoharis, G. Toderici, I.A. Kakadiaris, "Partial Matching of Interposed 3D Facial Data for Face Recognition", Best Reviewed Paper, IEEE Third International Conference on Biometrics Theory, Applications and Systems (BTAS 2009), Washington DC, September 2009. (PDF)
Three-dimensional face recognition has lately received much attention due to its robustness in the presence of lighting and pose variations. However, certain pose variations often result in missing facial data. This is common in realistic scenarios, such as uncontrolled environments and uncooperative subjects. Most previous 3D face recognition methods do not handle extensive missing data, as they rely on frontal scans. Currently, there is no method to perform recognition across scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).
Perakis P., T. Theoharis, G. Passalis and I.A. Kakadiaris, ‘Automatic 3D Facial Region Retrieval from Multi-pose Facial Datasets’, Eurographics/SIGGRAPH 2009 Workshop on 3D Object Retrieval, March 2009, pp. 37-44. (PDF)
The availability of 3D facial datasets is rapidly growing, mainly as a result of medical and biometric applications. These applications often require the retrieval of specific facial areas (such as the nasal region). The most crucial step in facial region retrieval is the detection of key 3D facial landmarks (e.g., the nose tip). A key advantage of 3D facial data over 2D facial data is their pose invariance. Any landmark detection method must therefore also be pose invariant. In this paper, we present the first 3D facial landmark detection method that works in datasets with pose rotations of up to 80° around the y-axis. It is tested on the largest publicly available 3D facial datasets, for which we have created a ground truth by manually annotating the 3D landmarks. Landmarks automatically detected by our method are then used to robustly retrieve facial regions from 3D facial datasets.
Kakadiaris I.A., H. Abdelmunim, W. Yang, and T. Theoharis, ‘Profile Based Face Recognition’, FG2008, 8th IEEE Int. Conf. on Automatic Face and Gesture Recognition, Amsterdam, September 2008. (PDF)
In this paper, we introduce a new system for profile based face recognition. The specific scenario involves a driver entering a gated area and using his/her side-view image (s/he sits inside the vehicle) as identification. The system has two modes: enrollment and identification. In the enrollment mode, 3D face models of subjects are acquired and the profiles extracted under different poses are stored to form a gallery database. In the identification mode, 2D images are acquired, and the corresponding planar profiles are extracted and used as probes. Then, probes are matched to the gallery profiles to determine identity. The matching is implemented using implicit shape registration via the vector distance functions. In our experiments, the implicit registration recognition exhibited higher accuracy than the iterative closest point methodology due to the use of more general transformations. The performance of our system is illustrated using a variety of databases.
Theoharis T., G. Passalis, G. Toderici, I. A. Kakadiaris, “Unified 3D Face and Ear Recognition using Wavelets on Geometry Images”, Pattern Recognition, Special Issue: Multimodal Biometrics, 41, pp. 796-804, 2008. (PDF)
As the accuracy of biometrics improves, it is getting
increasingly hard to push the limits using a single modality. In this
paper, a unified approach that fuses three-dimensional facial and ear
data is presented. An annotated deformable model is fitted to the data
and a geometry image is extracted. Wavelet coefficients are computed
from the geometry image and used as a biometric signature. The method
is evaluated using the largest publicly available database and achieves
99.7% rank-one recognition rate. The state-of-the-art accuracy of the
multimodal fusion is attributed to the low correlation between the
individual differentiability of the two modalities.
Toderici G., G. Passalis, T. Theoharis, and
In this paper, we present a novel method for human face modeling and
its application to face relighting and recognition. An annotated face
model is fitted onto the raw 3D data using a subdivision-based
deformable model framework. The fitted face model is subsequently
converted to a geometry image representation. This results in regularly
sampled, registered and annotated geometry data. The albedo of the skin
is retrieved by using an analytical skin reflectance model that removes
the lighting (shadows, diffuse and specular) from the texture.
Additional provisions are made such that if the input contains
over-saturated specular highlights, an inpainting method with texture
synthesis is used as a post-processing step in order to estimate the
texture. The method is fully automatic and uses as input only the 3D
geometry and texture data of the face, as acquired by commercial 3D
scanners. No measurement or calibration of the lighting environment is
required. The method’s fully automatic nature and its minimum input
requirements make it applicable to both computer vision applications
(e.g., face recognition) and computer graphics applications (i.e.,
relighting, face synthesis and facial expressions transfer). Moreover,
it allows the utilization of existing 3D facial databases. We present
very encouraging results on a challenging
dataset.
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
G. Passalis, I. A. Kakadiaris, T. Theoharis, Intra-class retrieval of non-rigid 3D objects: Application to Face Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(2), February 2007, pp. 218-229. (PDF)
As the size of the available collections of 3D objects grows, database transactions become essential for their management, with the key operation being retrieval (query). Large collections are also pre-categorized into classes, so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intra-class retrieval tasks. We advocate that such intra-class problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a Parameterized Annotated Model of the shape of the objects in a class incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for non-rigid object classes. We apply our method to the face recognition domain, one of the most challenging intra-class retrieval tasks. We utilized the Face Recognition Grand Challenge database, yielding an average verification rate of 95.2% at 0.0001 false accept rate.
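The final step, comparing wavelet-domain representations of geometry images, can be illustrated with the simplest possible transform. The sketch below keeps only the coarse Haar low-pass coefficients of a 2D image as a compact signature and compares signatures by L1 distance; this is an assumption made for illustration, as the paper's actual wavelet decomposition and matching are more elaborate.

```python
import numpy as np

def haar_lowpass_signature(geom_img, levels=3):
    """Repeated 2x2 averaging = the low-pass branch of a 2D Haar decomposition.
    Image sides must be divisible by 2**levels."""
    a = np.asarray(geom_img, dtype=float)
    for _ in range(levels):
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    return a.ravel()

def match_score(sig_a, sig_b):
    """Smaller is more similar: L1 distance between the compact signatures."""
    return np.abs(sig_a - sig_b).sum()
```

Discarding the detail sub-bands is what makes the metadata compact; a retrieval system would precompute a signature per gallery object and rank probes by `match_score`.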
Kakadiaris I.A., G. Passalis, G. Toderici, N. Karampatziakis, N. Murtuza, Y. Lu and T. Theoharis, ‘Expression-invariant multispectral face recognition; you can smile now!’, SPIE Defense and Security Symposium, Orlando, Florida, 17-21 April, 2006.
Face recognition performance has always been affected by the different
facial expressions a subject may display. In this paper, we present an
extension to the UR3D face recognition algorithm, which enables us to
decrease the discrepancy in its performance for datasets from subjects
with and without a neutral facial expression, from 15% to 3%.
Kakadiaris I.A., G. Passalis, G. Toderici, N. Murtuza and T. Theoharis, ‘3D Face Recognition’, British Machine Vision Conference (BMVC), Edinburgh, September 2006. (PDF)
In this paper, we present a new 3D face recognition approach. Full automation is provided through the use of advanced multi-stage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact wavelet metadata. We present results on the largest known, and now publicly-available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, our approach has achieved the highest accuracy on this dataset.
Kakadiaris I.A., G. Passalis, G. Toderici, N. Murtuza, and T. Theoharis. "Quo Vadis, 3D Face and Ear Recognition?" Multi-Sensory Multi-Modal Face Biometrics for Personal Identification, Eds. R.I. Hammoud, B. Abidi and M. Abidi, Springer-Verlag, 2006 (book chapter).
In this chapter, we present the computational tools
necessary
for capture-device agnostic 3D face and ear recognition, as well as a
hardware prototype for each of these biometrics. The 3D face
recognition algorithm that we present is fully automatic, while the
current 3D ear recognition algorithm requires manual initialization.
Automation is provided by employing a deformable model
framework, while capture-device invariance is achieved by using
suitable preprocessing algorithms which can be tuned to any specific 3D
capture device. We achieve scalability of the system by converting the
3D fitted models of the face and ear into compact wavelet metadata.We
present face recognition results on the largest known,
publicly-available 3D facial database, which is used for the Face
Recognition Grand Challenge. The 3D ear recognition results are
reported on our own database and on a subset of the University of Notre
Dame database. To the best of our knowledge, our 3D face recognition
system achieves highest performance reported on the FRGC v2 database
for the 3D modality.
Kakadiaris I.A., G. Passalis, T. Theoharis, G. Toderici,
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
G. Passalis, I. A. Kakadiaris, T. Theoharis, G. Toderici and N. Murtuza, ‘Evaluation of 3D Face Recognition in the presence of facial expressions: an Annotated Deformable Model approach’, IEEE Workshop on Face Recognition Grand Challenge Experiments, San Diego, USA, June 2005.
From a user's perspective, face recognition is one of the most desirable biometrics, due to its non-intrusive nature; however, variables such as face expression tend to severely affect recognition rates. We have applied to this problem our previous work on elastically adaptive deformable models to obtain parametric representations of the geometry of selected localized face areas using an annotated face model. We then use wavelet analysis to extract a compact biometric signature, thus allowing us to perform rapid comparisons on either a global or a per-area basis. To evaluate the performance of our algorithm, we have conducted experiments using data from the Face Recognition Grand Challenge data corpus, the largest and most established data corpus for face recognition currently available. Our results indicate that our algorithm exhibits high levels of accuracy and robustness, and is not gender biased. In addition, it is minimally affected by facial expressions.
Stavrou P., G. Nikolakopoulos, N. Fanakis, A. Tzes, T. Theoharis, "An Application of the Inpainting Algorithm for Recovery Packet Losses from Transmitting Sequential Quad Tree Compressed Images Over Wireless Sensor Networks", 2nd IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2009), Istanbul, September 2009. (PDF)
In this article an application of the inpainting algorithm for recovering packet losses from transmitting sequential quad tree compressed images over wireless sensor networks is presented. The aim of the proposed scheme is to reconstruct the missing information at the receiver side, based on the inpainting algorithm, instead of increasing the overhead of the network by requesting from the transceiver to re-transmit the lost data packets. The proposed architecture initially compresses the images sequentially using the quad tree decomposition algorithm. In the sequel, at the receiver side, the image is reconstructed as a lossy image, based on the available successfully received data packets, while afterwards the proposed scheme applies the image inpainting algorithm, in order to restore any missing partitions of the image. The inpainting algorithm is performed based on information derived from the received image itself. Experimental results are presented.
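The quad-tree decomposition step the scheme relies on can be sketched compactly. This is an illustrative toy, not the paper's codec: the split criterion (here a simple value-range threshold) and the leaf representation are assumptions for the sake of the example.

```python
# Sketch of quad-tree image decomposition (assumed uniform-block criterion;
# the paper's exact compression parameters are not given in the abstract).

def quadtree(img, x, y, size, threshold=0):
    """Recursively split a square block until its value range <= threshold.
    Returns a list of (x, y, size, mean_value) leaf blocks."""
    block = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size == 1 or max(block) - min(block) <= threshold:
        return [(x, y, size, sum(block) / len(block))]
    half = size // 2
    leaves = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        leaves += quadtree(img, x + dx, y + dy, half, threshold)
    return leaves
```

A uniform image collapses to a single leaf, while detail concentrates leaves where the image varies; lost leaves at the receiver correspond to the missing partitions that inpainting then restores.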
Manousopoulos P., V. Drakopoulos, T. Theoharis, “Parameter Identification of 1D Recurrent Fractal Interpolation Functions with Applications to Imaging and Signal Processing”, Journal of Mathematical Imaging and Vision, 40(2), 2011, pp. 162-170. (PDF)
Recurrent fractal interpolation functions are very useful in modelling irregular (non-smooth) data. Two methods that use bounding volumes and one that uses the concept of box-counting dimension are introduced for the identification of the vertical scaling factors of such functions. The first two minimize the area of the symmetric difference between the bounding volumes of the data points and their transformed images, while the latter aims at achieving the same box-counting dimension between the original and the reconstructed data. Comparative results with existing methods in imaging applications are given, indicating that the proposed ones are competitive alternatives for both low and high compression ratios.
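The box-counting dimension the third method matches can be estimated with a short routine. This is a generic sketch, not the paper's identification algorithm; the grid sizes and the least-squares fit are illustrative choices.

```python
import math

# Sketch of a box-counting dimension estimate for sampled 2D data.
# The paper pairs such an estimate with a search over vertical scaling
# factors; here we only show the dimension measurement itself.

def box_count_dimension(points, sizes=(1 / 4, 1 / 8, 1 / 16, 1 / 32)):
    """Fit log N(eps) against log(1/eps) by least squares, where N(eps) is
    the number of boxes of side eps containing at least one point."""
    xs, ys = [], []
    for eps in sizes:
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        xs.append(math.log(1 / eps))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den
```

For a densely sampled straight line the estimate is close to 1, as expected; irregular data yields values between 1 and 2.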
Manousopoulos P., V. Drakopoulos and T. Theoharis, "Curve fitting by fractal interpolation", Transactions on Computational Science, 1, (Springer-Verlag journal within LNCS), pp. 85-103, 2008 (PDF)
Fractal interpolation provides an efficient way to describe data that have an irregular or self-similar structure. Fractal interpolation literature focuses mainly on functions, i.e. on data points linearly ordered with respect to their abscissa. In practice, however, it is often useful to model curves as well as functions using fractal interpolation techniques. After reviewing existing methods for curve fitting using fractal interpolation, we introduce a new method that provides a more economical representation of curves than the existing ones. Comparative results show that the proposed method provides smaller errors or better compression ratios.
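The affine fractal interpolation construction underlying these papers can be sketched directly from its standard definition. The following is a textbook-style sketch, not the paper's new method: given interpolation points and vertical scaling factors d_i, it builds the affine IFS maps and iterates them to approximate the attractor (the graph of the FIF).

```python
# Sketch of an affine fractal interpolation function (FIF): the attractor of
# the IFS {w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i)} interpolates the
# given data points. d_i are the vertical scaling factors, |d_i| < 1.

def fif_points(data, d, iterations=5):
    """data: [(x0,y0),...,(xN,yN)]; d: one scaling factor per interval.
    Returns attractor points obtained by iterating the affine maps."""
    (x0, y0), (xN, yN) = data[0], data[-1]
    maps = []
    for i in range(1, len(data)):
        (xa, ya), (xb, yb) = data[i - 1], data[i]
        di = d[i - 1]
        a = (xb - xa) / (xN - x0)                      # L_i(x0)=x_{i-1}, L_i(xN)=x_i
        e = (xN * xa - x0 * xb) / (xN - x0)
        c = (yb - ya - di * (yN - y0)) / (xN - x0)     # F_i(x0,y0)=y_{i-1}
        f = (xN * ya - x0 * yb - di * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, di, f))
    pts = list(data)
    for _ in range(iterations):
        pts = [(a * x + e, c * x + dd * y + f)
               for (a, e, c, dd, f) in maps for (x, y) in pts]
    return pts
```

By construction the attractor passes through every data point; the choice of the d_i, which the papers above identify automatically, controls the roughness of the curve between them.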
Manousopoulos P., V. Drakopoulos, T. Theoharis and P. Stavrou, ‘Effective representation of 2D and 3D data using fractal interpolation’, New Advances in Shape Analysis and Geometric Modeling (NASAGEM Workshop of the Cyberworlds), Hannover, Germany, October, 2007.
Methods for representing curves in R2 and R3 using fractal interpolation techniques are presented. We show that such representations are both effective and convenient for irregular or complicated data. Experiments on various datasets, including geographical and medical data, verify the practical usefulness of these methods.
Manousopoulos P., V. Drakopoulos, and T. Theoharis, ‘Fractal Active Shape Models’, 12th International Conference on Computer Analysis of Images and Patterns (CAIP 2007), Vienna, August 2007. (PDF)
Active Shape Models often require a considerable number of training samples and landmark points on each sample, in order to be efficient in practice. We introduce the Fractal Active Shape Models, an extension of Active Shape Models using fractal interpolation, in order to surmount these limitations. They require a considerably smaller number of landmark points to be determined and a smaller number of variables for describing a shape, especially for irregular ones. Moreover, they are shown to be efficient when few training samples are available.
Drakopoulos V., N. Mimikou & T. Theoharis, “An Overview of Parallel Visualization Methods for Mandelbrot and Julia Sets”, Computers & Graphics, 27, 2003, pp. 635-646.
We present a comparative study of simple parallelisation schemes for the most widely used methods for the graphical representation of Mandelbrot and Julia sets. The compared methods render the actual attractor or its complement.
Drakopoulos V., Are there any Julia sets for the Laguerre iteration function?, Computers & Mathematics with Applications, 46 (2003), pp. 1201-1210.
Drakopoulos V., Schröder iteration functions associated with a one-parameter family of biquadratic polynomials, Chaos, Solitons & Fractals, 13 (2002), pp. 233-243.
Dalla L. and Drakopoulos V., On the parameter identification problem in the plane and the Polar Fractal Interpolation Functions, Journal of Approximation Theory, 101 (1999), 290-303.
Fractal interpolation functions provide a new means for fitting experimental data and their graphs can be used to approximate natural scenes. We first determine the conditions that a vertical scaling factor must obey to model effectively an arbitrary function. We then introduce polar fractal interpolation functions as one fractal interpolation method of a non-affine character. Thus, this method may be suitable for a wider range of applications than that of the affine case. The interpolation takes place in polar coordinates and then, with an inverse non-affine transformation, a simple closed curve arises as an attractor which interpolates the data in the usual plane coordinates. Finally, we prove that this attractor has the same Hausdorff dimension as the polar one.
Drakopoulos V., How is the dynamics of König iteration functions affected by their additional fixed points?, Fractals, 7 (1999), 327-334.
König iteration functions are a generalization of the Newton-Raphson method to determine roots of equations. These higher-degree rational functions possess additional fixed points, which are generally different from the desired roots. We first prove two new results: firstly, about these extraneous fixed points and, secondly, about the Julia sets of the König functions associated with the one-parameter family of quadratic polynomials. Then, after finding all the critical points of the König functions as applied to a one-parameter family of cubic polynomials, we examine the orbits of the ones available for convergence to an attracting periodic cycle, should such a cycle exist.
Drakopoulos V., Argyropoulos N. and Böhm A., Generalized computation of Schröder iteration functions to motivate families of Julia and Mandelbrot-like sets, SIAM Journal on Numerical Analysis, 36 (1999), 417-435.
Schröder iteration functions, a generalization of the Newton-Raphson method to determine roots of equations, are generally rational functions which possess some critical points, free to converge to attracting cycles. These free critical points, however, satisfy some higher-degree polynomial equations. We present a new algorithmic construction to compute in general all of the Schröder functions’ terms, as well as to maximize the computational efficiency of these functions associated with a one-parameter family of cubic polynomials. Finally, we examine the Julia sets of the Schröder functions constructed to converge to the nth roots of unity, these roots' basins of attraction and the orbits of all free critical points of these functions for order higher than four, as applied to the one-parameter family of cubic polynomials mentioned above.
Drakopoulos V. and Dalla L., Space-filling curves generated by fractal interpolation functions, in Iliev O. P., Kaschiev M. S., Margenov S. D., Sendov Bl. H. and Vassilevski P. S. (eds), Recent advances in numerical methods and applications II, World Scientific, Singapore, 1999, 784-792.
Interpolation formulas are the starting points in the derivations of many methods in several areas of Numerical Analysis. The goal is always the same: to represent the data by a classical geometrical entity. Fractal interpolation functions provide a new means for fitting experimental data and their graphs can be used to approximate natural scenes. We show how one can construct space-filling curves using hidden variable linear fractal interpolation functions. These curves result from the projection of the attractor of an iterated function system.
Drakopoulos V. and Georgiou S., Visualization on the Riemann sphere of Schröder iteration functions and their efficient computation, in Mastorakis N. E. (ed), Modern applied mathematics techniques in Circuits, Systems and Control, World Scientific Engineering Society, New York and Athens, 1999, 131-137.
The theory of Julia and Fatou holds for the complex plane as well as for the Riemann sphere, with minor modifications. The latter space seems a more natural setting in which to examine rational functions. Schröder iteration functions, a generalization of the Newton-Raphson method to determine roots of equations, are generally rational functions which, until now, have been examined only in the complex plane. On the Riemann sphere we examine the Julia sets of the Schröder functions constructed to converge to the nth roots of unity and the orbits of all free critical points of these functions as applied to a one-parameter family of cubic polynomials. Finally, we present a new algorithmic construction to maximize the computational efficiency of the Schröder functions.
Drakopoulos V., Tziovaras A., Böhm A. and Dalla L., Fractal interpolation techniques for the generation of space-filling curves, in Lipitakis E. A. (ed), Hellenic European Research on Computer Mathematics and its Applications, LEA, Athens, 1999, 843-850.
Interpolation formulas are the starting points in the derivations of many methods in several areas of Numerical Analysis. The goal is always the same: to represent the data by a classical geometrical entity. Fractal interpolation functions provide a new means for fitting experimental data and their graphs can be used to approximate natural scenes. We show how the theory of linear fractal interpolation functions together with the Deterministic Algorithm can be used to construct space-filling curves.
Drakopoulos V. and Böhm A., Basins of attraction and Julia sets of Schröder iteration functions, in Bountis T. and Pnevmatikos Sp. (eds), Order and Chaos in Nonlinear Dynamical Systems, Pnevmatikos, Athens, 1998, 157-163.
Drakopoulos V., On the additional fixed points of Schröder iteration functions associated with a one-parameter family of cubic polynomials, Computers & Graphics, 22 (1998), 629-634.
Argyropoulos N., Böhm A. and Drakopoulos V., Julia and Mandelbrot-like sets for higher order König iteration functions, in Novak M. M. and Dewey T. G. (eds), Fractal frontiers, World
Abdallah A.E. & T. Theoharis, “A Functional View of Parallel Computer Graphics”, ACS/IEEE Int. Conf. On Computer Systems and Applications,
A formal functional specification of the Graphics Output Pipeline is given. There are three advantages to this: first, it provides a natural abstraction of the pipeline; second, many computer graphics optimization techniques can be directly obtained from the functional specification by applying general and well understood transformational programming algebraic laws on functional expressions; third, a number of highly parallel implementations suited to various parallel architectures can be derived from the initial specification by a systematic application of general transformation strategies for parallelizing functional programs.
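The style of specification described above can be sketched as a composition of pure stage functions. This is an illustrative sketch, not the paper's formal specification: the stages (a translation, a viewport clip, a raster projection) and all names are hypothetical stand-ins for real pipeline stages.

```python
from functools import reduce

# Sketch: the graphics output pipeline as a composition of pure functions,
# the form of specification from which parallel implementations are derived.

def compose(*stages):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), stages)

# Hypothetical stages operating on lists of 2D vertices.
def transform(verts):   # model-view transform (here: a simple translation)
    return [(x + 1.0, y + 1.0) for x, y in verts]

def clip(verts):        # keep vertices inside the unit viewport
    return [(x, y) for x, y in verts if 0 <= x <= 1 and 0 <= y <= 1]

def project(verts):     # map viewport coordinates to a 10x10 raster grid
    return [(int(x * 9), int(y * 9)) for x, y in verts]

pipeline = compose(project, clip, transform)
```

Because each stage is a pure function, algebraic laws (e.g. map fusion, composition reassociation) can be applied to the expression `project . clip . transform` to obtain pipelined or data-parallel variants, which is the transformation strategy the paper formalizes.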
Abdallah A.E., G. Simiakakis & T. Theoharis, "Formal Development of a Reconfigurable Tool for Parallel DNA Matching", in special session Formal Methods for Engineering Special Purpose Parallel Systems within The IEEE International Conference on Electronic Circuits and Systems, Kaslik, Lebanon, 17-20 December 2000.
DNA matching is a computationally demanding task. The Human Genome Project is producing huge quantities of data, which have to be analyzed. A formal description of the task of searching a DNA sequence is given and an efficient parallel algorithm is derived using formal methods. The algorithm is implemented on an FPGA using Handel-C, a language that enables the compilation of high-level algorithms directly into gate-level synchronous hardware, thus reducing the development time. The designed algorithm makes no assumptions about DNA transformations, and is therefore a very powerful tool. It can be used in conjunction with an expert system to automatically detect patterns of interest in the DNA.
Theoharis T. & A.E. Abdallah, "Formal Derivation of two Parallel Rendering Algorithms", in Proc. Parallel and Distributed Processing Techniques and Applications (PDPTA'99), CSREA, Las Vegas, 1999, pp. 1444-1450. (PDF)
Two parallel rendering algorithms are derived from a high-level specification. The initial specification is formulated as a functional program. A calculational approach is used to derive, from the original specification, two parallel algorithms expressed as networks of communicating processes in Hoare's CSP. Both algorithms exploit pipelined parallelism in order to achieve efficiency. The first algorithm is massively parallel but the second uses a fixed number of processing elements. The derivation is done in two stages. Firstly, a calculus of functional decomposition is used in order to transform the specification into an instance of a generic parallel functional form. Secondly, the generic functional form is refined into a collection of communicating processes in CSP using a formal refinement framework.
Abdallah A.E. & T. Theoharis, "Derivation of Efficient Parallel Algorithms on a Ring of Processors", in
A binary operator which takes two lists as arguments is called multiscan if every element of the first list must be considered in conjunction with every element of the second list in order to produce the result. Several problems, such as the relational database operators join, intersection and difference, can be expressed as specific instances of the multiscan operator. In this paper we consider a generic functional definition of multiscan and show how it can be implemented as a network of communicating sequential processes (CSP) with a ring configuration. We examine issues affecting the performance of a parallel implementation and identify two properties which, if possessed by a multiscan operator, allow the derivation of an efficient scalable parallel implementation on a ring of processors. A practical illustration from the field of relational databases is given.
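A generic functional definition of multiscan in the sense described above can be sketched as follows. This is an illustrative reading of the abstract, not the paper's CSP derivation; the parameter names and the two relational instances are the author's examples re-expressed in plain code.

```python
# Sketch of the generic multiscan operator: every element of the first list
# is considered against every element of the second, then each row of
# partial results is folded into one value.

def multiscan(combine, fold, xs, ys):
    """For each x in xs, combine it with every y in ys, then fold the row."""
    return [fold(combine(x, y) for y in ys) for x in xs]

# Relational intersection as a multiscan instance:
def intersection(r, s):
    keep = multiscan(lambda x, y: x == y, any, r, s)
    return [x for x, k in zip(r, keep) if k]

# Relational difference as another instance:
def difference(r, s):
    keep = multiscan(lambda x, y: x == y, any, r, s)
    return [x for x, k in zip(r, keep) if not k]
```

The all-pairs structure is what makes a ring natural: the second list can circulate around the processors while each holds a slice of the first.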
Abdallah A.E. & T. Theoharis, "Synthesis of Massively Pipelined Algorithms from Recursive Functional Programs", in Proc. 8th IASTED Int. Conf. (Parallel & Distributed Computing & Systems),
The purpose of this paper is to show how a class of recursively defined functional algorithms can be efficiently implemented as massively parallel networks of Communicating Sequential Processes (CSP). The method aims at achieving efficiency by exploiting pipeline parallelism which is inherent in functional programs. The backbone of the method is a collection of powerful transformation rules for unrolling recursion. Each of these rules transforms a generic recursive functional definition into a composition of a fixed number of simpler functions. Parallelism is explicitly realised in CSP by implementing the composition of functions as piping of appropriate processes.
Kalousis A. & T. Theoharis, "NOEMON: Design, Implementation and Performance of an Intelligent Assistant for Classifier Selection", Intelligent Data Analysis (Elsevier), 3(5), 1999, pp. 319-337.
The selection of an appropriate classification model and algorithm is crucial for effective knowledge discovery on a dataset. For large databases, common in data mining, such a selection is necessary, because the cost of invoking all alternative classifiers is prohibitive. This selection task is impeded by two factors. First, there are many performance criteria and the behaviour of a classifier varies considerably with them. Second, a classifier's performance is strongly affected by the characteristics of the dataset. Classifier selection implies mastering a lot of background information on the dataset, the models and the algorithms in question. An intelligent assistant can reduce this effort by inducing helpful suggestions from background information. In this study, we present such an assistant, NOEMON. For each registered classifier, NOEMON measures its performance for a collection of datasets. Rules are induced from those measurements and accommodated in a knowledge base. The suggestion on the most appropriate classifier(s) for a dataset is then based on those rules. Results on the performance of an initial prototype are also given.
Kalousis A., G. Zarkadakis & T. Theoharis, "Noemon: Adding Intelligence to the Knowledge Discovery Process", in Proc. Expert Systems 97,
In this paper an architecture is proposed that supports the selection of task, model and algorithm in the knowledge discovery process by utilizing artificial intelligence techniques. The proposed system acts as an assistant to the analyst by suggesting possible selections. It improves its data mining performance by observing the analysis sequences applied by the analyst to new data sets. The system associates data characteristics with specific selections that lead to positive or negative results and uses this information to guide later analysis.
Theoharis T. & Y. Cotronis, "FlexParDB: An RDBMS Employing Intra-Query and Operator Parallelism", in Proc. Applications of High Performance Computing in Engineering V,
FlexParDB, a relational algebra query execution system, is presented, which combines intra-query and operator parallelism. Intra-query parallelism is expressed in the wavesets, which are a partition of the set of query-tree operators; valid wavesets are consistent with the flow of relational data from the leaves to the root of the query-tree. The wavesets represent the query execution plan. A simple script language for the description of a query-tree and its wavesets has been developed. Wavesets are executed by parallel multiple executions of ParDB, a system supporting operator parallelism. FlexParDB has been implemented on a massively parallel Transputer architecture.
Theoharis T. & A. Abdallah, "Distributed Resource Performance Simulation", in Proc. 2nd LAAS Int. Conference on Computer Simulation, Lebanon, September 1997, pp. 129-134.
A performance simulator is presented for a shared-nothing parallel system in which a resource should be available at every processor node but actually exists in fewer instances, enforcing sharing between processors and leading to performance degradation. The method is also applicable if multiple resource classes exist. It is a useful prototyping tool and its application to a parallel RDBMS is discussed.
Theoharis T. & A. Paraskevopoulos, "PARDB: Design, Algorithms and Performance of a Transputer Based Parallel Relational DBMS", Transputer Communications, 3(4), 1996, pp. 233-260. (PDF)
The main design choices involved in parallel RDBMSs are discussed and the design choices made for the authors' system, PARDB, are justified. The parallel architecture, the process structure and the parallel algorithms used to implement each of the relational operators in PARDB are described. Performance measurements show that near-linear speedup is achieved in uniscan operators (such as select), whereas multiscan operators (such as join) perform slightly worse because of the communication involved in exchanging relation data between the processors.
Katsaloulis P., T. Theoharis, W. M. Zheng, B.L. Hao, A. Bountis, Y. Almirantis and A. Provata, 'Long range correlations of RNA polymerase II promoter sequences across organisms', Physica A, V. 366, 2006, pp. 308-322.
The statistical properties of the size distribution of DNA segments separating identical oligonucleotides are studied. For representative eukaryotes (Homo sapiens, Mus musculus, Saccharomyces cerevisiae, Oryza sativa, Arabidopsis thaliana) we have demonstrated the existence of long-range correlations for the distances separating oligonucleotides of sizes 4, 5 and 6, which carry a promoter signature. This observation is independent of the consensus sequence used by the organism, as in the case of O. sativa (which mainly uses the CG promoter box) and A. thaliana (which mainly uses the TATA promoter box). If we use two parameters to characterise the size distribution separating oligonucleotides, we observe that oligonucleotides containing promoter signatures cluster together, away from the others.
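The basic measurement behind this study, the distances separating consecutive occurrences of the same oligonucleotide, can be sketched in a few lines. This is an illustrative sketch of the measurement only, not the authors' analysis code or their correlation statistics.

```python
from collections import defaultdict

# Sketch: for each k-mer in a sequence, record the gaps between its
# consecutive occurrences. The distribution of these gaps is what the
# paper analyzes for long-range correlations.

def oligo_distances(sequence, k):
    """Map each k-mer to the list of gaps between its consecutive starts."""
    last_seen, gaps = {}, defaultdict(list)
    for i in range(len(sequence) - k + 1):
        oligo = sequence[i:i + k]
        if oligo in last_seen:
            gaps[oligo].append(i - last_seen[oligo])
        last_seen[oligo] = i
    return gaps
```

For a promoter-box k-mer such as TATA, the resulting gap distribution over a whole chromosome is the size distribution whose tail behaviour the paper characterizes.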
Katsaloulis P., E. Floros, A. Provata, Y. Cotronis and T. Theoharis, ‘Gridification of the SHMap Biocomputational Algorithm’, IEEE ITAB ’06, Ioannina, October 2006. (PDF)
In the current study we present a parallel statistical algorithm (SHMap), which distinguishes DNA regions which possibly carry protein coding information from non-coding regions. The code was parallelized using MPI and was deployed in a Grid testbed provided by the CrossGrid IST European Project. We tested the SHMap algorithm on mammalian chromosomes. The parallelization of the algorithm and the use of the Grid environment achieves significant speed-up compared to the sequential version of the algorithm, although the use of the Grid environment still presents some problems. Our algorithm is open source and freely distributed from our website.
Katsaloulis P., T. Theoharis & A. Provata, 'Statistical Algorithms for Long DNA Sequences: Oligonucleotide Distributions and Homogeneity Maps', Scientific Programming, 13(3), 2005, pp. 177-188 (PDF)
The statistical properties of oligonucleotide appearances within long DNA sequences often reveal useful characteristics of the corresponding DNA areas. Two algorithms to statistically analyze oligonucleotide appearances within long DNA sequences in genome banks are presented. The first algorithm determines statistical indices for arbitrary length oligonucleotides within arbitrary length DNA sequences. The critical exponent of the distance distribution between consecutive occurrences of the same oligonucleotide is calculated and its value is shown to characterize the functionality of the oligonucleotide. The second algorithm searches for areas with variable homogeneity, based on the density of oligonucleotides. The two algorithms have been applied to representative eukaryotes (the animal Mus musculus and the plant Arabidopsis thaliana) and interesting results were obtained, confirmed by biological observations. All programs are open source and publicly available on our web site.
Theoharis T. & J.J. Modi, "Implementation of Matrix Multiplication on the T-RACK", Parallel Computing, 14, 1990, pp. 229-233 (PDF)
It turns out to be advantageous to partition matrices into blocks for multiplication on the T-RACK, which is a MIMD system consisting of 64 floating-point transputers with partially re-configurable linkage.
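The block-partitioned scheme the paper maps onto the T-RACK's transputers can be sketched sequentially. This is an illustrative sketch of blocked matrix multiplication only; the block size and loop order are generic choices, not the paper's T-RACK mapping.

```python
# Sketch of block-partitioned matrix multiplication: C is accumulated one
# bs-by-bs block product at a time, the unit of work that would be assigned
# to a single processing element in a parallel mapping.

def block_matmul(A, B, bs):
    """Multiply square matrices A and B by accumulating bs-by-bs blocks."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, bs):
        for j0 in range(0, n, bs):
            for k0 in range(0, n, bs):
                # C-block (i0, j0) += A-block (i0, k0) * B-block (k0, j0)
                for i in range(i0, min(i0 + bs, n)):
                    for j in range(j0, min(j0 + bs, n)):
                        C[i][j] += sum(A[i][k] * B[k][j]
                                       for k in range(k0, min(k0 + bs, n)))
    return C
```

Partitioning into blocks keeps each processor's working set small and turns the inter-processor traffic into whole-block transfers, which is why the blocked form suits a link-based MIMD machine like the T-RACK.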
Two viewing models for a three-dimensional display are presented and a simple parallel processing scheme is outlined.