Computer Vision Systems, by Name

Various names have been used for computer vision systems. This lists some of the more famous systems. You may want to see the Source Code Listing or the Vendor Listing for implementations or companies that will provide a finished product.


For more information on the topics, contact information, etc., see the annotated Computer Vision Bibliography or the Complete Conference Listing for Computer Vision and Image Analysis.

Detailed Entries for Systems

Rygol, M.[Michael], Pollard, S.B.[Stephen B.], Brown, C.R.[Chris R.],
Multiprocessor 3D Vision System for Pick and Place,
IVC(9), No. 1, February 1991, pp. 33-38.
WWW Link.
And: MRSC91(75-80).
PDF File.
A multiprocessor 3D vision system for pick and place,
PDF File.
System: Tina.

Brown, C.R.[Chris R.], Dunford, C.M.[Chris M.],
Parallel Architecture for Fast 3-D Machine Vision,
PDF File. System: Tina.

Maurer, M., Behringer, R., Dickmanns, D., Hildebrandt, T., Thomanek, F., Schiehlen, J., Dickmanns, E.D.,
VaMoRs-P: An Advanced Platform for Visual Autonomous Road Vehicle Guidance,
SPIE(2352), 1994, pp. 239-248. System: VaMoRs.

Section, Multiple Entries: Carnegie Mellon NAVLAB, AMBLER, etc. Chapter Contents (Back)
Autonomous Vehicles. Vehicle Control. System: NAVLAB. Road Following. Path Planning.

Thorpe, C.E., (ed.),
Vision and Navigation, the Carnegie Mellon NAVLAB,
Norwell, MA: Kluwer, 1990, 384 pages. Indexed by: NAVLAB90. System: NAVLAB. The book description of the NAVLAB project. Many of the following reports are made redundant by it.

Bares, J., Hebert, M., Kanade, T., Krotkov, E., Mitchell, T., Simmons, R., and Whittaker, W.,
AMBLER: An Autonomous Rover for Planetary Exploration,
Computer(22), No. 6, June 1989, pp. 18-26. System: AMBLER. The description of the whole project, not much vision.

Shafer, S.A., and Whittaker, W.,
Development of an Integrated Mobile Robot System at Carnegie Mellon University: December 1989 Final Report,
CMU-RI-TR-90-12, January 1990.
June 1987 Annual Report: Development of an Integrated Mobile Robot System at Carnegie Mellon,
CMU-RI-TR-88-10, July 1987. System: Codger. System: NGS. The report on the NAVLAB project and its pieces.

Section, Multiple Entries: CMU Road Followers, ALVINN, YARF, MANIAC Chapter Contents (Back)
Road Following. Path Planning. YARF. ALVINN. System: ALVINN.

Pomerleau, D.A.,
Neural Network Perception for Mobile Robot Guidance,
Hingham: Kluwer Academic, 1993. ISBN 0-7923-9373-2.
WWW Link.
And: Ph.D. Thesis (CS), February 1992, CMU-CS-TR-92-115. System: ALVINN. The neural-network road follower (ALVINN). Has run on real highways with light traffic at 55 mph.

Jochem, T.M., Pomerleau, D.A., and Thorpe, C.E.,
MANIAC: A Next Generation Neurally Based Autonomous Road Follower,
And: A1 only: IAS93(xx-yy). System: MANIAC.

Pomerleau, D.A., Gowdy, J., and Thorpe, C.E.,
Combining Artificial Neural Networks and Symbolic Processing for Autonomous Robot Guidance,
DARPA92(961-967). Neural Networks. System: YARF. YARF plus neural net for road tracking. ALVINN.

Sukthankar, R.[Rahul], Pomerleau, D.A., and Thorpe, C.E.,
Panacea: An Active Sensor Controller for the ALVINN Autonomous Driving System,
CMU-RI-TR-93-09, April 1993. System: ALVINN. Adds steering of the camera to ALVINN, which improves performance where sharp turns are required.

Wu, J.X., Liu, N., Geyer, C., Rehg, J.M.,
C^4: A Real-Time Object Detection Framework,
IP(22), No. 10, 2013, pp. 4096-4107.
System, CENTRIST (CENsus TRansform hISTogram). Object detection; real-time without a GPU. Contour-based recognition.
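The census transform at the heart of CENTRIST compares each interior pixel with its 8 neighbours to form an 8-bit code; the descriptor is the 256-bin histogram of those codes over the image or patch. A minimal NumPy sketch (the comparison direction and any normalization here are illustrative assumptions, not necessarily the paper's exact conventions):

```python
import numpy as np

def centrist(img):
    """256-bin CENsus TRansform hISTogram of a grayscale image (sketch)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    ct = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # neighbour plane shifted by (dy, dx), aligned with the interior
            neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            ct |= (center >= neighbour).astype(np.uint8) << bit
            bit += 1
    hist, _ = np.histogram(ct, bins=256, range=(0, 256))
    return hist
```

Because the code depends only on the sign of intensity differences, the descriptor is invariant to monotonic illumination changes, which is part of why it runs in real time without a GPU.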

MVTec Software GmbH,
1998 Vendor, Image Analysis. Vendor, Object Recognition. System, Halcon.
WWW Link. Originally the Horus system (from TU Munich); also Halcon.
WWW Link. See also Technical University Munich.

Lawton, D.T., and McConnell, C.C.,
Image Understanding Environments,
PIEEE(76), No. 8, August 1988, pp. 1036-1050. Survey, Systems. Systems, Survey. General discussion of IU Environments in terms of components, representations, programming constructs, data bases, and interfaces with examples (mostly from ADS systems).

Konstantinides, K., Rasure, J.R.,
The Khoros Software Development Environment for Image and Signal Processing,
IP(3), No. 3, May 1994, pp. 243-252.
IEEE DOI System: Khoros. Khoros. General overview of the Khoros system, image processing and visual programming environment. An online tutorial is available:
WWW Link.

Williams, T.D.,
Image Understanding Tools,
ICPR90(II: 606-610).
IEEE DOI System: KBVision. The KBVision Environment.

Connolly, C.I., Kapur, D., Mundy, J.L., Weiss, R.,
Geometer: A System for Modeling and Algebraic Manipulation,
DARPA89(797-804). System: Geometer.

Krauß, T., d'Angelo, P., Schneider, M., Gstaiger, V.,
The Fully Automatic Optical Processing System CATENA at DLR,
DOI Link
System, CATENA.

Section, Multiple Entries: 19.1.1 SRI Environments -- Image Calc, CME RADIUS Chapter Contents (Back)
Environments. CME. RADIUS. System: CME.

Section, Multiple Entries: 19.1.2 The Image Understanding Environment Chapter Contents (Back)
Environments. IUE. System: IUE.

Hamey, L.G.C.[Leonard G.C.], Webb, J.A.[Jon A.], Wu, I.C.[I Chen],
An Architecture Independent Programming Language for Low-Level Vision,
CVGIP(48), No. 2, November 1989, pp. 246-264.
WWW Link. System: APPLY. The APPLY system, originally developed for the Warp.

Tamura, H.[Hideyuki], Sakane, S.[Shigeyuki], Tomita, F.[Fumiaki], Yokoya, N.[Naokazu], Kaneko, M.[Masahide], Sakaue, K.[Katsuhiko],
Design and Implementation of SPIDER: A Transportable Image Processing Software Package,
CVGIP(23), No. 3, September 1983, pp. 273-294.
WWW Link. System: Spider.
Earlier: A1, A3, A2, A4, A6, A5:
A Transportable Image Processing Software System: SPIDER,

Haralick, R.M.,
Gipsy: General Image Processing System,
TR, 1983. Intelligent Systems Laboratory, University of Washington. System: Gipsy. There are two different systems with this name. A coordinated system of 200+ programs written in Fortran.

Groningen Image Processing System, GIPSY,
WWW Link. System: Gipsy. Code, Image Processing. There are two different systems with this name.

Pope, A.R., Lowe, D.G.,
Vista: A Software Environment for Computer Vision Research,
IEEE DOI System: Vista. Code, Image Analysis.
HTML Version.

Haralick, R.M.[Robert M.], Currier, P.[Phil],
Image Discrimination Enhancement Combination System (IDECS),
CGIP(6), No. 4, August 1977, pp. 371-381.
WWW Link. System: IDECS. Describes their hardware system. Operates at video rates, TV size, data disk storage, input/output connected by a switch, with digital and analog processors.

Section, Multiple Entries: 19.2.12 Hardware -- Image Understanding Architecture, IUA Chapter Contents (Back)
Parallel Systems. IUA. System: IUA.

Weems, C.C., Levitan, S.P., Hanson, A.R., and Riseman, E.M.,
The Image Understanding Architecture,
IJCV(2), No. 3, January 1989, pp. 251-282.
Springer DOI
Earlier: DARPA87(483-496).
And: COINS-TR-87-76, August 1987. System: IUA. More on the design of the multi-level system built on top of CAAP: a 512x512 SIMD array, with a 64x64 array of 16-bit processors, and an 8x8 array of MIMD Lisp machines on top.

Weems, C.C., Rana, D., Hanson, A.R., Riseman, E.M., Shu, D.B., and Nash, J.G.,
An Overview of Architecture Research for Image Understanding at the University of Massachusetts,
ICPR90(II: 379-384).

Zoom It, Seadragon,
WWW Link. System, Seadragon. Code, Image Pyramids. The Seadragon system was acquired by Microsoft Live Labs and turned into Zoom.It. The goal is rapid exploration of large image databases. Library for web-based image pyramids.

Matsuyama, T., and Hwang, V.,
SIGMA: A Knowledge-Based Aerial Image Understanding System,
New York: Plenum, 1990, 296 pp. ISBN 0-036-43301-X. System: SIGMA. The book on SIGMA.

Hwang, V.S.S.[Vincent Shang-Shouq], Davis, L.S.[Larry S.], Matsuyama, T.[Takashi],
Hypothesis Integration in Image Understanding Systems,
CVGIP(36), No. 2/3, November/December 1986, pp. 321-371.
Elsevier DOI
Earlier: A1, A3 only:
SIGMA: A Framework for Image Understanding: Integration of Bottom-Up and Top-Down Analyses,
The Sigma Image Understanding System,
CVWS85(17-26). System: SIGMA. Application, Cartography. Find regions and structures, guided by a detailed model of what is there and how it appears.

Section, Multiple Entries: 22.1.6 GIS: Systems, Complete Systems, Implementation Chapter Contents (Back)
Systems. GIS. Image Database. Query Methods. For application of GIS: See also GIS: Using GIS for Specific Applications, Spatial Databases. For terrain display issues: See also Texture Mapping, Terrain Visualization, Terrain Rendering, DEM Rendering.

Section, Multiple Entries: GIS: Database Issues, Implementation Issues, Design Chapter Contents (Back)
Systems. GIS. For terrain display issues: See also Texture Mapping, Terrain Visualization, Terrain Rendering, DEM Rendering.

Section, Multiple Entries: GIS: Volunteered Geographic Information, Open Access, Crowd Sourcing, Crowdsource Chapter Contents (Back)
Crowdsourced. Volunteered Data. Open Data. Systems. GIS. VGI. Open access for data; software is elsewhere. OpenMap. Street Map specifically: See also GIS: Volunteered Geographic Information, OpenStreetMap, Open Street Map.

McGlone, J.C., and Shufelt, J.A.,
Incorporating Vanishing Point Geometry into a Building Extraction System,
Incorporating Vanishing-Point Geometry in Building Extraction Techniques,
SPIE(1944), 1993, pp. 273-284. System: BABE. Verification of predicted buildings.

Section, Multiple Entries: 22.7 CMU MAPS Image Database System Chapter Contents (Back)
Remote Sensing. MAPS/SPAM. Cartography. Application, Cartography. System: MAPS/SPAM.

Kuan, D.T.[Darwin T.], and Drazovich, R.J.[Robert J.],
Model Based Interpretation Of 3-D Range Data,
Model-Based Interpretation of Range Imagery,
AAAI-83(210-215). System: ACRONYM. Generalized cylinder models, laser range input, uses the ACRONYM approach, but applied to range data.

Euvision Technologies,
2014. Automatic image recognition on your phone. WWW Link.
Research Group, Europe. System, Impala. Vendor, Impala.

ITT Visual Information Solutions,
Image Processing and Analysis. WWW Link.
Vendor, Image Analysis. System: ENVI. Developed from early NASA work on Mariner.

Mulder, J.A., Mackworth, A.K., and Havens, W.S.,
Knowledge Structuring and Constraint Satisfaction: The Mapsee Approach,
PAMI(10), No. 6, November 1988, pp. 866-879.
IEEE DOI System: Mapsee. This paper discusses Mapsee-1, -2, and -3 and thus serves as the primary reference for them. The conclusion is that schema-based representations with hierarchical (arc) consistency are best for a structured approach to visual knowledge. This set of systems illustrates the power of a schema-based representation and a hierarchical constraint satisfaction algorithm. All three use a general segmentation of the image into regions and line segments. Constraints are given to each feature based directly on its appearance. Mapsee-1 was a basic implementation of constraint satisfaction (arc consistency) with no hierarchy in the representation and weak representations of constraints. Mapsee-2 added schemata as a means to improve the descriptive capabilities, with hierarchical descriptions of the objects; this leads to a hierarchical arc consistency algorithm. Mapsee-3 provided a uniform representation for objects and relations between them (as schemata) and a more powerful representation of alternatives in the arc consistency algorithm. See also Discrimination Vision. See also Consistency in a Network of Relations.
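The arc-consistency pruning at the core of this family of systems can be sketched compactly (the Mapsee-specific schemata and hierarchy are omitted; the variables, labels, and predicates in this sketch are hypothetical):

```python
from collections import deque

def ac3(domains, constraints):
    """Arc consistency (AC-3) pruning, in the spirit of Mapsee-1 (sketch).

    domains:     dict variable -> set of candidate labels
    constraints: dict (x, y)   -> predicate(label_x, label_y) -> bool
    Removes any label of x with no supporting label in y, repeating
    until no domain changes.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        ok = constraints[(x, y)]
        pruned = {lx for lx in domains[x]
                  if not any(ok(lx, ly) for ly in domains[y])}
        if pruned:
            domains[x] -= pruned
            # re-examine every arc that points at the shrunken variable
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains
```

In Mapsee the "variables" are image regions and chains, and the labels are scene interpretations such as road, river, or shore; Mapsee-2 and -3 run this style of pruning over hierarchical schemata rather than flat label sets.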

Reiter, R.[Raymond], and Mackworth, A.K.[Alan K.],
A Logical Framework for Depiction and Image Interpretation,
AI(41), No. 2, December 1989, pp. 125-156.
WWW Link.
The Logic of Depiction,
RBCV-TR-87-18, June 1987, Toronto. System: Mapsee. This proposes a theory to formalize domain knowledge and is illustrated by specifying some general examples. Intended to provide a framework to analyze Mapsee and understand constraint satisfaction techniques. See also Consistency in a Network of Relations.

Nayar, S.K.[Shree K.], Nene, S.A.[Sameer A.], Murase, H.[Hiroshi],
Subspace Methods for Robot Vision,
RA(12), No. 5, October 1996, pp. 750-758.

Earlier: A3, A1, A2:
General Learning Algorithm for Robot Vision,
ARPA94(I:753-763). System: SLAM (Software Library for Appearance Matching).

Barrow, H.G., and Tenenbaum, J.M.,
MSYS: A System for Reasoning about Scenes,
SRI AI Center TN 108, 1975.
And: SRI AI Memo 121, April 1976. Knowledge-Based Vision. System: MSYS. The MSYS report. Uses inexact reasoning on uncertain data to interpret regions extracted from an image. MSYS is an asynchronous relaxation process that applies the rules imposed by the model until the labels are consistent. Constraints such as surface height and orientation can be used. Relations between objects in the scene (hence regions in the image) can be used. An M* (modified A*) search is used. For application in IGS: See also Experiments in Interpretation Guided Segmentation.

Section, Multiple Entries: 13.6.2 ACRONYM and SUCCESSOR Papers - Stanford University and Others Chapter Contents (Back)
Knowledge-Based Vision. Recognition, Model Based. Model Based Recognition. Object Recognition. Matching, Models. ACRONYM. Model Based Recognition. System: ACRONYM. System: Successor.

Pichumani, R.[Ramani],
CVonline: Model-based vision,
HTML Version. System: Successor. A summary of the Successor system.

Section, Multiple Entries: 13.6.3 University of Massachusetts VISIONS System Chapter Contents (Back)
Knowledge-Based Vision. Recognition, Model Based. Model Based Recognition. Object Recognition. Matching, Models. VISIONS. Model Based Recognition. System: VISIONS. See also Complete Systems Derived from the Univ. Massachusetts Work.

Hanson, A.R., and Riseman, E.M.,
VISIONS: A computer System for Interpreting Scenes,
CVS78(303-333). Multiple Resolutions. System: VISIONS. The basic outline of their system. For the full set of papers and a more complete description: See also University of Massachusetts VISIONS System.

Wesley, L.P., and Hanson, A.R.,
The Use of an Evidential Based Model for Representing Knowledge and Reasoning about Images in the VISIONS System,
And: COINS TR 82-29, December 1982. System: VISIONS. Outlines some of the ideas behind Shafer and Dempster's approach to combining evidence. Basically, evidence is a pair [support, plausibility]: the minimum and maximum amounts by which the evidence confirms the proposition. See also Mathematical Theory of Evidence, A.
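Dempster's rule of combination, the evidence-combining mechanism referenced above, can be sketched directly; the hypotheses and mass values below are hypothetical examples, not data from the VISIONS work:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination (sketch).

    Mass functions are dicts mapping frozenset-of-hypotheses -> mass.
    Support(A) is the total mass of subsets of A; plausibility(A) is
    1 minus the support of A's complement, giving the [support,
    plausibility] pair described above.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

Mass on a non-singleton set (e.g. {road, river}) expresses ignorance between its members, which is what separates this representation from a single probability value.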

Mundy, J.L., and Joynson, R.,
Constraint-Based Modeling,
DARPA89(425-442). System: GEOMETER. Combining the GEOMETER system with reasoning for recognition.

Strat, T.M., and Fischler, M.A.,
Context-Based Vision: Recognizing Objects Using Information from Both 2-D and 3-D Imagery,
PAMI(13), No. 10, October 1991, pp. 1050-1065.
IEEE DOI System: Condor.
A Context-Based Recognition System for Natural Scenes and Complex Domains,
Earlier: A2, A1:
Recognizing Objects in a Natural Environment: A Contextual Vision System,
Context-Based Vision: Recognition of Natural Scenes,
Asilomar89(532-536). System: CVS. Recognition, Context Based. This discusses the current SRI high-level vision effort. Addresses: object recognition without accurate object delineation, use of context, use of geometry, and control of complexity. Uses context sets and cliques.

Strat, T.M.,
Natural Object Recognition,
New York: Springer, 1992, 165 pp. ISBN 0-387-97832-1.
And: STAN-CS-91-1376, Stanford, CA, December 1990. Ph.D. Thesis. System: Condor. Rule Based Analysis. The book from his thesis on general object recognition using contextual cues. A set of processes interact through shared data structures. Each process has an associated context set that, when satisfied, causes the process to run.

Strat, T.M.,
Using Context to Control Computer Vision Algorithms,
Employing Contextual Information in Computer Vision,
DARPA93(217-229). System: Condor. The use of context in understanding objects. Describes the Prolog-like language used to control algorithms in RCDE.

Shafer, S.A., and Kanade, T.,
Recursive Region Segmentation by Analysis of Histograms,
ICASSP82(1166-1171). Segmentation, Systems. Phoenix. System: Phoenix.
HTML Version. See also Phoenix Image Segmentation System: Description and Evaluation, The. After implementing a version of the Ohlander segmentation technique, Shafer proposed and implemented a variation that used the type of regions generated by the various possible thresholds to determine the optimal threshold. This method applied all reasonable thresholds, as determined by analyzing the histograms, and chose the set of regions that were the most compact and had the clearest borders. This is based on the observation that, often, several histograms have peaks that correspond to the same regions, but one may give a more precise split than another even when its peak is not as clear according to the given criteria.
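One split step of this histogram-based approach can be sketched as follows. This is a simplification that only picks the deepest valley between two peaks; Phoenix's actual scoring by region compactness and border clarity is not reproduced, and the bin count and smoothing here are arbitrary choices:

```python
import numpy as np

def split_threshold(values, bins=32):
    """One Ohlander/Phoenix-style split attempt (sketch).

    Histogram the feature values, find the deepest interior valley
    between two peaks, and return the corresponding threshold, or
    None when the histogram looks unimodal (no further split).
    """
    hist, edges = np.histogram(values, bins=bins)
    hist = np.convolve(hist, np.ones(3) / 3, mode='same')  # light smoothing
    best, best_depth = None, 0.0
    for i in range(1, bins - 1):
        left_peak = hist[:i].max()
        right_peak = hist[i + 1:].max()
        depth = min(left_peak, right_peak) - hist[i]
        if depth > best_depth:
            best, best_depth = i, depth
    return None if best is None else edges[best + 1]
```

The full recursive scheme applies this to each color/feature histogram of a region, splits at the chosen threshold, and recurses on the resulting regions until no histogram yields a usable valley.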

Laws, K.I.,
The Phoenix Image Segmentation System: Description and Evaluation,
SRI AICenter-TN 289, December 1982. Evaluation, Segmentation. System: Phoenix. Phoenix. Segmentation, Evaluation.

Section, Multiple Entries: 8.3.2 Complete Systems Derived from the Univ. Massachusetts Work Chapter Contents (Back)
Segmentation, Histogram. System: VISIONS. See also University of Massachusetts VISIONS System.

Hanson, A.R., and Riseman, E.M.,
Segmentation of Natural Scenes,
CVS78(xx-yy). System: VISIONS.

Matsuyama, T.[Takashi],
Expert Systems for Image Processing: Knowledge-Based Composition of Image Analysis Processes,
CVGIP(48), No. 1, October 1989, pp. 22-49.
WWW Link.
Earlier: ICPR88(I: 125-133).
Rule Based Systems. System: SIGMA. This builds on the general systems such as SIGMA and is directed toward segmentation.

Tenenbaum, J.M., and Barrow, H.G.,
Experiments in Interpretation Guided Segmentation,
AI(8), No. 3, June 1977, pp. 241-274.
WWW Link.
And: SRI AICenter-TN 123, March 1976.
IGS: A Paradigm for Integrating Image Segmentation and Interpretation,
And: ICPR76(504-513).
And: CMetImAly77(435-444). Segmentation, Knowledge. System: IGS. The key idea is that image elements can be reliably clustered into regions if semantic interpretations are used in addition to the raw image values. This builds on the interpretation ideas of MSYS ( See also MSYS: A System for Reasoning about Scenes. ). Unlike the work in Yakimovsky and Feldman, the relations between different types of regions are either possible or impossible. Initial interpretations are based on the image data, but extra interpretations at this point are not harmful. An iterative procedure is used to eliminate interpretations that are not valid given all the possible interpretations of the neighbors. When adjacent regions have the same interpretation they can be merged. This method requires a very specific model of the possible scene to provide any benefit.

Rahmani, R.[Rouhollah], Goldman, S.A.[Sally A.], Zhang, H.[Hui], Cholleti, S.R.[Sharath R.], Fritts, J.E.[Jason E.],
Localized Content-Based Image Retrieval,
PAMI(30), No. 11, November 2008, pp. 1902-1912.
System, ACCIO. Interested only in part of the image. Extends traditional segmentation-based and salient point-based techniques to capture content. Salient points using SPARSE (filtered Haar-wavelet points), Wavelet (Variably Split Window with Neighbor), and SIFT ( See also Distinctive Image Features from Scale-Invariant Keypoints. ).

Johnson, M.K.[Micah K.], Cole, F.[Forrester], Raj, A.[Alvin], Adelson, E.H.[Edward H.],
Microgeometry Capture using an Elastomeric Sensor,
PDF File.
System, GelSight. More details based on earlier retrographic sensing. The GelSight System.

Guzman-Arenas, A.,
Computer Recognition of Three-Dimensional Objects in a Visual Scene,
MIT Project MAC-TR-59, December 1968, Ph.D. Thesis (EE).
And: MIT AI-TR228.
WWW Link. System: SEE. Uses the junction labels to group the polyhedral scenes into separate bodies. NOT restricted to trihedral angles.

Mackworth, A.K.,
Interpreting Pictures of Polyhedral Scenes,
AI(4), No. 2, June 1973, pp. 121-139.
WWW Link.
Earlier: IJCAI73(557-563). System: Poly. Introduces the dual-space concept for interpreting scenes.

Total found: 64
