
Computer vision is a subdomain of AI that mimics the human visual system: researchers build software that helps computers acquire, process, analyze, and understand digital images and video.

It is one of the most active areas of Artificial Intelligence, enabling machines to analyze and interpret images and videos. The field combines the complementary strengths of humans and computers.


Human beings excel at communication, engagement, context, general knowledge, creativity, and empathy.

Computers and software systems are ideal for repetitive tasks, mathematics, data manipulation, and parallel processing, providing the power and speed needed to tackle complex problems.


How does computer vision work?

There are three main components of computer vision:

  1. Acquisition of an image

  2. Image processing

  3. Analysis

Acquisition of an image: a digital camera or sensor captures the scene and stores it as binary numbers (ones and zeros). This unprocessed output is called raw data.
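To make "raw data" concrete, here is a minimal sketch in Python: a hypothetical 3x3 grayscale sensor readout stored as raw bytes, then unpacked into rows of pixel intensities. The specific values are invented for illustration.

```python
# A captured image is just a grid of numbers. This tiny 3x3 grayscale
# "sensor readout" is stored as raw bytes (intensities 0-255), then
# unpacked back into rows of pixels.
raw_data = bytes([0, 128, 255,
                  64, 192, 32,
                  16, 8, 240])  # hypothetical sensor values

width, height = 3, 3
image = [list(raw_data[row * width:(row + 1) * width]) for row in range(height)]

print(image[0])  # first row of pixel intensities: [0, 128, 255]
```

A real camera produces millions of such values (and three per pixel for color images), but the principle is the same.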

Image processing:

This step covers the methods used to extract the basic geometric elements that carry information about the image. It also includes pre-processing, which removes unwanted elements such as noise so that the subsequent analysis is more accurate.
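A classic pre-processing step is smoothing, which suppresses pixel noise. The sketch below implements a simple 3x3 box (mean) filter in plain Python; production systems would use an optimized library, and the noisy input image here is invented for illustration.

```python
def mean_filter(image):
    """Smooth a grayscale image (2D list of intensities) with a 3x3 box
    filter -- a simple pre-processing step that suppresses pixel noise."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # borders are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(window) // 9  # average of the 3x3 neighbourhood
    return out

noisy = [
    [10, 10, 10, 10],
    [10, 200, 10, 10],   # 200 is a noise spike
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
smoothed = mean_filter(noisy)
print(smoothed[1][1])  # -> 31: the spike is pulled toward its neighbours
```

Averaging each pixel with its neighbours blurs isolated outliers while leaving uniform regions almost untouched.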

Image classification:

Image classification aims to classify image content based on its type. The most widely used deep learning technique for this task is the convolutional neural network (CNN).

Pre-labeled images form the training data set. Each class the images can belong to has distinctive properties, and these properties are represented as feature vectors. A CNN is trained on these vectors and refined with new data sets; if the quality of the classifier is not sufficient, more training or test data can be added.
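The idea of classifying by feature vectors can be sketched without a full CNN. In the toy example below, the feature vectors and the "cat"/"dog" classes are invented; a real system would let a CNN learn such representations from labeled images. A new image is assigned to the class whose average feature vector (centroid) is closest.

```python
import math

# Hypothetical feature vectors for two classes; in a real pipeline a CNN
# would learn these representations from labeled training images.
training = {
    "cat": [[1.0, 0.2], [0.9, 0.1], [1.1, 0.3]],
    "dog": [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]],
}

def centroid(vectors):
    """Average the feature vectors of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

centroids = {label: centroid(vs) for label, vs in training.items()}

def classify(feature_vector):
    """Assign the class whose centroid is closest in feature space."""
    return min(centroids,
               key=lambda label: math.dist(centroids[label], feature_vector))

print(classify([0.95, 0.15]))  # -> cat
```

Adding more labeled examples refines the centroids, which mirrors the text's point that a weak classifier can be improved with additional training data.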

Object detection:

Identifying objects in an image works on a different principle than image classification. To classify the objects in an image, each object must first be localized within a bounding box. Although these boxes come in different sizes, they can contain objects of the same class, and images containing many objects require correspondingly more computing power. Algorithms such as R-CNN, Fast R-CNN, YOLO, the Single Shot Detector (SSD), and region-based fully convolutional networks were developed to find these instances quickly.
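A building block shared by these detectors is Intersection-over-Union (IoU), which measures how well a predicted bounding box overlaps an object. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2).
    Detectors use IoU to score how well a predicted bounding box
    overlaps a ground-truth object."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlapping area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> ~0.14 (overlap is 1/7 of the union)
```

An IoU of 1.0 means the boxes coincide exactly; detection benchmarks commonly count a prediction as correct above a threshold such as 0.5.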

Object Tracking:

Object tracking is the method that follows the movement of an object by locating the same object in each successive frame of a sequence.

Object tracking techniques can be divided into three categories based on observation methods:

Generative techniques: the tracking problem is formulated as finding the regions of the image that are most similar to the target model. Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF) are examples of generative models that try to find a suitable representation of the original data.

Discriminative techniques: tracking is treated as a binary classification problem whose goal is to find the decision boundary that best separates the target from the background.

Unlike generative methods, both background and target information are used simultaneously. Examples of discriminative methods are stacked autoencoders (SAEs), convolutional neural networks, and support vector machines (SVMs).

Hybrid techniques: generative and discriminative techniques are combined, and the mix is adapted depending on the problem.
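The generative idea of "finding the region most similar to the target model" can be sketched with simple template matching: slide the target's appearance over the next frame and keep the position with the lowest sum of squared differences. The tiny images below are invented for illustration.

```python
def track(template, frame):
    """Slide the target template over the next frame and return the
    (row, col) offset of the most similar patch (lowest sum of squared
    differences) -- a minimal generative-style tracker."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    best, best_pos = None, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum((frame[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(th) for dx in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

template = [[9, 9], [9, 9]]            # appearance of the tracked object
next_frame = [[0, 0, 0, 0],
              [0, 0, 9, 9],
              [0, 0, 9, 9]]
print(track(template, next_frame))  # -> (1, 2): the object moved here
```

A discriminative tracker would instead train a classifier on both the target patch and surrounding background patches, as described above.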


Image segmentation:

Image segmentation is the process of dividing a digital image into image objects or groups of pixels. Its purpose is to simplify the representation of an image and make it easier to analyze.
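The simplest form of segmentation is thresholding: every pixel brighter than a cutoff is assigned to the foreground, everything else to the background. A minimal sketch, with an invented grayscale image and threshold:

```python
def threshold_segment(image, threshold):
    """Split a grayscale image into a binary mask: 1 for foreground
    pixels brighter than the threshold, 0 for background."""
    return [[1 if pixel > threshold else 0 for pixel in row] for row in image]

image = [[12, 200, 210],
         [15, 220, 205],
         [10, 14, 198]]
mask = threshold_segment(image, 100)
print(mask)  # -> [[0, 1, 1], [0, 1, 1], [0, 0, 1]]
```

Real segmentation methods (clustering, region growing, fully convolutional networks) are far more sophisticated, but they share this goal of grouping pixels into meaningful regions.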
