CSCI 456

Robotics and Computer Vision

Coordinator: Stephanie Schwartz

Credits: 4.0

Description

Intelligent robotic systems that interact with the physical world through visual, acoustic, or tactile sensing. Fundamentals of robot vision, including image acquisition and camera geometry; pattern recognition; representation and analysis of shape; pixel neighborhoods, connectivity, distance measures, and arithmetic operations on pixels and images; computation of area, centroid, moments, and axis of least inertia; correlation techniques; histogram computation; manipulation of robot end effectors; robot task coordination; and simple Cartesian robot manipulation. Offered infrequently.

Prerequisites

C- or higher in CSCI 362.

Sample Textbooks

Course Outcomes

  1. Knowledgeable in the issues of robot vision.
  2. Proficient in programming the techniques of robot vision in C or C++.
  3. Competent in applying computer vision techniques to a variety of robot problems.
  4. Prepared to pursue more advanced study in robotics or computer vision.
  5. Able to understand the literature and current research topics of robot vision.
  6. Knowledgeable in the software engineering issues in real-time robot software (task building, concurrency, memory).

Major Topics Covered

A. Introduction to robotics and computer vision.

  1. Terminology and definitions.
  2. History.
  3. Research topics in robotics and computer vision.
  4. The challenge of robot vision.

B. Fundamentals of computer vision and image processing.

  1. Early image processing techniques.
  2. Binary image processing and industrial techniques.
  3. Look Up Table (LUT) programming.
  4. Template matching.
  5. Machine vision tools.
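
Look-up-table programming (topic B.3) replaces a per-pixel computation with a single table access: the table is built once, then every pixel is mapped through it. A minimal sketch in C, assuming 8-bit grayscale pixels; the function names here are illustrative, not from the course library.

```c
#include <stdint.h>

/* Build a 256-entry LUT that implements binary thresholding:
   values at or above `threshold` map to 255, others to 0. */
static void build_threshold_lut(uint8_t lut[256], uint8_t threshold)
{
    for (int v = 0; v < 256; v++)
        lut[v] = (v >= threshold) ? 255 : 0;
}

/* Map every pixel of `src` through the LUT into `dst`.
   Any point operation (inversion, gamma, contrast stretch)
   can be applied the same way by changing only the table. */
static void apply_lut(const uint8_t *src, uint8_t *dst, int n,
                      const uint8_t lut[256])
{
    for (int i = 0; i < n; i++)
        dst[i] = lut[src[i]];
}
```

The design point is that `apply_lut` never changes: swapping the operation means rebuilding the 256-entry table, not touching the per-pixel loop.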

C. Theoretical Foundations.

  1. Image acquisition and camera geometry.
  2. Pixel neighborhoods, connectivity, distance measures, and arithmetic operations on pixels and images.
  3. Computations of area, centroid, moments, and axis of least inertia.
  4. Correlation techniques.
  5. Histogram computation, equalization, and enhancement.

D. Edge detection analysis.

  1. Gaussians, Laplacians and zero crossings.
  2. Sobel operator.
  3. Roberts gradient.
  4. Heuristic search methods.
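
Of the operators in this section, the Sobel operator (D.2) is the easiest to show concretely: two 3x3 convolutions estimate the horizontal and vertical gradients, and their magnitudes are combined. A minimal sketch in C, assuming 8-bit grayscale input with border pixels left untouched; the function name is illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

/* 3x3 Sobel gradient magnitude (L1 approximation |gx| + |gy|),
   clamped to 8 bits. Border pixels of `dst` are not written. */
static void sobel(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = -src[(y-1)*w + x-1] +   src[(y-1)*w + x+1]
                   - 2*src[ y   *w + x-1] + 2*src[ y   *w + x+1]
                   -   src[(y+1)*w + x-1] +   src[(y+1)*w + x+1];
            int gy = -src[(y-1)*w + x-1] - 2*src[(y-1)*w + x]
                   -   src[(y-1)*w + x+1] +   src[(y+1)*w + x-1]
                   + 2*src[(y+1)*w + x]   +   src[(y+1)*w + x+1];
            int mag = abs(gx) + abs(gy);
            dst[y*w + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
}
```

A vertical step edge gives a large |gx| and zero gy, so the magnitude peaks exactly on the edge column.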

E. Representation and analysis of shape.

  1. Medial axis transformation, Euclidean skeleton, and thinning algorithms.
  2. Border processing and chain encoding.
  3. Curvature function, derivative of the angular function.
  4. Shape from projections, vertical and horizontal summations.
  5. Centroidal profile.
  6. Erosion and dilation techniques.
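
Erosion and dilation (E.6) with a 3x3 square structuring element come down to an all-of / any-of test over each pixel's 8-neighborhood. A sketch in C, assuming binary images with 255 for foreground and borders treated as background; the function names are illustrative:

```c
#include <stdint.h>
#include <string.h>

/* Erosion: a pixel survives only if its whole 3x3 neighborhood is set. */
static void erode3x3(const uint8_t *src, uint8_t *dst, int w, int h)
{
    memset(dst, 0, (size_t)w * h);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int all = 1;
            for (int dy = -1; dy <= 1 && all; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (!src[(y+dy)*w + x+dx]) { all = 0; break; }
            dst[y*w + x] = all ? 255 : 0;
        }
}

/* Dilation (the dual): a pixel is set if any neighborhood pixel is set. */
static void dilate3x3(const uint8_t *src, uint8_t *dst, int w, int h)
{
    memset(dst, 0, (size_t)w * h);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int any = 0;
            for (int dy = -1; dy <= 1 && !any; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (src[(y+dy)*w + x+dx]) { any = 1; break; }
            dst[y*w + x] = any ? 255 : 0;
        }
}
```

Erosion followed by dilation (opening) removes speckle noise smaller than the structuring element; dilation followed by erosion (closing) fills small holes.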

F. Pattern recognition.

  1. Feature based recognition systems.
  2. Feature detection.
  3. Scene analysis.
  4. Object recognition methods.

G. Additional techniques and applications.

  1. Region growing.
  2. Logic-based relational descriptors.
  3. Three dimensional representations.
  4. Binocular stereo vision.
  5. Blocks world heuristics.
  6. Parallel processing hardware and architecture.
  7. Robot path planning and problem solving.
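
Region growing (G.1) in its simplest seeded form is a flood fill: start from a seed pixel and absorb connected foreground neighbors until none remain. A sketch in C using an explicit stack and 4-connectivity, assuming 255 marks unlabelled foreground; the function name and labelling scheme are hypothetical:

```c
#include <stdint.h>
#include <stdlib.h>

/* Grow the 4-connected region containing seed (sx, sy), relabelling
   its pixels in place with `label` (must differ from 255).
   Returns the region's area in pixels, or 0 if the seed is background. */
static int grow_region(uint8_t *img, int w, int h,
                       int sx, int sy, uint8_t label)
{
    if (!img[sy * w + sx]) return 0;
    int *stack = malloc(sizeof(int) * (size_t)w * h);
    int top = 0, count = 0;
    img[sy * w + sx] = label;
    stack[top++] = sy * w + sx;
    while (top > 0) {
        int p = stack[--top];
        int x = p % w, y = p / w;
        count++;
        const int nx[4] = { x-1, x+1, x,   x   };
        const int ny[4] = { y,   y,   y-1, y+1 };
        for (int i = 0; i < 4; i++) {
            if (nx[i] < 0 || nx[i] >= w || ny[i] < 0 || ny[i] >= h)
                continue;
            int q = ny[i] * w + nx[i];
            if (img[q] == 255) {   /* unlabelled foreground */
                img[q] = label;
                stack[top++] = q;
            }
        }
    }
    free(stack);
    return count;
}
```

Calling this once per unlabelled seed yields a connected-component labelling, with each call returning the component's area directly.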

H. Manipulation of robot end effectors.

  1. Introduction to simple kinematics and dynamics.
  2. Trajectory and task planning.
  3. Control issues.
  4. Real time robot software.

I. Application issues.

  1. Image understanding techniques.
  2. Picking parts out of a bin.
  3. Spray painting, welding.
  4. Compliant wrists, feeders, work cell issues.

J. Future of robot vision.

  1. Current research topics.
  2. Fifth generation computing.
  3. Future trends.

Sample Laboratory Projects

  1. Sample Lab #1: (2 weeks) 
    Edge Detection Laboratory 
    The objective of this laboratory is to gain experience in the following areas: 
    (1) Image Processing and Computer Vision programming, and 
    (2) Edge Detection techniques

    Write a set of function calls which will compute the edges of an area of interest, specified by (x1,y1, x2,y2), for the following four edge detection schemes: Roberts gradient, general gradient, Laplacian operator, and Sobel operator. Make sure that the calls are all in one separately compilable file and are self-contained, i.e., they may be put into the vks101.c library of calls with NO side effects. Use the naming convention of vks101.c, e.g., VIS$ROBERTS(x1,y1, x2,y2, threshold, type). Extra credit for a 9x9 Laplacian. 

    In a separate test program call your function and perform edge detection on the scene. 
  2. Sample Lab #2: (2 weeks) 
    Robot Blocks World Laboratory 

    The objective of this laboratory is to gain experience in the following areas: 
    (1) Mathematics of Object Location and Orientation (zero, first and second order moments, centroid, Axis of Least Inertia, etc.), 
    (2) Robot Vision programming, and 
    (3) Robot Gripper Manipulation. 

    Write a program which will locate all objects (domino blocks for now) in the camera's field of view, i.e., what the robot can see. The program is to:
    • Locate all the objects [see VIS$BORDER_FOLLOW() in vks101.c],
    • Draw a box around each one [see VIS$DRAW_BOX()],
    • Compute and mark the centroid of each object with a cross-hair [see VIS$CROSS_HAIR()],
    • Draw a line which corresponds to the axis of least inertia (orientation), and also draw a line perpendicular to this line at the centroid, thus showing where the robot gripper will pick up the object [see VIS$DRAW_LINE()].

    When finished, the laboratory will thus show graphically how the robot will pick up each object. Then, for each object, print the area (these will already be computed in centroid and border-follow) and the orientation angle in degrees (theta). Print all data to the vision board. 

    Manipulate the robot to stack the N blocks positioned in any orientation in the camera's field of view. Use the speech synthesizer to speak the results and data during the program execution [see ROBOT$SPEAK() in /work/users/stuff/robot.el]. 
  3. Sample Lab #3: (4 weeks) 
    Robot "Fuse Testing" Laboratory 

    The objective of this laboratory is to gain experience in the following areas: 
    (1) Use of Robot Vision to make decisions, 
    (2) Mathematics of Object Location and Orientation, 
    (3) Robot Gripper Manipulation, 
    (4) Use of the Speech Synthesizer, 
    (5) Use of the simple Binary Image Processing Techniques. 

    Write a program which will: (a) pick up a fuse from the gravity feeder, (b) place the fuse in the testing holder, (c) see if the light is on (use vision), and then (d) place the fuse in the proper output bin. If the light is ON, place the fuse in the RIGHT bin; if OFF, place it in the LEFT bin. Use vision to properly align the gripper on the gravity feeder to pick up the fuse. Repeat this until there are no more fuses in the feeder. 

    Use the speech synthesizer to speak the results and data during the program execution.
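
As a starting point for the edge-detection calls in Sample Lab #1, the Roberts gradient over a region of interest can be sketched as follows. This is a minimal illustration in C, not the vks101.c API: the function name, the 8-bit image layout, and the thresholded binary output are all assumptions.

```c
#include <stdint.h>
#include <stdlib.h>

/* Roberts gradient over the region of interest (x1,y1)-(x2,y2):
   the two diagonal differences |I(x,y)-I(x+1,y+1)| and
   |I(x+1,y)-I(x,y+1)| are summed and thresholded to a binary edge
   map. Caller must ensure x2 < w and y2 < image height so the
   (x+1, y+1) accesses stay in bounds. */
static void roberts_roi(const uint8_t *img, uint8_t *out, int w,
                        int x1, int y1, int x2, int y2, int threshold)
{
    for (int y = y1; y < y2; y++)
        for (int x = x1; x < x2; x++) {
            int d1 = abs(img[y*w + x]     - img[(y+1)*w + x+1]);
            int d2 = abs(img[y*w + x+1]   - img[(y+1)*w + x]);
            out[y*w + x] = (d1 + d2 >= threshold) ? 255 : 0;
        }
}
```

The general-gradient, Laplacian, and Sobel versions asked for in the lab follow the same shape: only the per-pixel neighborhood arithmetic inside the double loop changes.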