WestlakeLEARN
FTC / Java

First Tech Challenge

01 · Java for FTC
  • OpMode Anatomy and Hello Robot
  • Variables, Math, and Decisions
  • Methods, Classes, and Robot Helpers
02 · FTC Hardware Essentials
  • Hardware Map and RobotHardware
  • Motors, Servos, and Sensors
  • IMU, Encoders, and Bulk Caching
03 · TeleOp and Mecanum
  • Robot-Centric Mecanum Drive
  • Field-Centric Driving
  • Driver Ergonomics and Safe TeleOp
04 · Subsystems and Commands
  • Subsystem Lifecycle
  • Enums and Finite State Machines
  • Command-Based OpModes
05 · From Timed Steps to Actions
  • Timed and Encoder Autonomous
  • Autonomous State Machines
  • Actions and Sequencing
06 · PID and Feedforward
  • PID Basics
  • Feedforward and PIDF
  • Dashboard Tuning Workflow
07 · Motion Profiling
  • Motion Profile Concepts
  • Implementing a Profiled Mechanism
  • Testing Profiles and Failure Modes
08 · OpenCV and AprilTags
  • VisionPortal Camera Setup
  • OpenCV Color and Region Processors
  • AprilTags and Field Pose
09 · Setup and Tuning
  • Road Runner 1.0 Install and Drive Class
  • Feedforward Tuning
  • Localization and Validation
10 · Trajectories, Actions, and MeepMeep
  • Action Builder and Trajectories
  • MeepMeep Preview
  • Full Road Runner Autonomous
11 · Git, Debugging, and Competition Readiness
  • Git Workflow for FTC Teams
  • Telemetry-First Debugging
  • Competition Readiness Checklist
12 · Driver Control
  • Driver Control
13 · Autonomous Build
  • Simple Autonomous
14 · Debugging
  • Debugging with Telemetry

08 / OpenCV and AprilTags

OpenCV Color and Region Processors

Use image regions, color spaces, and thresholds for game element detection.

75 min · Vision · OpenCV and AprilTags

You will

  1. Explain why HSV/YCrCb can be easier than raw RGB.
  2. Measure color in regions of interest.
  3. Return a simple detected position to autonomous.

Why OpenCV Color and Region Processors matters

This lesson is about turning camera images into trustworthy robot decisions. The goal is not to make vision seem magical; it is to show how camera setup, processor output, telemetry, and fallbacks make an autonomous decision safe enough to use.

Starting point

OpenCV turns pixels into decisions

FTC color pipelines usually convert the image, select regions of interest, calculate averages or masks, and reduce that information to a simple result such as LEFT, CENTER, or RIGHT.
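The "masks" half of that pipeline can be illustrated without any OpenCV types at all: threshold each pixel's color channel, then count what fraction of the region passes. A pure-Java sketch of the mask-and-count idea (the 8-bit channel values and the threshold of 128 are assumptions for the example; in a real pipeline, Core.inRange and Core.countNonZero do this work):

```java
public class MaskFraction {
    // Fraction of pixels in a region whose color-channel value exceeds a
    // threshold -- the same idea as building a binary mask and counting it.
    static double maskedFraction(int[][] channel, int threshold) {
        int passing = 0, total = 0;
        for (int[] row : channel) {
            for (int value : row) {
                if (value > threshold) passing++;
                total++;
            }
        }
        return total == 0 ? 0.0 : (double) passing / total;
    }

    public static void main(String[] args) {
        // Toy 2x3 region: 4 of 6 pixels exceed the threshold of 128
        int[][] region = {
            {200, 90, 210},
            {205, 80, 198},
        };
        System.out.println(maskedFraction(region, 128));
    }
}
```

A fraction near 1.0 means the region is saturated with the target color; a fraction near 0.0 means it is absent. Comparing fractions across regions is more robust than comparing raw means when the object fills only part of the region.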

Keep the autonomous interface simple

The OpMode should not know every pixel detail. The processor should expose a readable result and enough telemetry/debug drawing to trust that result.
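On the OpMode side, the entire vision system collapses to one enum read. A sketch of that narrow interface (the route names here are hypothetical placeholders, not part of any SDK):

```java
public class AutoRouter {
    public enum PropPosition { LEFT, CENTER, RIGHT }

    // The autonomous OpMode never touches pixels -- it only branches on the enum.
    static String routeFor(PropPosition position) {
        switch (position) {
            case LEFT:  return "scoreLeftSpike";
            case RIGHT: return "scoreRightSpike";
            default:    return "scoreCenterSpike";
        }
    }
}
```

If the processor's internals change completely, say from region means to contour detection, this OpMode code does not change at all. That is the payoff of keeping the interface to a single enum.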

Build path

Start with a visible stream and raw telemetry before making decisions. Then add regions, detections, metadata, or pose estimates. Autonomous should receive a simple result and confidence, not a pile of image-processing details.
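The "simple result and confidence" handoff can be sketched as a pure function: pick the region with the highest score, but only trust the answer when the winner beats the runner-up by a clear margin. This is a sketch under assumptions (the margin threshold and score scale are yours to tune, and the function name is hypothetical):

```java
public class PropDecision {
    public enum PropPosition { LEFT, CENTER, RIGHT }

    // Return the best-scoring region, or the fallback when the winner's lead
    // over the runner-up is too small to trust.
    static PropPosition decide(double left, double center, double right,
                               double minMargin, PropPosition fallback) {
        double best = Math.max(left, Math.max(center, right));
        // Sum minus max minus min leaves the middle (runner-up) score
        double runnerUp = left + center + right - best
                - Math.min(left, Math.min(center, right));
        if (best - runnerUp < minMargin) return fallback;  // not confident
        if (best == left) return PropPosition.LEFT;
        if (best == right) return PropPosition.RIGHT;
        return PropPosition.CENTER;
    }
}
```

With scores of 180/120/110 and a margin of 20, this confidently returns LEFT; with 130/125/128, the 2-point lead is inside the noise band and the fallback wins.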

For this specific lesson, students should first restate the goal in robot terms, then identify the value or behavior they expect to observe, then run the smallest test that proves the idea. The lesson should feel like a guided lab: predict, run, observe, explain, and only then extend.

PropPositionProcessor.java · Java

public enum PropPosition { LEFT, CENTER, RIGHT }

// Fields of a class implementing VisionProcessor; tune the regions to your camera view.
private final Mat ycrcb = new Mat();
private final Rect leftRect = new Rect(0, 120, 100, 100);
private final Rect centerRect = new Rect(270, 120, 100, 100);
private volatile PropPosition position = PropPosition.CENTER;

@Override
public Object processFrame(Mat frame, long captureTimeNanos) {
    // YCrCb separates brightness (Y) from color; Cr (channel 1) stays steadier as lighting changes
    Imgproc.cvtColor(frame, ycrcb, Imgproc.COLOR_RGB2YCrCb);

    Mat left = ycrcb.submat(leftRect);
    Mat center = ycrcb.submat(centerRect);
    double leftScore = Core.mean(left).val[1];    // mean Cr in each region
    double centerScore = Core.mean(center).val[1];
    left.release();   // submats reference native memory; release them each frame
    center.release();

    position = leftScore > centerScore ? PropPosition.LEFT : PropPosition.CENTER;
    return null;
}

public PropPosition getPosition() {
    return position;
}

Debugging and failure modes

Vision fails because of lighting changes, camera placement, exposure settings, cable issues, wrong tag assumptions, and thresholds that only worked in the shop. The debugging habit is to show the image, print raw scores or detections, and define a safe fallback for when the robot is not confident.
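One concrete fallback pattern: poll the processor for a bounded time during init, and commit to a safe default if no confident answer arrives. A pure-Java sketch of the timeout logic (the Supplier stands in for a call like getPosition(), with null meaning "no confident detection yet"; the method name is hypothetical):

```java
import java.util.function.Supplier;

public class DetectionTimeout {
    // Poll until the supplier returns a non-null detection or the deadline
    // passes; then commit to the fallback rather than waiting forever.
    static <T> T waitForDetection(Supplier<T> poll, long timeoutMs, T fallback) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            T detected = poll.get();
            if (detected != null) return detected;
            try {
                Thread.sleep(10);  // avoid busy-spinning between camera frames
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return fallback;  // safe default: the robot still runs a known-good path
    }
}
```

In an autonomous OpMode, the fallback would be whichever position leads to the lowest-risk path, so a blind camera costs points but never causes a penalty.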

Practice

Build a two- or three-region detector for a colored object. Draw the regions and print the selected result.

Checks

  • Regions are visible on the preview image.
  • Raw scores are printed before threshold decisions.
  • The autonomous code reads one simple enum result.

Check your understanding

Module check

Why should a processor expose a simple enum result?


References

  • FTC VisionPortal Docs: official VisionPortal overview and examples.
  • Game Manual 0: community FTC programming, control, and robot design reference.
  • Learn Java for FTC: FTC-focused Java fundamentals by Alan G. Smith.
