nbelanger.dev

LIMRL — Robotics Team

Using computer vision to teach robots how to see, navigate, and act — autonomously.

01.

Overview

As an active developer on Laurentian University's Intelligent Mobile Robotics Lab (LIMRL) team, my primary focus is computer vision for autonomous navigation. Using OpenCV in C++ and Python on top of ROS, I build the systems that let our robots perceive their environment in real time — detecting objects, interpreting camera feeds, and making movement decisions to complete tasks independently.

Working in LIMRL means writing code that runs on actual hardware under real competition pressure, which demands reliability, performance, and careful testing. Every bug has direct physical consequences: a miscalculation in the vision pipeline sends the robot the wrong way. That accountability has fundamentally changed how I think about software quality.

02.

What I Learned

  • Built real-time computer vision pipelines for autonomous navigation — object detection, camera feed processing, and movement decision-making
  • Learned to write performant C++ and Python for embedded environments where latency directly affects physical behaviour
  • Practiced collaborative development in a team setting with clear role divisions
  • Developed debugging instincts for hardware/software integration issues unique to robotics
  • Grew comfortable working under competition deadlines with frequently shifting requirements
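
The latency point above can be made concrete with a hedged sketch of a per-frame budget check — a minimal stand-in for a real camera loop, not LIMRL's code. The 30 FPS budget and the placeholder `process` step are assumptions for illustration.

```python
import time

def process(frame):
    # Placeholder for a vision step; a real pipeline would run detection here.
    time.sleep(0.002)

FRAME_BUDGET_S = 1 / 30  # ~33 ms per frame for an assumed 30 FPS camera

overruns = 0
for i in range(10):  # stand-in for frames arriving from a camera
    start = time.perf_counter()
    process(i)
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        overruns += 1  # on a robot, an overrun means acting on stale perception
print(f"{overruns} frames over budget")
```

Tracking overruns like this is one simple way to notice when a pipeline change has pushed perception past the point where the robot can react in time.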

03.

Gallery

LIMRL robot at competition

Computer vision pipeline output

Team development session

Designed & built by Nicolas Belanger · 2026 · Built with Nuxt 4 & Tailwind CSS