I am now a researcher at Waymo!
Before this, I received my PhD in Computer and Information Science from the University of Pennsylvania, where I worked in the GRASP Lab, advised by Kostas Daniilidis.
I completed my undergrad at Duke University, where I was fortunate to be part of the Robertson Scholars Leadership Program and to work with Michael Zavlanos on mobile stereo vision systems.
Before undergrad, I grew up in Auckland, New Zealand.
My research is in computer vision and robotics, with a focus on event-based cameras, 3D perception, and self-supervised learning.
Contact me at alexzhu (at) seas.upenn.edu.
Check out my YouTube page for the latest videos of our work on event-based cameras!
News
- I completed my PhD in December 2019! I will begin working as a researcher at Waymo in the new year.
- A preprint of our new work “EventGAN: Leveraging Large Scale Image Datasets for Event Cameras” can now be found on arXiv.
- Our work “Learning Event-Based Height From Plane and Parallax” was accepted to IROS 2019.
- I will be attending CVPR next week, and presenting at these events:
- Poster at the Deep Learning for Semantic Visual Navigation Workshop
- Talk, poster and demo at the Workshop on Event-based Vision and Smart Cameras
- Poster for “Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion” at 10:15am on June 18th, poster #88.
- Our work “Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion” was accepted to CVPR 2019.
- Two new works on unsupervised learning of geometry have been released on arXiv:
Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion
In this work, we propose a pipeline for unsupervised learning of optical flow, depth, and egomotion from events alone, with no grayscale frames or photoconsistency assumption.
Robustness Meets Deep Learning: An End-to-End Hybrid Pipeline for Unsupervised Learning of Egomotion
This work presents a novel framework for unsupervised learning of egomotion from images. We train two networks to predict optical flow and depth from a monocular image, and then use RANSAC to estimate the pose from the network outputs. The pipeline is fully differentiable (a code sketch of the pose step appears after the news list).
- I will be presenting our work on “Unsupervised Event-based Optical Flow using Motion Compensation” at the What is Optical Flow for? workshop and as a demo at ECCV 2018.
- Our work “Realtime Time Synchronized Event-based Stereo” was accepted to ECCV 2018.
- Our work “EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras” was nominated for best student paper at RSS 2018!
- I will be an invited participant at the Telluride 2018 Neuromorphic Cognition Engineering Workshop.
- Our work “EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras” was accepted to RSS 2018.
- The code for the feature tracking method in our works “Event-based Feature Tracking with Probabilistic Data Association” and “Event-based Visual Inertial Odometry” is now available:
https://github.com/daniilidis-group/event_feature_tracking
- I will be presenting our work in RAL on the Multi Vehicle Stereo Event Camera dataset at ICRA 2018! We will also have two other works presented later this year.
- I will be presenting our work on “Event-based Visual Inertial Odometry” at CVPR 2017.
- I will be presenting our work on “Event-based Feature Tracking with Probabilistic Data Association” at ICRA 2017.
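For those curious about the flow-and-depth to RANSAC pose step mentioned in the news above, here is a minimal sketch, assuming pinhole intrinsics K and the two network outputs. It uses OpenCV's standard (non-differentiable) RANSAC PnP as a stand-in for the differentiable version described in the paper; the function name and sampling parameters are illustrative, not from our released code.

```python
# Minimal sketch (NOT our released code): estimate relative camera pose from
# a network-predicted optical flow (H, W, 2) and depth map (H, W), using
# OpenCV's RANSAC PnP in place of the paper's differentiable RANSAC.
import numpy as np
import cv2

def pose_from_flow_and_depth(flow, depth, K, num_samples=2000):
    # Sample pixel locations with valid (positive) predicted depth.
    ys, xs = np.nonzero(depth > 0)
    pick = np.random.choice(len(xs), size=min(num_samples, len(xs)), replace=False)
    xs, ys = xs[pick], ys[pick]

    # Back-project the sampled pixels into 3D using the predicted depth.
    z = depth[ys, xs]
    x3 = (xs - K[0, 2]) * z / K[0, 0]
    y3 = (ys - K[1, 2]) * z / K[1, 1]
    pts3d = np.stack([x3, y3, z], axis=1).astype(np.float64)

    # The predicted flow gives each pixel's correspondence in the next frame.
    pts2d = np.stack([xs + flow[ys, xs, 0],
                      ys + flow[ys, xs, 1]], axis=1).astype(np.float64)

    # RANSAC PnP: robustly fit the second camera's pose to the 3D-2D
    # correspondences, rejecting outliers in the network outputs.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K.astype(np.float64), None,
        iterationsCount=100, reprojectionError=1.0)
    if not ok:
        raise RuntimeError("RANSAC did not converge to a pose")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
    return R, tvec.squeeze(), inliers
```

In the actual work, the hard inlier selection is replaced with a differentiable counterpart so that pose errors can backpropagate into both the flow and depth networks.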