I am a Ph.D. student in the Robotic Sensor Networks Laboratory, advised by Prof. Volkan Isler. My current research interests lie in computer vision and robotics. In particular, I am interested in developing visual prospection in robots: the ability to learn to visually simulate future states of the environment. This capacity to "pre-experience" events in memory comes easily to humans; most people would agree that chocolate desserts taste better with cinnamon than with parmesan without having to try both options, yet this kind of mental simulation remains beyond current AI systems. My goal is to enable robotic agents to reason about the world from visual observations and to simulate possible future scenarios conditioned on the agent's actions. I have previously worked on domain adaptation, point cloud classification, and various problems in precision agriculture.
Our paper "Multi-Step Recurrent Q-Learning for Robotic Velcro Peeling" was accepted to ICRA 2021. This is joint work with Jiacheng Yuan and Volkan Isler.
Our paper "Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision" was accepted to NeurIPS 2020!
I received the UMII MnDRIVE 2020 Graduate Assistantship Award for my work on visual prospection.
Our paper "MinneApple: A Benchmark Dataset for Apple Detection and Segmentation" was accepted to RA-L/ICRA 2020.
Check out the project website to get the data/code.
Multi-Step Recurrent Q-Learning for Robotic Velcro Peeling