Hi! I'm Pratik

I am a recent MSc graduate of the Autonomy Lab at Simon Fraser University. Under the supervision of Professor Richard Vaughan, I defended my thesis on predicting the near future of pedestrians in video. Professor Anoop Sarkar and Professor Yasutaka Furukawa served on the committee, which accepted my thesis with no corrections.
I am currently working as a Machine Learning Researcher at Huawei Technologies' Vancouver Research Centre. Previously, I worked as a Senior Technical Associate at the Avaya India Research Centre in Bangalore between 2015 and 2016, where I developed video-conferencing solutions with Google Glass. I completed my undergraduate degree in Electronics and Communication Engineering at the National Institute of Technology Karnataka, India, in 2015, under the supervision of Professor Sumam David.

[CV]
Research Interests: Computer Vision and Robotics
email: pgujjar@sfu.ca  

Publications

  1. Classifying Pedestrian Actions In Advance Using Predicted Video of Urban Driving Scenes
    Pratik Gujjar and Richard Vaughan, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'19), Montreal, Canada, May 2019. [paper] [bibtex]
  2. DeepIntent: Learning to Model Pedestrian Intent in Autonomous Driving Scenarios
    Pratik Gujjar and Richard Vaughan, Abstract in Proc. IEEE International Conference on Intelligent Robots and Systems (IROS'17), Vancouver, BC, Canada, September 2017. [poster]

Presentations

  1. Learning from Demonstrations: Inverse Reinforcement Learning [slides]
  2. Convolutional Layers in Neural Machine Translation Architectures
    CMPT 882 Neural Machine Translation, Simon Fraser University. [slides]
  3. Reinforcing a Supervised Deep Network for Maximal Map Exploration
    CMPT 880 Deep Learning, Simon Fraser University. [poster]
  4. Human Spatial Representation: Insights from Animals
    Ranxiao Frances Wang and Elizabeth S. Spelke, Trends in Cognitive Sciences, September 2002. [paper] [slides]

Recent Work

DeepIntent [paper] [website] [video]

Drivers and pedestrians exchange non-verbal and social cues to signal their intent, and these cues are crucial to their interactions in traffic. We propose to learn such cues and model a pedestrian's intent. The learnt model is used to predict actions likely to be performed 400-600 ms in the future. By responding to adverse actions in advance, we take a step towards full autonomy.
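
To make the idea concrete, here is a minimal sketch, not the published DeepIntent architecture: a small 3D-convolutional encoder summarizes a short clip of past frames, and a linear head scores the pedestrian actions expected a few hundred milliseconds ahead. The class names, layer sizes, and clip shape are assumptions for illustration only.

```python
# Minimal sketch (not the published DeepIntent model): a 3D-conv encoder
# summarizes a short clip of past frames; a linear head scores pedestrian
# actions expected ~400-600 ms ahead. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    def __init__(self, num_actions=2):               # e.g. cross / not-cross
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # input (B, 3, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                  # global spatio-temporal pool
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, clip):
        feat = self.encoder(clip).flatten(1)          # (B, 32) clip embedding
        return self.head(feat)                        # action logits

# Example: a batch of two 16-frame RGB clips at 64x64 resolution.
logits = IntentClassifier()(torch.randn(2, 3, 16, 64, 64))
```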


FishEye [code] [pdf] [video]

Behavioral psychology characterizes every individual with a set of preferences. Groups exhibit emergent capabilities, such as schooling in fish or flocking in birds, by integrating these disparate preferences. This work investigates swarm intelligence as a way to evolve an autonomous vehicle convoying behaviour. Results demonstrate how vehicle convoys reach consensus while manoeuvring between traffic lanes on highways. Convoys are shown to cruise at the maximum system speed, enhancing highway throughput and delivering optimal performance per vehicle.
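
As an illustration of the swarm idea only, and not the FishEye controller itself, the sketch below shows a boids-style update in which each vehicle blends three local preferences (alignment, cohesion, separation); the weights and neighbourhood radius are arbitrary assumptions.

```python
# Boids-style sketch of emergent convoying: each vehicle blends alignment,
# cohesion, and separation preferences from nearby vehicles. Illustrative
# only; not the FishEye controller.
import numpy as np

def step(pos, vel, radius=10.0, dt=0.1):
    """pos, vel: (N, 2) arrays of vehicle positions and velocities."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)                  # nearby vehicles, excluding self
        if not nbrs.any():
            continue
        align = vel[nbrs].mean(axis=0) - vel[i]        # match neighbours' heading
        cohere = pos[nbrs].mean(axis=0) - pos[i]       # move toward the group centre
        separate = (pos[i] - pos[nbrs]).sum(axis=0)    # keep clear of neighbours
        new_vel[i] += 0.05 * align + 0.01 * cohere + 0.02 * separate
    return pos + new_vel * dt, new_vel

# Example: 20 vehicles with random start states drift toward a common heading.
pos, vel = np.random.rand(20, 2) * 50, np.random.randn(20, 2)
for _ in range(100):
    pos, vel = step(pos, vel)
```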



3D Person Re-Identification

An investigation of deep learning networks to identify human subjects scanned by a Microsoft Kinect sensor. Person identification has numerous applications in robotics (identifying an owner) and surveillance (distinguishing allies from enemies). Security applications, such as identifying criminals from security footage, are of particular interest, and depth-based identification works even when an individual covers their face. 3D sensors are affordable and can be deployed almost anywhere. The success of deep learning suggests that such networks would be adept at distinguishing between individuals scanned by a 3D sensor. The project uses a 3D convolutional neural network as a feature extractor to identify people from 3D point-cloud data. Both voxelized 3D grids and spatially encoded RGB images derived from the point clouds were investigated on the Berkeley MHAD dataset.
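
A minimal sketch of the kind of 3D-CNN feature extractor described above, operating on a voxelized occupancy grid: the 32^3 grid size, layer widths, and subject count are assumptions for illustration, not the project's exact network.

```python
# Sketch of a 3D-CNN feature extractor over a voxelized point cloud
# (occupancy grid). Grid size and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class VoxelReID(nn.Module):
    def __init__(self, num_ids=12, feat_dim=64):      # num_ids: subjects to distinguish
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, stride=2, padding=2),  # input (B, 1, 32, 32, 32)
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                   # global pooling over the grid
        )
        self.embed = nn.Linear(32, feat_dim)           # identity embedding
        self.classify = nn.Linear(feat_dim, num_ids)   # per-subject logits

    def forward(self, voxels):
        f = self.features(voxels).flatten(1)
        emb = self.embed(f)
        return self.classify(emb), emb

# Example: a batch of two 32x32x32 occupancy grids.
logits, emb = VoxelReID()(torch.rand(2, 1, 32, 32, 32))
```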

Gallery