2013 Distinguished Lecture Series


March 14, 2013

JAMIE SHOTTON
Researcher, Microsoft Research Cambridge 
Title: Body Part Recognition and the Development of Kinect

** NOTE LOCATION **
Technology and Science Complex Building
Room TASC 9204, SFU Burnaby campus

Time: 10:30-11:30 am

Abstract:
In late 2010, Microsoft launched Xbox Kinect, a revolution in gaming where your whole body becomes the controller -- you need not hold any device or wear anything special. Human pose estimation has long been a "grand challenge" of computer vision, and Kinect was the first product to meet the speed, cost, accuracy, and robustness requirements to take pose estimation out of the lab and into the living room.

In this talk I’ll present a behind-the-scenes look at the development of Kinect, focusing on the new depth-sensing camera, the challenges of human pose estimation, and the new body part recognition algorithm that drives Kinect's skeletal tracking pipeline. Body part recognition uses machine learning to efficiently classify the pixels coming from the Kinect camera into different parts of the body: head, left hand, right knee, etc. The approach was designed to be robust: firstly, the system is trained on a vast and highly varied set of synthetic images to ensure it works across ages, body shapes and sizes, clothing, and hair styles; secondly, the recognition does not rely on any temporal information, allowing the system to initialize from arbitrary poses and preventing catastrophic loss of track, enabling extended gameplay for the first time.
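To make the per-pixel idea concrete, here is a minimal sketch (my illustration, not the shipped Kinect pipeline): each pixel of a depth image is described by a few depth-difference features and classified into a body-part label by a forest of decision trees trained on synthetic data. The toy frame, the feature offsets, and the three-part label set are assumptions made purely for the example.

```python
# Sketch: per-pixel body-part classification of a depth image with a random forest
# over depth-difference features, in the spirit of the approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W = 64, 64
OFFSETS = [(-8, 0), (8, 0), (0, -8), (0, 8), (-4, -4), (4, 4)]  # pixel offsets (assumed)

def features(depth):
    """Depth-difference features: depth at offset pixels minus depth at the pixel,
    with offsets scaled by 1/depth so the feature is roughly depth-invariant."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = []
    for dy, dx in OFFSETS:
        oy = np.clip(ys + (dy / depth).astype(int), 0, h - 1)
        ox = np.clip(xs + (dx / depth).astype(int), 0, w - 1)
        feats.append(depth[oy, ox] - depth)
    return np.stack(feats, axis=-1).reshape(-1, len(OFFSETS))

# Toy "synthetic training frame": background at depth 4.0, a body at depth 2.0,
# with a made-up 3-part labelling (0 = background, 1 = torso, 2 = head).
depth = np.full((H, W), 4.0)
depth[20:60, 24:40] = 2.0          # torso
depth[8:20, 28:36] = 2.0           # head
labels = np.zeros((H, W), dtype=int)
labels[20:60, 24:40] = 1
labels[8:20, 28:36] = 2

clf = RandomForestClassifier(n_estimators=10, max_depth=8, random_state=0)
clf.fit(features(depth), labels.ravel())

# At run time, every pixel of a new depth frame gets an independent body-part label
# (no temporal information); a later stage could turn these into joint proposals.
pred = clf.predict(features(depth)).reshape(H, W)
print((pred == labels).mean())     # accuracy on the toy frame
```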

Finally, we’ll take a brief look at the ecosystem of real-world applications that Kinect is enabling.


Jamie Shotton biography

March 28, 2013

FERNANDO PEREIRA
Research Director, Google
Title: Low-Pass Semantics

Abstract:
Advances in statistical and machine learning approaches to natural language processing have yielded a wealth of methods and applications in information retrieval, speech recognition, machine translation, and information extraction.

Yet, even as we enjoy these advances, we recognize that our successes are to a large extent the result of clever exploitation of redundancy in language structure and use, allowing our algorithms to eke out a few useful bits that we can put to work in applications. Because these applications extract only a limited amount of information from the text, finer structures such as word order or syntax could be largely ignored in information retrieval or speech recognition. By ignoring those finer details, however, our language-processing systems have been stuck in an "idiot savant" stage where they can find everything but cannot understand anything. The main language-processing challenge of the coming decade is to create robust, accurate, efficient methods that learn to understand the main entities and concepts discussed in any text, and the main claims being made. That will enable our systems to answer questions more precisely, to verify and update knowledge bases, and to trace arguments for and against claims throughout the written record.

I will argue with examples from our recent research that we need deeper levels of linguistic analysis to do this. But I will also argue that it is possible to do much that is useful even with our very partial understanding of linguistic and computational semantics, by taking (again) advantage of distributional regularities and redundancy in large text collections to learn effective analysis and understanding rules. Thus low-pass semantics: our scientific knowledge is very far from being able to map the full spectrum of meaning, but by combining signals from the whole Web, we are starting to hear some interesting tunes.
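As a toy illustration of the "distributional regularities and redundancy" idea (my sketch, not the speaker's system): represent each word by its co-occurrence counts with nearby words and compare words by cosine similarity. The miniature corpus, window size, and vocabulary are assumptions for the example; at Web scale this weak signal becomes genuinely useful.

```python
# Sketch: distributional word vectors from co-occurrence counts on a toy corpus.
import math
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a dog chased a cat",
]
WINDOW = 2  # co-occurrence window (assumed)

# Each word's "vector" is a Counter over the words seen within the window.
vectors = defaultdict(Counter)
for line in corpus:
    toks = line.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - WINDOW), min(len(toks), i + WINDOW + 1)):
            if j != i:
                vectors[w][toks[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# "cat" and "dog" occur in similar contexts, so their vectors are close.
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["sat"]))
```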


Fernando Pereira biography

June 13, 2013

MANINDRA AGRAWAL
Professor, Department of Computer Science and Engineering
IIT Kanpur
Title: Polynomials from a Computational Perspective

** NOTE TIME: 10 am - Noon **

Abstract:
Polynomials are among the most fundamental objects in mathematics. In this talk, we focus on the problem of classifying polynomials, and argue for classifying them according to the complexity of computing them. This notion is formalized as the arithmetic complexity of a polynomial or polynomial family. After discussing some examples, we identify two important classes of polynomials according to their arithmetic complexity: VP and VNP. These are analogs of the classes P and NP in the algebraic setting, and it is not known whether VP equals VNP.
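For concreteness, two textbook examples of this classification (my illustration; the abstract does not name its examples):

```latex
% The determinant and the permanent expand into n! monomials each,
\[
\det(X) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} x_{i,\sigma(i)},
\qquad
\operatorname{perm}(X) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)},
\]
% yet the determinant family has polynomial-size arithmetic circuits (it is in VP),
% while Valiant showed that the permanent is complete for VNP.
```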

We connect this question to the problem of deciding whether two arithmetic circuits compute the same polynomial, and review the exciting recent progress on this problem.
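The problem referred to here is polynomial identity testing. Below is a minimal sketch of the standard randomized approach (Schwartz-Zippel), assuming the circuits are available as black boxes that can be evaluated over a large prime field; the specific circuits and field are illustrative assumptions.

```python
# Sketch: randomized identity testing of two arithmetic circuits (Schwartz-Zippel).
import random

P = 2**61 - 1  # a large prime field (assumed choice)

def circuit_a(x, y):
    return ((x + y) * (x + y)) % P           # computes (x + y)^2

def circuit_b(x, y):
    return (x * x + 2 * x * y + y * y) % P   # x^2 + 2xy + y^2, the same polynomial

def probably_equal(f, g, num_vars=2, trials=20):
    """If f and g compute different polynomials of modest degree, a random
    evaluation exposes the difference with high probability."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(num_vars)]
        if f(*point) != g(*point):
            return False      # definitely different polynomials
    return True               # identical with high probability

print(probably_equal(circuit_a, circuit_b))                     # True
print(probably_equal(circuit_a, lambda x, y: (x * y) % P))      # almost surely False
```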

Manindra Agrawal biography

June 18, 2013

UZI VISHKIN
Professor, University of Maryland
Title: Is General-Purpose Many-Core Parallelism Imminent?

Abstract:
The challenge of reinventing mainstream general-purpose computing for parallelism came into focus in 2003, once processor clock frequencies generally stopped improving. This challenge has yet to be met, particularly for applications in which the run-time of a single computational task and the productivity of its parallel programming are at issue. As mobile platforms catch up on performance and the vendors' field gets crowded, competition will hopefully drive vendors to meet the challenge.

I will argue that the explicit multi-threaded (XMT) on-chip platform, developed by my research team, provides the missing link in the type of heterogeneous systems needed to meet today's opportunities and constraints. XMT improves by an order of magnitude over vendors' many-cores on both ease of programming and speedups over the best serial solutions, and I will support both claims with experimental data. On ease of programming, teaser anecdotes include: (i) teaching graduate material at high schools, and (ii) a joint UIUC/UMD course in which no student was able to get speedups over serial code with OpenMP on commercial SMP hardware, while their speedups on XMT ranged from 7X to 25X. On speedups, stress tests of XMT against state-of-the-art CPUs and GPUs on irregular fine-grained problems show speedups of up to 43X; these results assume similar silicon area and power, but much simpler algorithms. To facilitate these advantages, XMT was set up as a clean-slate design supporting the foremost theory of parallel algorithms.

Uzi Vishkin biography

November 14, 2013

KARON MACLEAN
Professor, University of British Columbia

Title: Tactile Communication: Attention, Affect, Sensing and Design

Abstract:

People use taction for informative communication every day in manual and social touch, but synthesized tactile display is something we're not obviously evolved to process - and something our devices (although often visually delightful) have not evolved to sense and respond to. The primitive state of today's tactile displays, relative both to touch in the real world and to the display richness available for other modalities, further heightens the interaction design challenges, and these devices certainly don't have the sensors and algorithms needed for either functional or affective interaction. The result is cell phones and game controllers that just give one-way buzzes, often in unhelpful and annoying ways.

This talk will address a few topics relating to tactile communication (in either direction) which my group has recently been working on. Questions that drive this research include:

- Attention: While we like to say touch is a great channel for offloading the visual sense, what is it really capable of processing non-attentionally?

- Sensing: What kind of affective information is available in gestural touch; what is needed to elicit it, and sense it?

- Design: What is behind feels that we like or don't like - can this be predicted or measured, and how can designers be supported in creating the right 'feel' for the job?

Karon MacLean biography