Welcome to Rui Ma's homepage
About me
My name is Rui Ma (马锐). I'm a Ph.D. student in the School of Computing Science at Simon Fraser University. Currently, I'm working in the GrUVi Lab under the supervision of Prof. Hao (Richard) Zhang. My research interests include computer graphics, high-level geometry processing, shape analysis and 3D indoor scene modeling.

Email: ruim@sfu.ca

Rui Ma, Honghua Li, Changqing Zou, Zicheng Liao, Xin Tong and Hao Zhang.
"Action-Driven 3D Indoor Scene Evolution".
ACM Transactions on Graphics (SIGGRAPH Asia 2016), 35(6).

We introduce a framework for action-driven evolution of 3D indoor scenes, where the goal is to simulate how scenes are altered by human actions, and specifically, by object placements necessitated by the actions. To this end, we develop an action model with each type of action combining information about one or more human poses, one or more object categories, and spatial configurations of objects belonging to these categories which summarize the object-object and object-human relations for the action. Importantly, all these pieces of information are learned from annotated photos. Correlations between the learned actions are analyzed and guide the construction of an action graph. Starting with an initial 3D scene, we probabilistically sample a sequence of actions from the action graph to drive progressive scene evolution. Each action applied triggers appropriate object placements, based on object co-occurrences and spatial configurations learned for the action model. We show results of our scene evolution, leading to realistic and messy 3D scenes. Evaluations include user studies which compare our method to manual scene creation and state-of-the-art, data-driven methods, in terms of scene plausibility and naturalness.
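The probabilistic sampling step above can be pictured as a weighted walk over the action graph. The sketch below is purely illustrative: the action labels and transition probabilities are hypothetical stand-ins for the correlations the paper learns from annotated photos, and the real system additionally triggers object placements per action.

```python
import random

# Hypothetical action graph: nodes are action labels, edges carry
# transition probabilities. These numbers are made up for illustration;
# the paper learns action correlations from annotated photos.
ACTION_GRAPH = {
    "read":       {"read": 0.2, "eat": 0.4, "use-laptop": 0.4},
    "eat":        {"read": 0.5, "eat": 0.1, "use-laptop": 0.4},
    "use-laptop": {"read": 0.3, "eat": 0.3, "use-laptop": 0.4},
}

def sample_action_sequence(graph, start, length):
    """Probabilistically walk the action graph; each sampled action
    would then drive one step of scene evolution."""
    sequence = [start]
    current = start
    for _ in range(length - 1):
        actions, probs = zip(*graph[current].items())
        current = random.choices(actions, weights=probs, k=1)[0]
        sequence.append(current)
    return sequence

seq = sample_action_sequence(ACTION_GRAPH, "read", 5)
print(seq)
```

Each sampled action would trigger object placements based on the co-occurrences and spatial configurations learned for that action's model.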

Kai Xu, Rui Ma, Hao Zhang, Chenyang Zhu, Ariel Shamir, Daniel Cohen-Or and Hui Huang.
"Organizing Heterogeneous Scene Collection through Contextual Focal Points".
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4).

We introduce focal points for characterizing, comparing, and organizing collections of complex and heterogeneous data and apply the concepts and algorithms developed to collections of 3D indoor scenes. We represent each scene by a graph of its constituent objects and define focal points as representative substructures in a scene collection. To organize a heterogeneous scene collection, we cluster the scenes based on a set of extracted focal points: scenes in a cluster are closely connected when viewed from the perspective of the representative focal points of that cluster. The key concept of representativity requires that the focal points occur frequently in the cluster and that they result in a compact cluster. Hence, the problem of focal point extraction is intermixed with the problem of clustering groups of scenes based on their representative focal points. We present a co-analysis algorithm which interleaves frequent pattern mining and subspace clustering to extract a set of contextual focal points which guide the clustering of the scene collection. We demonstrate advantages of focal-centric scene comparison and organization over existing approaches, particularly in dealing with hybrid scenes, scenes consisting of elements which suggest membership in different semantic categories.
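The interplay between pattern mining and clustering can be sketched with a toy example. This is a deliberate simplification: the paper mines representative substructures of scene graphs, whereas the sketch below treats scenes as plain sets of object labels and uses frequent co-occurring pairs as stand-in "focal points"; all scene data is invented.

```python
from collections import Counter
from itertools import combinations

# Toy scenes as sets of object categories (the paper uses graphs of
# objects and their relations; label sets are a simplification).
SCENES = [
    {"bed", "nightstand", "lamp"},
    {"bed", "nightstand", "dresser"},
    {"desk", "chair", "monitor"},
    {"desk", "chair", "lamp"},
]

def mine_focal_points(scenes, size=2, min_support=2):
    """Frequent sub-patterns (here: co-occurring object pairs that appear
    in at least min_support scenes) act as candidate focal points."""
    counts = Counter()
    for scene in scenes:
        for combo in combinations(sorted(scene), size):
            counts[combo] += 1
    return [frozenset(c) for c, n in counts.items() if n >= min_support]

def cluster_by_focal_points(scenes, focal_points):
    """Group scenes by the focal points they contain; in the full
    algorithm, mining and clustering would then be re-interleaved."""
    clusters = {fp: [] for fp in focal_points}
    for scene in scenes:
        for fp in focal_points:
            if fp <= scene:
                clusters[fp].append(scene)
    return clusters

focal = mine_focal_points(SCENES)
clusters = cluster_by_focal_points(SCENES, focal)
```

In the actual co-analysis, these two steps iterate: the current clusters restrict where patterns are mined, and the newly mined focal points re-drive the clustering until both stabilize.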

Ibraheem Alhashim, Honghua Li, Kai Xu, Junjie Cao, Rui Ma and Hao Zhang.
"Topology-Varying 3D Shape Creation via Structural Blending".
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4).

We introduce an algorithm for generating novel 3D models via topology-varying shape blending. Given two shapes with different topology, our method blends them topologically and geometrically, producing continuous series of in-betweens representing new creations. The blending operations are defined on a shape representation that is structure-oriented and part-aware. Specifically, we represent a 3D shape using a spatio-structural graph composed of medial curves and sheets, which facilitates the modeling of topological variations. Fundamental topological operations including split and merge are realized by allowing one-to-many or many-to-one correspondences between the source and the target. We show a variety of topology-varying 3D shapes generated via continuous and plausible structural blending between man-made shapes exhibiting complex topological differences, in real time. Our method also serves as an exploratory tool for creative modeling, supporting topology-altering shape variations, blending among sets of shapes, reuse of created shapes, and shape evolution.
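The split operation enabled by one-to-many correspondences can be illustrated in miniature. The sketch below is a hypothetical toy, not the paper's method: it blends 2D sample points on a "medial curve" linearly, duplicating the source part so each copy moves toward a different target part; the actual algorithm operates on spatio-structural graphs of medial curves and sheets in 3D.

```python
def blend_part(src_points, tgt_points, t):
    """Linearly interpolate corresponding sample points at blend time t."""
    return [tuple(s + t * (g - s) for s, g in zip(p, q))
            for p, q in zip(src_points, tgt_points)]

def blend_with_split(src_part, tgt_parts, t):
    """One-to-many correspondence: the source part is duplicated and each
    copy is blended toward a different target part, realizing a split."""
    return [blend_part(src_part, tgt, t) for tgt in tgt_parts]

# A single source curve (3 sample points) splitting into two target curves.
src = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
targets = [
    [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)],
    [(0.0, -1.0), (1.0, -1.0), (2.0, -1.0)],
]
halfway = blend_with_split(src, targets, 0.5)
```

Sweeping t from 0 to 1 yields the continuous series of in-betweens; a merge is the same construction with source and target roles reversed (many-to-one).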