See our Project page for more details.

Paper: [PDF]

SIGGRAPH 2017 Trailer: [YouTube]

Video: [MP4], [YouTube]

Supplemental Notes: [PDF]

Supplemental Video: [MP4]

Source: [C++ Plugin for Houdini]

In the history of liquid simulation, there has never been a single efficient, clear-cut method for simulating water in all its capacity. Each method has its advantages and disadvantages. In the visual effects industry, the prominent methods for liquid simulation are grid-based (e.g. Houdini, Blender). These methods tend to have poor support for stable surface forces because they typically lack an explicit surface representation.

To remedy this issue, I developed, under the supervision of Christopher Batty, a method for liquid simulation that couples with existing grid-based methods and can exhibit stable surface tension effects. The project took just over a year to complete and resulted in my Master's thesis, titled "The Mimetic Approach to Incompressible Surface Tension Flows":

It can also be found on the university repository.

The following supplemental video outlines various results I was able to achieve in 2D:

The Mimetic Approach to Incompressible Surface Tension Flows from Egor Larionov on Vimeo.

The team from MIT posted a few videos of their results on YouTube. Here is one of them, featuring a pulse of light travelling through a Coke bottle:

The team from UBC also posted a video of their results on Vimeo:

Low-budget Transient Imaging using Photonic Mixer Devices from Felix Heide on Vimeo.

In 2014, a paper by Lin et al., titled **"Fourier analysis on transient imaging by multifrequency time-of-flight camera"**, appeared in the Computer Vision and Pattern Recognition (CVPR) conference proceedings, improving on the method proposed by Heide et al. for retrieving a transient image from the data captured by their time-of-flight camera. I found their approach very elegant and decided to study it to see whether I could find any further improvements. Over the course of the seminar, I compiled a set of notes giving a detailed description of the mathematical tools proposed by Lin et al. and analyzing possible areas of improvement, although I wasn't able to make any new contributions of my own. Since nothing came of my dabbling in this field, I decided to release these notes in the hope that they may be of use to someone freshly pursuing this area of research. I recommend reading the original paper by Lin et al., and perhaps even Heide et al., before diving into these notes.

Notes on Fourier analysis on ToF transient imaging:

The implementation is done entirely in JavaScript using the three.js library as well as dat.gui for exposing different parameters controlling the generated image.

After seeing a series of lectures given by Craig Kaplan on computational stippling methods, in particular on Weighted Voronoi Stippling, I gained an interest in applications of Voronoi diagrams. This gave me the idea to create an algorithm that crystallizes an image into a collection of flat-coloured Voronoi regions. Furthermore, I wanted the Voronoi regions to be evenly spaced, with more regions in the detailed areas of the image. To get this effect I used a weighted Lloyd's method, where the weights are taken from the gradient of the image being tessellated.

The basic Lloyd's method is simple; I will present the concept in the familiar 2D domain. First we define the Voronoi tessellation. Take a random set of \(n\) points, \( \mathcal{P} = \{\vec{x}_i\}_{i=1}^n \), where \( \vec{x}_i \in \mathbb{R}^2 \). Then the Voronoi tessellation generated by \( \mathcal{P} \) is the set of Voronoi *regions* \( \mathcal{V} = \{ V_i \} \), defined for each point \(i\) by

$$ V_i = \left\{ \vec{x} \in \mathbb{R}^2 \mathrel{}:\mathrel{} \|\vec{x} - \vec{x}_i\| < \|\vec{x} - \vec{x}_j\|,\ \, \forall j\not=i \right\}. $$
The points in \( \mathcal{P} \) are called Voronoi *sites*. Furthermore, each Voronoi region has a centroid

$$ C_i = \frac{1}{A_i} \iint_{V_i} \vec{x}\, d\vec{x} $$
where \( A_i = \iint_{V_i} d\vec{x} \) is the *area* of the Voronoi region.

Lloyd's algorithm consists of recomputing the Voronoi tessellation using the newly computed centroids as Voronoi sites. Repeating this process will spread the points evenly throughout the domain.
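To make the iteration concrete, here is a brute-force sketch of a single Lloyd step in JavaScript (the language of my implementation); the function name and array layout are mine, and the actual implementation replaces the nearest-site search with the cone-rendering trick described below.

```javascript
// One Lloyd iteration, brute force: assign each pixel centre to its nearest
// site, then move each site to the centroid of its region. This is an
// illustrative CPU version; the real implementation resolves nearest sites
// on the GPU by rendering cones.
function lloydStep(sites, width, height) {
  const n = sites.length;
  const count = new Array(n).fill(0);
  const sumX = new Array(n).fill(0);
  const sumY = new Array(n).fill(0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const px = x + 0.5, py = y + 0.5; // pixel centre
      let best = 0, bestD = Infinity;
      for (let i = 0; i < n; i++) {
        const dx = px - sites[i][0], dy = py - sites[i][1];
        const d = dx * dx + dy * dy;
        if (d < bestD) { bestD = d; best = i; }
      }
      count[best] += 1;
      sumX[best] += px;
      sumY[best] += py;
    }
  }
  // Empty regions keep their old site.
  return sites.map((s, i) =>
    count[i] > 0 ? [sumX[i] / count[i], sumY[i] / count[i]] : s);
}
```

A set of sites that already sit at their region centroids is a fixed point of this map, which is how convergence manifests.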

Since we are working in a pixelated domain, it is easy to discretize the above integrals to compute region areas and centroids. Suppose \( p_k \) is the centre of pixel \( k \), and let \( N_i = \left|\{p_k \in V_i\}\right| \) be the number of pixels covered by Voronoi region \( i \). Then the centroid of region \( i \) can be approximated using the midpoint rule as \( c_i = \frac{1}{a_i}\sum_{p_k \in V_i} p_k \Delta x^2 \), where \( \Delta x^2 \) is the area of a pixel and \( a_i = N_i\Delta x^2 \) is the total area estimate. This simplifies to

$$ c_i = \frac{1}{N_i}\sum_{p_k \in V_i} p_k. $$
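In code, these sums amount to a single pass over a per-pixel region-id map. A minimal JavaScript sketch, assuming a row-major `labels` array holding each pixel's region index (how that map is obtained is described next):

```javascript
// Discrete centroids c_i = (1/N_i) * sum of pixel centres over each region.
// `labels` is assumed row-major with one region index per pixel.
function regionCentroids(labels, width, height, numRegions) {
  const count = new Array(numRegions).fill(0);
  const sumX = new Array(numRegions).fill(0);
  const sumY = new Array(numRegions).fill(0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = labels[y * width + x];
      count[i] += 1;
      sumX[i] += x + 0.5; // pixel centre, taking the pixel width as 1
      sumY[i] += y + 0.5;
    }
  }
  // N_i = count[i]; the pixel area factor cancels exactly as in the text.
  return count.map((n, i) => (n > 0 ? [sumX[i] / n, sumY[i] / n] : null));
}
```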

It remains to describe how we can determine if a pixel is in a Voronoi region computationally.

A standard method to visualize a Voronoi diagram is to render partially overlapping 3D cones, one apex placed at each site, aligned in a plane and all facing the same direction:

It should be clear from the image above that removing lighting effects and using an orthographic projection (as opposed to a perspective projection) will effectively render a Voronoi diagram.

Finally, in order to compute the areas and centroids, we need to distinguish between the different regions. This can be done by assigning a unique colour to each cone, which lets us represent over 16 million regions using standard 24-bit web colours: we treat a 24-bit number as the region identifier and interpret it as an RGB triplet when rendering the regions to a render target. We can then read the pixels back from this target to determine which region each belongs to. More specifically, given values \(r,g,b \in \{0..255\}\) read back from the render target, we can compute the identifier as

$$ i = b + 256g + 256^2r. $$
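The packing and unpacking can be sketched as follows, with the decode matching the formula above (the function names are mine):

```javascript
// Interpret a 24-bit region identifier as an RGB triplet and back:
// i = b + 256*g + 256^2*r.
function idToRGB(i) {
  return [(i >> 16) & 0xff, (i >> 8) & 0xff, i & 0xff]; // [r, g, b]
}
function rgbToId(r, g, b) {
  return b + 256 * g + 256 * 256 * r;
}
```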
The render target used for computation should look similar to this:

If you want to actually tessellate an image into Voronoi regions (which is what I have on my homepage), you can simply compute the average colour of each region as you iterate through the pixels of the render target and the target image simultaneously. The average region colour is given by

$$ r_i = \frac{1}{N_i}\sum_{p_k \in V_i} r_k $$
where \( r_k \) is the colour of the \(k\)th pixel. This is done separately for each of the three colour channels (red, green and blue).
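The averaging can happen in the same pass that tallies the region areas. A sketch, assuming a flat `[r, g, b, r, g, b, ...]` image array alongside the region-id map (both layouts are assumptions for illustration):

```javascript
// Average colour per region: sum each channel over a region's pixels,
// then divide by the pixel count N_i.
function regionColours(labels, rgb, numRegions) {
  const acc = Array.from({ length: numRegions }, () => [0, 0, 0, 0]); // r, g, b, N_i
  for (let k = 0; k < labels.length; k++) {
    const a = acc[labels[k]];
    a[0] += rgb[3 * k];
    a[1] += rgb[3 * k + 1];
    a[2] += rgb[3 * k + 2];
    a[3] += 1;
  }
  return acc.map(([r, g, b, n]) => (n > 0 ? [r / n, g / n, b / n] : [0, 0, 0]));
}
```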

As an example we will use an image of Tux, the Linux mascot:

Using just Lloyd's method we can already get a nice crystallization effect:

An interesting extension of Lloyd's method is moving the sites into regions with a larger prescribed "density". In a sense, we want to add a "bias" or "weight" to areas of an image where we want more points to cluster. This is exactly the approach taken in Weighted Voronoi Stippling, except that we will use a different weight. The motivation is that areas of an image with more detail can benefit from a higher resolution, or clustering, of Voronoi regions. This can help direct the viewer's attention to the more interesting areas of an image.

We start by assuming that we have a weight function \( w_k \in \mathbb{R}^+ \) that assigns to each pixel \( k \) a positive real number. Then for each Voronoi region \( i \), we can compute a weighted centroid:

$$ \tilde{c}_i = \frac{1}{W_i} \sum_{p_k\in V_i} w_kp_k $$
where \( W_i = \sum_{p_k\in V_i} w_k \) is the sum of all the weights. We can now use the weighted centroids \( \tilde{c}_i \) as the new Voronoi sites in the subsequent step in the algorithm to achieve the desired clustering.
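Per region, the weighted centroid is a small change to the unweighted sum. A sketch, where `pixels` lists a region's pixel centres as `[x, y]` pairs with matching `weights` (this per-region grouping is mine, for illustration):

```javascript
// Weighted centroid c~_i = (1/W_i) * sum_k w_k p_k for one region.
function weightedCentroid(pixels, weights) {
  let W = 0, sx = 0, sy = 0;
  for (let k = 0; k < pixels.length; k++) {
    W += weights[k];
    sx += weights[k] * pixels[k][0];
    sy += weights[k] * pixels[k][1];
  }
  // Guard: if all weights vanish, there is no meaningful centroid.
  return W > 0 ? [sx / W, sy / W] : null;
}
```

In practice one would clamp the weights away from zero, so that flat areas of the image still contribute to \( W_i \).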

The choice of the weight function determines the quality of the region clustering. One possible way to cluster Voronoi regions near a highly detailed area is to weigh the centroids with the gradient of the image; in fact, we only need the magnitude of the gradient. We can compute the gradient in the \( x \) and \( y \) directions at pixel boundaries by taking the difference of two adjacent pixel values. That is, if \( I \) is our image function, then

$$ \nabla I = \left(\frac{\partial I}{\partial x}, \frac{\partial I}{\partial y}\right)^T $$
is the gradient, where \( \frac{\partial I}{\partial x}\) can be discretized as \(\frac{I_{i+1,j} - I_{i,j}}{\Delta x} \) between pixels \((i,j)\) and \((i+1,j)\). Similarly, we can compute the partial derivative in the \( y \) direction. We then interpolate these gradient values to the pixel centres; a reader familiar with numerical differentiation will recognize the result as the central difference approximation to each partial derivative. Finally, by taking the 1-norm we get an approximation to the magnitude of the continuous gradient:

$$ \|\nabla I\|_1 \approx \left|\frac{I_{i+1,j} - I_{i-1,j}}{2\Delta x} \right| + \left|\frac{I_{i,j+1} - I_{i,j-1}}{2\Delta x} \right| $$
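This weight can be computed per pixel directly from the central differences. A sketch for a grayscale image, with indices clamped at the borders (the clamping policy is my choice; the text does not specify boundary handling):

```javascript
// w_k ~ |dI/dx| + |dI/dy| via central differences, with Delta x = 1 in
// pixel units. `I` is a row-major grayscale image; border samples are
// clamped, which degenerates to a half-step one-sided difference there.
function gradientWeights(I, width, height) {
  const w = new Float32Array(width * height);
  const at = (x, y) =>
    I[Math.min(Math.max(y, 0), height - 1) * width +
      Math.min(Math.max(x, 0), width - 1)];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const gx = (at(x + 1, y) - at(x - 1, y)) / 2;
      const gy = (at(x, y + 1) - at(x, y - 1)) / 2;
      w[y * width + x] = Math.abs(gx) + Math.abs(gy);
    }
  }
  return w;
}
```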

If the image has a sparse and sharp gradient (as it occurs in coarse drawings for instance), then the Voronoi regions will align with the boundaries in the image. With the gradient weighted Lloyd's method used on Tux, we get the following image:

I chose this example specifically to emphasize the effect of the gradient weight. This is a drastically different image from the one produced by the vanilla Lloyd's method, but we need not stop here. We can add another step and blur the gradient of the original image to produce a smoother weight function. Any blurring algorithm will do, since we never actually see the blurred gradient; I use a standard Gaussian blur to smooth the gradient (an efficient implementation can be found here). With a blur radius of 5 pixels we can achieve the following result:

Here we can see more clustering of Voronoi regions around the boundaries in the image, as compared to the crystallization using the unweighted Lloyd's method.

Applied to photographs, we can get nice stylized crystallizations, although the weights don't change the result much, since photographs are typically detailed everywhere:

Original

Crystallized

I'm rather surprised that this experiment turned out as well as it did. I found it particularly interesting how the Voronoi regions align to form the final configuration in a fluid-like manner. I would be interested to see how this animation will look when applied to a video sequence.

- "Particle-Based Fluid Simulation for Interactive Applications" by M. Müller, D. Charypar and M. Gross
- "Weakly Compressible SPH for Free Surface Flows" by M. Becker, M. Teschner

A partial implementation of the Implicit Incompressible SPH method developed by M. Ihmsen et al. can also be found in the codebase. I'm not planning to finish it, but I *can* be convinced otherwise.

The initial idea of the project was to create a functioning SPH solver. However, after learning about the large variety of different approaches to SPH, I decided to write a more generic application which can simulate different types of SPH fluids simultaneously. For instance the current implementation allows two different SPH fluids (from the two bullets above) to interact.

I also wrote a SIGGRAPH-style **report** explaining what I've learned about SPH:

Lastly, I uploaded a small **video** demonstrating the application on YouTube:

I'm always happy to get feedback and/or answer questions, so feel free to add a comment or fire me an email. Fluid simulation is a fascinating topic and, at the moment, my main research interest.

- Implicit Shape Reconstruction Using a Level Set Method (2000). This paper presents a level-set based method for tightly wrapping a 3D surface around a set of data points.
- Level Set Approach for Motion Detection and Tracking (1998). Here, the authors extend the geodesic active contour model, which segments images, to detect and track moving objects in motion sequences.

Note that the background for the level set method itself is omitted.

- Dynamic Objects
- Rigid Body Collisions
- Texture Mapping
- Bump Mapping
- Multi-threaded Rendering
- Intensity Threshold Optimization
- Antialiasing
- Additional Primitives
- Constructive Solid Geometry
- Rube Goldberg machine

Example of two spheres colliding

The images show texture mapping for all primitives, a simple mesh (background) and a CSG object.

No antialiasing, but with bilinear interpolation.

No antialiasing or bilinear interpolation.

All features enabled.

Bump mapping for all primitives, a simple mesh (background) and a CSG object.

The final scene (see Objective 10), consisting of a single frame, was rendered with a varying number of threads and with all features of the ray tracer enabled. The following renders were timed with the unix `time` command on the following machine:

`gl25.student.cs Asus based AMD quad core, 4gb ram, 2x500g hd, Asus ENGTX260GL`

| # of Threads | real    | user     | sys     |
|--------------|---------|----------|---------|
| n = 1        | 274.23s | 273.321s | 0.704s  |
| n = 5        | 108.91s | 269.028s | 0.669s  |
| n = 10       | 93.64s  | 268.232s | 0.776s  |
| n = 20       | 81.41s  | 270.123s | 0.732s  |
| n = 50       | 98.66s  | 301.526s | 61.047s |

Thus running the ray tracer with 20 threads speeds up the render by a factor of 3.37, saving over 3 minutes of wall-clock time, which is a significant improvement.

I used my final scene to demonstrate the gain of the intensity threshold optimization, which actually showed a significant increase in performance.

Antialiasing was disabled, the reflection depth was set to 5, and 4 threads were used. The intensity threshold was set to 0.000001. The following renders were timed with the unix `time` command on the following machine:

`gl25.student.cs Asus based AMD quad core, 4gb ram, 2x500g hd, Asus ENGTX260GL`

|      | Threshold | No Threshold |
|------|-----------|--------------|
| real | 16.90s    | 40.13s       |
| user | 39.094s   | 131.640s     |
| sys  | 0.568s    | 0.608s       |

Thus the intensity threshold check yields a significant increase in performance (roughly 2.4 times). Note that without the check, 1,752,362 more shader computations were made.

The given scene (slightly modified) from CS488 Assignment 4, with polyhedral cows, is used to demonstrate antialiasing.

The original scene with anti-aliasing disabled

The red shows where the anti-aliasing algorithm was used to smooth edges.

Anti-aliasing enabled.

A randomized version of the same algorithm

See Objectives 3 and 4 for examples of implemented primitives.

A sequence of two spheres combined with union, intersection, and difference. In addition, a more complex CSG model is provided: the cup is the difference of two cylinders, intersected with two spheres to give rounded edges. The half torus was generated by instancing multiple cylinders and taking their union.

A sample image of the proposed Rube Goldberg machine.

There is little organization within these notes; however, they summarize some of the background needed to begin studying quantum information, and include a few proofs of theorems from various quantum information books, as well as a few new ideas aimed at proving the continuity of a particular quantum capacity. These notes aren't complete in any sense; unfortunately, I didn't get a chance to typeset all of my work.
