Sand-Water Simulation
Implemented multi-species (sand & water) particle interactions using the 2D Material Point Method (Taichi, Python)
I'm a creative technologist and machine learning engineer at Meta Reality Labs, where I work on Codec Avatars — 3D body tracking, evaluation/curation, and the data pipelines that power them. I hold an M.S. and B.S. in EECS from UC Berkeley, with a concentration in computer vision.
I'm drawn to work that sits at the intersection of the physical and the digital. My interests span 3D reconstruction, computational color, and computer graphics, and I'm especially excited about the ways sensing and rendering technology can enable new kinds of experiences. I have previously worked at Disney Imagineering R&D, UCSB Media Arts and Tech Lab, and the UCSF Big Data in Radiology Lab.
Currently: developing interactive exhibits with the Exploratorium New Media Group, and learning projection mapping.
For more info about my experience, see my LinkedIn.
Feel free to check out a few of my projects below :)
Simulated raindrops sliding down a glass window, modeling raindrop shape, motion, and shading.
Implemented a real-time cloth simulation using a mass-and-spring system (OpenGL, C++)
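The core of a mass-and-spring cloth solver is Hooke's law plus time integration. A minimal sketch in Python/NumPy (the original project was in C++/OpenGL; function names and constants here are illustrative, and a single point mass stands in for the full cloth grid):

```python
import numpy as np

def spring_force(p1, p2, ks, rest_len):
    """Hooke's-law force on p1 from a spring connecting it to p2."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    return ks * (dist - rest_len) * d / dist

def step(pos, vel, anchor, ks=50.0, rest_len=1.0, mass=1.0, dt=0.01,
         g=np.array([0.0, -9.8])):
    """One explicit-Euler step for a point mass hanging from an anchor."""
    f = spring_force(pos, anchor, ks, rest_len) + mass * g
    vel = vel + dt * f / mass
    pos = pos + dt * vel
    return pos, vel
```

A full cloth applies the same force accumulation over structural, shearing, and bending springs between neighboring grid points.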
Implemented a simple rasterizer with triangle drawing, supersampling, hierarchical transforms, and antialiased texture mapping. (OpenGL, C++)
Built Bezier curves and surfaces using the de Casteljau algorithm, manipulated triangle meshes represented by a half-edge data structure, and implemented Loop subdivision.
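De Casteljau's algorithm evaluates a Bezier curve by repeatedly linearly interpolating adjacent control points until one point remains. A short sketch (in Python/NumPy for illustration, not the project's original code):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated lerping."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Lerp each adjacent pair; one fewer point per pass.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

The same recurrence, applied first across rows and then down the resulting column of a control grid, extends the evaluation to Bezier surfaces.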
Implemented a physically based renderer using a pathtracing algorithm: ray-scene intersection, acceleration structures, and physically based lighting and materials.
Implemented mirror and glass BSDFs as well as a depth-of-field simulation.
Created an automatic facial keypoint detector (68 keypoints) from scratch using PyTorch.
Reproduced the algorithm and results described in the paper "A Neural Algorithm of Artistic Style" by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.
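The style representation in Gatys et al. is the Gram matrix of a layer's feature maps; the style loss compares Gram matrices of the generated and style images. A NumPy sketch of that piece (the full method runs on CNN features and PyTorch autograd; this only shows the loss arithmetic):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    correlations between channel activations, spatial layout discarded."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T

def style_loss(feat, target_feat):
    """Squared Gram-matrix difference for one layer, with the
    normalization used in the paper."""
    c, h, w = feat.shape
    g, a = gram_matrix(feat), gram_matrix(target_feat)
    return np.sum((g - a) ** 2) / (4.0 * (c * h * w) ** 2)
```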
Implemented camera pose estimation from a known image, then adapted the algorithm to compute a sparse point-cloud representation of a scene. (OpenCV, NumPy, Python)
Used digitized versions of Prokudin-Gorskii’s RGB glass-plate negatives to faithfully reconstruct an RGB color image.
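Reconstructing the plates comes down to aligning the three channel exposures. A minimal alignment sketch (illustrative, not the project code): exhaustively search small integer shifts and keep the one minimizing the sum of squared differences. Real plates usually need a coarse-to-fine pyramid and border cropping on top of this.

```python
import numpy as np

def best_shift(channel, reference, max_shift=15):
    """Search integer (dy, dx) shifts of `channel` against `reference`,
    scored by sum of squared differences."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(channel, (dy, dx), axis=(0, 1))
            score = np.sum((shifted - reference) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best
```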
Used the high- and low-frequency components of images to blend, sharpen, and combine them in different ways.
Used user-defined facial keypoints to combine, morph, and warp faces in interesting ways!
Stitched together images taken from different angles into one contiguous panorama, using homographies (homogeneous coordinates), image warping, and image blending.
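Panorama stitching hinges on estimating the homography relating two views. A sketch of the direct linear transform in Python/NumPy (an assumed, illustrative implementation: build two linear constraints per point correspondence, take the SVD null vector):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: estimate the 3x3 homography H mapping src -> dst points
    from >= 4 correspondences, via the SVD null space of the
    constraint matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale of the homogeneous solution
```

With H in hand, one image is warped into the other's frame and the overlap is blended.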
Generated a contiguous, "quilted" image from a single texture patch, and implemented texture transfer.
Implemented depth refocusing and aperture adjustment using large collections of images captured over a plane orthogonal to the optical axis.
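Refocusing a light field is shift-and-add: translate each sub-aperture image in proportion to its camera offset on the capture plane, then average. An illustrative sketch (integer shifts via np.roll stand in for proper sub-pixel interpolation; names are assumptions, not the project's API):

```python
import numpy as np

def refocus(images, positions, alpha):
    """Shift each sub-aperture image by alpha * its (u, v) camera
    offset, then average. alpha selects the in-focus depth
    (alpha = 0 keeps the capture-plane focus)."""
    out = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, positions):
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / len(images)
```

Aperture adjustment follows from the same data: averaging only the images nearest the center of the plane simulates a smaller aperture.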
Built a board game in Unity, deployed it on desktop and as an AR mobile application, and ported it to VR, tested on an Oculus Quest headset. (Unity, C#, Python)
Built the first iteration of the OpenARK Digital Twin platform, a consumer-focused AR application that enables users to digitally interact with real-world objects.
Developed a motion-tracking app to help students visualize kinematic data in physics labs (WebGL, JS)