
I'm a 4th-year undergrad at UC Berkeley studying Electrical Engineering and Computer Science. While at Cal, I've been head student instructor for Cal's computer graphics course (CS184), immersed myself in computer vision research, and led a mobile development org.
I'm passionate about computer graphics, education, and tech for social good. I'm excited about the applications of machine learning to these spaces, and am specifically interested in simulation tech, 3D computer vision, and computational photography. After graduating, I hope to operate at the intersection of product and research, applying the latest and greatest technology to real-world, high-impact problems.
Currently exploring: human vision (Research with Prof. Ren Ng & Prof. Austin Roorda), making animated shorts (Maya), hiking trails in the Berkeley hills!
Currently reading: The Design of Everyday Things by Don Norman, On Photography by Susan Sontag
For more info about my experience, click here!
Feel free to check out a few of my projects below :)
Implemented multi-species (sand & water) particle interactions using the 2D Material Point Method (Taichi Graphics, Python)
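To give a flavor of the method, here is a minimal sketch of the particle-to-grid (P2G) transfer at the core of MPM, assuming a 2D domain, quadratic B-spline weights, and unit particle mass; the field names and constants are illustrative, not the project's actual code.

```python
# Sketch of the particle-to-grid (P2G) mass transfer used in MPM.
# Assumptions: positions live in (0, 1)^2, quadratic B-spline weights, unit particle mass.
import taichi as ti

ti.init(arch=ti.cpu)

n_particles, n_grid = 4096, 64
inv_dx = float(n_grid)

x = ti.Vector.field(2, dtype=ti.f32, shape=n_particles)   # particle positions
grid_m = ti.field(dtype=ti.f32, shape=(n_grid, n_grid))   # grid node masses


@ti.kernel
def init():
    for p in x:
        x[p] = [ti.random() * 0.6 + 0.2, ti.random() * 0.6 + 0.2]


@ti.kernel
def p2g():
    for p in x:
        base = int(x[p] * inv_dx - 0.5)   # lower-left node of the 3x3 stencil
        fx = x[p] * inv_dx - base         # fractional offset within the cell
        # Quadratic B-spline weights along each axis.
        w = [0.5 * (1.5 - fx) ** 2, 0.75 - (fx - 1.0) ** 2, 0.5 * (fx - 0.5) ** 2]
        for i, j in ti.static(ti.ndrange(3, 3)):
            weight = w[i].x * w[j].y
            grid_m[base + ti.Vector([i, j])] += weight   # scatter unit mass to the grid


init()
p2g()
```

A full simulation would also transfer momentum, apply per-material constitutive updates on the grid (for example, an elastoplastic model for sand and a nearly incompressible model for water), and gather velocities back to particles (G2P).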
Simulated raindrops sliding down a glass window - modeled raindrop shape, motion, and shading.
Implemented a real-time simulation of cloth using a mass and spring based system (OpenGL, C++)
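As a rough illustration of the underlying model (in NumPy rather than the project's C++/OpenGL), one explicit integration step for a mass-spring system might look like the sketch below; the constants and array layout are assumptions, not the project's values.

```python
# One symplectic Euler step for a mass-spring system (illustrative sketch).
import numpy as np

def step(pos, vel, springs, rest_len, mass=1.0, ks=500.0, kd=0.5, dt=1e-3,
         gravity=np.array([0.0, -9.8, 0.0])):
    """pos, vel: (N, 3) arrays; springs: (M, 2) index pairs; rest_len: (M,) rest lengths."""
    forces = np.tile(mass * gravity, (len(pos), 1))
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]                                   # spring vectors from i to j
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / length
    # Hooke's law plus simple damping along the spring direction.
    rel_vel = np.sum((vel[j] - vel[i]) * direction, axis=1, keepdims=True)
    f = (ks * (length - rest_len[:, None]) + kd * rel_vel) * direction
    np.add.at(forces, i, f)                               # force on endpoint i
    np.add.at(forces, j, -f)                              # equal and opposite on endpoint j
    vel = vel + dt * forces / mass                        # symplectic Euler update
    pos = pos + dt * vel
    return pos, vel
```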
Implemented a simple rasterizer including features like drawing triangles, supersampling, hierarchical transforms, and texture mapping with antialiasing. (OpenGL, C++)
Built Bezier curves and surfaces using the de Casteljau algorithm, manipulated triangle meshes represented by a half-edge data structure, and implemented Loop subdivision.
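For reference, the de Casteljau evaluation itself is short; a NumPy sketch (the course project was in C++):

```python
# Evaluate a Bezier curve at parameter t via repeated linear interpolation (de Casteljau).
import numpy as np

def de_casteljau(control_points, t):
    """control_points: (n, d) array of n control points; returns the point on the curve at t."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Interpolate between neighboring points until a single point remains.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Example: a cubic Bezier curve in 2D evaluated at its midpoint.
print(de_casteljau([[0, 0], [0, 1], [1, 1], [1, 0]], 0.5))
```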
Implemented a physically based renderer using a path tracing algorithm. Specifically, implemented ray-scene intersection, acceleration structures, and physically based lighting and materials.
Implemented mirror and glass BSDFs, as well as a depth-of-field simulation.
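One representative building block of such a renderer is cosine-weighted hemisphere sampling for diffuse surfaces; a small NumPy sketch, assuming a local frame with the surface normal along +z:

```python
# Cosine-weighted hemisphere sampling (Malley's method) in the local shading frame.
import numpy as np

def sample_cosine_hemisphere(rng):
    """Return (direction, pdf) with the surface normal assumed along +z."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    d = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    pdf = d[2] / np.pi          # pdf = cos(theta) / pi
    return d, pdf

direction, pdf = sample_cosine_hemisphere(np.random.default_rng())
```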
Created an automatic facial keypoint detector (68 keypoints) from scratch using PyTorch.
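A minimal sketch of the kind of model involved: a small CNN that regresses 68 (x, y) coordinates from a 224x224 grayscale crop. The architecture and sizes here are illustrative, not the project's exact network.

```python
# Illustrative keypoint-regression CNN (not the project's exact architecture).
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    def __init__(self, num_keypoints=68):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 224 -> 112
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 112 -> 56
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 56 -> 28
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 256), nn.ReLU(),
            nn.Linear(256, num_keypoints * 2),                             # (x, y) per keypoint
        )

    def forward(self, x):
        return self.head(self.features(x)).view(-1, self.num_keypoints, 2)

# Training would minimize an L2 loss between predicted and annotated keypoints.
model = KeypointNet()
out = model(torch.randn(4, 1, 224, 224))   # -> shape (4, 68, 2)
```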
Reproduced the algorithm and results described in the paper "A Neural Algorithm of Artistic Style" by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.
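The core of the style representation in that paper is the Gram matrix of feature activations; a PyTorch sketch (the normalization constant varies between implementations):

```python
# Gram-matrix style representation and a single-layer style loss.
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """features: (batch, channels, height, width) activations from a conv layer."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    # Channel-by-channel inner products, normalized by the tensor size.
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(generated_feats, style_feats):
    # Squared distance between Gram matrices at one layer.
    return F.mse_loss(gram_matrix(generated_feats), gram_matrix(style_feats))
```

The full method combines this style loss over several layers with a content loss and optimizes the pixels of the generated image directly.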
Implemented pose estimation from a known image, then adapted the algorithm to compute a sparse point cloud representation of a scene (OpenCV, NumPy, Python).
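For the single-image case, the pose solve typically reduces to a PnP problem; an OpenCV sketch with placeholder correspondences and intrinsics (not the project's data):

```python
# Camera pose from known 3D-2D correspondences via solvePnP (placeholder values).
import cv2
import numpy as np

object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0, 0, 1], [1, 0, 1]], dtype=np.float64)      # known 3D points
image_pts = np.array([[320, 240], [400, 238], [405, 310], [322, 315],
                      [318, 180], [398, 178]], dtype=np.float64)     # matched 2D pixels
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)                          # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix plus translation gives the camera pose
```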
Used digitized versions of Prokudin-Gorskii's RGB glass plate negatives to faithfully reconstruct an RGB color image.
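The alignment step can be as simple as an exhaustive search over small shifts scored by sum of squared differences; a NumPy sketch (a coarse-to-fine pyramid is usually added for full-resolution plates):

```python
# Exhaustive-search channel alignment scored by sum of squared differences.
import numpy as np

def align(channel, reference, max_shift=15):
    """Return the (dy, dx) shift of `channel` that best matches `reference`."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(channel, (dy, dx), axis=(0, 1))
            score = np.sum((shifted.astype(np.float64) - reference) ** 2)
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# The green and red plates are aligned to blue, then stacked into an RGB image:
# rgb = np.dstack([np.roll(r, align(r, b), axis=(0, 1)),
#                  np.roll(g, align(g, b), axis=(0, 1)), b])
```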
Used the high and low frequencies of images to blend, sharpen, and combine a set of images in different ways.
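For example, a hybrid image keeps the low frequencies of one photo and the high frequencies of another; a short OpenCV sketch with illustrative sigma values:

```python
# Hybrid image: low frequencies of one image plus high frequencies of another.
import cv2
import numpy as np

def hybrid(img_low, img_high, sigma_low=7.0, sigma_high=3.0):
    low = cv2.GaussianBlur(img_low.astype(np.float32), (0, 0), sigma_low)
    high = img_high.astype(np.float32) - cv2.GaussianBlur(img_high.astype(np.float32),
                                                          (0, 0), sigma_high)
    return np.clip(low + high, 0, 255).astype(np.uint8)
```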
Used user-defined facial keypoints to combine, morph, and warp faces in interesting ways!
Stitched together images taken from different angles to create one contiguous panorama shot. Used principles of homography matrices, image warping, and image combination/blending.
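A sketch of that pipeline using OpenCV feature matching and a RANSAC homography (automatic matching stands in here for however the correspondences were chosen in the project, and blending is omitted):

```python
# Two-image stitching: match features, estimate a homography, warp and overlay.
import cv2
import numpy as np

def stitch(img1, img2):
    # Detect and match features between the two views.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img1 into img2's frame and overlay img2 (no blending, for brevity).
    h, w = img2.shape[:2]
    pano = cv2.warpPerspective(img1, H, (w * 2, h))
    pano[:h, :w] = img2
    return pano
```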
Generated a contiguous, "quilted" image from a single texture patch, and implemented texture transfer.
Implemented effects like depth refocusing and aperture adjustment using large collections of images taken over a plane orthogonal to the optical axis.
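Refocusing over such a light field amounts to shifting each sub-aperture view in proportion to its offset from the array center and averaging; a NumPy sketch with assumed input formats:

```python
# Shift-and-average refocusing over a light field captured on a camera plane.
# `images` and `offsets` are assumed inputs, not the project's data format.
import numpy as np

def refocus(images, offsets, alpha):
    """images: list of (H, W, 3) sub-aperture views; offsets: list of (du, dv) camera
    offsets from the array center; alpha controls the virtual focal depth."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (du, dv) in zip(images, offsets):
        shift = (int(round(alpha * dv)), int(round(alpha * du)))
        acc += np.roll(img, shift, axis=(0, 1))
    # alpha = 0 reproduces a plain average; averaging only a subset of views
    # around the center simulates a smaller aperture.
    return acc / len(images)
```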
Built a board game in Unity. Deployed the application on desktop and as an AR mobile app, then ported it to VR and tested it on an Oculus Quest headset. (Unity, C#, Python)
Built the first iteration of the OpenARK Digital Twin platform, a consumer-focused AR application that enables users to digitally interact with real-world objects.
Developed a motion-tracking app to help students visualize kinematic data in physics labs (WebGL, JS)