Division 1 runner dedicated to designing, implementing, and integrating products to help and inspire those who want to be the best version of themselves
GPA: 3.83
Coursework: Reinforcement Learning, Learning for Interactive Robots, Software Engineering for Machine Learning, Software Analysis, Human-Robot Interaction, Machine Learning
2x All-ACC Academic Honor Roll
2024 Mobile App Development Teaching Assistant
GPA: 3.65
Coursework: Advanced Compilers (Graduate), Machine Learning, Embedded Systems, Programming Languages, Artificial Intelligence, Computer Organization and Architecture, Operating Systems, Advanced Algorithms, Computational Complexity
Cross Country/Track and Field Team Captain
2x 1500M D3 All-American, Cross Country All-American, 3x D3 Academic All-American
Spring 2023 Advanced Compilers Teaching Assistant
Rapid development and deployment of full-stack embedded software for new radar technology in autonomous vehicles. Integrate Human-in-the-Loop services into the embedded application. Dockerize the full main application with MongoDB for portable Linux usage.
Developed and maintained high-level software across a shared C/C++ codebase for Forerunner watches. Built 12 new sport profiles and integrated 3 new autonomous features into existing ski activities. Built unit tests with GoogleTest for 10 transition animations and 25 watch faces to verify the UI. Built 5 new widgets in the graphics library for a new AMOLED display. Contributed to the release cycle of a new product.
Built and optimized a Quiet Direct Simulation with CUDA on NVIDIA GPUs for fast, low-noise fluid simulation. Achieved a 2000% improvement over the C++ CPU implementation in the 3D scenario, reaching below 1 ns per particle per timestep.
A Double Deep Q-Network (DDQN) learning approach to completing complex sequential tasks in home environments. The model was trained in distinct environments on a variety of tasks, including motion and object-interaction tasks. A custom one-hot state encoding represents the robot's state relative to its environment to accommodate real-time learning. Testing was conducted in the simulated environments to demonstrate the model's ability to learn the optimal policy with minimal actions and no failures.
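Below is a minimal sketch of the Double DQN target computation and the one-hot state encoding described above, assuming PyTorch; the names (q_net, target_net, the batch layout) are illustrative and not taken from the project.

```python
import torch

def ddqn_targets(q_net, target_net, batch, gamma=0.99):
    """Double DQN: the online network picks the next action, the target network scores it."""
    states, actions, rewards, next_states, dones = batch
    next_actions = q_net(next_states).argmax(dim=1, keepdim=True)        # action selection
    next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # action evaluation
    return rewards + gamma * (1 - dones) * next_q                        # no bootstrapping past terminal states

def one_hot_state(robot_cell, num_cells):
    """One-hot encoding of the robot's discrete state, in the spirit of the custom encoding above."""
    state = torch.zeros(num_cells)
    state[robot_cell] = 1.0
    return state
```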
Features: weekly run log with the Garmin API, calculator, friends feed, goals, and personalized metrics. Used Firebase for the backend NoSQL database and user authentication. Deployed self-trained running-metric AI models on a personal Pi server using Flask and REST APIs.
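As a rough illustration, a Flask endpoint of the kind used to serve a running-metric model over REST might look like the following; the route, model file, and feature fields are assumptions for the sketch, not the app's actual API.

```python
from flask import Flask, request, jsonify
import pickle

app = Flask(__name__)

with open("running_metric_model.pkl", "rb") as f:   # hypothetical serialized model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    run = request.get_json()
    # Assumed feature layout: distance (km), average pace (s/km), average heart rate (bpm).
    features = [[run["distance_km"], run["avg_pace_s_per_km"], run["avg_hr"]]]
    return jsonify({"metric": float(model.predict(features)[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```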
Running is generally considered a highly individualistic sport in which each runner trains according to their own preferences. This has been reinforced by wearable running watches that present a wide range of running-related metrics to the user, during and after each run. This data contains valuable training information and trends that models can identify. In particular, there are overtraining trends that can lead to both acute and longer-term stress injuries, sidelining runners for weeks to months. Because everyone's running data is vastly different, an adaptable model is needed. This is presented in the form of a Dirichlet Process Mixture Model (DPMM) that can dynamically grow and shrink clusters with more data and training iterations. I present a DPMM implementation for running categorization with novel hierarchical and standard evaluation metrics to assess its performance. Two separate dataset partitions, random and recent, are used to check immediate and long-term predictive capabilities. The DPMM achieves upward of 96% training accuracy and 94% hierarchical accuracy on the tiered results, a 5-23% increase over the scikit-learn baseline.
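For context, the scikit-learn baseline mentioned above corresponds to a Dirichlet-process mixture along the lines of the sketch below; the per-run features and synthetic data here are illustrative assumptions, not the project's dataset.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-run features: distance (km), average pace (s/km), average heart rate (bpm).
easy = rng.normal([8, 270, 145], [2, 10, 5], size=(20, 3))
workout = rng.normal([10, 215, 168], [2, 8, 4], size=(20, 3))
runs = np.vstack([easy, workout])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                      # upper bound; unused clusters shrink toward zero weight
    weight_concentration_prior_type="dirichlet_process",  # Dirichlet-process prior over mixture weights
    random_state=0,
).fit(runs)

labels = dpgmm.predict(runs)  # cluster assignment per run (e.g. easy vs. workout)
```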
A dual-policy task and motion planning approach to catastrophic forgetting, the forgetting of previously learned task completion when the robot is placed in a new environment. One policy guides the motion using an RRT. The other policy is trained with a limited-memory replay buffer to simulate the memory constraints of a robot, and uses Gradient Episodic Memory to ensure that the loss on previous tasks does not increase when learning new tasks. This model suffered from poor exploration, which was addressed in the reinforcement learning version of this project.
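A minimal sketch of the Gradient Episodic Memory constraint referenced above, assuming PyTorch and simplified to a single stored memory gradient (the A-GEM-style special case); the function name and flattened-gradient representation are illustrative.

```python
import torch

def gem_project(current_grad: torch.Tensor, memory_grad: torch.Tensor) -> torch.Tensor:
    """Project the current task's gradient so that loss on the memory task cannot increase."""
    dot = torch.dot(current_grad, memory_grad)
    if dot >= 0:
        return current_grad                                   # no interference: use the gradient as-is
    # Remove the component that would raise the memory task's loss.
    return current_grad - (dot / torch.dot(memory_grad, memory_grad)) * memory_grad
```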
A robot autonomously performs grasping and placing of objects in a world environment. The world environment is built using sensors, with the state space discretized in increments of pi/16 radians. Rapidly exploring random trees (RRTs) are used to sample the state space and find the shortest path: a configuration is randomly sampled within the robot's link and joint limits, the nearest constructed node is found, and a new node is added pi/16 radians from it in the direction of the sampled configuration, with an edge created between the nearest and new nodes. The back-reference pointer of each node is then tracked to recover the shortest path.
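The extension step described above roughly corresponds to the sketch below; the two-joint limits and Euclidean distance metric are assumptions for illustration, while the pi/16-radian step follows the description.

```python
import math
import random

STEP = math.pi / 16.0
JOINT_LIMITS = [(-math.pi, math.pi), (-math.pi, math.pi)]   # assumed 2-joint arm

def extend(tree, parents):
    """One RRT extension: sample, find the nearest node, step pi/16 rad toward the sample."""
    sample = [random.uniform(lo, hi) for lo, hi in JOINT_LIMITS]
    nearest = min(tree, key=lambda q: math.dist(q, sample))
    d = math.dist(nearest, sample)
    new = tuple(n + STEP * (s - n) / d for n, s in zip(nearest, sample))
    tree.append(new)
    parents[new] = nearest                                   # back-reference pointer for path recovery
    return new

def recover_path(parents, goal):
    """Follow back-reference pointers from the goal back to the root."""
    path, node = [goal], goal
    while node in parents:
        node = parents[node]
        path.append(node)
    return list(reversed(path))
```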
Deep Neural Networks (DNNs) have revolutionized the field of Machine Learning and consequently countless fields such as computer vision, natural language processing, and autonomous vehicles. While these networks achieve unparalleled performance in complex tasks, the black-box nature of DNNs introduces concerns regarding interpretability of decision-making, prediction biases, and security against adversarial attacks. Researchers and practitioners have looked to employ DNN verification tools to mitigate these concerns by evaluating the robustness of networks to expose vulnerabilities in models. However, verifiers trail in development behind cutting-edge models due to the rapidly evolving field of DNN research. Looking to close this gap, there have been growing efforts to improve verification tools by determining their applications and shortcomings. This paper aims to take a step toward better understanding the strengths and weaknesses of DNN verifiers when applied to a variety of network architectures. In this paper, several DNNs are selected as benchmarks to determine the effects of network architectures and robustness properties on network verification. State-of-the-art DNN verification tools, α-β-CROWN and NeuralSAT, are utilized to verify benchmarks and are compared in their abilities to verify networks effectively and efficiently. Parameters of networks such as the number of hidden layers, activation functions, layer types, and degree of perturbation are varied to study their relationships with network verification and verification time.
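For intuition, the local robustness property such verifiers check can be stated as: for every input within an L-infinity ball of radius eps around a reference input, the predicted class must not change. The sketch below only samples that ball, so it can falsify but never prove the property (unlike α-β-CROWN or NeuralSAT); the function names are illustrative.

```python
import numpy as np

def sampled_robustness(predict, x, eps, n_samples=1000, seed=0):
    """Empirically search for a class change inside the L-infinity ball of radius eps around x."""
    rng = np.random.default_rng(seed)
    label = int(np.argmax(predict(x)))
    for _ in range(n_samples):
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        if int(np.argmax(predict(x_pert))) != label:
            return False      # counterexample found: the property is violated
    return True               # no violation found (not a proof of robustness)
```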
GPU-based quantum computer transpilation optimizations are proposed within a quantum circuit simulator. Quantum Peephole Optimization (QPhO) performs gate cancellation and qubit gate clustering to reduce the number of necessary gates and reduce costly CPU-GPU data exchanges. It leverages single-qubit basis state information that can be statically determined at transpile time. Using this information, two-qubit CNOT operations can be eliminated or replaced with less expensive gates because the control qubit's basis state does not activate the target qubit. Passes are conducted over the circuit until convergence is reached. Gate cancellation reduces the number of qubits unnecessarily being transferred to the GPU and can delay their involvement. This approach is also extended with cluster circuit reordering techniques. To reduce the number of data transfers, single non-involved qubit gates can be delayed according to the latest used definition. This uses a line-by-line basic block representation of QASM 2.0 code. Experimentation is done on a variety of 28+ qubit circuits that are the core of many popular algorithms. Transpilation time was not shown to dramatically increase, while benefiting from an average 10.6% (up to 23.5%) reduction in the number of gates and a 91% (up to 300%) decrease in execution time over the baseline non-optimized circuit.
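A minimal sketch of the CNOT peephole rule described above: when a control qubit's basis state is statically known at transpile time, the CNOT is dropped (control in |0>) or replaced by an X on the target (control in |1>). The list-of-tuples circuit representation and the conservative invalidation of tracked states are assumptions for illustration, not the project's actual pass.

```python
def peephole_cnot(circuit, known_basis):
    """circuit: list of (gate, qubits); known_basis: {qubit: 0 or 1} determined at transpile time."""
    out = []
    for gate, qubits in circuit:
        if gate == "cx" and qubits[0] in known_basis:
            if known_basis[qubits[0]] == 0:
                continue                            # control is |0>: the CNOT is the identity, cancel it
            gate, qubits = "x", [qubits[1]]         # control is |1>: the CNOT acts as an X on the target
        out.append((gate, qubits))
        for q in qubits:
            known_basis.pop(q, None)                # conservatively stop tracking touched qubits
    return out

# Example: qubit 0 is known to be |0>, so the first CNOT is eliminated.
print(peephole_cnot([("cx", [0, 1]), ("h", [1])], {0: 0}))
```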
For a subset of the C programming language, I built a compiler that scans, parses, builds intermediate representations, and optimizes the code. The scanner, parser, and abstract syntax tree (AST) are built in C++; three-address code generation and all optimizations are implemented in Python 3.
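As an illustration of the kind of three-address-code pass the Python side runs, the constant-folding sketch below uses a (dest, op, arg1, arg2) tuple format, which is an assumption for the example rather than the compiler's actual IR.

```python
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def constant_fold(tac):
    """Replace binary operations on two integer literals with a single constant copy."""
    folded = []
    for dest, op, a, b in tac:
        if op in OPS and isinstance(a, int) and isinstance(b, int):
            folded.append((dest, "copy", OPS[op](a, b), None))   # e.g. t1 = 2 * 8  ->  t1 = 16
        else:
            folded.append((dest, op, a, b))
    return folded

# Example: t1 = 2 * 8; t2 = t1 + x
print(constant_fold([("t1", "*", 2, 8), ("t2", "+", "t1", "x")]))
```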
Outside of school and work, I am a Division 1 track athlete at UVA, competing in the mile, 3K, and 5K. I was also a member of the UR track team. Check out my PRs on my World Athletics profile, or follow along with my training on Strava and see what I'm up to this week!
When I'm not training competitively, I enjoy spending time outdoors, as we are near Shenandoah National Park. I enjoy most types of music and am always looking for new artists to listen to. Some of my top albums include: