Projects

Above, you can view an exhibition game played between the best evolved Pacman agents and random Ghost agents.

This self-playing Pacman agent uses a Koza-style tree (as in genetic programming) to determine the best possible next action to take.

In this project, the objective is to evolve the best possible Pacman agent to play the game--in other words, find the agent with the highest fitness. The fitness function is based on the score of the game, with a higher score resulting in a higher fitness. First, I generate an initial population of random Pacman agents, and then evolve them over a number of generations. At the final generation, the agent with the highest fitness is selected as the best agent.

The initial population's genes are generated randomly through ramped half-and-half initialization. Each individual represents a tree of actions that the agent can take, with each node representing an action and each leaf representing a terminal state. The actions are based on the possible actions that Pacman can take in the game, and the terminal states are based on the possible states that Pacman can be in. Each individual is also assigned an initial fitness value based on the fitness function (in other words, how well it plays the game).

The full function for the ramped half-and-half.
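For readers without the figure, here is a minimal sketch of what ramped half-and-half initialization looks like. The function and terminal sets below are illustrative stand-ins, not the actual Pacman primitives, and the real trees carry more structure than nested tuples.

```python
import random

FUNCTIONS = ["if_ghost_near", "if_pellet_near"]                   # internal nodes (assumed names)
TERMINALS = ["move_up", "move_down", "move_left", "move_right"]   # leaves (assumed names)

def full_tree(depth):
    # "full" method: every branch reaches the maximum depth
    if depth == 0:
        return random.choice(TERMINALS)
    return (random.choice(FUNCTIONS), full_tree(depth - 1), full_tree(depth - 1))

def grow_tree(depth):
    # "grow" method: branches may terminate early
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(FUNCTIONS), grow_tree(depth - 1), grow_tree(depth - 1))

def ramped_half_and_half(pop_size, max_depth):
    population = []
    for i in range(pop_size):
        depth = 1 + i % max_depth                    # ramp the depth across the population
        builder = full_tree if i % 2 == 0 else grow_tree
        population.append(builder(depth))
    return population
```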

The genes are then recombined and mutated through subtree mutation, subtree crossover, and point mutation. We use a fitness-based selection method (k-tournament without replacement) to select the survivors, which then reproduce to form the next generation. A penalty function based on the depth of the tree is used to encourage smaller trees and thus more efficient agents.
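As a rough illustration of the selection step, here is a sketch of k-tournament survivor selection with a depth-based penalty, assuming the same nested-tuple tree representation as the sketch above; the tournament size and penalty weight are arbitrary.

```python
import random

def tree_depth(tree):
    # leaves are plain strings; internal nodes are (function, child, child) tuples
    if not isinstance(tree, tuple):
        return 0
    return 1 + max(tree_depth(child) for child in tree[1:])

def penalized_fitness(score, tree, penalty_per_level=0.5):
    # deeper trees lose fitness, which pushes the search toward compact agents
    return score - penalty_per_level * tree_depth(tree)

def k_tournament(population, fitnesses, k, n_survivors):
    # each tournament samples k distinct contestants; winners leave the pool,
    # so no individual can be selected twice
    pool = list(range(len(population)))
    survivors = []
    while len(survivors) < n_survivors and len(pool) >= k:
        contestants = random.sample(pool, k)
        winner = max(contestants, key=lambda i: fitnesses[i])
        survivors.append(population[winner])
        pool.remove(winner)
    return survivors
```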

This process is repeated for a number of generations, and the agent with the highest fitness at the end is selected as the best agent.
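Putting the pieces together, a sketch of that generational loop might look like the following, reusing the helpers from the sketches above. The evaluate(), crossover(), and mutate() stubs are placeholders for running an actual game and for the recombination operators (the real helpers are shown below).

```python
import random

def evaluate(tree):
    return random.random()     # placeholder: the real fitness is the game score

def crossover(a, b):
    return a                   # placeholder: the real version swaps random subtrees

def mutate(tree):
    return tree                # placeholder: subtree or point mutation in the real code

def run_evolution(pop_size=200, generations=50, max_depth=6, k=4):
    population = ramped_half_and_half(pop_size, max_depth)
    for _ in range(generations):
        fitnesses = [penalized_fitness(evaluate(t), t) for t in population]
        parents = k_tournament(population, fitnesses, k, pop_size // 2)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)
```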

Helper methods for the recombination function.

It should be noted that these methods can be used to evolve multiple agents at a time, and can also be used for the Ghost agents. In fact, we also investigated evolving Ghost agents and Pacman agents against each other, and used competitive coevolution to evolve the best Ghost and Pacman agents.

Raw JWST data from one pixel of a 90x90x90 data cube.

Multivariate curve resolution is a problem that has been around for as long as spectroscopy itself: separating the spectra of different sources from a single, combined spectrum. In astronomy, the problem looks like this: you have a spectrum, but how was that spectrum formed? What is the composition of the sources that make it up?

In order to approach this problem, I applied evolutionary computing techniques. I came up with a 2-stage method.

First, I applied a sparse multi-objective evolutionary algorithm with custom recombination and mutation functions to find the likely constituents of the spectrum. In other words, we drive the fitness of individuals toward zero in terms of both the number of lines and the distance to the closest line when overlaid on the original spectrum. This gives us a set of possible constituents, or a list of elements and molecules that may make up the spectrum.
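One possible reading of those two objectives, sketched as a single evaluation function; the array names and the direction of the distance measure are assumptions, not the project's exact formulation.

```python
import numpy as np

def objectives(mask, candidate_lines, observed_peaks):
    """mask: 0/1 array over candidate_lines (1 = "this line is part of the spectrum");
    candidate_lines, observed_peaks: wavelength arrays. Both objectives are minimized."""
    selected = candidate_lines[mask.astype(bool)]
    n_lines = int(mask.sum())                    # objective 1: number of lines (sparsity)
    if selected.size == 0:
        return n_lines, np.inf
    # objective 2: for each observed peak, distance to the closest selected line
    dists = np.abs(observed_peaks[:, None] - selected[None, :]).min(axis=1)
    return n_lines, float(dists.sum())
```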

Second, I used a differential evolution algorithm to find the best fit for the constituents found in the first stage. This algorithm optimizes the fitness of individuals based on the similarity in shape to the original spectrum. By including bias (e.g. temperature having an exponential effect on amplitude), we are able to reach convergence within 100 generations.
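As a sketch of this second stage, SciPy's off-the-shelf differential evolution could be wired up roughly like this; the model_spectrum() callable, parameter bounds, and temperature range are assumptions standing in for the project's actual synthesis model and bias terms.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_constituents(observed, wavelengths, constituents, model_spectrum):
    """observed: the target spectrum; model_spectrum: a callable that synthesizes a
    spectrum from per-constituent amplitudes and a temperature (assumed interface)."""
    n = len(constituents)

    def shape_error(params):
        amplitudes, temperature = params[:n], params[n]
        predicted = model_spectrum(wavelengths, constituents, amplitudes, temperature)
        return np.sum((predicted - observed) ** 2)   # similarity in shape to the original

    bounds = [(0.0, 1.0)] * n + [(10.0, 1000.0)]     # amplitudes plus a temperature (assumed ranges)
    result = differential_evolution(shape_error, bounds, maxiter=100, seed=0)
    return result.x, result.fun
```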

Using these methods also required a custom and normalized database of elements and molecules, as well as a custom fitness function. The database was created by taking the NIST Atomic Database, the HITRAN database, and the Leiden Atomic and Molecular Database and combining them. Each database was normalized in order to use the same units and be at the same "resolution", so to speak.
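A minimal sketch of that normalization step, assuming each source can be reduced to a wavelength/intensity pair and resampled onto one shared grid; the unit conversion factor is a placeholder, not the real one.

```python
import numpy as np

def normalize_to_grid(wavelengths, intensities, grid, unit_scale=1.0):
    # convert units onto the shared scale, then resample onto the common grid
    wavelengths = np.asarray(wavelengths, dtype=float) * unit_scale   # e.g. nm -> um would use 1e-3
    intensities = np.asarray(intensities, dtype=float)
    peak = intensities.max()
    if peak > 0:
        intensities = intensities / peak          # normalize amplitude to [0, 1]
    order = np.argsort(wavelengths)
    return np.interp(grid, wavelengths[order], intensities[order], left=0.0, right=0.0)
```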

The original test spectrum.
The spectrum predicted by the differential algorithm.

The preliminary results are promising, with the algorithm being able to identify at least 50% of the correct constituents in 97% of cases. The differential algorithm was able to fit the constituents to the original spectrum with an average error of +/- 0.12.

The home page of the Well Red Bookshelves website.

Developed for a local coffee shop, Well Red, this web app allows users to browse and search the shop's current book inventory. Users can also reserve books for pickup at the shop, or request books that are not in stock. Admins are able to manage users, manage reservations and requests, and update the inventory.

Notably, the site is connected to the shop's Square account so that the inventory is automatically updated when books are sold. This was done by using Square's API to retrieve the inventory and then display it on the site. Book covers are retrieved from the OpenLibrary API.
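The site itself is written in PHP, but the two integrations boil down to simple HTTP calls; the Python sketch below is only meant to illustrate the idea. The Square endpoint and OpenLibrary cover URL pattern follow the public documentation, while the token handling and parameters are simplified assumptions.

```python
import requests

SQUARE_TOKEN = "..."   # Square access token (kept server-side in the real app)

def fetch_square_catalog():
    # list catalog objects from Square; the real app maps these onto book records
    resp = requests.get(
        "https://connect.squareup.com/v2/catalog/list",
        headers={"Authorization": f"Bearer {SQUARE_TOKEN}"},
        params={"types": "ITEM"},
    )
    resp.raise_for_status()
    return resp.json().get("objects", [])

def cover_url(isbn, size="M"):
    # OpenLibrary serves cover images directly by ISBN; size is S, M, or L
    return f"https://covers.openlibrary.org/b/isbn/{isbn}-{size}.jpg"
```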

This project also includes an ISBN barcode scanner that allows admins to quickly scan books and add them to the inventory. It also allows admins to update the Square inventory from the physical inventory.

I designed, developed, and deployed this app from scratch, including the UI/UX elements, code, and database.

Written in pure PHP, this project uses the MVC design pattern. The database is a MySQL database, and the site is hosted on my personal server (the same one that is running this site right now!).

The code for this project is available on my GitHub here.

The home page of OncoBOT.

Developed for MHacks, this project uses a machine learning algorithm in conjunction with Microsoft's computer vision API to analyze CT scans of lungs and identify lung nodules. Using the website, doctors can submit a CT scan and get instant data on the locations of potential lung nodules and the probability that they are cancerous.

The training set comes from a detailed study done at Cornell consisting of roughly 250 CT scans of healthy and cancerous lungs. Each lung scan is paired with a host of detailed information, such as the size of the nodule, the location of the nodule, and whether the tumor is benign or malignant.

One particular problem we ran into was that the dataset of CT scan images existed only in an image format specific to the medical industry. This required us to write several batch image processing scripts to standardize all of the several hundred images before we could start processing them. From there, we used OpenCV's SIFT feature detection and a k-means clustering algorithm on each of the Cornell CT scans, in combination with each image's details, to form an accurate learning set.
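A rough sketch of that feature step, assuming already-standardized grayscale images and an arbitrary vocabulary size; this shows only the SIFT plus k-means bag-of-visual-words part, not the full learning pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(image_path):
    # detect SIFT keypoints and compute their 128-dimensional descriptors
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(img, None)
    return descriptors if descriptors is not None else np.empty((0, 128), dtype=np.float32)

def build_visual_words(image_paths, k=50):
    # cluster all descriptors into k "visual words"
    all_desc = np.vstack([sift_descriptors(p) for p in image_paths])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(image_path, kmeans):
    # describe one scan as a normalized histogram over the visual-word vocabulary
    desc = sift_descriptors(image_path)
    if len(desc) == 0:
        return np.zeros(kmeans.n_clusters)
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()
```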

A snippet of our ML algorithm using k-means clustering.

Even though we had a very small training set, we were able to achieve meaningful results (accuracy > 75%) with what we had.

Our results.

The navigation menu of the GEDI site with the new AU Masterpage.

"The master page allows you to define content that will be shared by all pages on the website, such as a header, navigation menu, or footer. A master page does not represent a physical page on the website, but rather a design component that is added to other pages to ensure a uniform look."
-- Kentico, 2020

As part of the branding overhaul for Auburn University's webpages, I helped develop the masterpage that would be used for all of the university's webpages. Most notably, this included an upgrade to Bootstrap 5 from Bootstrap 3 and 4, and a complete overhaul of the CSS.

Using principles, directives, and guidelines from the Office of Communications and Marketing, I worked with our lead developer to develop and apply the new masterpage. This included creating a new header with links that could be modified as needed per site, a responsive and customizable navigation menu, and a transforming logo. The main content of the page is wrapped in a custom container that allows for a standardized, yet responsive layout. This container can also be overridden as needed for particular sites. The footer was also standardized and split into a top and bottom section, with the top section being the universal university layout and the bottom section being customizable.

Additionally, this masterpage rewrite coincided with the move of many of our web apps to the cloud, so it was important that the masterpage be compatible with both our on-prem environment and the cloud environment. We also had to provide a custom Path Provider in order to allow the masterpage to be accessed correctly.

I also helped rewrite our cookies script, including adding a new cookie consent banner and updating the cookie policy. We had to make sure that our cookies were consistent with the new GDPR regulations and that we were not collecting any unnecessary data.

The old AEAlumni site vs the new AEAlumni site (note that the new site is being run in debug with test data in that screenshot).

Our custom Linux desktop app from which we can monitor, trial, and test our robots.

From the robocup.org website: "RoboCup is an international scientific initiative with the goal to advance the state of the art of intelligent robots." More specifically, it is about developing robots that can play soccer autonomously.

I was part of the Georgia Tech Robojackets team, specializing in the electrical and software aspects of the robots. We competed specifically in the SSL (Small Size League) division, which uses robots that are about 7 inches in diameter to play an 11 vs 11 match.

On the electrical side, I was part of a major redesign effort. I helped design and build the new control board, radio board, and breakbeam board for the robots. This also included custom-designing our own solenoids to power the kicker, as well as switching from 4-layer boards to 8-layer boards in order to implement more complicated functionality. We also needed to design voltage regulation circuits to produce 5V and 3.3V levels from the 18.5V LiPo battery, since the full 18.5V is only needed for the kicker. Many of our constraints came from the mechanical side--the custom solenoid, for example, was necessary because of the shape of the robot and the placement of the kicker. Our FPGA handled all of the wheel drivers and the dribbler driver, including the SPI interface to those motor drivers.

The new control board.

For software, I helped integrate the new control board and the new firmware into the existing software codebase. I also helped implement code to estimate the best course of action for the team of robots to take, and then to execute it. This was particularly complicated since the robots are autonomous and must be able to make decisions on their own. Additionally, they are pitted against an enemy team of robots, so the game state is constantly changing and unpredictable.

Our decision-making process was written mostly in Python, but it inherits from a C++ codebase that handles the low-level robot control. We used a 2D transformation matrix system to map the camera's view of the field to a top-down view, and then used that to determine the best course of action. We also used a Kalman filter to estimate the positions of the ball and the robots, and published them as a built world state message for the rest of the codebase to use.
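A minimal sketch of the camera-to-field mapping, using a standard perspective transform; the point correspondences and field dimensions below are placeholder values, and the real system layers the Kalman-filtered world state on top of this.

```python
import numpy as np
import cv2

# four known correspondences: (pixel x, pixel y) -> (field x, field y); placeholder calibration
camera_pts = np.float32([[102, 64], [530, 58], [548, 410], [95, 418]])
field_pts = np.float32([[0, 0], [12.0, 0], [12.0, 9.0], [0, 9.0]])   # placeholder field size in meters

H = cv2.getPerspectiveTransform(camera_pts, field_pts)

def to_field(pixel_xy):
    """Map one camera-frame point into top-down field coordinates."""
    pt = np.float32([[pixel_xy]])                  # shape (1, 1, 2) as cv2 expects
    return cv2.perspectiveTransform(pt, H)[0, 0]
```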

Our gameplay structure.

On a side note, we had to write a new team description paper (TDP) every year, which is a 10-20 page paper that describes the technical aspects of our robots. It was probably the single most harrowing experience of my life.

More information coming soon!

More information coming soon!