• Primary developer for AI heart failure decompensation predictor research paper
• Lead developer on research paper validating prostate cancer survivor monitoring algorithm
• Developer on Cisco Meraki's Dashboard Product
• Primary developer for machine learning booking feature of OR Manager 10.0 MR1
• Primary developer for release of iOS app (hyperPad 1.26)
In December 2020, I published a paper in the peer-reviewed Canadian Journal of Surgery entitled "Can machine learning optimize the efficiency of the operating room in the era of COVID-19?" I had read in the newspaper about the backlog of elective surgery cases in Ontario caused by the pandemic. Discussing the issue with my father, a surgeon, I learned that surgeries are typically scheduled using the average duration of the surgeon's last 10 procedures. As a machine learning enthusiast, I knew there was a better way to leverage historical data to optimize OR efficiency.
Using OR-Tools, an open-source software suite developed by Google AI, I created a Python script that reads large volumes of historical operative booking data from a spreadsheet and calculates new booking times to reduce the frequency of overtime and undertime. With 36 months of data from 15 surgeons across multiple divisions, we demonstrated a theoretical 21% reduction in overtime, a 19% reduction in undertime, and cost savings of $469,000 over 3 years. We co-wrote a paper discussing our findings and the potential for more machine learning applications in medicine.
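To give a flavour of the approach, here is a minimal sketch of how a booking time for one procedure type could be chosen with OR-Tools' CP-SAT solver, trading off overtime against undertime. The durations, weights, and variable names are illustrative assumptions, not the actual data or code behind the paper.

```python
# Minimal sketch: choose one booking time that minimizes a weighted sum of
# overtime and undertime over historical cases. Illustrative numbers only.
from ortools.sat.python import cp_model

durations = [95, 110, 80, 125, 100, 90, 115]  # hypothetical case durations (minutes)

model = cp_model.CpModel()
booking = model.NewIntVar(min(durations), max(durations), "booking_time")

overtime, undertime = [], []
for i, d in enumerate(durations):
    over = model.NewIntVar(0, max(durations), f"over_{i}")
    under = model.NewIntVar(0, max(durations), f"under_{i}")
    model.Add(over >= d - booking)   # at optimum, over = max(d - booking, 0)
    model.Add(under >= booking - d)  # at optimum, under = max(booking - d, 0)
    overtime.append(over)
    undertime.append(under)

# Overtime is costlier than undertime, so weight it more heavily (illustrative weights).
model.Minimize(2 * sum(overtime) + sum(undertime))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("Suggested booking time:", solver.Value(booking), "minutes")
```

The real script read the historical booking data from a spreadsheet rather than a hard-coded list, but the trade-off it optimizes is the same.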
This project was a passion of mine, as I truly believe my calling as a computer scientist is to create solutions in medicine and to innovate at this intersection of fields. Solutions that let us make better use of our limited and precious resources are crucial in a future of system-level challenges. I believe there is tremendous potential for impact in health care applications of technology.
Related articles:
• Waterloo Stories
• Waterloo News
• Waterloo Computer Science
• Waterloo Math
• The Record
• Education News Canada
• Technology.org
While pursuing my Master of Health Informatics in 2023, I've been working as a research analyst at the Centre for Digital Therapeutics at the University Health Network in Toronto. I've been very fortunate to find a role where I can apply both my computer science background and my interest in health applications of technology.
During my time here, I have been investigating whether speech data can be analysed to predict decompensation in people with chronic heart failure. One of the symptoms of worsening heart failure is oedema, where fluid builds up in the tissues. This is usually detected by observing a change in the patient's weight; however, weight is not a very sensitive measure and can only detect decompensation, not predict it.
Oedema also makes the vocal cords heavier and stiffer, changing the patient's voice. These vocal changes track small shifts in the patient's condition more sensitively, and preliminary research suggests that analysing voice markers in speech samples as short as 30 seconds can predict hospitalisations before they occur.
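To make "voice markers" concrete, here is a hedged sketch of extracting a few simple acoustic features from a short clip with librosa; the specific markers and tooling used in our study may differ, and the file name is hypothetical.

```python
# Illustrative voice markers from a short speech sample: average pitch,
# pitch variability, and broad spectral shape. Not the study's actual features.
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical 30-second clip

f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=65, fmax=300, sr=sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

features = {
    "mean_f0": float(np.nanmean(f0)),          # average pitch (NaN = unvoiced frames)
    "f0_variability": float(np.nanstd(f0)),    # pitch instability
    "mfcc_means": mfcc.mean(axis=1).tolist(),  # coarse spectral shape
}
print(features)
```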
My work on this project involved developing an iOS app for daily data collection and a methodology for preprocessing and enhancing the speech data. This step is especially important because recordings are made in patients' homes, where there is limited control over environmental conditions.
I used a pretrained neural network in PyTorch to remove the background noise from the speech data (the architecture is based on this paper), and the difference between the before and after recordings is striking!
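As an illustration of the denoising step, here is a minimal sketch using the open-source facebookresearch/denoiser package (a pretrained DEMUCS-based PyTorch model); it is a stand-in for the model I actually used, and the file names are hypothetical.

```python
# Denoise a home recording with a pretrained PyTorch model.
# Stand-in model and hypothetical file names; see the paper linked above
# for the architecture our project is based on.
import torch
import torchaudio
from denoiser import pretrained
from denoiser.dsp import convert_audio

model = pretrained.dns64().eval()  # pretrained causal DEMUCS denoiser

wav, sr = torchaudio.load("patient_recording.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.chin)

with torch.no_grad():
    denoised = model(wav[None])[0]  # add, then strip, the batch dimension

torchaudio.save("patient_recording_clean.wav", denoised, model.sample_rate)
```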
The paper with our results is still in progress, but we won second place at the Transform HF Ideathon; I’m so excited to see where the work goes from here!
The code for the analysis can be viewed here
After the publication of my machine learning paper in the Canadian Journal of Surgery, I was approached by Picis, a division of N. Harris Computer Corporation dedicated to creating patient management software. Having read my paper, they believed the concepts I had demonstrated would add value to their OR Manager tool.
We shared a vision of using existing healthcare data to optimize the efficiency of healthcare delivery, and I was brought onto the project as the primary developer for a new smart booking feature to be included in their next release, 10.0 MR1.
I worked directly under the Director of R&D at Picis and, using C#, implemented a refined version of the algorithm, which was validated against a customer database containing over 70,000 data points.
The Vice President of R&D wrote a blog post about the project on the Picis blog, which can be read here.
Another term goes by and I have completed yet another project with the Coffee 'N Code club. This time, I created a bot in JavaScript that competes with other bots in a physics-based war game.
I was really excited to code more in JavaScript, as my experience with the language at the time consisted of the few functions I had implemented on this website and the VS Code extension I built with Node.js during my last co-op.
The final product can be viewed here.
I joined the UW Aquadrone student design team and helped build an autonomous underwater vehicle to compete in the AUVSI RoboSub competition.
I was on the vision subteam, whose main job was to design the software that allows the robot to locate and recognise objects underwater.
I created an image preprocessing algorithm in Python using functions from the OpenCV library, like Canny edge detection, to find the corners of a gate underwater. The function can be found on GitHub here.
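For a flavour of that preprocessing, here is a minimal sketch assuming a single BGR frame from the sub's camera; the thresholds and file names are illustrative, not the team's actual parameters.

```python
# Find straight edges (e.g. the gate's posts) in an underwater frame.
# Illustrative thresholds and file names.
import cv2
import numpy as np

frame = cv2.imread("gate_frame.png")  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)          # Canny edge detection

# Approximate straight segments with the probabilistic Hough transform.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("gate_edges.png", frame)
```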
Next term I hope to learn more about computer vision, and I have already started researching the HoughLines algorithm.
I am also really excited about the applications of computer vision in fields like radiology, where it can help interpret X-rays and CT scans.
Images produced by my algorithm can be found here.
As Director of Technology of the UW Virtual Reality Club, I helped the club accomplish a lot in terms of growth this term, with a new website and many new members. I particularly enjoyed the workshop that I designed and hosted at the end of the term.
It was the first project-building workshop we had hosted, and as a strong proponent of learning by doing, I truly believed it would be a good introduction to game building.
Together we built a markerless AR goose shooter game in Unity that can run on any iOS or Android device (get Mr. Goose!).
All the instructions and resources to build the game are available on GitHub here.
After some deep diving on the internet, I found some of the assignments for Carnegie Mellon's 15-112 course, a required course for computer science majors there. One assignment I had fun with asked students to build a simple Tetris game. The assignment gives a considerable amount of guidance on how to build the game using a top-down approach, and the instructions are available here.
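To give a sense of the assignment's top-down style, here is a tiny sketch of the board representation and the piece-rotation step; the names and colours are illustrative, not my actual solution.

```python
# The board is a 2D list of colour strings; pieces are boolean grids that
# get stamped onto it. Illustrative sketch, not my full solution.
rows, cols, empty_color = 15, 10, "blue"
board = [[empty_color] * cols for _ in range(rows)]

# The S-piece as a boolean grid.
s_piece = [[False, True, True],
           [True, True, False]]

def rotate_piece(piece):
    """Rotate a piece 90 degrees counterclockwise (a core step of the assignment)."""
    old_rows, old_cols = len(piece), len(piece[0])
    return [[piece[r][old_cols - 1 - c] for r in range(old_rows)]
            for c in range(old_cols)]

print(rotate_piece(s_piece))  # [[True, False], [True, True], [False, True]]
```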
The game I created based on the assignment instructions is posted on my Github here.
So the Coffee 'N Code club held a seven-part workshop on how to build a Reddit bot that searches through comments for cyberbullying, which was super cool and a real eye-opener about social responsibility.
We went through a few examples, such as how a spam detector might flag spam based on keywords in emails like "buy", "free", or "sale".
We then went through an example of getting the program to interact with Reddit. We pulled 100 comments from the UWaterloo and UofT subreddits and used them to train the program to recognise whether a comment came from UWaterloo or UofT. After training on 5 comments from each subreddit, the model accurately sorted new comments a little more than 50% of the time.
Finally, we set out to actually build the Reddit bot to detect cyberbullying. We learned how to process incoming comments by removing non-alphanumeric characters and stemming each word (for example, the stem of 'going' is 'go'). Using a naive Bayes classifier and a Hate Speech and Offensive Language database, the model classifies comments as cyberbullying or not. The end result consistently achieved an accuracy of more than 70%.
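Here is a minimal sketch of that kind of pipeline with NLTK and scikit-learn, assuming a labelled CSV of comments; the file and column names are hypothetical, and the workshop's actual code may have differed.

```python
# Preprocess comments (strip non-alphanumerics, stem words), then train a
# naive Bayes classifier on labelled data. Hypothetical file/column names.
import re
import pandas as pd
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

stemmer = PorterStemmer()

def preprocess(comment):
    words = re.sub(r"[^a-zA-Z0-9 ]", " ", comment.lower()).split()
    return " ".join(stemmer.stem(w) for w in words)

df = pd.read_csv("labeled_comments.csv")   # e.g. a hate-speech dataset
X = df["text"].apply(preprocess)
y = df["label"]                            # 1 = cyberbullying, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
vectorizer = CountVectorizer()
clf = MultinomialNB().fit(vectorizer.fit_transform(X_train), y_train)
print("Accuracy:", clf.score(vectorizer.transform(X_test), y_test))
```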
All the lesson materials and resources can be found here.
Okay, so anyone who knows me knows that I hate leaving things unfinished. There's an inner turmoil inside of me that isn't resolved until I finish solving a problem. So when I kept thinking back to the Personal Alert Monitor project from the Catalyst Engineering Summer programme, I couldn't help but want to finish it. We only had a few days to prepare for the competition, which was only enough time for me to finish the Arduino component. I've always wanted to experiment with app development, so today I sat myself down and decided to learn how to use Android Studio.
I broke the task down into several stages:
• Stage 1: set up the basic app structure, with two user inputs: one for a phone number and one for a message to send.
• Stage 2: have the app send the user's GPS coordinates along with the message.
• Stage 3: have the app send the SMS messages upon input from the Arduino device, instead of the user pressing a button in the app.
• Stage 4: set up a database so the user can store and save multiple contacts to send the message to.
• Stage 5: spruce things up and make the user experience nicer.
I also finished building the prototype with the Arduino Pro Mini, which is a lot smaller than the Uno and can be worn on the wrist. It took a lot of soldering, wire stripping, and patience, but it's finally done.
Screenshots of the app can be found here.
So I just came back from the Catalyst Engineering summer programme where my group won 1st place in the Business of Science Pitch Competition! In the competition, we had to create an innovative product and pitch it in front of four judges at the Communitech building. For more information about the exact nature of the competition, click here.
So my group created a product called the Personal Alert Monitor, or PAM for short. It's a wearable device used to contact people if the user's safety is somehow compromised. It can be applied to many different situations and used by a variety of people: for example, an elderly person could contact immediate family after falling down the stairs, or a young adult could contact friends or family when walking alone in the city at night and feeling unsafe.
PAM connects to the user's phone via Bluetooth. To activate it, the user presses a button on the face of the device, which signals the PAM app already installed on their phone. The user would have set up a message to send, as well as contacts to send it to, beforehand.
The device would connect to the user's phone using an Arduino Pro Mini and an Arduino Bluetooth module. I have built a prototype using the Arduino Uno and a Bluetooth module.
To show the message being sent from the PAM to my phone, I downloaded Blueterm, an open-source terminal emulator that can connect to any serial device, from the Google Play Store.
Pictures and screenshots can be found here.